AWS |
AWS - Webinar Channel |
Getting Started with Amazon DynamoDB Global Tables - AWS Database in 15 |
https://www.youtube.com/watch?v=TR01wE_Oo7M
|
Getting Started with Amazon DynamoDB Global Tables (AWS Database in 15). Amazon DynamoDB is the most scalable database service offered by AWS. It was built to scale, with a serverless architecture that supports the largest internet-scale applications, including Amazon.com, Snapchat, and Hulu. Being a NoSQL database, DynamoDB is a perfect choice for microservice-based architectures, and for customers starting their journey of application modernization it is a great place to start. This session will cover DynamoDB global tables for automated global replication, supporting applications with single-digit-millisecond read and write latency in any selected region.

Learning Objectives:
- Understand what DynamoDB global tables are and their benefits.
- Hear use cases of customers who have implemented global tables.
- See firsthand how easily DynamoDB global tables can be implemented via the AWS console.

To learn more about the services featured in this talk, please visit [...]. To download a copy of the slide deck from this webinar, visit [...]. |
2023-02-23 17:15:01 |
Git |
New posts tagged "Git" - Qiita |
git push error: git@github.com: Permission denied (publickey). fatal: Could not read from remote repository. |
https://qiita.com/kengo-sk8/items/5bd8b00dde3d0b28ae5f
|
gitpushuoriginma |
2023-02-24 02:23:10 |
海外TECH |
Ars Technica |
Florida surgeon general fudged data for dubious COVID analysis, tipster says |
https://arstechnica.com/?p=1919585
|
analysis |
2023-02-23 17:26:46 |
海外TECH |
MakeUseOf |
Is the Voilà AI Artist App Safe to Use? |
https://www.makeuseof.com/is-voila-ai-artist-safe-to-use/
|
artistic |
2023-02-23 17:01:16 |
海外TECH |
DEV Community |
Building APOD color search part I: Image analysis in Rust |
https://dev.to/bryce/building-apod-color-search-part-i-image-analysis-in-rust-24a5
|
Building APOD color search, part I: Image analysis in Rust

Kicking off the first of a series about how I built APOD color search. For an intro to the project, go here: Building an app to search the APOD archive by color. (Bryce Dorn · Feb · min read · webdev, showdev, javascript, rust)

The first step was to devise a way to extract color information from each image. From a high level, a search like this cannot be done on the fly, as it requires a static index of image and color information to return results.

Gathering the data

On to populating the dataset: image analysis is a low-level operation. Given that there's nearly two decades' worth of images to process, this needed to be fast and performant. The full source code can be found on GitHub, but I'll go through some of the essential bits here.

Processing data for an APOD

To populate the searchable, color-based index of each APOD, three things must be done:

1. Fetch APOD information via apod-api.
2. Extract color information from the image.
3. Store color metadata in a database.

One of my goals for this project was to use cloud-first and free resources whenever possible, to save headaches later on with deployments & environments. For the database above, I created a Postgres instance using supabase's free tier.

Getting a day's APOD information

Fetching this data is easy enough using reqwest and the apod-api; I just need an API key:

```rust
let api_url = "..."; // apod-api endpoint
let api_key = std::env::var("APOD_API_KEY").unwrap();
let request_url = format!(
    "{}?api_key={}&start_date={}&end_date={}",
    api_url, api_key, start_date, end_date
);
let resp = reqwest::get(request_url).await?;
let body = resp.text().await?;
```

To do more with this data, however, Rust requires that it be properly typed. serde streamlines this with built-in JSON serialization: it only requires a static type and can handle the rest. Here's the type I added to correspond to the API response:

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Deserialize, Serialize)]
pub struct Day {
    id: Option<u32>,
    copyright: Option<String>,
    date: String,
    explanation: Option<String>,
    hdurl: Option<String>,
    media_type: String,
    service_version: Option<String>,
    title: String,
    url: Option<String>,
}
```

Then calling serde_json::from_str will deserialize it to the typed data structure:

```rust
let days: Vec<Day> = serde_json::from_str(&body).unwrap();
```

Lastly, once we have a Day object to work with, we need to fetch the actual image bytes to do pixel-based analysis:

```rust
use crate::image_utils;

let img_bytes = reqwest::get(&image_url).await?.bytes().await?;
let img = image::load_from_memory(&img_bytes)?;
```

Processing the colors

Now all that's left is some low-level pixel processing. This isn't the most efficient algorithm (I'm still a novice Rustacean), but it's the best I could do. Because these images tend to be massive, the most important pieces are around removing noisy data to avoid unnecessary computation: in this case, only analyzing significant pixels and then counting, ranking, and grouping them together.

Many images have a minimal amount of color information, being either grayscale or mostly black due to the vast emptiness of space. As "color" is in the name, the project is intended to enable finding colorful pictures, not specific hues of black or gray. To not waste computation time on these, I filtered out the non-colored pixels:

```rust
let gray_pixels: HashSet<Rgba<u8>> =
    img.grayscale().pixels().into_iter().map(|p| p.2).collect();
let all_pixels: Vec<Rgba<u8>> =
    img.pixels().into_iter().map(|p| p.2).collect();
let colored_pixels: Vec<Rgba<u8>> = all_pixels
    .into_iter()
    .filter(|p| !gray_pixels.contains(p))
    .collect();
```

Then, using a relative luminance function, only included the most luminous pixels:

```rust
let luminous_pixels: Vec<Rgba<u8>> = colored_pixels
    .into_iter()
    .filter(|p| get_luminance(p) > LUMINANCE_THRESHOLD) // cutoff constant not shown
    .collect();
```

Now we're left with a cleaner dataset to work on.

Generate frequency array

To get the most frequent colors of the image (the primary goal of this analysis), a frequency hash can be used: put simply, a string-to-number map of color values to how many times they occur. For easier typing, each pixel is converted from RGB to a String hex value:

```rust
use colorsys::Rgb;

pub fn generate_hex(pixel: Rgba<u8>) -> String {
    Rgb::from((pixel.0[0] as f64, pixel.0[1] as f64, pixel.0[2] as f64))
        .to_hex_string()
}

let hexes = input.into_iter().map(generate_hex).collect::<Vec<String>>();
```

We can then generate a BTreeMap of type hex string to frequency number by iterating over this list and incrementing when the same hex value is found. To optimize for performance, the function splits the image into chunks and spawns multiple threads to run in parallel before joining them together once complete. I experimented with different values for worker count to find the most optimal one.

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, Mutex};
use std::thread;

pub fn get_frequency(input: Vec<String>, worker_count: usize) -> BTreeMap<String, usize> {
    let result = Arc::new(Mutex::new(BTreeMap::<String, usize>::new()));
    input
        .chunks((input.len() as f64 / worker_count as f64).ceil() as usize)
        .map(|chunk| {
            let chunk = chunk.iter().map(String::from).collect::<Vec<String>>();
            let result = result.clone();
            thread::spawn(move || {
                chunk.iter().for_each(|h| {
                    result
                        .lock()
                        .unwrap()
                        .entry(h.to_string())
                        .and_modify(|e| *e += 1)
                        .or_insert(1);
                })
            })
        })
        .for_each(|handle| handle.join().unwrap());
    Arc::try_unwrap(result).unwrap().into_inner().unwrap()
}
```

Once the most frequent color values are found, similar ones can be grouped together; I refer to these as Clusters. If colors have R, G & B values within a certain threshold of each other, they are combined into the same Cluster. The threshold algorithm is a simple series of conditions that only checks the green value if the value for red is within the threshold, and so on:

```rust
pub fn within_threshold(a: &Rgba<u8>, b: &Rgba<u8>, color: usize, threshold: i32) -> bool {
    let color_a = a.0[color] as i32;
    let color_b = b.0[color] as i32;
    let mut min = 0;
    let mut max = 255;
    if color_b > threshold {
        min = color_b - threshold;
    }
    if color_b < 255 - threshold {
        max = color_b + threshold;
    }
    color_a > min && color_a < max
}

pub fn assign_clusters(
    input: Vec<(Rgba<u8>, usize)>,
    threshold: i32,
) -> HashMap<Rgba<u8>, usize> {
    let mut result = HashMap::<Rgba<u8>, usize>::new();
    for item in input {
        let matches: Vec<Rgba<u8>> = result
            .keys()
            .filter(|p| within_threshold(p, &item.0, 0, threshold))
            .cloned()
            .collect();
        // ... (rest of the matching logic in the full source)
    }
    result
}
```

The closest color-value matches are then added to the Cluster. Once the clusters are finalized, the last step here is to only return the most popular ones. This is done with a simple sort_by call:

```rust
let mut sorted_result = Vec::from_iter(result);
sorted_result.sort_by(|(_, a), (_, b)| b.partial_cmp(a).unwrap());
let size = std::cmp::min(num_clusters, sorted_result.len());
sorted_result[0..size].to_vec()
```

And now we have the most significant clusters. This makes it possible to search for a color and map to images that contain many pixels with that color.

One month at a time

Processing a single APOD is one thing, but the end goal is to process all of them. The cleanest way to group batches of days was by month: as the apod-api supports start_date and end_date parameters, I just used the first and last days of the month for these parameters.

Since I knew I'd be running this via the command line, I first checked the arguments provided (year and month) and whether they correlated to a valid date for an APOD. Since this maps from raw numbers to chrono Date objects, some serialization is needed:

```rust
let args: Vec<String> = env::args().collect();
let first_apod = Utc.ymd(1995, 6, 16); // the first APOD
let today = Utc::today();
let numbers: Vec<u32> = args.iter().flat_map(|x| x.parse()).collect();
let day = Utc.ymd(numbers[0] as i32, numbers[1], 1);
if day < first_apod || day > today {
    Err(format!(
        "Out of range: date must be between {} and {}",
        first_apod.format("%b %e, %Y"),
        today.format("%b %e, %Y")
    ))
}
```

Then, given that it's a valid month, we can iterate over each day. I added a fetch_month function to generate a list of Days for a month, given the first day as a chrono Date generated by the above arguments:

```rust
async fn fetch_month(first_day: chrono::Date<Utc>) -> Result<Vec<api::Day>, Box<dyn Error>> {
    let first_day_formatted = first_day.format("%Y-%m-%d").to_string();
    let today = Utc::today();
    // jump past the end of the month, snap to its first day, step back one day
    let mut last_day = (first_day + Duration::days(32)).with_day(1).unwrap() - Duration::days(1);
    if last_day > today {
        last_day = today;
    }
    let last_day_formatted = last_day.format("%Y-%m-%d").to_string();
    let apods = api::get_days(&first_day_formatted, &last_day_formatted).await?;
    Ok(apods)
}
```

After getting the data for an entire month, it's as simple as iterating over each day and processing it:

```rust
let apods = fetch_month(day).await?;
for apod in apods {
    process_apod(apod).await?;
}
```

I didn't include this in the code snippets, but things are saved via Postgrest along the way, most importantly Colors and Clusters, which are used to perform searches. Feel free to have a look at the full source to see these. Thanks for reading & stay tuned for the next part: using GitHub Actions as a free provider to run it in parallel, remotely! |
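The counting-and-grouping pipeline described above (hash pixels to hex, count frequencies, then merge colors whose channels all fall within a threshold) is language-agnostic. Here is a rough illustration of the same idea in plain JavaScript; this is not the article's code, and the names (`getFrequency`, `withinThreshold`, `assignClusters`) and the merge-into-first-match strategy are simplifications of my own:

```javascript
// Count how many times each hex color string occurs.
function getFrequency(hexes) {
  const freq = new Map();
  for (const hex of hexes) freq.set(hex, (freq.get(hex) ?? 0) + 1);
  return freq;
}

// Two colors are "close" when every channel (R, G, B) differs by at most `threshold`.
function withinThreshold(a, b, threshold) {
  return a.every((channel, i) => Math.abs(channel - b[i]) <= threshold);
}

// colorCounts: array of [rgbTriple, count]. Merge close colors into clusters,
// then return the clusters sorted by descending pixel count.
function assignClusters(colorCounts, threshold) {
  const clusters = [];
  for (const [color, count] of colorCounts) {
    const match = clusters.find((c) => withinThreshold(c.color, color, threshold));
    if (match) match.count += count;
    else clusters.push({ color, count });
  }
  return clusters.sort((a, b) => b.count - a.count);
}
```

For example, `assignClusters([[[10,10,10], 5], [[12,12,12], 3], [[200,0,0], 1]], 8)` merges the two near-black colors into one cluster of count 8 and leaves the red pixel as its own cluster.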
2023-02-23 17:45:19 |
海外TECH |
DEV Community |
Why are people developing inside containers? |
https://dev.to/github/porque-as-pessoas-estao-desenvolvendo-dentro-de-containers-lif
|
Why are people developing inside containers?

This article was written based on the article "Why are people developing inside containers?" by my coworker Rizèl Scarlett. If you consume content in English, I highly recommend following her.

Nine years ago, in March of 2013, Solomon Hykes and his co-founders revolutionized the way we develop software with an open-source platform called Docker. Although Docker's creators didn't invent containers, they popularized them. Thanks to Docker, engineers can build tools like GitHub Codespaces, which let us code in a development container hosted in the cloud.

I'll admit that, to this day, the topic of containers still raises questions for me, such as: why are people developing inside containers? What even are containers? If you have similar questions, this post is for you; let's learn together. In this post I'll explain what containers are, how they benefit development, and how to set up devcontainers in GitHub Codespaces.

(P.S. I'll be using the English word "containers" to make keyword searches easier, but the Portuguese word "contêineres" is also common; it's the one used in the pt-BR localization of the Kubernetes documentation. I learned that on Twitter, thanks Flávio!)

What does it mean to develop inside a container?

Raise your hand if you've ever said or thought "It works on my machine!" If you don't know this meme, know that the phrase didn't become a meme for nothing. We technologists catch ourselves saying it fairly often when working on a codebase where environments vary. Although working in varied environments isn't ideal, it happens. Situations like this occur when your local environment, your coworkers' local environments, staging, and production all have small differences. Because the environments have different configurations, even slightly different ones, bugs exist in one environment but not in another. It can be very embarrassing and frustrating to build a feature or fix a bug that works locally but doesn't work in production or staging.

And this is where containers come in: to solve the problem of inconsistent development environments. Containers let software engineers program in a consistent environment. Your programming environment can now mirror production, using the same operating system, configuration, and dependencies. This guarantees that bugs and features behave the same way in every environment, saving us from the embarrassment of having to say "It works on my machine!"

Now that we understand the purpose of containers, let's explore how Codespaces takes advantage of them.

GitHub Codespaces takes software and cloud development to the next level

GitHub Codespaces lets you code in a container hosted in the cloud. In this context, the cloud is an environment that doesn't live on your computer but on the internet.

Faster onboarding

Normally, programmers are responsible for setting up their local environment when they join a team. Local setup includes installing the necessary dependencies, linters, environment variables, and more. It's normal to spend up to a week configuring an environment, depending on the quality of the documentation. That experience is annoying, because when we start a new project we want to get coding right away; instead, we have to seed our database and edit .zshrc files, among other things. Fortunately, companies can automate the onboarding process by using GitHub Codespaces to set up a customized environment. When a new person joins the team, they can open a codespace and skip local setup, because the necessary extensions, dependencies, and environment variables already exist in the codespace.

Code from anywhere

And I'm not just talking about anywhere geographically, but from any machine. With Codespaces I can code anywhere that has internet access, on a notebook or tablet, mine or borrowed from a colleague. If I switch devices or forget my laptop at home, I can easily resume my work on an iPad on a plane, without cloning a repository, downloading my preferred IDE, and configuring a local environment. This is possible because Codespaces opens a Visual Studio Code-like editor inside the browser. The best part is that Codespaces can automatically save my code even if I forget to push my changes to my repository. Not that I've ever lost code by forgetting to save...

Consistent environments

As mentioned above, containers let you work in a mirrored production environment. Because GitHub Codespaces uses containers, you can get the same results and development experience in your environment that you'd get in production. Besides that, sometimes when changes land in the codebase, such as infrastructure improvements, local environments can break. When local environments break, it's up to the developer to restore their development environment. Using GitHub Codespaces containers brings uniformity to environments and reduces the chances of working in a broken one.

Three files you need to configure in Codespaces

You can take advantage of three files to make the Codespaces experience work for you and your team: the devcontainer.json file, the Dockerfile, and the docker-compose.yml file. Each of these files lives in the .devcontainer directory at the root of your repository.

devcontainer.json

The devcontainer.json file is a configuration file that tells GitHub Codespaces how to set up a codespace. Inside a devcontainer file you can configure:

- Extensions
- Environment variables
- Dockerfile
- Port forwarding
- Post-create commands
- And more...

This means that whenever you or someone else opens a codespace in the given repository, the extensions, environment variables, and other settings specified in the devcontainer.json file are installed automatically. For example, if I wanted people to have the same linter and extensions as me, I could add the following to my devcontainer.json:

```json
{
  "name": "Node.js",
  "build": {
    "dockerfile": "Dockerfile",
    // Update VARIANT to pick a Node version. Append -bullseye or -buster to pin to an OS version.
    // Use -bullseye variants on local arm64/Apple Silicon.
    "args": { "VARIANT": "bullseye" }
  },
  // Configure tool-specific properties.
  "customizations": {
    // Configure properties specific to VS Code.
    "vscode": {
      // Add the IDs of extensions you want installed when the container is created.
      "extensions": [
        "dbaeumer.vscode-eslint", // this is the extension id for eslint
        "esbenp.prettier-vscode", // this is the extension id for prettier
        "ms-vsliveshare.vsliveshare" // this is the extension id for live share
      ]
    }
  },
  // Use forwardPorts to make a list of ports inside the container available locally.
  "forwardPorts": [],
  // Use postCreateCommand to run commands after the container is created.
  "postCreateCommand": "yarn install",
  // Comment out to connect as root instead.
  "remoteUser": "node"
}
```

You can learn more about devcontainer.json here.

Dockerfile

The Dockerfile is a configuration file that tells GitHub Codespaces how to build a container. It contains a list of commands that the Docker client calls when building an image. Dockerfiles are used to automate the installation and configuration of a container. For example, if I wanted to install Node.js in a container, I could add the following to my Dockerfile:

```dockerfile
FROM node:bullseye
```

You can learn more about Dockerfiles here.

docker-compose.yml

You don't need a docker-compose.yml file in a codespace, but it's useful if you want to run multiple containers. For example, if you want to run a database and a web server in a codespace, you can use the docker-compose.yml file to run both containers. Here's an example of what a docker-compose.yml that connects to a database might look like:

```yaml
version: "3"
services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
      args:
        VARIANT: bullseye
        NODE_VERSION: "none"
    volumes:
      - ..:/workspace:cached
    command: sleep infinity
    network_mode: service:db
  db:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - postgres-data:/var/lib/postgresql/data
    hostname: postgres
    environment:
      POSTGRES_DB: my_media
      POSTGRES_USER: example
      POSTGRES_PASSWORD: pass
      POSTGRES_HOST_AUTH_METHOD: trust
volumes:
  postgres-data: null
```

Codespaces is not the same thing as GitHub's web editor

GitHub Codespaces is not the same as GitHub's web editor. The web editor is that magical editor that appears when you press "." in a repository. If you've never done it, go do it NOW; I'll wait... It's a lightweight editor that lets you edit files in your repository. The web editor is great for making small changes to a file, but it's not ideal for writing and running full-stack web applications, because GitHub's web editor doesn't have a terminal. Codespaces, however, lets you run a full IDE in the browser, equipped with a terminal and much more.

Review time

In this article we learned what a container is and why people use them. Yay! As a bonus, we also learned more about GitHub Codespaces. I hope that you, like me, learned something new today. Thanks for reading to the end, and follow GitHub Brasil on social media to stay on top of the news: GitHub Brasil on Twitter, GitHub Brasil on LinkedIn, GitHub Brasil on Twitch, and GitHub meetups in Portuguese. |
2023-02-23 17:43:14 |
海外TECH |
DEV Community |
Experiments with the JavaScript Garbage Collector |
https://dev.to/codux/experiments-with-the-javascript-garbage-collector-2ae3
|
Experiments with the JavaScript Garbage Collector

Memory leaks in web applications are widespread and notoriously difficult to debug. If we want to avoid them, it helps to understand how the garbage collector decides what objects can and cannot be collected. In this article we'll take a look at a few scenarios where its behavior might surprise you.

If you're unfamiliar with the basics of garbage collection, a good starting point would be "A Crash Course in Memory Management" by Lin Clark, or "Memory Management" on MDN. Consider reading one of those before continuing.

Detecting Object Disposal

Recently I've learned that JavaScript provides a class called FinalizationRegistry that allows you to programmatically detect when an object is garbage-collected. It's available in all major web browsers and Node.js. A basic usage example:

```js
const registry = new FinalizationRegistry((message) => console.log(message));

function example() {
  const x = {};
  registry.register(x, 'x has been collected');
}

example();
// Some time later: "x has been collected"
```

When the example function returns, the object referenced by x is no longer reachable and can be disposed of. Most likely, though, it won't be disposed immediately: the engine can decide to handle more important tasks first, or to wait for more objects to become unreachable and then dispose of them in bulk. But you can force garbage collection by clicking the little trash icon in the DevTools ➵ Memory tab. Node.js doesn't have a trash icon, but it provides a global gc() function when launched with the --expose-gc flag.

With FinalizationRegistry in my bag of tools, I decided to examine a few scenarios where I wasn't sure how the garbage collector was going to behave. I encourage you to look at the examples below and make your own predictions about how they're going to behave.

Example 1: Nested Objects

```js
const registry = new FinalizationRegistry((message) => console.log(message));

function example() {
  const x = {};
  const y = {};
  const z = { x, y };
  registry.register(x, 'x has been collected');
  registry.register(y, 'y has been collected');
  registry.register(z, 'z has been collected');
  globalThis.temp = x;
}

example();
```

Here, even though the variable x no longer exists after the example function has returned, the object referenced by x is still being held by the globalThis.temp variable. z and y, on the other hand, can no longer be reached from the global object or the execution stack, and will be collected. If we now run globalThis.temp = undefined, the object previously known as x will be collected as well. No surprises here.

Example 2: Closures

```js
const registry = new FinalizationRegistry((message) => console.log(message));

function example() {
  const x = {};
  const y = {};
  const z = { x, y };
  registry.register(x, 'x has been collected');
  registry.register(y, 'y has been collected');
  registry.register(z, 'z has been collected');
  globalThis.temp = () => z.x;
}

example();
```

In this example, we can still reach x by calling globalThis.temp(). We can no longer reach z or y. But what's this? Despite no longer being reachable, z and y are not getting collected!

A possible theory is that since z.x is a property lookup, the engine doesn't really know whether it can replace the lookup with a direct reference to x. For example, what if x is a getter? So the engine is forced to keep the reference to z, and consequently to y. To test this theory, let's modify the example: globalThis.temp = () => { z; }. Now there's clearly no way to reach z, but it's still not getting collected.

What I think is happening is that the garbage collector only pays attention to the fact that z is in the lexical scope of the closure assigned to temp, and doesn't look any further than that. Traversing the entire object graph and marking objects that are still alive is a performance-critical operation that needs to be fast. Even though the garbage collector could theoretically figure out that z is not used, that would be expensive, and not particularly useful, since your code doesn't typically contain variables that are just chilling in there.

Example 3: Eval

```js
const registry = new FinalizationRegistry((message) => console.log(message));

function example() {
  const x = {};
  registry.register(x, 'x has been collected');
  globalThis.temp = (string) => eval(string);
}

example();
```

Here we can still reach x from the global scope by calling temp('x'). The engine cannot safely collect any objects within the lexical scope of eval, and it doesn't even try to analyze what arguments the eval receives. Even something innocent like globalThis.temp = () => eval(1) would prevent garbage collection.

What if eval is hiding behind an alias, e.g. globalThis.exec = eval? Or what if it's used without ever being mentioned explicitly? E.g. console.log.constructor('alert()')() opens an alert box. Does this mean that every function call is a suspect, and nothing ever can be safely collected? Fortunately, no. JavaScript makes a distinction between direct and indirect eval. Only when you directly call eval(string) will it execute the code in the current lexical scope. But anything even a tiny bit less direct, such as (0, eval)(string), will execute the code in the global scope, and it won't have access to the enclosing function's variables.

Example 4: DOM Elements

```js
const registry = new FinalizationRegistry((message) => console.log(message));

function example() {
  const x = document.createElement('div');
  const y = document.createElement('div');
  const z = document.createElement('div');
  z.append(x);
  z.append(y);
  registry.register(x, 'x has been collected');
  registry.register(y, 'y has been collected');
  registry.register(z, 'z has been collected');
  globalThis.temp = x;
}

example();
```

This example is somewhat similar to the first one, but it uses DOM elements instead of plain objects. Unlike plain objects, DOM elements have links to their parents and siblings: you can reach z through temp.parentElement, and y through temp.nextSibling. So all three elements will stay alive. Now, if we execute temp.remove(), y and z will be collected, because x has been detached from its parent. But x will not be collected, because it's still referenced by temp.

Example 5: Promises

Warning: this example is a more complex one, showcasing a scenario involving asynchronous operations and promises. Feel free to skip it and jump to the summary below.

What happens to promises that are never resolved or rejected? Do they keep floating in memory with the entire chain of .then's attached to them? As a realistic example, here's a common anti-pattern in React projects:

```jsx
function MyComponent() {
  const isMounted = useIsMounted();
  const [status, setStatus] = useState();

  useEffect(async () => {
    await asyncOperation();
    if (isMounted()) {
      setStatus('Great success');
    }
  });

  return <div>{status}</div>;
}
```

If asyncOperation never settles, what's going to happen to the effect function? Will it keep waiting for the promise even after the component has unmounted? Will it keep isMounted and setStatus alive? Let's reduce this example to a more basic form that doesn't require React:

```js
const registry = new FinalizationRegistry((message) => console.log(message));

function asyncOperation() {
  return new Promise((resolve, reject) => {
    // never settles
  });
}

function example() {
  const x = {};
  registry.register(x, 'x has been collected');
  asyncOperation().then(() => console.log(x));
}

example();
```

Previously we saw that the garbage collector doesn't try to perform any kind of sophisticated analysis, and merely follows pointers from object to object to determine their liveness. So it might come as a surprise that in this case x is going to be collected!

Let's take a look at how this example might look when something is still holding a reference to the promise's resolve. In a real-world scenario this could be setTimeout or fetch:

```js
const registry = new FinalizationRegistry((message) => console.log(message));

function asyncOperation() {
  return new Promise((resolve) => {
    globalThis.temp = resolve;
  });
}

function example() {
  const x = {};
  registry.register(x, 'x has been collected');
  asyncOperation().then(() => console.log(x));
}

example();
```

Here globalThis keeps temp alive, which keeps resolve alive, which keeps the then-callback alive, which keeps x alive. As soon as we execute globalThis.temp = undefined, x can be collected. By the way, saving a reference to the promise itself wouldn't prevent x from being collected.

Going back to the React example: if something is still holding a reference to the promise's resolve, the effect and everything in its lexical scope will stay alive even after the component has unmounted. It will be collected when the promise settles, or when the garbage collector can no longer trace the path to the resolve and reject of the promise.

In conclusion

In this article we've taken a look at FinalizationRegistry and how it can be used to detect when objects are collected. We also saw that sometimes the garbage collector is unable to reclaim memory even when it would be safe to do so, which is why it's helpful to be aware of what it can and cannot do.

It's worth noting that different JavaScript engines, and even different versions of the same engine, can have wildly different implementations of a garbage collector, and externally observable differences between them. In fact, the ECMAScript specification doesn't even require implementations to have a garbage collector, let alone prescribe a certain behavior. However, all of the examples above were verified to work the same in V8 (Chrome), JavaScriptCore (Safari), and Gecko (Firefox). |
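The direct-versus-indirect eval distinction discussed above is easy to check for yourself. A small sketch (assuming no global variable named `secret` exists):

```javascript
function example() {
  const secret = 42;
  // Direct eval runs in the current lexical scope, so it can see `secret`.
  const direct = eval('typeof secret');
  // Any indirect form, such as the comma-operator trick below,
  // runs in the global scope and cannot see `secret`.
  const indirect = (0, eval)('typeof secret');
  return { direct, indirect };
}
```

Calling `example()` should return `{ direct: 'number', indirect: 'undefined' }`, which is exactly why only direct eval forces the engine to keep the enclosing scope alive.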
2023-02-23 17:39:47 |
海外TECH |
DEV Community |
Junior Developers NEED to Hear This! |
https://dev.to/mikehtmlallthethings/junior-developers-need-to-hear-this-5bm8
|
Junior Developers NEED to Hear This!

What is HTML All The Things?
HTML All The Things is a web development podcast and Discord community started by Matt and Mike, developers based in Ontario, Canada. The podcast speaks to web development topics as well as running a small business, self-employment, and time management. You can join them for both their successes and their struggles as they try to manage expanding their web development business without stretching themselves too thin.

What's This One About?
Tech layoffs are in full swing right now, with companies shrinking their teams for a variety of reasons. This is a stark contrast to the hiring spree that we experienced and grew used to during the chaos that was the COVID pandemic. Does this mean that junior developers should pack up and find work elsewhere? Should people who are still learning web development leave the field entirely?

Show Notes

Tough Times in Tech
- Tech pricing disparity may cause a loss in tech dominance (i.e. you can get some plumbing done for relatively little, but a simple app could cost many thousands to make).
- Many layoffs from major tech giants, including Microsoft, Meta, Google, Amazon, Salesforce, Dell, Cisco, and more.
- AI may be coming for developer jobs faster than we thought possible (i.e. ChatGPT is a major advancement into the uncanny valley that largely came out of nowhere).

Where to Start
- Keep learning and improving at your fundamentals: HTML, CSS, JS; React, TypeScript, etc.
- Build projects, and dabble into full stack if you haven't already.
- Practice leet code.
- Practice interview etiquette and Q&A.

How to Differentiate
- Build a small SaaS, whether it's actually functioning as a business or just a practice one, to show off your skills.
- Contribute to open source projects.
- Find a community and get involved; you may be able to skip the entire job interview process if you meet someone and they need help.
- Don't overwork yourself to stand out; work within reason. Some overtime is fine, but working until burnout is no good.

Thank you!
If you're enjoying the podcast, consider giving us a review on Apple Podcasts or checking out our Patreon to get a shoutout on the podcast. Support us on Patreon. You can find us on all the podcast platforms out there, as well as on Instagram (htmlallthethings), Twitter (htmleverything), and TikTok. |
2023-02-23 17:37:29 |
海外TECH |
DEV Community |
Spring Security and Non-flat Roles Inheritance Architecture |
https://dev.to/kirekov/spring-security-and-non-flat-roles-inheritance-architecture-2a7b
|
Spring Security and Non flat Roles Inheritance Architecture

Table of contents:
- Business requirements and domain model
- Roles enums and inheritance
- Unit testing roles inheritance
- Defining JPA entities
- Creating custom Authentication implementation
- Why does getAuthorities return empty set
- UserId and volatile authenticated flag
- Creating custom AuthenticationProvider
- Defining Spring Security config
- Declaring REST API methods
- Creating custom role checking service
- Combining PreAuthorize and custom role checking service
- Short and elegant enum references in SpEL expressions
- Integration testing and validating security

When it comes to authorization, roles always come into play. Flat ones are straightforward: a user possesses a set of privileges that you simply check for the required element. But what if a user can have different roles for specific entities? For example, I can be an editor of one particular social media post but only a viewer of another one. Also, there might be inheritance rules: if an admin grants me the editor role, I automatically become a viewer. What's the approach to handling this scenario while keeping the code clean and maintainable? Don't worry, I'm giving you the answers. In this article I'm telling you: what's the best approach to handle roles inheritance in Java, how to test the stated hierarchy, and how to apply the solution in Spring Security and Spring Data JPA. You can find the code example and the entire project setup in this repository.

Business requirements and domain model

Supposing we're developing a simple social media platform, look at the diagram below, where I described the business entities we're working with in this article. There are three core entities: User is the one who reads existing posts and creates new ones. Community is the feed to submit new posts. Post is an individual piece of media; some users can view it, edit it, and delete it. We also have a role model, which is a bit more complicated than a plain privileges assignment. For example, a user can have an EDITOR role for the particular
post but has only a VIEWER role for another one. Or a user may be an ADMIN for the "Cats and dogs" community but just a MODERATOR for the "Motorcycles and Heavy metal" one. The roles provide these privileges:
- CommunityRole ADMIN gives ultimate access to the community and the containing posts.
- CommunityRole MODERATOR provides the ability to add new posts and remove old ones.
- PostRole EDITOR allows editing the content of the particular post.
- PostRole REPORTER grants the right to report inappropriate behavior in the comments.
- PostRole VIEWER grants access to view the post and leave comments.

However, business also wants an inheritance model. For example, if I have a root role, it means I can also perform the actions that any child role provides. Look at the schema below to understand the approach. Suppose that I'm a MODERATOR in some community. That means I'm an EDITOR, REPORTER, and VIEWER as well for any post in the community. On the contrary, if somebody granted me the REPORTER role for a post, it doesn't mean I have the rights to edit it (I need the EDITOR role for that) or to add new posts (the MODERATOR role provides access for that). That's a convenient approach: you don't have to check a set of many roles for every possible operation, you just have to validate the presence of the lowest authority. If I'm an ADMIN in the community, then I'm also a VIEWER for any post there. So, due to the inheritance model, you only have to check for VIEWER and that's it. Anyway, it doesn't seem like a simple task to implement in code. Besides, the VIEWER role has two parents, the REPORTER and the EDITOR, and Java doesn't allow multiple inheritance. Meaning that we need a special approach.

Roles enums and inheritance

Roles are the perfect candidates for enums. Look at the code snippet below:

public enum CommunityRoleType { ADMIN, MODERATOR }

public enum PostRoleType { VIEWER, EDITOR, REPORTER }

How can we build an inheritance model with these simple enums? At first we need an interface to bind the roles together. Look at the
declaration below public interface Role boolean includes Role role The Role interface will be the root of any CommunityRoleType and PostRoleType value as well The includes method tells us whether the supplied role equals to the current one or contains it in its children Look at the modified PostRoleType code below public enum PostRoleType implements Role VIEWER EDITOR REPORTER private final Set lt Role gt children new HashSet lt gt static REPORTER children add VIEWER EDITOR children add VIEWER Override public boolean includes Role role return this equals role children stream anyMatch r gt r includes role We store the children of the particular role in the regular Java HashSet as private final field What intriguing is how these children appear By default the set is empty for every enum value But the static initializer block comes into play You can treat it as the two phase constructor On this block we just need to assign proper children to the required parents The includes method is also rather simple If the passed role equals to the current one return true Otherwise perform the check recursively for every present child node An approach for the CommunityRoleType is similar Look at the code example below public enum CommunityRoleType implements Role ADMIN MODERATOR private final Set lt Role gt children new HashSet lt gt static ADMIN children add MODERATOR MODERATOR children addAll List of PostRoleType EDITOR PostRoleType REPORTER Override public boolean includes Role role return this equals role children stream anyMatch r gt r includes role As you can see the MODERATOR role has two children the PostRoleType EDITOR and the PostRoleType REPORTER Due to the fact that both CommunityRoleType and the PostRoleType share the same interface they can all be part of the one inheritance hierarchy A slight detail remains We need to know the root of the hierarchy to perform the access validation The easiest way is just declaring a static method that returns the required nodes Look 
at the updated Role interface definition below public interface Role boolean includes Role role static Set lt Role gt roots return Set of CommunityRoleType ADMIN I return the Set lt Role gt instead of Role because theoretically there might be several roots So it s unnecessary to restrict the roots count to on the method signature Some of you may ask why not just using Spring Security Role Hierarchy component After all it s an out of box solution It suites well for the plain role model which is not the case in our context I ll reference this point again later in the article Unit testing roles inheritanceLet s test our role hierarchy Firstly we need to check that there are no cycles which can cause StackOverflowError Look at the test below Testvoid shouldNotThrowStackOverflowException final var roots Role roots final var existingRoles Stream concat stream PostRoleType values stream CommunityRoleType values toList assertDoesNotThrow gt for Role root roots for var roleToCheck existingRoles root includes roleToCheck The idea is checking all roots and all existing roles for includes combinations None of them should throw any exception Next move is validating the inheritance Here are the test cases The CommunityRoleType ADMIN should include any other role and the CommunityRoleType ADMIN itself The CommunityRoleType MODERATOR should include the PostRoleType EDITOR PostRoleType REPORTER PostRoleType VIEWER and the CommunityRoleType MODERATORThe PostRoleType VIEWER should not include the PostRoleType REPORTER The CommunityRoleType MODERATOR should not include the CommunityRoleType ADMIN Look at the code example below to see the described test suites ParameterizedTest MethodSource provideArgs void shouldIncludeOrNotTheGivenRoles Role root Set lt Role gt rolesToCheck boolean shouldInclude for Role role rolesToCheck assertEquals shouldInclude root includes role private static Stream lt Arguments gt provideArgs return Stream of arguments CommunityRoleType ADMIN Stream concat 
stream PostRoleType values stream CommunityRoleType values collect Collectors toSet true arguments CommunityRoleType MODERATOR Set of PostRoleType EDITOR PostRoleType VIEWER PostRoleType REPORTER CommunityRoleType MODERATOR true arguments PostRoleType VIEWER Set of PostRoleType REPORTER false arguments CommunityRoleType MODERATOR Set of CommunityRoleType ADMIN false There are much more cases to cover But I omit them for brevity Defining JPA entitiesYou can find all the JPA entities declarations here and the corresponding Flyway migrations here Anyway I m showing you the core artefacts of the system Look at the PostRole and the CommunityRole JPA entities declaration below Entity Table name community role public class CommunityRole Id GeneratedValue strategy IDENTITY private Long id ManyToOne fetch LAZY JoinColumn name user id private User user ManyToOne fetch LAZY JoinColumn name community id private Community community Enumerated STRING private CommunityRoleType type Entity Table name post role public class PostRole Id GeneratedValue strategy IDENTITY private Long id ManyToOne fetch LAZY JoinColumn name user id private User user ManyToOne fetch LAZY JoinColumn name post id private Post post Enumerated STRING private PostRoleType type As I ve already pointed out the CommunityRole binds to the User and the particular Community whilst the PostRole binds to the User as well and the specific Post Therefore the role model structure is not flat It brings us some complexities with Spring Security But don t worry I ll show you how to nail them Because of the vertical role model the Spring Security Role Hierarchy is not going to work We need a more complicated approach So let s move forward Look at the required SQL migrations set I m using PostgreSQL below CREATE TABLE community role id BIGSERIAL PRIMARY KEY user id BIGINT REFERENCES users id NOT NULL community id BIGINT REFERENCES community id NOT NULL type VARCHAR NOT NULL UNIQUE user id community id type CREATE TABLE post 
role id BIGSERIAL PRIMARY KEY user id BIGINT REFERENCES users id NOT NULL post id BIGINT REFERENCES post id NOT NULL type VARCHAR NOT NULL UNIQUE user id post id type

Creating custom Authentication implementation

To begin with, we need to create a custom Authentication interface implementation. Look at the code snippet below:

@RequiredArgsConstructor
public class PlainAuthentication implements Authentication {
    private final Long userId;
    private volatile boolean authenticated = true;

    @Override
    public Collection<? extends GrantedAuthority> getAuthorities() {
        return emptySet();
    }

    @Override
    public Long getPrincipal() {
        return userId;
    }

    @Override
    public String getName() {
        return "";
    }

    @Override
    public boolean isAuthenticated() {
        return authenticated;
    }

    @Override
    public void setAuthenticated(boolean isAuthenticated) throws IllegalArgumentException {
        authenticated = isAuthenticated;
    }

    @Override
    public Object getCredentials() {
        return "";
    }

    @Override
    public Object getDetails() {
        return null;
    }
}

Why does getAuthorities return empty set

The first thing to notice is that the getAuthorities method always returns an empty collection. Why is that? The Spring Security role model is flat, but we deal with a vertical one. If you want to put the community and the post roles belonging to the user into authorities, they may look like this: CommunityRole ADMIN PostRole VIEWER. There are three parts:
- the type of the role
- the role value
- the id of the community or the post that the role references

In such a setup, the role checking mechanism also becomes cumbersome. Look at the PreAuthorize annotation usage example below: PreAuthorize hasAuthority PostRole VIEWER postId GetMapping api post postId public PostResponse getPost PathVariable Long postId In my view, this code smells awful. Not only are there stringly typed values that may cause hard-to-track bugs, we also lose the inheritance idea. Do you remember that the CommunityRole ADMIN also includes the PostRole VIEWER? But here we check only one particular authority: Spring Security just calls the Collection contains method. Meaning that the Authentication getAuthorities method has to contain all the children's roles as well. Therefore, you have to perform these steps:
1. Select all the community roles (and the post roles as well) that the user possesses.
2. Loop through each role down to its children and put each occurring node into another collection.
3. Eliminate the duplicates (or just use a HashSet) and return the result as the user authorities.

Not to mention the code complexity, there are also performance drawbacks. Every request comes with selecting all the user's roles from the database. But what if the user is an admin of the community? It's unnecessary to query post roles because they already have ultimate access. But you have to do it to make the PreAuthorize annotation work as expected. I think it's clear now why I return an empty collection of authorities. Later I'm explaining how to deal with that properly.

UserId and volatile authenticated flag

Look at the PlainAuthentication below. I leave only the getPrincipal and isAuthenticated/setAuthenticated methods to discuss:

@RequiredArgsConstructor
public class PlainAuthentication implements Authentication {
    private final Long userId;
    private volatile boolean authenticated = true;

    @Override
    public Long getPrincipal() {
        return userId;
    }

    @Override
    public boolean isAuthenticated() {
        return authenticated;
    }

    @Override
    public void setAuthenticated(boolean isAuthenticated) throws IllegalArgumentException {
        authenticated = isAuthenticated;
    }
}

The userId points to the database row containing information about the user. We'll use it later to retrieve the roles. The isAuthenticated/setAuthenticated methods are part of the Spring Security contract, so we have to implement them properly. I put the volatile marker because the PlainAuthentication object is mutable and multiple threads can access it. Better safe than sorry. The getPrincipal method inherits from the Authentication interface and returns Object. However, Java allows a covariant return type that narrows the declared one if you extend the base class or implement the interface. Therefore make
your code more secure and maintainable The Authentication interface dictates to implement other methods I haven t spoken about Those are getName getCredentials and getDetails We use none of them later Therefore it s OK to return default values Creating custom AuthenticationProviderIn the beginning we should provide custom AuthenticationProvider to resolve the PlainAuthentication declared previously from the user input Look at the code snippet below Component RequiredArgsConstructorclass DbAuthenticationProvider implements AuthenticationProvider private final UserRepository userRepository Override public Authentication authenticate Authentication authentication throws AuthenticationException final var password authentication getCredentials toString if password equals password throw new AuthenticationServiceException Invalid username or password return userRepository findByName authentication getName map user gt new PlainAuthentication user getId orElseThrow gt new AuthenticationServiceException Invalid username or password Override public boolean supports Class lt gt authentication return UsernamePasswordAuthenticationToken class equals authentication Of course it s not a real production implementation For the sake of simplicity all users have the same password If the user is present by the provided name return its id as a PlainAuthentication wrapper Otherwise throw AuthenticationServiceException that will transform to the status code afterwards Defining Spring Security configFinally time to add the Spring Security config Here I m using basic access authentication Anyway the role checking patterns I m describing later remain the same for different authentication mechanisms Look at the code example below Configuration EnableWebSecurity EnableMethodSecurity RequiredArgsConstructorpublic class SecurityConfig Bean SneakyThrows public SecurityFilterChain securityFilterChain HttpSecurity http return http csrf disable cors disable authorizeHttpRequests customizer gt 
customizer anyRequest authenticated httpBasic authenticationEntryPoint request response authException gt response sendError and build Declaring REST API methodsThe described domain can provide lots of possible operations Though I m going to list just of them That is sufficient to cover the whole role checking idea of mine Those are Create new community Create post for the given community Update the post by id Get post by id Look at the code snippet below RestController RequestMapping api public class Controller PostMapping community PreAuthorize isAuthenticated public CommunityResponse createCommunity RequestParam String name PostMapping community communityId post Must have CommunityRoleType MODERATOR role public PostResponse createPost PathVariable Long communityId RequestParam String name PutMapping post postId Must have PostRoleType EDITOR public void updatePost PathVariable Long postId RequestParam String name GetMapping post postId Must have PostRoleType VIEWER role public PostResponse getPost PathVariable Long postId As you can see every user can create a new community the creator automatically becomes an admin of the entity as well However I haven t implemented the required checks on the endpoints yet At first we need the role checking service Creating custom role checking serviceLook at the blueprint of the RoleService below Service RoleService public class RoleService public boolean hasAnyRoleByCommunityId Long communityId Role roles public boolean hasAnyRoleByPostId Long postId Role roles I set the bean name inside the Service annotation manually Soon I ll explain the idea to you There are only two methods The first one checks the required role presence by the communityId and the second one by the postId Look at the hasAnyRoleByCommunityId implementation below The other method has the same idea and you can check out the whole class by this link Service RoleService RequiredArgsConstructorpublic class RoleService private final CommunityRoleRepository 
communityRoleRepository private final PostRoleRepository postRoleRepository Transactional public boolean hasAnyRoleByCommunityId Long communityId Role roles final Long userId PlainAuthentication SecurityContextHolder getContext getAuthentication getPrincipal final Set lt CommunityRoleType gt communityRoleTypes communityRoleRepository findRoleTypesByUserIdAndCommunityId userId communityId for Role role roles if communityRoleTypes stream anyMatch communityRoleType gt communityRoleType includes role return true final Set lt PostRoleType gt postRoleTypes postRoleRepository findRoleTypesByUserIdAndCommunityId userId communityId for Role role roles if postRoleTypes stream anyMatch postRoleType gt postRoleType includes role return true return false

Here is the algorithm:
1. Get the current user Authentication by calling SecurityContextHolder getContext getAuthentication and cast it to the PlainAuthentication, because that's the only type our application works with.
2. Find all community roles by the userId and the communityId. If any of the passed roles (according to the role inheritance model) are present in the set of the found community roles, return true. Otherwise, go to the next step.
3. Find all post roles by the userId and the communityId. If any of the passed roles (according to the role inheritance model) are present in the set of the found post roles, return true. Otherwise, return false.

I also want to point out the performance benefits of the solution I'm proposing to you. The classic approach of retrieving all user authorities by calling the Authentication getAuthorities method requires us to pull every CommunityRole and each PostRole the user has from the database at once (note that the user might be a member of dozens of communities and possess hundreds of roles). But we do it much more efficiently:
1. Pull the community roles only for the provided combination of userId, communityId.
2. Pull the post roles only for the provided combination of userId, communityId.
3. If the first step succeeds, the
method returns true and doesn t perform another database query Combining PreAuthorize and custom role checking serviceFinally we set almost everything and we re ready to apply the role checking mechanism at the endpoints Why almost Look at the code snippet below and try to notice an improvement slot RestController RequestMapping api public class Controller PostMapping community PreAuthorize isAuthenticated public CommunityResponse createCommunity RequestParam String name PostMapping community communityId post PreAuthorize RoleService hasAnyRoleByCommunityId communityId T com example demo domain CommunityRoleType MODERATOR public PostResponse createPost PathVariable Long communityId RequestParam String name PutMapping post postId PreAuthorize RoleService hasAnyRoleByPostId postId T com example demo domain PostRoleType EDITOR public void updatePost PathVariable Long postId RequestParam String name GetMapping post postId PreAuthorize RoleService hasAnyRoleByPostId postId T com example demo domain PostRoleType VIEWER public PostResponse getPost PathVariable Long postId The RoleService is the reference to the RoleService Spring bean by its name Then I call the specific role checking method The postId and the communityId are the method arguments and we have to prefix their usage with hash The last parameters are varargs of the required roles Because the Role interface implementations are enums we can reference the values by their fully qualified names As you have already guessed this statement T com example demo domain CommunityRoleType MODERATOR has problems It makes code harder to read and provokes copy paste development If you change the package name of any enum role then you have to update the dependent API methods accordingly Short and elegant enum references in SpEL expressionsThankfully there is more concise solution Look at the fixed CommunityRoleType definition below public enum CommunityRoleType implements Role ADMIN MODERATOR Component CommunityRole Getter 
static class SpringComponent private final CommunityRoleType ADMIN CommunityRoleType ADMIN private final CommunityRoleType MODERATOR CommunityRoleType MODERATOR You just need to create another Spring bean that encapsulates the enum values as fields Then you can reference them the same way we did with the RoleService The PostRoleType enhancing is similar You can check out the code by this link Let s refactor the API methods a bit and see the final result Look at the fixed Controller below RestController RequestMapping api public class Controller PostMapping community PreAuthorize isAuthenticated public CommunityResponse createCommunity RequestParam String name PostMapping community communityId post PreAuthorize RoleService hasAnyRoleByCommunityId communityId CommunityRole ADMIN public PostResponse createPost PathVariable Long communityId RequestParam String name PutMapping post postId PreAuthorize RoleService hasAnyRoleByPostId postId PostRole EDITOR public void updatePost PathVariable Long postId RequestParam String name GetMapping post postId PreAuthorize RoleService hasAnyRoleByPostId postId PostRole VIEWER public PostResponse getPost PathVariable Long postId Much more elegant solution don t you think The role checking is declarative now and even non technical folks can understand it maybe you wish to generate documentation about endpoints restrictions rules Integration testing and validating securityIf there are no tests you cannot be sure that your code is working at all So let s write some I m going to verify these cases If a user is not authenticated creating a new community request should return If a user is authenticated they should create a new community and new post inside it successfully If a user is authenticated but he has no rights to view the post the request should return I m using Testcontainers to start PostgreSQL during tests The explaining of its setup is out of the scope of this article Anyway you can check out the whole test suite by this link 
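Before wiring up Testcontainers, the inheritance rules themselves can be smoke-tested in plain Java with no Spring context at all. Below is a hedged, self-contained sketch: the Role interface and enum bodies mirror the article's, while the wrapping RoleHierarchyDemo class and its main method are my own scaffolding for illustration.

```java
import java.util.HashSet;
import java.util.Set;

public class RoleHierarchyDemo {
    // Mirrors the article's Role interface: a role "includes" itself and all descendants.
    interface Role {
        boolean includes(Role role);
    }

    enum PostRoleType implements Role {
        VIEWER, EDITOR, REPORTER;

        private final Set<Role> children = new HashSet<>();

        static {
            // VIEWER has two parents, which plain Java inheritance cannot express.
            REPORTER.children.add(VIEWER);
            EDITOR.children.add(VIEWER);
        }

        @Override
        public boolean includes(Role role) {
            return this.equals(role) || children.stream().anyMatch(r -> r.includes(role));
        }
    }

    enum CommunityRoleType implements Role {
        ADMIN, MODERATOR;

        private final Set<Role> children = new HashSet<>();

        static {
            ADMIN.children.add(MODERATOR);
            // Cross-enum edges are what make the hierarchy non-flat.
            MODERATOR.children.add(PostRoleType.EDITOR);
            MODERATOR.children.add(PostRoleType.REPORTER);
        }

        @Override
        public boolean includes(Role role) {
            return this.equals(role) || children.stream().anyMatch(r -> r.includes(role));
        }
    }

    public static void main(String[] args) {
        // ADMIN transitively reaches every role in the hierarchy.
        assert CommunityRoleType.ADMIN.includes(PostRoleType.VIEWER);
        // Inheritance is one-way: a child never includes its parent.
        assert !PostRoleType.VIEWER.includes(PostRoleType.REPORTER);
        assert !CommunityRoleType.MODERATOR.includes(CommunityRoleType.ADMIN);
        System.out.println("hierarchy ok");
    }
}
```

Run with java -ea to enable the assertions; the same checks are what the article's parameterized unit tests verify through JUnit.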
Look at the unauthorized request checking below Testvoid shouldReturnIfUnauthorizedUserTryingToCreateCommunity userRepository save User newUser john final var communityCreatedResponse rest postForEntity api community name name null CommunityResponse class Map of name community name assertEquals UNAUTHORIZED communityCreatedResponse getStatusCode The test works as expected Nothing complicated here Let s move forward to checking community and post successful creation Look at the code snippet down below Testvoid shouldCreateCommunityAndPostSuccessfully userRepository save User newUser john final var communityCreatedResponse rest withBasicAuth john password postForEntity api community name name null CommunityResponse class Map of name community name assertTrue communityCreatedResponse getStatusCode isxxSuccessful final var communityId communityCreatedResponse getBody id final var postCreatedResponse rest withBasicAuth john password postForEntity api community communityId post name name null PostResponse class Map of communityId communityId name post name assertTrue postCreatedResponse getStatusCode isxxSuccessful The john user creates a new community and then adds a new post for it Again everything works smoothly Let s get to the final case If user has no PostRoleType VIEWER for the particular post the request of getting it should return Look at the code block below Testvoid shouldReturnIfUserHasNoAccessToViewThePost userRepository save User newUser john userRepository save User newUser bob john creates new community and post inside it final var postViewResponse rest withBasicAuth bob password getForEntity api post postId PostResponse class Map of postId postId assertEquals FORBIDDEN postViewResponse getStatusCode The creation of community and post is the same as in the previous test So I m omitting these parts to focus on important details There are two pre configured users john and bob The john user creates a new community and post whilst bob tries to get the post by 
its id. As long as he doesn't possess the required privileges, the server returns the 403 status code. Look at the result of the test run below. And the final check comes: let's run all tests at once to validate that their behavior is deterministic. Everything works like a charm. Splendid!

Conclusion

As you can see, Spring Security plays well with a complex role hierarchy and inheritance model. Just a few patterns and your code shines. That's all I wanted to tell you about applying a non-flat role model with Spring Security. If you have any questions or suggestions, please leave your comments down below. If the content of the article was useful, press the like button and share the link with your friends and colleagues. Thank you very much for reading this long piece!

Resources:
- The entire project setup
- Java static initializer block
- Spring Security Role Hierarchy interface
- Testcontainers
- Basic access authentication
- Authorization vs authentication
- The polytree data structure
- StackOverflowError
- Guide to the Volatile Keyword in Java
- Flyway migration tool |
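As an addendum, the two-step short-circuit lookup from the role checking service section can also be sketched in plain Java. Everything here is hypothetical scaffolding for illustration: the Maps stand in for the article's Spring Data repositories, the single Role enum collapses both role enums, and the IMPLIES table is a pre-flattened version of the recursive includes check. The point is only to show that a community-level hit avoids the second, post-level query entirely.

```java
import java.util.Map;
import java.util.Set;

public class ShortCircuitRoleCheck {
    enum Role { ADMIN, MODERATOR, EDITOR, REPORTER, VIEWER }

    // Pre-flattened inheritance: what each granted role implies (itself included),
    // matching the article's hierarchy diagram.
    static final Map<Role, Set<Role>> IMPLIES = Map.of(
        Role.ADMIN,     Set.of(Role.ADMIN, Role.MODERATOR, Role.EDITOR, Role.REPORTER, Role.VIEWER),
        Role.MODERATOR, Set.of(Role.MODERATOR, Role.EDITOR, Role.REPORTER, Role.VIEWER),
        Role.EDITOR,    Set.of(Role.EDITOR, Role.VIEWER),
        Role.REPORTER,  Set.of(Role.REPORTER, Role.VIEWER),
        Role.VIEWER,    Set.of(Role.VIEWER));

    // Hypothetical in-memory stand-ins for the two repositories; keys are "userId:communityId".
    final Map<String, Set<Role>> communityRoles;
    final Map<String, Set<Role>> postRolesByCommunity;
    int queries = 0; // counts simulated database round-trips

    ShortCircuitRoleCheck(Map<String, Set<Role>> community, Map<String, Set<Role>> post) {
        this.communityRoles = community;
        this.postRolesByCommunity = post;
    }

    boolean hasAnyRoleByCommunityId(long userId, long communityId, Role... required) {
        String key = userId + ":" + communityId;
        queries++; // first query: community-level roles only
        Set<Role> granted = communityRoles.getOrDefault(key, Set.of());
        for (Role req : required)
            for (Role g : granted)
                if (IMPLIES.get(g).contains(req)) return true; // short-circuit: no post query

        queries++; // second query, issued only when the first one missed
        granted = postRolesByCommunity.getOrDefault(key, Set.of());
        for (Role req : required)
            for (Role g : granted)
                if (IMPLIES.get(g).contains(req)) return true;
        return false;
    }

    public static void main(String[] args) {
        var svc = new ShortCircuitRoleCheck(
            Map.of("1:10", Set.of(Role.ADMIN)),   // user 1 is ADMIN of community 10
            Map.of("2:10", Set.of(Role.VIEWER))); // user 2 only has a post-level VIEWER role
        assert svc.hasAnyRoleByCommunityId(1, 10, Role.VIEWER);
        assert svc.queries == 1; // ADMIN matched at the community level: one query, not two
        assert svc.hasAnyRoleByCommunityId(2, 10, Role.VIEWER);
        assert svc.queries == 3; // user 2 needed both lookups
        System.out.println("short-circuit ok");
    }
}
```

Flattening the hierarchy into a lookup table at startup is a design alternative to the recursive includes: it trades a little memory for constant-time checks and removes any risk of unbounded recursion.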
2023-02-23 17:15:54 |
海外TECH |
DEV Community |
Top 8 Open-Source Observability & Testing Tools |
https://dev.to/kubeshop/top-8-open-source-observability-testing-tools-3675
|
Top Open Source Observability amp Testing ToolsThe DevOps mindset and the “shift left mentality impact how you work as a back end engineer With more power comes more responsibility You ll pick up new processes and tools and handle more operational tasks The task is not done when you commit code to GitHub You need to monitor how an application behaves once your CI CD pipeline deploys it to production These new responsibilities include observability and testing which traditionally weren t always something back end engineers needed to worry about implementing Observability tools help you measure the internal state of a distributed system by examining distributed traces Tracing is a fast growing and immediately valuable resource for those who work in distributed systems Testing tools help you understand whether your distributed application or service performs according to its design and business requirements through automation You can test whether different services work together as expected integration tests whether your application returns the correct output from an action functional tests replicate user behavior end to end and more It gets harder when your organization uses a distributed infrastructure This complicates which tools you can use for observability and testing The sooner you integrate both observability and testing tools into your workflows the better Instrumenting your back end services early in the development process makes it easier to troubleshoot issues and release high quality code Let s look at the landscape of available observability and testing tools today with an emphasis on the open source ecosystem in search of those that can help you do both observability and testing TracetestTracetest is an open source testing tool based on distributed tracing that enables you to test your distributed application by asserting on spans within a distributed trace It allows you to use your trace data generated on your OpenTelemetry instrumented code to check 
and assert if your application has the desired behavior defined by your test definitions. It's designed to help back-end engineers implement observability-driven development, where back-end engineers instrument their services with distributed tracing during development for high-quality observability. You can leverage trace-based testing to build, execute, and view tests against your code in one place.

Tracetest generates end-to-end tests automatically based on any distributed system instrumented with distributed tracing like OpenTelemetry, and integrates easily with Jaeger, Grafana Tempo, New Relic, Lightstep, OpenSearch, Datadog, and more, with even more planned for the future. Tracetest is a new addition to the CNCF landscape and is open source, with code first published on GitHub in February. If you like what you are seeing from Tracetest, give the project a star on GitHub.

Tracetest features for observability:
- Get value from trace data you're already collecting.
- Out-of-the-box integrations with the most popular trace data stores.
- Bake observability into your back-end code by adding OpenTelemetry instrumentation.
- Find the "unknown unknowns" in your infrastructure with visibility into communication between services.

Tracetest features for testing:
- Create tests against your traces to ensure your distributed system handles requests between microservices as expected and demanded.
- Define assertions against both the response and the distributed trace, which ensures both your response and the underlying process work without error.
- Help QA engineers write valuable end-to-end tests with a visual UI.
- Reuse tests and assertions across multiple microservices with a powerful filtering engine.

Malabi

Malabi is an open source test framework. With Malabi, you can write integration tests on distributed systems by collecting data from a microservice during a test run, then exposing an endpoint to make assertions on that data. The maintainers say Malabi implements trace-based testing, similar to Tracetest. Malabi uses OpenTelemetry to collect your trace data.

When you pick out any product or platform (open source, closed SaaS, web app, or anything in between), it's important to consider its development velocity. Malabi hasn't seen a commit to GitHub in a year, which might signal that it won't get more features or technical support if you run into an issue.

Malabi features for observability:
- Malabi isn't designed with observability in mind, which means it has no features in this area.

Malabi features for testing:
- Validate any integration between parts of a distributed system before you push to production.
- Add a simple JavaScript-based assertion library to any microservice you want to test.

Prometheus

Prometheus is the de facto standard for monitoring one aspect of observability, focusing on gathering metrics and enabling alerts. It uses a robust time series database for storing high-resolution metrics data, and multiple modes for visualizing what you've collected from your back-end services. Prometheus is an enormously popular open source project, with tens of thousands of stars on GitHub and full graduated status from the Cloud Native Computing Foundation (CNCF), which also helps manage its governance and roadmap. There is undoubtedly a ton of community support and love for the value Prometheus delivers for back-end engineers who need robust observability tools.

Prometheus features for observability:
- Store long-term metrics data for historical analysis, with an efficient time series database and scaling functionality through sharding and federation.
- Create powerful alerts with PromQL, a flexible query language that maintains dimensional information.
- Push metrics and/or alerts to other tools in your observability infrastructure with open source client libraries and integrations.

Prometheus features for testing:
- Since Prometheus is only a metrics collection and alerting tool, it doesn't help back-end developers looking to test their services.

Jaeger

Jaeger is an open source end-to-end tracing tool designed to help developers monitor and troubleshoot transactions in distributed environments. The goal is to simplify how developers debug a set of distributed services, which is far more complex than dealing with a single monolith. Jaeger is fully open source. The project started at Uber, which released the source code and eventually donated the project to CNCF.

Jaeger features for observability:
- Monitor transactions between distributed services to understand the health and performance of your infrastructure.
- Perform root cause analysis by drilling down into single transactions that cause user-facing issues.
- Optimize for performance and latency by discovering which services respond slowest to requests.

Jaeger features for testing:
- Jaeger is designed for end-to-end tracing, but it doesn't have any tools to help you develop tests for your back-end services.

Grafana Tempo

Grafana Tempo is an open source, high-scale distributed tracing back end responsible for collecting and storing trace data. The project is open source under the AGPLv3 license. It's built and maintained by Grafana Labs, the company behind other open source projects like Loki (for logs), Grafana (for visualizing and alerting on metrics data), and Mimir (for storing metrics data). It was first announced in October 2020 and became generally available in 2021.

Grafana Tempo features for observability:
- Ingest trace data from the most popular open source tracing protocols, including OpenTelemetry, Jaeger, and Zipkin.
- Affordable long-term storage for trace data, to unlock historical data trends and analysis.

Grafana Tempo features for testing:
- While Grafana Tempo helps you implement tracing in your back-end services, it doesn't have tools for writing or executing tests.

OpenSearch

OpenSearch is an open source database to ingest, search, visualize, and analyze data. It's built on top of Apache Lucene, a FOSS library for indexing and search, which OpenSearch leverages for more advanced analytics capabilities like anomaly detection, machine learning, full-text search, and more. OpenSearch was born from a bit of open source controversy: in early 2021, Elastic announced they would change the licensing model for their popular Elasticsearch and Kibana projects. AWS responded by forking those projects into OpenSearch and OpenSearch Dashboards, respectively, under the more permissive ALv2 license.

OpenSearch features for observability:
- Ingest trace data from OpenTelemetry or Jaeger, which can be used to visualize and identify performance problems.
- Leverage community plugins to gather observability data from Prometheus, and customize the output with rich visualizations.
- Filter, transform, normalize, and aggregate data to make your analytics and visualizations more relevant and less complex.

OpenSearch features for testing:
- While OpenSearch can collect metrics, traces, and logs, all of which can be used to validate tests, it doesn't have any features to help developers create, deploy, or manage those tests; you'll need to find a discrete tool and connect its outputs to OpenSearch.

SigNoz

The team behind SigNoz describes itself as an open source alternative to enterprise-level observability platforms like Datadog, New Relic, and more. Unlike some of the more generalist tools on this list, SigNoz focuses on application performance monitoring (APM), which attempts to measure performance from the end-user experience perspective, helping developers fix issues before real users are affected. Since SigNoz started in January, the project has amassed thousands of GitHub stars, and it offers a paid, cloud-based version of its software that's managed by their team.

SigNoz features for observability:
- Support for OpenTelemetry as the foundation for instrumentation and generating trace data from your application.
- A unified UI for metrics, traces, and logs, which reduces the need to context-switch between other observability tools, like Prometheus and Jaeger, to debug and troubleshoot issues.
- Flamegraphs and individual request traces to help discover the root of a performance problem.
- Build dashboards and alerts based on attributes within your logs.
- Quickly visualize the slowest endpoints in your application.

SigNoz features for testing:
- Because SigNoz is an observability-only tool, it doesn't currently have any specific features that help back-end developers test their distributed systems.

Postman

Postman is a departure from the tracing- and observability-focused tools we just covered. Instead, Postman is a cloud platform for building and using APIs. Once your back-end team is on Postman, it acts like an API repository, giving you a single place to create, document, mock, and test your APIs across their entire lifecycle. Postman itself is not open source (it's a closed cloud platform), but the company has an established open source philosophy and maintains a handful of open source projects, like Newman for running and testing a Postman Collection on the CLI, or SDKs and code generators in a variety of programming languages. As proof of Postman's staying power, the company most recently received funding in August: a Series D round valuing the company in the billions of dollars.

Postman features for observability:
- As an API development platform, Postman doesn't offer any observability features.

Postman features for testing:
- Store and manage all your organization's API specifications, documentation, test cases, metrics, and more in one centralized location.
- Debug and test your APIs with a client that supports complex requests using HTTP, REST, SOAP, GraphQL, and WebSockets, which can be bundled into Postman Collections for reuse.
- Integrate your API lifecycle with source control, CI/CD pipelines, API gateways, and application performance monitoring (APM) platforms.

Wrapping up

I've tried to cover some of the key players in a fast-moving space with tons of variety. Nearly all are free and open source software available on GitHub. Some focus exclusively on observability, others on testing, while a select few bridge the gap between those two to help back-end engineers like yourself ship higher-quality deployments through observability-based testing. One clear takeaway is the enormous value in instrumenting your back-end code with distributed tracing and OpenTelemetry sooner rather than later. Many of these popular observability and testing tools integrate with OpenTelemetry's collector or SDK, which means you can instrument once and test out multiple tools to find the workflows that work best for back-end development at your organization.

If having both observability and testing functionality in a single tool, and using tracing to enable observability-driven development, sound like wins to you, check out Tracetest. And once you're generating valuable end-to-end tests faster than ever, let us know your tracing successes on Discord. If you like our direction and what you are seeing from Tracetest, give us a star on GitHub. |
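All of the tracing tools above share the same underlying data model: a trace is a tree of spans linked by a shared trace ID and parent/child span IDs. As a rough, hand-rolled sketch of that model only (this is not a real tracing SDK, and every name below is invented for illustration; in practice you would use the OpenTelemetry SDK):

```javascript
// Illustrative stand-in for the span data real tracing SDKs emit.
let nextId = 1;

function startSpan(name, parent = null) {
  return {
    name,
    spanId: nextId++,
    // Child spans inherit the trace id; this is how a tracing back end
    // stitches one request's hops across services into a single trace.
    traceId: parent ? parent.traceId : nextId++,
    parentSpanId: parent ? parent.spanId : null,
    start: Date.now(),
    end: null,
  };
}

function endSpan(span) {
  span.end = Date.now();
}

// A request hitting an API gateway, which then calls an "orders" service:
const root = startSpan("HTTP GET /checkout");
const child = startSpan("orders-service: reserve stock", root);
endSpan(child);
endSpan(root);

console.log(child.traceId === root.traceId); // true: same trace
console.log(child.parentSpanId === root.spanId); // true: parent/child link
```

Trace-based testing tools like Tracetest assert against exactly this kind of structure: "did the checkout request produce a child span from the orders service, and did it finish without error?"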
2023-02-23 17:11:23 |
Overseas TECH |
DEV Community |
All you need to know about asynchronous JavaScript |
https://dev.to/ppiippaa/all-you-need-to-know-about-asynchronous-javascript-40ma
|
All you need to know about asynchronous JavaScript

If you have ever spent time in the JavaScript world, you will probably have heard the phrase "JavaScript is a single-threaded language". But what does that mean exactly? Put simply, it means that JavaScript executes code line by line in sequential order, and after each line has been executed, there's no going back to it.

So what's the problem? Imagine a situation where a user wants to display a list of comments on a website. They click a button to view the comments. However, while the comments are being fetched from a server (and let's say that the fetching takes a while), the user cannot interact with anything else on the website, because JavaScript is busy executing the show-comments functionality. In other words, all other code is blocked until JavaScript has completed this comment-fetching task. Not very user friendly!

```javascript
const name = "Pippa";
const number = 1000000000; // some very large number (the original value was lost in extraction)
for (let i = 0; i < number; i++) {
  console.log(i);
}
console.log(`Hi my name is ${name}`);
```

In this trivial example, a time-consuming command (in this case, iterating through a large for loop) would block the final console.log command for an undetermined amount of time. Nothing else can happen in the program while the for loop is running, and who knows how long that could take to complete.

The solution: enter asynchronous JavaScript. Async JS is a technique, or rather a group of techniques, which enables your program to continue executing code while time-consuming tasks are deferred. No, it's not magic; it is just a set of features of both JavaScript and the browser which, when combined, allow us to write non-blocking code. Let's start with the first and probably most simple method.

Callbacks using the setTimeout API

If you have read up on the JavaScript runtime environment (and if you haven't, you can do so here), this one should come as no surprise. The browser provides the event loop, the callback queue and the microtask queue. These features allow certain functionality to be deferred until all global code has been executed.
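To make that deferral concrete, here is a minimal sketch (the names are invented for illustration): even with a 0ms delay, a setTimeout callback only runs after every line of synchronous global code has finished.

```javascript
const order = [];

order.push("global code: start");
setTimeout(() => order.push("deferred callback"), 0); // queued, not run yet
order.push("global code: end");

// At this point only the synchronous code has run:
console.log(order); // ["global code: start", "global code: end"]

// Check again once the event loop has drained the callback queue:
setTimeout(() => console.log(order), 10);
// ["global code: start", "global code: end", "deferred callback"]
```

The 0ms argument is a minimum wait, not a guarantee: the callback cannot run until the call stack is empty, no matter how short the delay.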
setTimeout, a browser API, takes a callback function and a time argument (the minimum time in milliseconds to pass before invoking the callback), and pushes the callback function to the callback queue. The callback queue must wait for all other global code to run before anything in it can be pushed to the call stack (this is handled by the event loop); thus we can essentially call this function knowing that our global code will continue executing in the meantime.

```javascript
console.log("first hello");

function fetchData() {
  setTimeout(() => {
    // functionality to make an XHR request to fetch data
    // from a server and log it to the console
  }, 1000); // delay value lost in extraction; 1000ms used as a stand-in
}

fetchData();
console.log("second hello");

// OUTPUT:
// first hello
// second hello
// [object Response]
```

In the above example, we start with console.log("first hello"). We then declare the function fetchData. This function emulates making an XML HTTP request, which is wrapped in the setTimeout browser API. Once we call the fetchData function, the setTimeout starts a timer in the browser. The callback function wrapped inside the setTimeout is sent to the callback queue, where it will wait for at least the given number of milliseconds. Meanwhile, JavaScript continues on to the second console.log("second hello"). Once the second console.log is logged to the console and the call stack is clear of all global code, the fetchData callback function is able to give us the response it received from the server. No code has been blocked, and our data has been fetched.

Promises and the fetch API

In this section we will look at promises in the context of using the fetch API. It's worth noting, however, that this is not the only use case for promises. There's loads of great resources on promises out there, such as this article.

With ES6 came the introduction of promises. In order to understand how they facilitate asynchronous code, we need to take a closer look at their features and behaviour. Promises are just special objects that return a placeholder object until some asynchronous code has been resolved. Every promise has several properties, including state, value, onFulfilled and onRejected.

- State: represents the current status of the promise object, which can be one of three options: pending (the default), resolved, or rejected.
- Value: this is undefined by default, but will eventually evaluate to the response of the asynchronous task.
- onFulfilled: an array containing functions to be run once the value property has been successfully resolved. We push functions into this array by using the .then() method. The callbacks in this array receive an argument, which is the response of the fetch call they are chained on to.
- onRejected: an array containing functions to run if the promise is rejected, so essentially error-handling functions. We push functions into this array by using the .catch() method. The callbacks in this array receive an argument, which is the error received in the fetch call they are chained on to.

Promises allowed for the creation of the modern XHR browser function known as fetch. I would like to stress that fetch is a feature of the browser, so it is not native to JavaScript, even though it looks like a regular function. Once invoked, fetch has two main tasks: one is to return a placeholder object (a promise), and the other is to send an XML HTTP request from the browser to a specified url.

Here's the big reveal: any promise-deferred functionality is sent to the microtask queue, separate from the callback queue, which has lower priority. The functionality in the microtask queue must wait until all global code has been executed before it can be pushed to the call stack. Thus we can:

1. Run an asynchronous operation, i.e. fetching data from a server.
2. Execute all other global code meanwhile.
3. Trigger response-related functionality from the onFulfilled array once all global code has been run and we've received a response from an asynchronous operation.

Alternatively, if the promise rejects, we can trigger error-handling functionality from the onRejected array.

```javascript
console.log("me first");

function fetchData() {
  fetch(/* url removed in extraction */)
    .then((res) => res.json())
    .then((data) => console.log(data))
    .catch((error) => console.log(error));
}

fetchData();
console.log("me second");

// OUTPUT:
// me first
// me second
// [object Object]
```

And there you have it: asynchronous code which allows a more time-intensive command to be run (in this case the XHR functionality) without blocking the rest of the JavaScript code in the file.

Side note: there is also another promise array, similar to onFulfilled/onRejected, which allows us to run callbacks once the promise has been settled (either resolved or rejected). We push callbacks into this array using the .finally() method, and it's useful for performing any clean-up tasks.

Generator Functions and Async/Await

Generator functions are an ES6 feature of JavaScript. Up until now, we understand that once a line of JavaScript code has been executed, it is gone for good; there's no going back up to that line. However, the introduction of generator functions has changed this model. Generators are special types of functions marked with an asterisk. They use a very powerful keyword: yield. I like to think of the yield keyword as the return keyword with superpowers. Yield allows us to pause a generator function's execution context, return out a value, but later re-enter that same function execution context. We re-enter the generator function's execution context by accessing the next() method on the generator function. This can be quite tricky to wrap your head around, so let me show you an example:

```javascript
// NOTE: the yielded numbers are illustrative; the original values were
// lost in extraction
function* myNumberGenerator() {
  yield 1;
  yield 2;
  yield 3;
}

const returnNextNumber = myNumberGenerator();
const number1 = returnNextNumber.next().value;
const number2 = returnNextNumber.next().value;
const number3 = returnNextNumber.next().value;

console.log(number1);
console.log(number2);
console.log(number3);

// OUTPUT:
// 1
// 2
// 3
```

When we assign number1 to returnNextNumber.next().value, we know that next() allows us to re-enter the function execution context, but we have not yet seen the .value syntax. When we are kicked out of the generator function by the yield keyword, we know that we also receive a return value. However, what we actually get is an object with two properties: done (a boolean) and the value itself (in this case a number). In order to access just the value itself, we must use .value. Next, we assign the variable number2 to the result of calling returnNextNumber.next().value, so number2 evaluates to 2. This is because we re-entered the generator function where we left off and once more encountered the yield keyword, this time with the value 2. The same pattern continues for number3. We know that once a normal function has been invoked, its execution context is automatically garbage collected, but here we are able to go back into the function and continue running it later on in the code. Cool, right? Let's take a look at another example:

```javascript
// NOTE: the literal numbers are illustrative; the original values were
// lost in extraction
function* aGeneratorFunc() {
  const number = 4;
  const newNumber = yield number;
  yield newNumber;
  yield 6;
}

const returnNextElement = aGeneratorFunc();
const element1 = returnNextElement.next().value; // 4
const element2 = returnNextElement.next(2).value; // 2
```

Wait, what? The number 2 was returned, and not undefined. Why? Because not only can we re-enter a function, we can re-enter the function and bring some new data with us. The first time we call returnNextElement.next().value, the first yield threw us out of the function before it had time to assign anything to the variable newNumber. So at this point, newNumber is undefined. The next time we call returnNextElement.next(2).value, with 2 as an argument, we re-enter the function where we left off last time, i.e. where we were just about to assign the value of newNumber. The argument we passed in (the number 2) is inserted where we left off, so newNumber evaluates to 2. We move to the next line, yield newNumber, and we are thrown out of the function execution context along with the evaluation of newNumber, which as we now know is 2. Hence we return out 2. If we were to re-enter the function once more and save returnNextElement.next().value to a variable element3, the value of element3 would be 6.

So now we understand how it's possible to re-enter an execution context of a function with additional data if we wish, but how does this link to async JS? Introducing the ES2017 feature: async/await. Async/await is made possible due to generators and promises. With this ability to re-enter a function later on in our code, we can start executing a more time-consuming command (such as fetching data), continue on with the global code, and then re-enter the data-fetching function once the requested data has been returned and all global code has been run.

Async/await has a clean and simple syntax, which requires you to place the async keyword before the function definition. The await keyword is used within the function scope and commands JavaScript to pause the function execution until the result of the promise has been returned, and continue on with the rest of the program code. After the global code has been run, JavaScript will return back into the function where it left off (by this time with a received response) and continue with the rest of the logic. Again, let's take a look at an example:

```javascript
async function getData() {
  const response = await fetch(/* url removed in extraction */);
  const data = await response.json();
  console.log(data);
}

getData();
console.log("me first");

// OUTPUT:
// me first
// [object Promise]
```

Let's break it down:

1. We declare an async function using the async keyword.
2. We use the await keyword to signal that a promise will be returned from the fetch call, which enables us to break out of the function and return to it once the promise has been settled and global code has been run.
3. We call the getData function, but the console.log runs first, because the fetching of API data has been deferred to the microtask queue.
4. JavaScript continues running the global code, then re-enters the getData function once all global code has run and the response is available in getData. It then continues executing the rest of the getData function.

A note on error handling in async/await: in order to facilitate error handling in an async function, you can wrap the functionality in a try/catch statement. This gives you the option to run some code upon promise failure, essentially the same as using the .catch() method with fetch. Using the same example we just saw, let's see what it looks like with a try/catch statement:

```javascript
async function getData() {
  try {
    const response = await fetch(/* url removed in extraction */);
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.log(error);
  }
}

getData();
console.log("me first");

// OUTPUT:
// me first
// [object Promise]
// (if an error occurs when fetching the data, the output would be:
// me first
// [object Error])
```

And there you have it: asynchronous JavaScript! If you made it this far, thanks for reading. If you want to dive deeper into the topic, there are loads of great resources out there, but I thoroughly recommend watching Will Sentance's Hard Parts video courses, available on Frontend Masters. |
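One point from the promises section is worth making concrete: the microtask queue (promise callbacks) is drained before the callback queue (setTimeout callbacks) once the global code finishes. A small sketch (names invented for illustration) that works in any runtime with both queues, such as a browser or Node:

```javascript
const log = [];

log.push("global: start");

setTimeout(() => log.push("callback queue (setTimeout)"), 0);

Promise.resolve().then(() => log.push("microtask queue (promise .then)"));

log.push("global: end");

// Once the global code finishes, the event loop drains the microtask
// queue first, then the callback queue:
setTimeout(() => console.log(log), 10);
// ["global: start", "global: end",
//  "microtask queue (promise .then)", "callback queue (setTimeout)"]
```

Even though the setTimeout was queued before the .then() callback, the promise callback runs first because of the microtask queue's higher priority.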
2023-02-23 17:10:19 |
Apple |
AppleInsider - Frontpage News |
Google's Magic Eraser photo tool is coming to iPhone |
https://appleinsider.com/articles/23/02/23/googles-magic-eraser-photo-tool-is-coming-to-iphone?utm_medium=rss
|
Google's Magic Eraser photo tool is coming to iPhone. Google's once-exclusive Magic Eraser tool for editing photographs is coming to iPhone users through the Google One subscription. Google Photos gets a new feature: the tool, which debuted with the company's Pixel phones, can automatically remove unwanted parts of a photo using artificial intelligence. |
2023-02-23 17:49:45 |
Apple |
AppleInsider - Frontpage News |
New auction has early Jobs & Wozniak signatures & sealed original iPhone |
https://appleinsider.com/articles/23/02/23/new-auction-has-early-jobs-wozniak-signatures-sealed-original-iphone?utm_medium=rss
|
New auction has early Jobs & Wozniak signatures & sealed original iPhone. A working Apple computer that has been autographed by Steve Wozniak and an Apple II board signed by Steve Jobs are among the rare items being sold in an ongoing auction. From one of the first PC shops in the US, the Woz-signed Apple is fully functional and bears the signature of Apple's co-founder. Wozniak and co-founder Steve Jobs originally designed the Apple as a bare circuit board, to be sold as a kit and completed by electronics hobbyists. |
2023-02-23 17:28:48 |
Overseas TECH |
Engadget |
Game designer Shinji Mikami is leaving the Bethesda studio he founded |
https://www.engadget.com/game-designer-shinji-mikami-is-leaving-the-bethesda-studio-he-founded-174159251.html?src=rss
|
Game designer Shinji Mikami is leaving the Bethesda studio he founded. One of the game industry's better-known figures is moving on from the studio he created. Bethesda has confirmed that Tango Gameworks founder and CEO Shinji Mikami is leaving his company in the "coming months". The designer hasn't provided reasons for his departure or said where he's going next; we've asked Bethesda and Tango for comment. Mikami has been one of the most influential game developers of his generation. He's best known for directing and producing early Resident Evil games, but has also played a key role in other Capcom series like Devil May Cry, Dino Crisis and Phoenix Wright: Ace Attorney. He had a brief stint at PlatinumGames, only to then found Tango Gameworks. His studio has enjoyed success with The Evil Within series and Ghostwire: Tokyo. Tango's most recent project is the surprise-release rhythm brawler Hi-Fi Rush. (pic.twitter.com/yPWaSxOun, Bethesda (@bethesda), February) Tango wasn't independent for long: Bethesda had its parent company ZeniMax acquire Mikami's studio after it ran into financial trouble, and Microsoft later bought ZeniMax. This doesn't necessarily mean Tango is in trouble; Mikami executive-produced the company's three most recent games, and Bethesda notes he's a "supportive mentor" to younger developers. However, this still amounts to an industry legend leaving the studio that's supposed to reflect his vision. |
2023-02-23 17:41:59 |
Overseas TECH |
Engadget |
Elon Musk says California is home to Tesla’s engineering headquarters |
https://www.engadget.com/elon-musk-says-california-is-home-to-teslas-engineering-headquarters-172303533.html?src=rss
|
Elon Musk says California is home to Tesla s engineering headquartersDespite moving its corporate headquarters to Texas Tesla now considers California its global engineering home base Elon Musk said a Palo Alto engineering hub will be “effectively a headquarters of Tesla The CEO added that the company s plant in Fremont which it bought in from a joint venture of General Motors and Toyota Motor Corp will increase production to over vehicles this year Tesla will use a former Hewlett Packard building in Palo Alto as its new engineering headquarters “This is a poetic transition from the company that founded Silicon Valley to Tesla Musk said The move is an about face from the CEO s previous comments about the state Musk didn t mince words about California s regulations and taxes when he moved Tesla s official corporate headquarters to Texas in complaining about “overregulation overlitigation over taxation He tweeted about California pandemic lockdowns the previous year “Frankly this is the final straw Tesla will now move its HQ and future programs to Texas Nevada immediately If we even retain Fremont manufacturing activity at all it will be dependent on how Tesla is treated in the future Tesla is the last carmaker left in CA Following news of the Inflation Reduction Act incentives Tesla will shift its battery production focus from Germany to the US Musk appeared with Gavin Newsom at an event on Wednesday where the California governor poked fun at the move “Eat your heart out Germany California which has more electric vehicles than any other state provided tax bonuses to Tesla on its way to growing into the EV superpower it is today Texas has minimal regulation and taxes by comparison |
2023-02-23 17:23:03 |
Overseas TECH |
Engadget |
Google's Workspace apps are getting an updated look |
https://www.engadget.com/googles-workspace-apps-are-getting-an-updated-look-170404744.html?src=rss
|
Google's Workspace apps are getting an updated look. Google is bringing some new features to its Workspace apps in the coming weeks, including a fresh lick of paint. The company is updating the look and feel of Drive, Docs, Sheets, Slides and Chat in the coming weeks, drawing from its Material Design language to do so. Google says the updated designs will streamline the user interface and put more emphasis on the most-loved tools in each app. There's another handy update coming to Drive in the next few weeks as part of the Smart Canvas collaboration initiative: Google will introduce a multiselect toolbar that should make it easier to share, download, move and delete more than one file at a time. Google is also adding an option to filter files by type, such as documents, videos, PDFs and zip files, without having to search for something first. Improved file management in Drive is always welcome. Several new features are on the way to Docs. Starting today, you'll be able to access a stopwatch directly in the app, which could come in useful if you're trying to stay hyper-focused for short bursts. In the coming weeks, Docs will gain emoji reactions for comments, which can be used to emphasize or upvote a response. A calendar invite template will be available in the app soon, too. As for Sheets, you'll be able to preview a Google Maps link directly in the app starting today. Google says that could come in useful for things like logistics tracking and event planning. Something that could be even more helpful on a day-to-day basis are date shortcuts: by typing @today, @yesterday, @tomorrow and @date, you can quickly add the relevant date to a sheet without having to look at your calendar. That feature will be generally available as of today, as is the option to add stocks, mutual funds and currencies by using the @ mention function and typing in a company's name, stock ticker or currency. |
2023-02-23 17:04:04 |
Overseas TECH |
Engadget |
Google TV's new family page helps you find kid-friendly content |
https://www.engadget.com/google-tvs-new-family-page-helps-you-find-kid-friendly-content-170034596.html?src=rss
|
Google TV's new family page helps you find kid-friendly content. Google TV may now be easier to use when you're sitting down to watch shows with your kids. Google is adding four new pages to the interface, including a Family section where you'll find suggested content rated PG or lower. While this isn't a completely novel concept (Netflix has a dedicated Kids profile, for example), it should help you find titles that are safe for everyone to watch. The expansion also includes an Español page that, as the name implies, recommends Spanish-language content like movies, shows and live TV. Other changes apply more universally: the Movies and Shows tabs have been turned into pages, and you'll also find a few navigation tweaks, including a quick settings button and more convenient locations for profile and search functions. The new pages are currently available in the US; the navigation updates are rolling out worldwide as of today. All the changes will be visible on devices that support Google TV, including Google's own Chromecast hardware as well as TVs from companies like Hisense and Sony. The revamp doesn't come as a surprise: Google is competing with other platforms where child-safe content is either already walled off or dominates, such as Disney+ and Netflix. An update like this may make Google TV more compelling to young families and creates more consistency with Google's own YouTube Kids. |
2023-02-23 17:00:34 |
Overseas Science |
NYT > Science |
Wind and Solar Energy Projects Risk Overwhelming America’s Antiquated Electrical Grids |
https://www.nytimes.com/2023/02/23/climate/renewable-energy-us-electrical-grid.html
|
Wind and Solar Energy Projects Risk Overwhelming America's Antiquated Electrical Grids. An explosion in proposed clean energy ventures has overwhelmed the system for connecting new power sources to homes and businesses. |
2023-02-23 17:37:54 |
Overseas TECH |
WIRED |
You Can't Trust App Developers' Privacy Claims on Google Play |
https://www.wired.com/story/google-play-data-safety-forms-mozilla-research/
|
false |
2023-02-23 17:29:13 |
News |
BBC News - Home |
Zara Aleena murder: Raab seeks to force convicts to appear at sentencing |
https://www.bbc.co.uk/news/uk-england-london-64745075?at_medium=RSS&at_campaign=KARANGA
|
family |
2023-02-23 17:00:46 |
News |
BBC News - Home |
Eating turnips could help ease vegetable shortage, suggests Therese Coffey |
https://www.bbc.co.uk/news/uk-politics-64745258?at_medium=RSS&at_campaign=KARANGA
|
limit |
2023-02-23 17:51:06 |
News |
BBC News - Home |
Kate Forbes 'greatly burdened' by gay marriage row |
https://www.bbc.co.uk/news/uk-scotland-64747672?at_medium=RSS&at_campaign=KARANGA
|
marriage |
2023-02-23 17:22:19 |
News |
BBC News - Home |
Keir Starmer's five missions speech fact-checked |
https://www.bbc.co.uk/news/64745176?at_medium=RSS&at_campaign=KARANGA
|
checkedhas |
2023-02-23 17:18:49 |
Business |
Diamond Online - New Articles |
[A pediatrician explains] The 4 ingredients to prioritize feeding your child when they catch a cold - A Doctor's Guide to Children's Meals: 50 Basics |
https://diamond.jp/articles/-/317954
|
Ingredients |
2023-02-24 02:55:00 |
Business |
Diamond Online - New Articles |
The No. 1 thing the most popular people at work do naturally - The Power to Produce an Answer in 1 Second: 48 "comeback" techniques comedians learn to become pros |
https://diamond.jp/articles/-/318233
|
|
2023-02-24 02:50:00 |
Business |
Diamond Online - New Articles |
The No. 1 worst "excuse at work" made by unintelligent people - 99% Is Bias |
https://diamond.jp/articles/-/318151
|
Excuses |
2023-02-24 02:45:00 |
Business |
Diamond Online - New Articles |
What are the 2 questions that work wonders on "subordinates who are full of excuses"? - Good Book Discoveries |
https://diamond.jp/articles/-/316854
|
Sometimes you think "that's just an excuse" or want to snap back, "just shut up and do it." |
2023-02-24 02:40:00 |
Business |
Diamond Online - New Articles |
The one blind spot of people whose English doesn't improve even after memorizing vocabulary - 5-Minute English Vocabulary |
https://diamond.jp/articles/-/317824
|
"I studied so much, yet I still can't speak English…" |
2023-02-24 02:30:00 |
Business |
Diamond Online - New Articles |
What is the "single phrase" that dramatically improves your relationships? - Hitokoto-ka |
https://diamond.jp/articles/-/318255
|
Relationships |
2023-02-24 02:25:00 |
Business |
Diamond Online - New Articles |
[A psychiatrist explains] The No. 1 "NG phrase" you should never say in relationships - Psychiatrist Tomy Teaches You How to Let Go of Emotional Attachments |
https://diamond.jp/articles/-/318048
|
[From the hugely popular series] No one ever runs out of worries and anxieties. |
2023-02-24 02:20:00 |
Business |
Diamond Online - New Articles |
From dire poverty to studying in Italy: the "one lesson from my father" that changed a life - That's Why, This Book. |
https://diamond.jp/articles/-/318244
|
East Asia |
2023-02-24 02:15:00 |
Business |
Diamond Online - New Articles |
Why was Japan able to drive off the Mongol Empire, the strongest power of its day? Examining the miracle of the Mongol invasions - The Complete Super-History of War, Taught by University of Tokyo Students |
https://diamond.jp/articles/-/318014
|
Important |
2023-02-24 02:10:00 |
Business |
Diamond Online - New Articles |
The 2 ultra-simple habits a psychiatrist declares "effective for children with developmental disorders" [Books Online editorial selection] - Developmental Disorder Survival Guide |
https://diamond.jp/articles/-/317859
|
Psychiatrist Shion Kabasawa, author of "The Stress-Free Super Compendium," says of author Shakkindama's "Developmental Disorder Survival Guide": "This realism and concreteness could only come from the author's lived experience." |
2023-02-24 02:05:00 |
Azure |
Azure Updates |
General availability: Stream Analytics no-code editor updates in Feb 2023 |
https://azure.microsoft.com/ja-jp/updates/general-availability-stream-analytics-nocode-editor-updates-in-feb-2023/
|
General availability: Stream Analytics no-code editor updates in Feb 2023. New features are now generally available in the Stream Analytics no-code editor, including Power BI output support and data preview optimization. The Power BI output feature enables you to build real-time dashboards in minutes and at low cost. |
2023-02-23 18:00:05 |