Posted 2023-01-11 05:35:29. RSS feed digest as of 2023-01-11 05:00 (37 items)

Category / Site / Article title or trend word / Link URL / Frequent words, summary, search volume / Registration date
AWS AWS HDI Group: Automated Security and Compliance Issue Remediation Using Cloud Native Services https://www.youtube.com/watch?v=XmIhHtPJWog HDI has implemented a system for automated remediation of security and compliance issues that leverages cloud-native services, extended by the open-source solution Prowler, and comes with good practices for covering remediation logic. The system greatly reduces security- and compliance-related risks and provides visibility and remediation automation, allowing product teams to focus on delivering value to HDI's end customers. In this episode we explore how HDI designed this architecture, and how cloud-native services, open-source solutions, and existing security tooling are integrated in an innovative way. Check out more resources for architecting in the AWS cloud. #AWS #AmazonWebServices #CloudComputing #ThisIsMyArchitecture 2023-01-10 19:45:09
AWS AWS How to set up data flows between SAP Applications and AWS with Amazon AppFlow | Amazon Web Services https://www.youtube.com/watch?v=w8Y6pJvisrQ Learn how to use the Amazon AppFlow SAP OData Connector to create and run bi-directional data flows between SAP applications and AWS services in just a few clicks. We'll also cover how to leverage the SAP Operational Data Provisioning (ODP) framework's change data capture capabilities. ABOUT AWS: Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering fully featured services from data centers globally. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, are using AWS to lower costs, become more agile, and innovate faster. #saponaws #amazonappflow #AWS #AmazonWebServices #CloudComputing 2023-01-10 19:35:22
Overseas TECH Ars Technica Moderna CEO: 400% price hike on COVID vaccine “consistent with the value” https://arstechnica.com/?p=1908947 similar 2023-01-10 19:21:27
Overseas TECH Ars Technica FCC’s new broadband map greatly overstates actual coverage, senators say https://arstechnica.com/?p=1908925 actual 2023-01-10 19:14:39
Overseas TECH DEV Community Synapses: Event-driven Alternative to React Context https://dev.to/nucleoid/synapses-event-driven-alternative-to-react-context-2mdm Synapses (`npm i @nucleoidjs/synapses`) is an alternative to React Context with an event-driven style that helps to build loosely coupled components. How it works: subscribers register for an event with the custom hook `useEvent(eventType, initialValue)`; once a publisher posts an event and its payload, Synapses asynchronously sends the event to the subscribed components, which are eventually re-rendered with fresh data. Hello World: a publishing component imports `publish` from `@nucleoidjs/synapses` and calls `publish("BUTTON_CLICKED", payload)` in a button's `onClick`, where the payload carries a number and a string (e.g. "red"); subscribing components import `useEvent`, call `const event = useEvent("BUTTON_CLICKED", init)` with an initial value (e.g. a string "blue"), and render `event.number` or `event.string`. The complete sample project is here. Stateless handling: Synapses supports stateless components by caching the last published payload for each event type, so that a re-rendered component won't lose the payload. For example, if a component is not rendered yet, Synapses holds the last payload for the event type, and once the component is rendered it receives the payload instead of the initial value. Event-driven architecture is commonly used in microservices systems, which target a similar problem: loose coupling. That style of architecture requires middleware like Kafka, RabbitMQ, etc., and we are trying to adopt the very same idea in React.js, of course with some modifications such as stateless handling. My personal experience with React Context wasn't pleasant, especially as a project gets bigger. We've been working on a low-code IDE project which contains a good number of reusable components, but they are all connected through one giant reducer. We considered a multi-context, multi-reducer concept to ease the problem, but it seems it may complicate the structure even more when contexts have to talk to each other. Advanced usage: Synapses can coexist with React Context, and this might actually be even better for complex projects. React Context can handle a large dataset with dispatching, which re-renders all listening components (usually the majority of components); meanwhile, Synapses can handle local events and limit re-rendering to the components that react to certain events. This can lower the workload on a context reducer as well as improve overall performance. Star us on GitHub for the support! 2023-01-10 19:45:00
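The event-driven pattern the Synapses entry above describes can be sketched in a few lines of plain JavaScript. This is a rough illustration of the publish/subscribe idea with last-payload caching, not the actual Synapses implementation; every name below is made up for the sketch.

```javascript
const subscribers = new Map(); // eventType -> Set of subscriber callbacks
const lastPayload = new Map(); // eventType -> last published payload

// Deliver an event to every subscriber and cache the payload so that
// components rendered later can still pick it up ("stateless handling").
function publish(eventType, payload) {
  lastPayload.set(eventType, payload);
  for (const cb of subscribers.get(eventType) ?? []) cb(payload);
}

// Register a callback for an event type; replay the cached payload (if any)
// immediately, and return an unsubscribe function.
function subscribe(eventType, cb) {
  if (!subscribers.has(eventType)) subscribers.set(eventType, new Set());
  subscribers.get(eventType).add(cb);
  if (lastPayload.has(eventType)) cb(lastPayload.get(eventType));
  return () => subscribers.get(eventType).delete(cb);
}
```

In React, a `useEvent`-style hook would presumably wrap `subscribe` in `useEffect` and push the payload into `useState`, which is what triggers the re-render.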
Overseas TECH DEV Community Do you want to work in customer-facing roles? https://dev.to/hunghvu/do-you-want-to-work-in-customer-facing-roles-8mf Assuming the customers are mostly non-tech, as IT personnel and engineers, do you want to work in customer-facing roles? The customers can be either internal or external clients. If you want to, why? Otherwise, why not? 2023-01-10 19:16:32
Overseas TECH DEV Community Manually Trigger a GitHub Action with workflow_dispatch https://dev.to/this-is-learning/manually-trigger-a-github-action-with-workflowdispatch-3mga There's a plethora of triggers you can use to run a GitHub Action: you can run it on a schedule, on a push or a pull request, or even on a release. Today the spotlight is on `workflow_dispatch`, a trigger that allows you to manually trigger a GitHub Action without having to push or create a pull request. Bonus: you can also pass custom parameters. How to use workflow_dispatch: add it to the `on` section of your workflow file (`name: Manual trigger` with `on: workflow_dispatch:`). That's it! Now you can manually trigger your GitHub Action by going to the Actions tab of your repository and clicking the Run workflow button. Live demo: if you want to see it in action (pun intended), you can watch the video; otherwise, if you prefer to read, jump to the next section. Passing inputs: you can also pass inputs to your workflow. To do so, add an `inputs` section to your `workflow_dispatch` trigger; the only required field is the `description`, which is also rendered in the UI, e.g. an input `name` with `description: Who to greet` and `default: World`. You can now pass an input to your workflow by clicking the Run workflow button and filling in the input field, then access it in your workflow file through the `github.event.inputs` object, e.g. a `hello` job on `ubuntu-latest` with a step `run: echo "Hello ${{ github.event.inputs.name }}"`. In this case the variable we read is `name`, which is the name of the input we specified in the workflow_dispatch trigger. Input types: you can also specify the type of the input; there are many types, for example `string`, `boolean`, and `choice`. If you don't specify a type, the default is `string`. `string` renders a simple text input, `boolean` is submitted through a checkbox, and `choice` renders a dropdown menu to force the selection of specific values. You can also specify whether the field is required and add a default value. Learn more: workflow_dispatch is the keyword you need to run a GitHub Action on demand without having to push or create a pull request. If you want to learn more about GitHub Actions, let me recommend my YouTube playlist with all the videos I made about GitHub Actions. Thanks for reading this article, I hope you found it interesting! I recently launched my Discord server to talk about Open Source and Web Development, feel free to join. Do you like my content? You might consider subscribing to my YouTube channel; it means a lot to me. Feel free to follow me to get notified when new articles are out. Leonardo Montini: I talk about Open Source, GitHub, and Web Development, and I also run a YouTube channel called DevLeonardo, see you there! 2023-01-10 19:16:22
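Beyond the Run workflow button, a workflow_dispatch trigger can also be fired programmatically through GitHub's REST API (`POST /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches`). The sketch below only builds the request, it does not send it; the owner, repo, workflow file name, and token are placeholder values.

```javascript
// Build (but do not send) a workflow_dispatch request for GitHub's REST API.
// Endpoint: POST /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches
// (the workflow can be referenced by its numeric id or its file name).
function buildDispatchRequest({ owner, repo, workflowFile, ref, inputs, token }) {
  return {
    url: `https://api.github.com/repos/${owner}/${repo}/actions/workflows/${workflowFile}/dispatches`,
    options: {
      method: "POST",
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${token}`, // the token is a placeholder here
      },
      // `ref` (the branch or tag to run on) is required by the API;
      // `inputs` mirrors the workflow_dispatch inputs section
      body: JSON.stringify({ ref, inputs }),
    },
  };
}

// Placeholder owner/repo/workflow values, with the "name" input from above:
const req = buildDispatchRequest({
  owner: "my-org",
  repo: "my-repo",
  workflowFile: "manual-trigger.yml",
  ref: "main",
  inputs: { name: "World" },
  token: "<YOUR_TOKEN>",
});
// fetch(req.url, req.options) would then start the workflow run
```

A successful dispatch returns HTTP 204 with no body; the run then appears in the Actions tab just as if it had been started from the UI.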
Overseas TECH DEV Community Angular ESLint Rules for Accessible HTML Content https://dev.to/angular/angular-eslint-rules-for-accessible-html-content-kf5 Content accessibility for built-in HTML elements is the third and final category in this series on Angular ESLint accessibility rules. These rules validate several HTML attributes that developers commonly overlook regarding accessibility: they check for alt text on images, accessible content in buttons, links, and headings, and proper form labels and table header associations. Angular ESLint accessibility rules provide immediate guidance on accessibility best practices right in the code, resulting in more accessible and user-friendly Angular applications. The rules: previous articles in this series discussed how to add Angular ESLint and configure these rules in .eslintrc.json config files, so let's get straight into the rules: Alt Text (`@angular-eslint/template/accessibility-alt-text`), Elements Have Content (`@angular-eslint/template/accessibility-elements-content`), Label Has Associated Control (`@angular-eslint/template/accessibility-label-has-associated-control`), Table Scope (`@angular-eslint/template/accessibility-table-scope`), No Distracting Elements (`@angular-eslint/template/no-distracting-elements`), and Button Has Type (`@angular-eslint/template/button-has-type`). Rule: Alt Text. The accessibility-alt-text rule validates that all images have alternative text. Images must have an alt attribute to meet the WCAG "Non-text Content" success criterion, which states that all non-text content should have a text alternative that serves the equivalent purpose. Follow these guidelines for providing meaningful and concise alt text: alt text should express the relevant detail in an image, standing in for the same purpose, meaning, and intent; alt text should not be redundant or repeat information from the image caption; and alt text should be short and to the point. When there is no alt attribute on an `<img>`, some screen readers will announce the image src instead. Including an empty alt attribute for decorative images indicates to screen readers that these images do not convey additional meaning or information: `<img alt="" src="decorative.gif">`. In addition to checking for the alt attribute on `<img>` elements, the accessibility-alt-text rule also validates `<input type="image">`, `<area>`, and `<object>` elements, which support the following attributes as alternative text: `<img>`: alt; `<input type="image">`: alt, aria-label, aria-labelledby; `<area>`: alt, aria-label, aria-labelledby; `<object>`: aria-label, aria-labelledby, title. Rule: Elements Have Content. The accessibility-elements-content rule ensures that `<a>`, `<button>`, and heading (`<h1>` through `<h6>`) elements have content. Screen readers announce these elements by their role and name (and level, for headings). Links, buttons, and headings are semantic elements with inherent meaning and implicit roles, and the accessible name for these elements comes from their content, title, or aria-label. A button, link, or heading name may also come from a child element with plain text content, a title, or an aria-label. Don't add redundant and unnecessary aria-labels where the accessible name comes from a child element; use the Accessibility pane in Chrome DevTools to inspect an element's computed name. Rule: Label Has Associated Control. The HTML `<label>` element gives an accessible name to form controls. Labels are important for helping users understand the purpose of `<input>`, `<select>`, `<textarea>`, `<output>`, `<meter>`, and `<progress>` elements. A control can have more than one `<label>`; a `<label>` cannot be associated with more than one control; and a `<label>` should not be used independently, without an associated control. A `<label>` is associated with a form control with either an implicit or an explicit association, and both styles are widely supported. Nesting a control inside `<label>` creates an implicit association between the label and the control: `<label><input type="checkbox"> Blue</label>`. Whereas assigning an id to the control and using a matching for attribute on the `<label>` creates an explicit association: `<label for="city">City</label> <input id="city" type="text">`. The accessibility-label-has-associated-control rule determines whether a label has an implicit association with a control nested inside the label, or whether the label has the for attribute. The rule does not look elsewhere in the template for a control with an id matching the label's for attribute, and does not validate explicit label and control pairings. This is a natural limitation of static code analysis tools like Angular ESLint, so you'll want to use additional testing techniques to validate the compiled application. The for attribute could be a bound value, and Angular ESLint template rules do not validate bound values; rules are generally limited to validating individual template nodes and their children, and the control could even be in another component's template, separate from the label. Configuration options for Label Has Associated Control: the rule supports the configuration options labelComponents and controlComponents. The labelComponents option adds validation for custom label components; the controlComponents option configures the rule to recognize an implicit association on custom controls nested inside a label. Option: custom label components. Configure labelComponents for a my-custom-label component with a forControl input: `"@angular-eslint/template/accessibility-label-has-associated-control": ["error", { "labelComponents": [{ "selector": "my-custom-label", "inputs": ["forControl"] }] }]`. Validation on the custom label component then requires either the forControl input or a nested control: `<my-custom-label forControl="myId">Custom</my-custom-label> <input id="myId" type="text">` or `<my-custom-label>Custom <input type="text"></my-custom-label>`. Option: custom control components. Configure controlComponents for my-custom-control: `"@angular-eslint/template/accessibility-label-has-associated-control": ["error", { "controlComponents": ["my-custom-control"] }]`. Validation then recognizes a custom control nested inside a label as an implicit association: `<label>Custom <my-custom-control></my-custom-control></label>`. Rule: Table Scope. The accessibility-table-scope rule validates the scope attribute for table headers. The scope attribute specifies which cells belong with a table header (`<th>`). Table header scope accepts the following values: row associates a table header with all the cells in that row; col associates a table header with all the cells in that column; rowgroup associates a table header that spans multiple rows with all the cells in that row group; colgroup associates a table header that spans multiple columns with all the cells in that column group. When table header scope is not specified, browsers and assistive technologies infer the relationship between table headers and their cells. Tables with headers in a single row or column do not need the scope attribute; more complex tables, with irregular, multi-level, or both row and column headers, should explicitly define which cells belong to which headers, by using the scope attribute on their table headers, or the id attribute on table headers together with the headers attribute on table cells. Rule: No Distracting Elements. The no-distracting-elements rule disallows usage of the `<blink>` and `<marquee>` elements, which are both deprecated and no longer recommended for use. The `<blink>` element is more than distracting: flashing content can trigger seizures in people with photosensitive epilepsy, and the WCAG "Three Flashes or Below Threshold" success criterion dictates that there are no more than three flashes in any one-second period, or that the flash is below the threshold. The scrolling content in a `<marquee>` element can create barriers for anyone who struggles with moving objects, or for people with cognitive disabilities like attention deficit disorder; the WCAG "Pause, Stop, Hide" success criterion states that users must be able to stop or hide any moving, blinking, scrolling, or auto-updating information. Bonus rule: Button Has Type. I mention this last rule as a bonus because it is not specifically an accessibility rule, but it is commonly missed and can lead to surprising functionality. The button-has-type rule checks for the type attribute on HTML `<button>` elements. A `<button>` inside a `<form>` without a type acts as a submit button and submits the form when pressed: `<form><label><input type="checkbox"> Yes</label> <button>Submits the Form</button> <button type="button">Not a Submit</button> <button type="reset">Resets the Form</button> <button type="submit">Submits the Form</button></form>`. That's all, folks! The Angular ESLint rules covered in this series enable developers to create more accessible and user-friendly Angular applications by helping to ensure keyboard accessibility, ARIA compliance, and accessible HTML content. The first article discusses rules for ensuring that all interactive elements are reachable with a keyboard and that focus is not improperly managed. The second article discusses Angular ESLint rules that check that ARIA roles have the required attributes and that ARIA attributes are valid. This third and final article discusses Angular ESLint rules that ensure accessible HTML content, such as image alt text, accessible names for form controls, buttons, links, and headings, and table heading scope. That covers all the rules pertaining to accessibility available with Angular ESLint. Static code analysis with Angular ESLint has the advantage of identifying issues early in development, but performing further automated and manual browser-based testing is essential for ensuring full accessibility. These Angular ESLint accessibility rules inspect individual nodes and elements used in the template code; it is also important to validate accessibility within the context of the entire application, checking that headings are properly nested, that there are no nested interactive controls, that landmarks like `<header>`, `<aside>`, `<nav>`, and `<main>` are used and labeled correctly, etc. Incorporating accessibility testing into each stage of the development and release process ensures that our Angular applications are usable by a wide range of users, including those with disabilities. Recommended testing tools and libraries: Lighthouse in Chrome DevTools, the Accessibility pane in Chrome DevTools, the WebAIM WAVE browser extension, the Axe DevTools browser extension, Angular Testing Library, AxePuppeteer for Puppeteer, and the cypress-axe library for Cypress. References: Understanding "Non-text Content" (Level A), in W3C's Understanding WCAG; Decorative Images, W3C Web Accessibility Initiative; Providing Accessible Names and Descriptions, W3C ARIA Authoring Practices Guide; Tables Tutorial, W3C Web Accessibility Initiative; Understanding "Three Flashes or Below Threshold" (Level A), in W3C's Understanding WCAG; Understanding "Pause, Stop, Hide" (Level A), in W3C's Understanding WCAG. 2023-01-10 19:14:04
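As a toy illustration of the kind of static check the alt-text rule performs, the sketch below flags elements that are missing an accepted alternative-text attribute. The real rule operates on Angular's template AST; this simplified stand-in works on plain objects of the form `{ tag, attrs }` and omits `<input type="image">` for brevity.

```javascript
// Accepted alternative-text attributes per element (a subset of the table
// in the article; <input type="image"> is omitted in this simplified model).
const ALT_ATTRS = {
  img: ["alt"],
  area: ["alt", "aria-label", "aria-labelledby"],
  object: ["aria-label", "aria-labelledby", "title"],
};

// Return true when the element should have alternative text but has none.
function missingAltText(element) {
  const accepted = ALT_ATTRS[element.tag];
  if (!accepted) return false; // the check does not apply to this element
  return !accepted.some((attr) => attr in element.attrs);
}

missingAltText({ tag: "img", attrs: { src: "cat.png" } });          // true: no alt
missingAltText({ tag: "img", attrs: { src: "dec.gif", alt: "" } }); // false: empty alt marks a decorative image
```

Note that, like the real rule, this only checks for the presence of an attribute; whether the alt text is actually meaningful still requires human review.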
Overseas TECH DEV Community The difference between test-driven development and observability-driven development https://dev.to/kubeshop/the-difference-between-test-driven-development-and-observability-driven-development-65b Here's a problem you keep facing: you have no way of knowing precisely at which point in the complex network of microservice-to-microservice communications a single HTTP transaction goes wrong. With tracing, your organization is generating and storing tons of trace data without getting value from it in the form of consistent tests. Here's how you solve it: we're entering a new era of observability-driven development (ODD), which emphasizes using instrumentation in back-end code as assertions in tests, and a culture of trace-based testing is now developing. With a tool like Tracetest, back-end developers are not just generating E2E tests from OpenTelemetry-based traces, but changing the way we enforce quality, and encourage velocity, in even the most complex apps. That's exactly what we'll demo in this tutorial; once we're done, you'll know how to instrument your code and run trace-based tests yourself. You can find the complete source code on GitHub. The past: test-driven development. Test-driven development (TDD) is a software development process where developers create test cases before code, to enforce the requirements of the application. In contrast, traditional testing methods meant first writing code based around your software requirements, then writing integration or E2E tests to verify whether the code you wrote works in all cases. With TDD, the software development cycle changes dramatically, so that once your team has agreed on those requirements, you then: add a test that passes if and only if those requirements are met; run the test and see a failure, as expected; write the simplest code that passes the test; run the test again and see a pass result; and refactor your code as needed to improve efficiency, remove duplicate code, split code into smaller pieces, and more. TDD has found a ton of fans in recent years, as it can significantly reduce the rate of bugs and, generally speaking, improve the technical quality of an organization's code. But given the complexity of modern infrastructure, including the proliferation of cloud-native technology and microservices, TDD ends up failing to meet back-end developers halfway. The pain of creating and running TDD-based tests on the back end: as you well know, running back-end tests requires access and visibility into your larger infrastructure; unlike front-end unit tests, they don't operate in small, isolated environments. You have to not only design a trigger but also access your database, which requires some method of authentication. Or, if your infrastructure has a message bus, you need to configure the way in which your test monitors it and gathers the relevant logs. If your infrastructure heavily leverages resources like serverless functions or API gateways, you're in for an even bigger challenge, as those are ephemeral and isolated by default: how do you gather logs from a Lambda that no longer exists? For all its benefits, TDD creates scenarios where you can see that your CI/CD pipeline has failed due to an E2E test, but not where or why the failure occurred in the first place. If you can't determine whether it happened between microservice A and B or Y and Z, then your test cases aren't useful tools to guide how you develop your code; they're mysterious blockers you're blindly trying to develop around. How back-end tests are added in a TDD environment: we'll use a Node.js project as a simple example to show the complexity of integration and E2E testing in distributed environments. Simply to stand up the testing tools, you need to add multiple new libraries, like Mocha, Chai, and chai-http, to your codebase. You then need to generate mocked data, which you store within your repository for your tests to utilize. That means creating more new testing-related folders and files in your Git repository, muddying up what's probably already a complicated landscape. And then, to add even a single test, you need a decent chunk of code: after requiring `chai`, `chai-http`, the `server` app, and a mock like `mocks/starwars_film_list.json`, a `describe("GET /people/:id", ...)` block makes a request such as `chai.request(app).get("/people/" + peopleId)` and, in its `end` callback, asserts `res.should.have.status(...)` and `expect(res.body).to.deep.equal(starwarsLukeSkywalkerPeopleMock)` before calling `done()`. From here on out, you're hand-coding dozens or hundreds more tests using the exact same fragile syntax to get the coverage TDD demands. The future: observability-driven development with Tracetest. ODD is the practice of developing software and the associated observability instrumentation in parallel. As you deploy back-end services, you can immediately observe the behavior of the system over time to learn more about its weaknesses, guiding where you take your development next. You're still developing toward an agreed-upon set of software requirements, but instead of spending time artificially covering every possible fault, you're ensuring that you'll have full visibility into those faults in your production environment. That's how you uncover and fix the dangerous "unknown unknowns" of software development, the failure points you could never have known to create tests for in the first place. At the core of ODD is distributed tracing, which records the paths taken by an HTTP request as it propagates through the applications and APIs running on your cloud-native infrastructure. Each operation in a trace is represented as a span, to which you can add assertions: testable values that determine whether the span succeeds or fails. Unlike traditional API tools, trace-based testing asserts against both the system response and the trace results. The benefits of leveraging the distributed trace for ODD are enormous, helping you and your team: understand the entire lifecycle of an HTTP request as it propagates through your distributed infrastructure, whether it succeeds or exactly where it fails; track down new problems or create new tests with no prior knowledge of the system and without shipping any new code; resolve performance issues at the code level; run trace-based tests directly in production; and discover and troubleshoot the "unknown unknowns" in your system that might have slipped past even a sophisticated TDD process. How can you add OpenTelemetry instrumentation to your back-end code? A platform like Tracetest integrates with a handful of trace data stores, like Jaeger, Grafana Tempo, OpenSearch, and SignalFx. The shortest path to adding distributed traces is adding the language-specific OpenTelemetry SDK to your codebase; popular languages also have auto-instrumentation, as in the Node.js example we'll create below. If you already have a trace data store like Jaeger, Grafana Tempo, OpenSearch, or SignalFx, we have plenty of detailed docs to help you connect Tracetest to your instance quickly. For example, here's how we add tracing to an example Node.js Express-based project in just a few lines of code: require `@opentelemetry/sdk-node`, `@opentelemetry/auto-instrumentations-node`, and `@opentelemetry/exporter-trace-otlp-http`, then create `new opentelemetry.NodeSDK({ traceExporter: new OTLPTraceExporter({ url: "http://otel-collector/v1/traces" }), instrumentations: [getNodeAutoInstrumentations()] })` and call `sdk.start()`. This code acts as a wrapper around the rest of your application, running a tracer and sending traces to the OpenTelemetry Collector, which in turn passes them on to Tracetest. This requires a few additional services and configurations, but we can package everything into two Docker Compose files to launch and run the entire ecosystem. Find this example Node.js Express-based project, and examples of integrating with other trace data stores, in the Tracetest repository on GitHub. To quickly access the example, run `git clone`, `cd tracetest/examples/quick-start-nodejs`, and `docker compose -f docker-compose.yaml -f tracetest/docker-compose.yaml up --build`. Start practicing ODD with Tracetest: now that you've seen how easily Tracetest integrates with the trace data store you already have, you're probably eager to build tests around your traces, set assertions, and start putting observability-driven development into practice. Installing the Tracetest CLI takes a single step on a macOS system: `brew install kubeshop/tracetest/tracetest`; check out the download page for more info. From here, we recommend following the official documentation to install the Tracetest server, which will help you configure your trace data source and generate all the configuration files you need to collect traces and build new tests. For a more elaborate explanation, refer to our docs; you can also read more about connecting Tracetest to the OpenTelemetry Collector. Once you have Tracetest set up, open http://localhost in your browser to check out the web UI. Create tests visually: in the Tracetest UI, click the Create dropdown and choose Create New Test. We'll make an HTTP Request here, so click Next to give your test a name and description. For this simple example you'll just GET your app, which runs at http://app. With the test created, you can click the Trace tab to showcase your distributed trace as it passes through your app; in the example this is pretty simplistic, but you can start to see how it delivers immediate visibility into every transaction your HTTP request generates. Set assertions against every single point in the HTTP transaction: click the Test tab, then Add Test Spec, to start setting assertions, which form the backbone of how you implement ODD and track the overall quality of your application in various environments. To make an assertion based on the GET span of our trace, select that span in the graph view and click Current span in the Test Spec modal, or copy the span selector directly using the Tracetest Selector Language: `span[tracetest.span.type="http" name="GET" http.method="GET"]`. Below, add the `attr:http.status_code` attribute and the expected value. You can add more complex assertions as well, like testing whether the span executes within a given number of milliseconds: add a new assertion for the span duration attribute, choose `<`, and add the expected value in ms. You can check against other properties, return statuses, timing, and much more, but we'll keep it simple for now. Then click Save Test Spec, followed by Publish, and you've created your first assertion. Generate the YAML for a test in Tracetest: once you have a test spec, click the gear icon next to Run Test, then Test Definition, which opens a modal window where you can view and download the .yaml file you'll need to run this test using the Tracetest CLI. Go ahead and download the .yaml file, name it test-api.yaml, and save it in the root of your example app directory. Run the test with the Tracetest CLI: you can, of course, run this test through the GUI with the Run Test button, which will follow your distributed trace and let you know whether your assertion passed or failed. But to enable automation, which opens up using Tracetest for detecting regressions and checking service SLOs, among other uses, let's showcase the CLI tooling. Head back over to your terminal and configure your Tracetest CLI with `tracetest configure`, entering your Tracetest server URL (http://localhost) when prompted. Next, run your test using the definition you generated and downloaded above: `tracetest test run -d test-api.yaml -w`. The CLI tells you whether the test executed correctly, not whether it passed or failed; for that, click the link in the CLI output or jump back into Tracetest. Your test will pass, as you're testing the response's HTTP status code and a duration that should be far below the threshold. Now, to showcase how ODD and trace-based testing help you catch errors in your code without having to spend time writing additional tests, let's wrap the Express handler's `res.send("Hello World")` in a `setTimeout`, which delays the app's response. Run the test with the CLI again, then jump into the web UI, where you can see the assertion fail due to the setTimeout, which pushes the span's duration past the threshold. Conclusion: give Tracetest a try in your applications and tracing infrastructure with our quick start guide, which sets you up with the CLI tooling and the Tracetest server in a few steps. From there, you'll be using the simple UI to generate valuable E2E tests faster than ever, increasing your test coverage, freeing yourself from manual testing procedures, and identifying bottlenecks you didn't even know existed. We'd also love to hear about your ODD success stories in Discord; like traces themselves, we're all about transparency and generating insights where there were once unknowns, so don't be shy. Feel free to give us a star on GitHub as well! 2023-01-10 19:04:28
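The span assertions described in the Tracetest walkthrough above can be illustrated with a small stand-alone sketch. The data shapes and the helper below are simplified inventions for illustration, not Tracetest's real data model or API.

```javascript
// Evaluate a list of assertions against one span's attributes.
// Each assertion is { attr, op, value }, loosely mirroring the
// attr:... / operator / expected-value triples set in the Test Spec UI.
function checkSpan(span, assertions) {
  return assertions.map(({ attr, op, value }) => {
    const actual = span.attributes[attr];
    const ok =
      op === "=" ? actual === value :
      op === "<" ? actual < value :
      op === ">" ? actual > value :
      false;
    return { attr, ok };
  });
}

// A made-up span resembling the GET span from the walkthrough, after a
// setTimeout has slowed the response down:
const span = {
  name: "GET /",
  attributes: { "http.status_code": 200, "durationMs": 510 },
};

const results = checkSpan(span, [
  { attr: "http.status_code", op: "=", value: 200 }, // passes
  { attr: "durationMs", op: "<", value: 500 },       // fails: 510 is not < 500
]);
```

This mirrors the failure mode shown in the article: the status-code assertion still passes while the duration assertion fails, pinpointing which property of which span broke.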
海外TECH DEV Community Terraform - Fun with Functions https://dev.to/pwd9000/terraform-fun-with-functions-30p4 Terraform - Fun with Functions. Overview: In today's tutorial we will take a look at Terraform functions and how we can use them in a few real-world examples, and boy, are there many functions to get creative and have fun with. But what are they? When writing infrastructure as code, you may come across certain complexities, or maybe you want to improve or simplify your code, by using Terraform functions. You can even use functions to guardrail and safeguard your code from platform-related limitations, for example character limitations or case sensitivity when building certain resources in a cloud provider like Azure. Functions are expressions that transform and combine values in order to use them in other ways, and functions can be nested within each other. Most Terraform functions follow a common syntax: function_name(argument1, argument2). NOTE: You can use terraform console in a command prompt to run any of the function examples shown later, or to test your own function logic. Say, for example, you want to provision an Azure storage account using Terraform. As you may know, storage account names in Azure have certain naming rules and character limitations: the length of a storage account name must be between 3 and 24 characters and can contain only lowercase letters and numbers. Take this example of provisioning a storage account in Azure:

variable "storage_account_name" {
  type        = string
  description = "Specifies Storage account name"
  default     = "MySuperCoolStorageAccountName"
}

resource "azurerm_storage_account" "example" {
  name                     = var.storage_account_name
  resource_group_name      = "MyRgName"
  location                 = "uksouth"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

As you can see from the example above, the storage account name provided by the default value of the variable storage_account_name is MySuperCoolStorageAccountName. Because of the provider limitations, if the default value is
used in the deployment, this resource creation would fail. So how can we safeguard that the default value, or any value that is provided, will always work? You guessed it: we can use Terraform functions. We will use two functions, namely substr and lower. Let's look at each function. substr extracts a substring from a given string by offset and length; usage: substr(string, offset, length). lower converts all cased letters in the given string to lowercase; usage: lower(string). So let's test this using terraform console:

substr("MySuperCoolStorageAccountName", 0, 24)
MySuperCoolStorageAccoun

The result, MySuperCoolStorageAccoun, has now been truncated to only 24 characters, but this would still fail because there are still uppercase characters present. Let's nest this inside the lower function:

lower(substr("MySuperCoolStorageAccountName", 0, 24))
mysupercoolstorageaccoun

This is much better. The storage account can now be provisioned by simply amending our original Terraform code as follows:

variable "storage_account_name" {
  type        = string
  description = "Specifies Storage account name"
  default     = "MySuperCoolStorageAccountName"
}

resource "azurerm_storage_account" "example" {
  name                     = lower(substr(var.storage_account_name, 0, 24))
  resource_group_name      = "MyRgName"
  location                 = "uksouth"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

But what if we want to improve this even more by making the value not only always work, but always be unique as well? Maybe we can shorten the name a bit more using substr and then add a unique random string. As you thought: yes, we can. Let's look at another special function called uuid. uuid generates a unique identifier string; usage: uuid(). Testing this in terraform console, uuid() returns a fresh random identifier on every call. First, let's shorten our storage account name down to 18 characters:

lower(substr("MySuperCoolStorageAccountName", 0, 18))
mysupercoolstorage

Now that our storage account name is only 18 characters long, we are left with 6 characters to play around with, so we can generate a random identifier string to act as a suffix, using the uuid function with
substr to get the following result:

substr(uuid(), 0, 6)

This returns the first 6 characters of a fresh UUID. You may be wondering: how can we combine the function lower(substr("MySuperCoolStorageAccountName", 0, 18)) with the function that creates the unique suffix, substr(uuid(), 0, 6)? You guessed it, there is a function we can use. Let's look at the function called join. join produces a string by concatenating together all elements of a given list of strings with the given delimiter; usage: join(separator, list). So, as a basic example, join can combine two strings in the following way:

join("", ["StringA", "StringB"])
StringAStringB

Let's apply join to our storage account name function and unique suffix function:

join("", [lower(substr("MySuperCoolStorageAccountName", 0, 18)), substr(uuid(), 0, 6)])

Voila! We now have a randomly generated storage account name that will always be unique and will no longer run afoul of the length and case limitations for our storage accounts. Running this function a few times shows the same 18-character prefix, mysupercoolstorage, followed by a different random 6-character suffix on each call. Lastly, let's apply this to our original code:

variable "storage_account_name" {
  type        = string
  description = "Specifies Storage account name"
  default     = "MySuperCoolStorageAccountName"
}

resource "azurerm_storage_account" "example" {
  name                     = join("", [lower(substr(var.storage_account_name, 0, 18)), substr(uuid(), 0, 6)])
  resource_group_name      = "MyRgName"
  location                 = "uksouth"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

Bonus example: If you are used to provisioning resources in the cloud on Azure, you'll know that each resource has a resource ID. Here is a fun little function that I have used in the past to get the last element of any resource ID (usually the name of the resource) without fail. Basic example:

element(split("/", "x/y/z"), length(split("/", "x/y/z")) - 1)
z

Resource group name based off a resource ID: element(
split("/", "/subscriptions/<subscription-id>/resourceGroups/MSDO-Lab-ADO"), length(split("/", "/subscriptions/<subscription-id>/resourceGroups/MSDO-Lab-ADO")) - 1)
MSDO-Lab-ADO

VNET name based off a resource ID:

element(split("/", "/subscriptions/<subscription-id>/resourceGroups/Pwd-EB-Network/providers/Microsoft.Network/virtualNetworks/UKS-EB-VNET"), length(split("/", "/subscriptions/<subscription-id>/resourceGroups/Pwd-EB-Network/providers/Microsoft.Network/virtualNetworks/UKS-EB-VNET")) - 1)
UKS-EB-VNET

Take a closer look at the functions in use above and how they are combined and nested together. element retrieves a single element from a list; usage: element(list, index). split produces a list by dividing a given string at all occurrences of a given separator; usage: split(separator, string). length determines the length of a given list, map, or string; usage: length(["a", "b"]). NOTES on element:

element(["a", "b", "c"], 1)
b

If the given index is greater than the length of the list, then the index is wrapped around by taking the index modulo the length of the list:

element(["a", "b", "c"], 3)
a

To get the last element from the list, use length to find the size of the list, minus 1 (as the list is zero-based), and then pick the last element:

element(["a", "b", "c"], length(["a", "b", "c"]) - 1)
c

Conclusion: There are so many more cool Terraform functions out there to make your code even better and more robust. Go check out the official documentation for more details. I hope you have enjoyed this post and have learned something new. Author: Like, share, and follow me on GitHub, Twitter, and LinkedIn. Marcel L — Microsoft DevOps MVP, Cloud Solutions & DevOps Architect, technical speaker focused on Microsoft technologies, IaC, and automation in Azure. Find me on GitHub. 2023-01-10 19:03:51
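The two tricks from the Terraform entry above — truncate-lowercase-and-suffix for safe names, and taking the last path segment of a resource ID — can be mimicked in any language. Here is a small Python sketch of the same logic, purely for illustration (the helper names are made up, and Python's uuid stands in for Terraform's uuid()):

```python
import uuid

def safe_storage_name(name: str) -> str:
    # Equivalent of join("", [lower(substr(name, 0, 18)), substr(uuid(), 0, 6)])
    return name[:18].lower() + uuid.uuid4().hex[:6]

def last_segment(resource_id: str) -> str:
    # Equivalent of element(split("/", id), length(split("/", id)) - 1)
    return resource_id.split("/")[-1]

# Always 24 characters: an 18-char lowercase prefix plus 6 random hex chars
print(safe_storage_name("MySuperCoolStorageAccountName"))

print(last_segment("/subscriptions/<subscription-id>/resourceGroups/MSDO-Lab-ADO"))  # MSDO-Lab-ADO
```

Note that Python's negative indexing makes the "length minus one" step implicit; in Terraform the index must be computed explicitly because element() has no negative-index shorthand.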
Apple AppleInsider - Frontpage News Deals: get a free $30 gift card with a Costco membership https://appleinsider.com/articles/22/12/10/deals-get-a-free-30-gift-card-with-a-costco-membership?utm_medium=rss Deals: get a free $30 gift card with a Costco membership. For a limited time only, get a free $30 digital Costco Shop Card with a 1-year Costco Gold Star membership. Costco membership deals are incredibly rare, but AppleInsider readers can take advantage of one of the best warehouse club deals around. Read more. 2023-01-10 19:21:27
海外TECH Engadget ‘TMNT: Shredder’s Revenge’ hits iOS and Android as a Netflix mobile exclusive https://www.engadget.com/tmnt-shredders-revenge-ios-android-netflix-mobile-exclusive-193022210.html?src=rss TMNT: Shredder's Revenge hits iOS and Android as a Netflix mobile exclusive. If you're looking for a game to play right now and you have a Netflix subscription, it's worth checking out Teenage Mutant Ninja Turtles: Shredder's Revenge, which just hit iOS and Android as a mobile exclusive for Netflix members. Shredder's Revenge brings classic TMNT side-scrolling beat 'em ups like Turtles in Time bang up to date. Not only does it have gorgeous pixel art, but you can hurl enemies at the screen like in the old days. As soon as the mobile version dropped, I downloaded it, and within seconds I had joined a party of five other people to dish out swift justice to Bebop, Rocksteady, and the Foot Clan. Along with the turtles, April O'Neil, Master Splinter, and Casey Jones are playable characters. The game ran without a hitch on my iPhone in the couple of levels I played. The touch controls work well enough, but I don't think I'd want to play the entire thing that way; an external controller is a better option if you have one handy. In any case, Shredder's Revenge was one of my favorite games of last year, and I love the idea of being able to play it anywhere without lugging my Steam Deck or Switch around. Netflix recently added Kentucky Route Zero and Twelve Minutes to its growing and impressive lineup of mobile games. In the coming months, Vikings: Valhalla and Valiant Hearts: Coming Home will be available on the service. 2023-01-10 19:30:22
海外TECH Engadget Hyundai managed to put its 'crab-walking' e-Corner technology into an Ioniq EV https://www.engadget.com/hyundai-integrated-its-crabwalking-e-corner-technology-into-an-ioniq-5-ev-190443090.html?src=rss Hyundai managed to put its 'crab-walking' e-Corner technology into an Ioniq 5 EV. Five years after debuting at CES, Hyundai's e-Corner technology is closer to reality. Following its most recent appearance, the system was on display at last week's show, and this time around, rather than building a dedicated prototype to showcase the tech, the automaker's Mobis arm instead integrated e-Corner into an Ioniq 5 EV. As you can see from the video Hyundai shared via Autoblog, the module, much like the Hummer EV's "CrabWalk" functionality, allows a car's wheels to turn in ways they can't in a vehicle with a traditional suspension system. Subsequently, that allows you to complete maneuvers you can't in other vehicles. Parallel parking, for instance, is as easy as turning the wheels 90 degrees and driving the car horizontally. Less practical but just as cool, e-Corner also enables cars to move diagonally and rotate on the spot; it's even possible to pull off a pivot turn. It will likely be another few years before e-Corner modules start showing up in production vehicles. Hyundai Mobis has said it plans to begin rolling out the technology in the coming years, and it wouldn't be surprising to see other automakers incorporate it into their cars, since the division produces parts for other companies, not just Hyundai. 2023-01-10 19:04:43
海外TECH CodeProject Latest Articles PsCal https://www.codeproject.com/Articles/5351702/PsCal calendars 2023-01-10 19:28:00
海外科学 NYT > Science Ken Balcomb, 82, Dies; Revealed the Hidden World of Killer Whales https://www.nytimes.com/2023/01/06/science/ken-balcomb-dead.html compassionate 2023-01-10 19:05:51
海外TECH WIRED Astronomers May Have Just Spotted the Universe’s First Galaxies https://www.wired.com/story/astronomers-may-have-just-spotted-the-universes-first-galaxies/ Astronomers May Have Just Spotted the Universe s First GalaxiesNASA s new JWST space telescope has revealed some cosmic surprises including galaxies that might have assembled earlier than previously thought 2023-01-10 19:24:50
Medical Healthcare/Nursing CBnews Regional disparity in hospital beds per capita reaches 2.6x; in Tohoku, Kamaishi in Iwate has the most. "The Data Speaks" (3) https://www.cbnews.jp/news/entry/20230110154858 cbnews 2023-01-11 05:00:00
ニュース BBC News - Home Family of British man Chris Parry missing in Ukraine 'very worried' https://www.bbc.co.uk/news/uk-64228671?at_medium=RSS&at_campaign=KARANGA andrew 2023-01-10 19:34:17
ニュース BBC News - Home Brazil riots: Former public security chief accused of 'sabotage' https://www.bbc.co.uk/news/world-latin-america-64228530?at_medium=RSS&at_campaign=KARANGA forces 2023-01-10 19:11:45
ニュース BBC News - Home David Duckham: Former England and British & Irish Lions back dies aged 76 https://www.bbc.co.uk/sport/rugby-union/64229459?at_medium=RSS&at_campaign=KARANGA david 2023-01-10 19:28:54
ニュース BBC News - Home Hillsborough: Safety body 'concerned' by overcrowding reports during FA Cup tie https://www.bbc.co.uk/sport/football/64228487?at_medium=RSS&at_campaign=KARANGA Hillsborough: Safety body 'concerned' by overcrowding reports during FA Cup tie. The government's advisor on safety at sports grounds is concerned by reports of overcrowding at Hillsborough during the FA Cup tie between Sheffield Wednesday and Newcastle. 2023-01-10 19:13:49
Business Diamond Online - New Articles The Biden administration hits a dead end with a "divided Congress" in 2023: who is the key person for the next presidential election? - Complete Forecast 2023 https://diamond.jp/articles/-/314562 midterm elections 2023-01-11 04:55:00
Business Diamond Online - New Articles Housing-market chaos: a tiny new 15-tsubo house in Akabane nears 90 million yen! [Real Estate Industry Insider Underground Roundtable (2)] https://diamond.jp/articles/-/315837 "it takes one to know one" 2023-01-11 04:50:00
Business Diamond Online - New Articles The rushed "postponement" of the defense tax hike raises the risk of creeping expansion of government bond issuance - Policy & Market Lab https://diamond.jp/articles/-/315769 construction bonds 2023-01-11 04:45:00
Business Diamond Online - New Articles COVID and Ukraine: the "weaknesses of MMT" exposed by two crises - Policy & Market Lab https://diamond.jp/articles/-/315280 resource prices 2023-01-11 04:40:00
Business Diamond Online - New Articles A University of Tokyo entrance exam question even elementary schoolers can solve [Science]: why is mountain weather so changeable? - News 3-Sided Mirror https://diamond.jp/articles/-/315051 2023-01-11 04:35:00
Business Diamond Online - New Articles US unemployment at its lowest level: is labor-market tightness easing as the Fed intends? - A Philosopher of Economic Analysis Cuts In! The Depths of Market Topics https://diamond.jp/articles/-/315836 uncertainty 2023-01-11 04:30:00
Business Diamond Online - New Articles The one thing that deserves more attention than "who will be the next BOJ governor" - Hajime Yamazaki's Multiscope https://diamond.jp/articles/-/315835 Bank of Japan 2023-01-11 04:25:00
Business Diamond Online - New Articles With South and North Korea entering a "new Cold War", fears of a Korean Peninsula contingency: a former ambassador to South Korea explains - Former Ambassador Masatoshi Muto's "Korea Watch" https://diamond.jp/articles/-/315845 Summary: the conventional wisdom among the South Korean public until now has been that "Japan ruled Korea, and so Japan must atone for it." 2023-01-11 04:23:00
Business Diamond Online - New Articles Why Daiso's "oshi-katsu" (fan-activity) goods are so popular: a buyer reveals the secrets of their development - News 3-Sided Mirror https://diamond.jp/articles/-/314685 development 2023-01-11 04:20:00
Business Diamond Online - New Articles Russia and China have exposed their limits: what the world needs now is "compact democracy" - Masato Kamikubo's Critical Analytics https://diamond.jp/articles/-/315834 2023-01-11 04:15:00
Business Diamond Online - New Articles There's a reason some people are "somehow strong in adversity": five ways of thinking seen through neuroscience - News 3-Sided Mirror https://diamond.jp/articles/-/315833 Summary: "I have a goal, but I always end up giving up, thinking maybe it's impossible after all..."; "Changes in my situation stress me out and leave me bewildered...". 2023-01-11 04:10:00
Business Diamond Online - New Articles Why the popularity of the combined junior-senior high school Toyo Eiwa Jogakuin has surged - Key Persons in Junior High Entrance Exams https://diamond.jp/articles/-/315151 junior high school entrance exams 2023-01-11 04:05:00
Business Toyo Keizai Online Oigawa Railway's commitment to keeping its steam locomotives in working order: preserving railway rolling stock demands experience and money | Travel & Hobbies | Toyo Keizai Online https://toyokeizai.net/articles/-/644250?utm_source=rss&utm_medium=http&utm_campaign=link_back operational preservation 2023-01-11 04:30:00
IT IT Extra Wouldn't there be strong demand for air-conditioner installation and cleaning businesses in equatorial countries across Southeast Asia, Africa, and the Middle East? https://figreen.org/it/%e3%82%a8%e3%82%a2%e3%82%b3%e3%83%b3%e3%81%ae%e8%a8%ad%e7%bd%ae%e3%80%81%e6%b4%97%e6%b5%84%e6%8e%83%e9%99%a4%e6%a5%ad%e8%80%85%e3%82%92%e6%9d%b1%e5%8d%97%e3%82%a2%e3%82%b8%e3%82%a2%e3%82%84%e3%82%a2/ Summary: the apparel industry, typified by the likes of Uniqlo, is these days widely said to be a leading destroyer of the environment, but it is also a field with a very large market. 2023-01-10 19:43:52
海外TECH reddit Lincoln Riley confirms that DC Alex Grinch will be retained https://www.reddit.com/r/CFB/comments/108hzip/lincoln_riley_confirms_that_dc_alex_grinch_will/ Lincoln Riley confirms that DC Alex Grinch will be retained — submitted by u/SoonerFan to r/CFB [link] [comments] 2023-01-10 19:08:20
GCP Cloud Blog Opinary generates recommendations faster on Cloud Run https://cloud.google.com/blog/topics/developers-practitioners/opinary-generates-recommendations-faster-cloud-run/ Opinary generates recommendations faster on Cloud Run. Editor's note: Berlin-based startup Opinary migrated their machine-learning pipeline from Google Kubernetes Engine (GKE) to Cloud Run. After making a few architectural changes, their pipeline is now faster and more cost-efficient: they reduced the time to generate a recommendation from multiple seconds to under a second and realized a remarkable cost reduction. In this post, Doreen Sacker and Héctor Otero Mediero share a detailed and transparent technical report of the migration. Opinary asks the right questions to increase reader engagement: We're Opinary, and our reader polls appear in news articles globally. The polls let users share their opinion with one click and see how they compare to other readers. We automatically add the most relevant reader polls using machine learning. We've found that the polls help publishers increase reader retention, boost subscriptions, and improve other article-success metrics. Advertisers benefit from access to their target groups contextually on premium publishers' sites, and from high-performing interaction with their audiences. Let's look at an example of one of our polls. Imagine reading an article on your favorite news site about whether or not to introduce a speed limit on the highway. As you might know, long stretches of German Autobahn still don't have a legal speed limit, and this is a topic of intense debate; critics of speeding point out the environmental impact and casualty toll. Opinary adds a poll to the article. Diving into the architecture of our recommendation system: Here's how we originally architected our system on GKE. Our pipeline starts with an article URL and delivers a recommended poll to add to the article. Let's take a more detailed look at the various components that make this happen. Here's a visual
overview. First, we'll push a message with the article URL to a Pub/Sub topic (a message queue). The recommender service pulls the message from the queue in order to process it. Before this service can recommend a poll, it needs to complete a few steps, which we've separated out into individual services; the recommender service sends a request to these services one by one and stores the results in a Redis store. These are the steps: the article scraper service scrapes (downloads and parses) the article text from the URL; the encoder service encodes the text into text embeddings (we use the Universal Sentence Encoder); and the brand safety service detects whether the article text includes descriptions of tragic events, such as death, murder, or accidents, because we don't want to add our polls to those articles. With these three steps completed, the recommendation service can recommend a poll from our database of pre-existing polls and submit it to an internal database we call Rec Store. This is how we end up recommending a poll about introducing a speed limit on the German Autobahn. Why we decided to move to Cloud Run: Cloud Run looked attractive to us for two reasons. First, because it automatically scales down all the way to zero container instances if there are no requests, we expected we would save costs, and we did. Second, we liked the idea of running our code on a fully managed platform without having to worry about the underlying infrastructure, especially since our team doesn't have a dedicated data engineer (we're both data scientists). As a fully managed platform, Cloud Run has been designed to make developers more productive. It's a serverless platform that lets you run your code in containers directly on top of Google's infrastructure. Deployments are fast and automated: fill in your container image URL, and seconds later your code is serving requests. Cloud Run automatically adds more container instances to handle all incoming requests or events and removes them when they're no longer
needed. That's cost-efficient, and on top of that, Cloud Run doesn't charge you for the resources a container uses if it's not serving requests. The pay-for-use cost model was the main motivation for us to migrate away from GKE: we only want to pay for the resources we use, and not for a large idle cluster during the night. Enabling the migration to Cloud Run with a few changes: To move our services from GKE to Cloud Run, we had to make two changes: change the Pub/Sub subscriptions from pull to push, and migrate our self-managed Redis database in the cluster to a fully managed Cloud Memorystore instance. This is how our initial target architecture looks in a diagram. Changing Pub/Sub subscriptions from pull to push: Since Cloud Run services scale with incoming web requests, your container must have an endpoint to handle requests. Our recommender service originally didn't have an endpoint to serve requests, because we used the Pub/Sub client library to pull messages. Google recommends using push subscriptions instead of pull subscriptions to trigger Cloud Run from Pub/Sub. With a push subscription, Pub/Sub delivers messages as requests to an HTTPS endpoint; note that this doesn't need to be Cloud Run, it can be any HTTPS URL. Pub/Sub guarantees delivery of a message by retrying requests that return an error or are too slow to respond, using a configurable deadline. Introducing a Cloud Memorystore Redis instance: Cloud Run adds and removes container instances to handle all incoming requests. Redis doesn't serve HTTP requests, and it likes to have one or a few stateful container instances attached to a persistent volume, rather than disposable containers that start on demand. We created a Memorystore Redis instance to replace the in-cluster Redis instance. Memorystore instances have an internal IP address on the project's VPC network, and containers on Cloud Run operate outside of the VPC. That means you have to add a connector to reach internal IP addresses on the VPC. Read the docs to learn more about
Serverless VPC Access. Making it faster using Cloud Trace: This first part of our migration went smoothly, but while we were hopeful that our system would perform better, we would still regularly spend several seconds generating a recommendation. We used Cloud Trace to figure out where requests were spending time. This is what we found: to handle a single request, our code made a large number of individual requests to Redis, and batching all these requests into one request was a big improvement. Also, the VPC connector has a default maximum limit on network throughput that was too low for our workload; once we changed it to use larger instances, response times improved. When we rolled out these changes, we realized a noticeable performance benefit. Waiting for responses is expensive: The changes described above led to scalable and fast recommendations; we reduced the average recommendation time from multiple seconds to under a second. However, the recommendation service was getting very expensive, because it spent a lot of time doing nothing but waiting for other services to return their responses. The recommender service would receive a request and wait for other services to respond; as a result, many container instances in the recommender service were running but were essentially doing nothing except waiting. The pay-per-use cost model of Cloud Run therefore led to high costs for this service: our costs went up severalfold compared with the original setup on Kubernetes. Rethinking the architecture: To reduce costs, we needed to rethink our architecture. The recommendation service was sending requests to all other services and waiting for their responses; this is called an orchestration pattern. To have the services work independently, we changed to a choreography pattern: the services still execute their tasks one after the other, but no single service waits for the others to complete. This is what we ended up doing. We changed the initial entrypoint to be the article scraping service
rather than the recommender service. Instead of returning the article text, the scraping service now stores the text in a Cloud Storage bucket. The next step in our pipeline is to run the encoder service, and we invoke it using an Eventarc trigger. Eventarc lets you asynchronously deliver events from Google services, including those from Cloud Storage. We've set an Eventarc trigger to fire an event as soon as the article scraper service adds the file to the Cloud Storage bucket; the trigger sends the object information to the encoder service using an HTTP request. The encoder service does its processing and saves the results in a Cloud Storage bucket again. One service after the other can now process and save the intermediate results in Cloud Storage for the next service to use. Now that we asynchronously invoke all services using Eventarc triggers, no single service is actively waiting for another service to return results, and compared with the original setup on GKE, our costs are now lower. Advice and conclusions: Our recommendations are now fast and scalable, and our costs are half as much as with the original cluster setup. Migrating from GKE to Cloud Run is easy for container-based applications. Cloud Trace was useful for identifying where requests were spending time. Sending a request from one Cloud Run service to another and synchronously waiting for the result turned out to be expensive for us; asynchronously invoking our services using Eventarc triggers was a better solution. Cloud Run is under active development, and new features are being added frequently, which makes for a nice developer experience overall. Related Article: How to use Google Cloud serverless tech to iterate quickly in a startup environment. Read Article. Related Article: Cloud Wisdom Weekly: ways to reduce costs with containers. Understand the core features you should expect of container services, including specific advice for GKE and Cloud Run. Read
Article. Related Article: How Einride scaled with serverless and re-architected the freight industry. Einride, a Swedish freight mobility company, is partnering with Google Cloud to reimagine the freight industry as we know it. Read Article. 2023-01-10 20:00:00
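The Pub/Sub push subscriptions described in the Opinary entry above deliver each message as an HTTP POST whose JSON body wraps the payload in a base64-encoded "data" field. A minimal Python sketch of decoding such a push body follows; the envelope field names match the documented push format, while the URL payload and handler name are made-up examples:

```python
import base64
import json

def decode_push(body: bytes) -> str:
    """Extract the article URL from a Pub/Sub push request body (illustrative helper)."""
    envelope = json.loads(body)
    data = envelope["message"]["data"]  # base64-encoded message payload
    return base64.b64decode(data).decode("utf-8")

# Build an example push body shaped like what Pub/Sub would POST to the endpoint
payload = base64.b64encode(b"https://example.com/article").decode("utf-8")
body = json.dumps({
    "message": {"data": payload, "messageId": "1"},
    "subscription": "projects/my-project/subscriptions/my-sub",
}).encode("utf-8")

print(decode_push(body))  # https://example.com/article
```

In a real push endpoint, the service would respond with a 2xx status after processing; a non-2xx or slow response causes Pub/Sub to retry delivery, which is the retry behavior the article relies on.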
