Posted 2021-08-20 01:34:30 — RSS feed digest up to 2021-08-20 01:00 (41 items)

Category | Site | Article title / trend word | URL | Frequent words & summary / search volume | Date registered
IT 気になる、記になる… Mold for the new "iPad mini"?? — an "iPad Pro"-style design https://taisy0.com/2021/08/20/144310.html ipadmini 2021-08-19 15:17:44
python New posts tagged "Python" - Qiita Thoughts on hash-collision attacks in Python https://qiita.com/recuraki/items/de618f00786b8b6ace2f Summary: what about Python? Python's dict is also implemented with hashing, so collisions can occur and each operation can then degrade to O(N). However, causing this requires deliberately colliding the hash values, which is not realistic: crafting colliding numeric inputs is impractical, and forcing collisions for strings is impractical as well. So within competitive programming, attacking Python's dict with hack cases is difficult; at best you can worsen the constant factor. 2021-08-20 00:49:36
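The degradation the summary describes can be reproduced artificially: if every key hashes to the same value, the dict falls back to probing and each insert or lookup scans all colliding entries, giving O(N) per operation. A minimal sketch, where the `BadHash` class is illustrative (not from the article) and stands in for the collisions an attacker would need to craft:

```python
# Sketch: forcing hash collisions to show dict degradation toward O(N) per op.
# BadHash is a hypothetical key type with a constant hash, so every key
# collides; with real int/str inputs such collisions are impractical to craft.
import timeit

class BadHash:
    def __init__(self, value):
        self.value = value
    def __hash__(self):
        return 42            # every instance collides into the same bucket
    def __eq__(self, other):
        return isinstance(other, BadHash) and self.value == other.value

def build(n, key_type):
    d = {}
    for i in range(n):
        d[key_type(i)] = i   # each insert probes the colliding entries
    return d

bad = timeit.timeit(lambda: build(500, BadHash), number=3)
good = timeit.timeit(lambda: build(500, int), number=3)
print(f"colliding keys: {bad:.4f}s, normal int keys: {good:.4f}s")
```

With only 500 keys the colliding build is already orders of magnitude slower, which is the quadratic blow-up the post refers to.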
python New posts tagged "Python" - Qiita Ensemble learning: Boosting https://qiita.com/fastso/items/b5c036c6eedc543d88af A weak learner is a model that performs only slightly better than random guessing. 2021-08-20 00:01:58
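The point about weak learners can be made concrete with a toy simulation: if each learner is independently right only slightly more than half the time, a majority vote over many of them is right far more often. This is the intuition boosting builds on (real boosting such as AdaBoost additionally reweights training examples, which this sketch skips):

```python
# Toy simulation: many independent weak learners (60% accurate) combined by
# majority vote. Not real boosting -- just the "weak learners add up" idea.
import random

random.seed(0)
P_CORRECT = 0.6       # each weak learner beats random guessing only slightly
N_LEARNERS = 101      # odd count so a vote never ties
N_TRIALS = 2000

def majority_vote_correct():
    votes = sum(1 for _ in range(N_LEARNERS) if random.random() < P_CORRECT)
    return votes > N_LEARNERS // 2   # ensemble is right if most learners are

ensemble_acc = sum(majority_vote_correct() for _ in range(N_TRIALS)) / N_TRIALS
print(f"single weak learner: {P_CORRECT:.2f}, "
      f"majority of {N_LEARNERS}: {ensemble_acc:.2f}")
```

The independence assumption is the catch: boosting works precisely because it forces successive learners to focus on different mistakes.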
js New posts tagged "JavaScript" - Qiita Trying out SortableJS https://qiita.com/piyo8810/items/440389c199ba9562aff5 I was wondering whether I could build by hand the kind of HTML that lets you assemble a deliverable by drag and drop, the way formrun does. While researching, SortableJS looked like it would do the job, so I gave it a try. 2021-08-20 00:10:20
Program New questions (all tags) | teratail Cannot capture a partial screenshot https://teratail.com/questions/355184?rss=all Cannot capture a partial screenshot. What I want to achieve: take an element screenshot that includes the image. Problem: the image portion is wrapped in <div id="nameplate">. 2021-08-20 00:53:08
Program New questions (all tags) | teratail Cannot change DocumentRoot https://teratail.com/questions/355183?rss=all Cannot change DocumentRoot. 2021-08-20 00:50:45
Program New questions (all tags) | teratail Want the distance at which 0 occurs 5 consecutive times from a given pixel of a binarized image https://teratail.com/questions/355182?rss=all What I want to achieve: starting from pixel (X, Y) of a binarized image, extract luminance values and find how many iterations were processed by the time 0 has occurred 5 times in a row. A programming beginner asking for advice. 2021-08-20 00:25:25
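The core of the binarized-image question above is a run-length check: walk the pixel values from the starting point and report how many steps it took for 0 to appear 5 times in a row. A minimal sketch on a plain list of luminance values (the function name and the 1-D simplification are mine, not the asker's):

```python
# Sketch: count how many pixels are processed until `target` (0) has occurred
# `run_length` (5) consecutive times; returns None if no such run exists.
def steps_until_run(values, target=0, run_length=5):
    consecutive = 0
    for steps, v in enumerate(values, start=1):
        consecutive = consecutive + 1 if v == target else 0
        if consecutive == run_length:
            return steps          # pixels processed so far, run included
    return None

pixels = [255, 0, 0, 255, 0, 0, 0, 0, 0, 255]
print(steps_until_run(pixels))    # run of five zeros completes at step 9
```

For a real 2-D image the same function applies to whatever row, column, or scan path the asker extracts from the image array.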
Program New questions (all tags) | teratail Want to get YouTube's progress-bar information and the video duration https://teratail.com/questions/355181?rss=all Below each matching video, display (1) the progress-bar position in minutes and (2) the length of the video. 2021-08-20 00:23:33
Program New questions (all tags) | teratail Points to watch when building a contact form in PHP https://teratail.com/questions/355180?rss=all What I want to achieve: build a contact form using PHP. 2021-08-20 00:13:39
Program New questions (all tags) | teratail Vue component not rendering in Laravel https://teratail.com/questions/355179?rss=all Vue component not rendering in Laravel. I am studying Laravel and Vue. 2021-08-20 00:13:27
Program New questions (all tags) | teratail Anyone familiar with the pseudo-language of the 基本情報 (Fundamental Information Technology Engineer) exam!? https://teratail.com/questions/355178?rss=all 基本情報 2021-08-20 00:08:44
AWS New posts tagged "AWS" - Qiita Amazon Athena query memo https://qiita.com/okadarhi/items/c62f9a96ea2aa698d652 Checking the Count value when "Override rule group action to count" is enabled for a managed rule; the managed-rule settings are as follows. 2021-08-20 00:55:45
golang New posts tagged "Go" - Qiita Points where I first stumbled when using Go packages https://qiita.com/Daken91/items/38dca868295aea5ce657 2021-08-20 00:11:07
Azure New posts tagged "Azure" - Qiita Trying out the features of Azure Static Web Apps https://qiita.com/ussvgr/items/b5c827f9dc7ec145895f Once the resource has been created, open its page and click "環境" (Environments) in the left menu. 2021-08-20 00:36:49
Git New posts tagged "Git" - Qiita Updating the Git that ships with a Mac using Homebrew https://qiita.com/saku2021/items/b167f8f449a85e57061b Running git --version reports an Apple Git build, so Git comes preinstalled on the Mac. 2021-08-20 00:24:44
海外TECH DEV Community Array methods in JavaScript: when to use which 🤔? https://dev.to/sidmirza4/array-methods-in-javascript-when-to-use-which-2ehp
Hey guys! There are a lot of array methods in JavaScript, and we often get confused about which one to use when. In this article I will summarise these methods and do my best to clarify which method we should use according to our needs. As I said, we will study the array methods according to our needs, so just think about what you want. (Code samples below are restored from the garbled feed; array values are representative.)

I want to mutate the original array
a) Add to the original array:
i. push — adds an element to the end of the original array and returns the new length of the array.
    let numbers = [1, 2];
    numbers.push(3);
    console.log(numbers); // [1, 2, 3]
ii. unshift — like push, except it adds the element at the start of the original array.
b) Remove from the original array:
i. pop — removes the last element of the array and returns the removed element.
    const names = ["Sid", "Marty", "John"];
    const removedName = names.pop();
    console.log(names);       // ["Sid", "Marty"]
    console.log(removedName); // "John"
ii. shift — just like pop, except it removes the element from the start.
iii. splice — this one is a bit tricky: it can remove and/or add element(s) to the original array.
    const fruits = ["Banana", "Orange", "Apple", "Mango"];
    // at position 2, remove 2 elements and add 2 elements
    fruits.splice(2, 2, "Lemon", "Kiwi");
    console.log(fruits); // ["Banana", "Orange", "Lemon", "Kiwi"]
Other mutating array methods which I do not use so frequently are: i. reverse, ii. sort, iii. fill.

I want a new array
i. map — as a React developer, map is the most-used array method for me. It loops over the array, performs a certain action on each element, and returns a new array of the same length.
    const numbers = [2, 3, 4];
    const numberSqr = numbers.map(num => num * num);
    console.log(numberSqr); // [4, 9, 16]
map receives a callback function which accepts the following arguments: (i) the current element being processed in the array, (ii) the index of the current element, (iii) the array on which map was called. The value returned from the callback becomes the corresponding element in the new array.
ii. filter — creates a new array with all the elements that pass the condition given in the callback function.
    const words = ["spray", "limit", "elite", "exuberant", "destruction", "present"];
    const result = words.filter(word => word.length > 6);
    console.log(result); // ["exuberant", "destruction", "present"]
iii. slice — returns a copy of a portion of the array.
    const animals = ["ant", "bison", "camel", "duck", "elephant"];
    console.log(animals.slice(2));    // ["camel", "duck", "elephant"]
    console.log(animals.slice(2, 4)); // ["camel", "duck"]
iv. concat — merges two or more arrays. It does not change the existing arrays; instead it returns a new array.
    const letters = ["a", "b", "c"];
    const numbers = [1, 2, 3];
    letters.concat(numbers); // ["a", "b", "c", 1, 2, 3]

I want an array index
i. indexOf — returns the first index at which a given element can be found in the array, or -1 if it is not present.
    const fruits = ["Banana", "Apple", "Kiwi"];
    console.log(fruits.indexOf("Apple"));  // 1
    console.log(fruits.indexOf("Orange")); // -1
ii. findIndex — returns the index of the first element that passes a given condition, or -1 indicating that no element passed the condition.

I want an array element
find — returns the first element which satisfies a provided condition, undefined otherwise.

I want to know if the array includes…
i. includes — returns true if the array contains the element, false otherwise.
    const friends = ["Jon", "Joe", "Jack", "Jill"];
    console.log(friends.includes("Jon")); // true
    console.log(friends.includes("Sid")); // false
ii. some — the name of this method sometimes confuses me. It returns true if at least one element passes the given condition.
    // checks whether an element is even
    const even = element => element % 2 === 0;
    console.log([1, 2, 3].some(even)); // expected output: true
iii. every — returns true if all the elements in the array pass the given condition, false otherwise.

I want a new string
join — joins all the elements of the array with a given string separator and returns the string.
    let words = ["JS", "is", "amazing"];
    console.log(words.join(" ")); // "JS is amazing" (joining by space)
    console.log(words.join("-")); // "JS-is-amazing" (joining by dash)

I want to just loop over the array
forEach — executes a provided function once for each array element.
    const array = ["a", "b", "c"];
    array.forEach(element => console.log(element)); // a, b, c

I want to transform the array to a single value
reduce — reduces the array to a single value. This value can be of any type: number, string, boolean, array or object. The reducer function takes four arguments: (a) accumulator, (b) current value, (c) current index, (d) source array. The reducer's returned value is assigned to the accumulator, whose value is remembered across each iteration throughout the array and ultimately becomes the final resulting value.
    // sum of the elements of the array using reduce
    let numbers = [1, 2, 3, 4];
    const sum = numbers.reduce((acc, el, i, arr) => acc + el, 0);
    console.log(sum); // 10

Phew, this was a lot to take in! I hope you guys found this article helpful; if you did, please leave a like. If you need an explanation of any particular method, please let me know in the comment section or message me on Twitter. Thanks for reading. Happy coding! 2021-08-19 15:55:12
海外TECH DEV Community AWS Certified DevOps Engineer DOP-C01 Exam Questions Part 5 https://dev.to/iam_awslagi/aws-certified-devops-engineer-dop-c01-exam-questions-part-5-c40 AWS Certified DevOps Engineer DOP C Exam Questions Part Source For AWS For GCP You are hired as the new head of operations for a SaaS company Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible She complains that she has no idea what is going on in the complex service oriented architecture because the developers just log to disk and it s very hard to find errors in logs on so many services How can you best meet this requirement and satisfy your CTO A Copy all log files into AWS S using a cron job on each instance Use an S Notification Configuration on the PutBucket event and publish events to AWS Lambda Use the Lambda to analyze logs as soon as they come in and flag issues B Begin using CloudWatch Logs on every service Stream all Log Groups into S objects Use AWS EMR cluster jobs to perform ad hoc MapReduce analysis and write new queries when needed C Copy all log files into AWS S using a cron job on each instance Use an S Notification Configuration on the PutBucket event and publish events to AWS Kinesis Use Apache Spark on AWS EMR to perform at scale stream processing queries on the log chunks and flag issues D Begin using CloudWatch Logs on every service Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana and perform log analysis on a search cluster Answer DWhen thinking of AWS Elastic Beanstalk s model which is true A Applications have many deployments deployments have many environments B Environments have many applications applications have many deployments C Applications have many environments environments have many deployments D Deployments have many environments environments have many applications Answer CYou work for a company that automatically tags photographs using artificial neural networks ANNs which 
run on GPUs using C You receive millions of images at a time but only times per day on average These images are loaded into an AWS S bucket you control for you in a batch and then the customer publishes a JSON formatted manifest into another S bucket you control as well Each image takes milliseconds to process using a full GPU Your neural network software requires minutes to bootstrap Image tags are JSON objects and you must publish them to an S bucket Which of these is the best system architecture for this system A Create an OpsWorks Stack with two Layers The first contains lifecycle scripts for launching and bootstrapping an HTTP API on G instances for ANN image processing and the second has an always on instance which monitors the S manifest bucket for new files When a new file is detected request instances to boot on the ANN layer When the instances are booted and the HTTP APIs are up submit processing requests to individual instances B Make an S notification configuration which publishes to AWS Lambda on the manifest bucket Make the Lambda create a CloudFormation Stack which contains the logic to construct an autoscaling worker tier of EC G instances with the ANN code on each instance Create an SQS queue of the images in the manifest Tear the stack down when the queue is empty C Deploy your ANN code to AWS Lambda as a bundled binary for the C extension Make an S notification configuration on the manifest which publishes to another AWS Lambda running controller code This controller code publishes all the images in the manifest to AWS Kinesis Your ANN code Lambda Function uses the Kinesis as an Event Source The system automatically scales when the stream contains image events D Create an Auto Scaling Load Balanced Elastic Beanstalk worker tier Application and Environment Deploy the ANN code to G instances in this tier Set the desired capacity to Make the code periodically check S for new manifests When a new manifest is detected push all of the images in the 
manifest into the SQS queue associated with the Elastic Beanstalk worker tier Answer BYou are designing a system which needs at minimum m large instances operating to service traffic When designing a system for high availability in the us east region which has Availability Zones your company needs to be able to handle death of a full availability zone How should you distribute the servers to save as much cost as possible assuming all of the EC nodes are properly linked to an ELB Your VPC account can utilize us east s AZ s through f inclusive A servers in each of AZ s a through d inclusive B servers in each of AZ s a and b C servers in each of AZ s a through e inclusive D servers in each of AZ s a through c inclusive Answer CYou need to create a Route record automatically in CloudFormation when not running in production during all launches of a Template How should you implement this A Use a Parameter for environment and add a Condition on the Route Resource in the template to create the record only when the environment is not production B Create two templates one with the Route record value and one with a null value for the record Use the one without it when deploying to production C Use a Parameter for environment and add a Condition on the Route Resource in the template to create the record with a null string when environment is production D Create two templates one with the Route record and one without it Use the one without it when deploying to production Answer AWhat is web identity federation A Use of an identity provider like Google or Facebook to become an AWS IAM User B Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials C Use of AWS IAM User tokens to log in as a Google or Facebook user D Use of AWS STS Tokens to log in as a Google or Facebook user Answer BYou have been asked to de risk deployments at your company Specifically the CEO is concerned about outages that occur because of accidental 
inconsistencies between Staging and Production which sometimes cause unexpected behaviors in Production even when Staging tests pass You already use Docker to get high consistency between Staging and Production for the application environment on your EC instances How do you further de risk the rest of the execution environment since in AWS there are many service components you may use beyond EC virtual machines A Develop models of your entire cloud system in CloudFormation Use this model in Staging and Production to achieve greater parity B Use AWS Config to force the Staging and Production stacks to have configuration parity Any differences will be detected for you so you are aware of risks C Use AMIs to ensure the whole machine including the kernel of the virtual machines is consistent since Docker uses Linux Container LXC technology and we need to make sure the container environment is consistent D Use AWS ECS and Docker clustering This will make sure that the AMIs and machine sizes are the same across both environments Answer AYou are creating a new API for video game scores Reads are times more common than writes and the top of scores are read times more frequently than the rest of the scores What s the best design for this system using DynamoDB A DynamoDB table with x higher read than write throughput with CloudFront caching B DynamoDB table with roughly equal read and write throughput with CloudFront caching C DynamoDB table with x higher read than write throughput with ElastiCache caching D DynamoDB table with roughly equal read and write throughput with ElastiCache caching Answer DYou were just hired as a DevOps Engineer for a startup Your startup uses AWS for of their infrastructure They currently have no automation at all for deployment and they have had many failures while trying to deploy to production The company has told you deployment process risk mitigation is the most important thing now and you have a lot of budget for tools and AWS resources 
Their stack A Model the stack in AWS Elastic Beanstalk as a single Application with multiple Environments Use Elastic Beanstalk s Rolling Deploy option to progressively roll out application code changes when promoting across environments B Model the stack in CloudFormation templates Data layer compute layer and networking layer Write stack deployment and integration testing automation following Blue Green methodologies C Model the stack in AWS OpsWorks as a single Stack with compute layer and its associated ELB Use Chef and App Deployments to automate Rolling Deployment D Model the stack in CloudFormation template to ensure consistency and dependency graph resolution Write deployment and integration testing automation following Rolling Deployment methodologies Answer BWhat is the scope of an EBS snapshot A Availability ZoneB Placement GroupC RegionD VPCAnswer CYour system uses a multi master multi region DynamoDB configuration spanning two regions to achieve high availability For the first time since launching your system one of the AWS Regions in which you operate over went down for hours and the failover worked correctly However after recovery your users are experiencing strange bugs in which users on different sides of the globe see different data What is a likely design issue that was not accounted for when launching A The system does not have Lambda Functor Repair Automatons to perform table scans and check for corrupted partition blocks inside the Table in the recovered Region B The system did not implement DynamoDB Table Defragmentation for restoring partition performance in the Region that experienced an outage so data is served stale C The system did not include repair logic and request replay buffering logic for post failure to resynchronize data to the Region that was unavailable for a number of hours D The system did not use DynamoDB Consistent Read requests so the requests in different areas are not utilizing consensus across Regions at runtime Answer 
CYou run operations for a company that processes digital wallet payments at a very high volume One second of downtime during which you drop payments or are otherwise unavailable loses you on average USD You balance the financials of the transaction system once per day Which database setup is best suited to address this business risk A A multi AZ RDS deployment with synchronous replication to multiple standbys and read replicas for fast failover and ACID properties B A multi region multi master active active RDS configuration using database level ACID design principles with database trigger writes for replication C A multi region multi master active active DynamoDB configuration using application control level BASE design principles with change stream write queue buffers for replication D A multi AZ DynamoDB setup with changes streamed to S via AWS Kinesis for highly durable storage and BASE properties Answer CWhen thinking of DynamoDB what are true of Local Secondary Key properties A Either the partition key or the sort key can be different from the table but not both B Only the sort key can be different from the table C The partition key and sort key can be different from the table D Only the partition key can be different from the table Answer BWhich deployment method when using AWS Auto Scaling Groups and Auto Scaling Launch Configurations enables the shortest time to live for individual servers A Pre baking AMIs with all code and configuration on deploys B Using a Dockerfile bootstrap on instance launch C Using UserData bootstrapping scripts D Using AWS EC Run Commands to dynamically SSH into fleets Answer AWhich of these techniques enables the fastest possible rollback times in the event of a failed deployment A Rolling ImmutableB Rolling MutableC Canary or A BD Blue GreenAnswer DWhich of the following are not valid sources for OpsWorks custom cookbook repositories A HTTP S B GitC AWS EBSD SubversionAnswer CYou are building a deployment system on AWS You will 
deploy new code by bootstrapping instances in a private subnet in a VPC at runtime using UserData scripts pointing to an S zip file object where your code is stored An ELB in a public subnet has network interfaces and connectivity to the instances Requests from users of the system are routed to the ELB via a Route A Record Alias You do not use any VPC endpoints Which is a risk of using this approach A Route Alias records do not always update dynamically with ELB network changes after deploys B If the NAT routing for the private subnet fails deployments fail C Kernel changes to the base AMI may render the code inoperable D The instances cannot be in a private subnet if the ELB is in a public one Answer BWhich major database needs a BYO license A PostgreSQLB MariaDBC MySQLD OracleAnswer DWhat is the maximum supported single volume throughput on EBS A MiB sB MiB sC MiB sD MiB sAnswer AWhen a user is detaching an EBS volume from a running instance and attaching it to a new instance which of the below mentioned options should be followed to avoid file system damage A Unmount the volume firstB Stop all the I O of the volume before processingC Take a snapshot of the volume before detachingD Force Detach the volume to ensure that all the data stays intactAnswer AA user is creating a new EBS volume from an existing snapshot The snapshot size shows GB Can the user create a volume of GB from that snapshot A Provided the original volume has set the change size attribute to trueB YesC Provided the snapshot has the modify size attribute set as trueD NoAnswer BHow long are the messages kept on an SQS queue by default A If a message is not read it is never deletedB weeksC dayD daysAnswer DA user has attached an EBS volume to a running Linux instance as a “ dev sdf device The user is unable to see the attached device when he runs the command “df h What is the possible reason for this A The volume is not in the same AZ of the instanceB The volume is not formattedC The volume is not 
attached as a root deviceD The volume is not mountedAnswer DWhen using Amazon SQS how much data can you store in a message A KBB KBC KBD KBAnswer AWhat is the maximum time messages can be stored in SQS A daysB one monthC daysD daysAnswer AA user has created a new EBS volume from an existing snapshot The user mounts the volume on the instance to which it is attached Which of the below mentioned options is a required step before the user can mount the volume A Run a cyclic check on the device for data consistencyB Create the file system of the volumeC Resize the volume as per the original snapshot sizeD No step is required The user can directly mount the deviceAnswer DYou need your CI to build AMIs with code pre installed on the images on every new code push You need to do this as cheaply as possible How do you do this A Bid on spot instances just above the asking price as soon as new commits come in perform all instance configuration and setup then create an AMI based on the spot instance B Have the CI launch a new on demand EC instance when new commits come in perform all instance configuration and setup then create an AMI based on the on demand instance C Purchase a Light Utilization Reserved Instance to save money on the continuous integration machine Use these credits whenever you create AMIs on instances D When the CI instance receives commits attach a new EBS volume to the CI machine Perform all setup on this EBS volume so you do not need a new EC instance to create the AMI Answer AWhen thinking of DynamoDB what are true Global Secondary Key properties A The partition key and sort key can be different from the table B Only the partition key can be different from the table C Either the partition key or the sort key can be different from the table but not both D Only the sort key can be different from the table Answer AYou need to process long running jobs once and only once How might you do this A Use an SNS queue and set the visibility timeout to long enough 
for jobs to process B Use an SQS queue and set the reprocessing timeout to long enough for jobs to process C Use an SQS queue and set the visibility timeout to long enough for jobs to process D Use an SNS queue and set the reprocessing timeout to long enough for jobs to process Answer CYou are getting a lot of empty receive requests when using Amazon SQS This is making a lot of unnecessary network load on your instances What can you do to reduce this load A Subscribe your queue to an SNS topic instead B Use as long of a poll as possible instead of short polls C Alter your visibility timeout to be shorter D Use sqsd on your EC instances Answer BYou need to know when you spend or more on AWS What s the easy way for you to see that notification A AWS CloudWatch Events tied to API calls when certain thresholds are exceeded publish to SNS B Scrape the billing page periodically and pump into Kinesis C AWS CloudWatch Metrics Billing Alarm Lambda event subscription When a threshold is exceeded email the manager D Scrape the billing page periodically and publish to SNS Answer CYou need to grant a vendor access to your AWS account They need to be able to read protected messages in a private S bucket at their leisure They also use AWS What is the best way to accomplish this A Create an IAM User with API Access Keys Grant the User permissions to access the bucket Give the vendor the AWS Access Key ID and AWS Secret Access Key for the User B Create an EC Instance Profile on your account Grant the associated IAM role full access to the bucket Start an EC instance with this Profile and give SSH access to the instance to the vendor C Create a cross account IAM Role with permission to access the bucket and grant permission to use the Role to the vendor AWS account D Generate a signed S PUT URL and a signed S PUT URL both with wildcard values and year durations Pass the URLs to the vendor Answer CYour serverless architecture using AWS API Gateway AWS Lambda and AWS DynamoDB 
experienced a large increase in traffic to a sustained requests per second and dramatically increased in failure rates Your requests during normal operation last milliseconds on average Your DynamoDB table did not exceed of provisioned throughput and Table primary keys are designed correctly What is the most likely issue A Your API Gateway deployment is throttling your requests B Your AWS API Gateway Deployment is bottlenecking on request de serialization C You did not request a limit increase on concurrent Lambda function executions D You used Consistent Read requests on DynamoDB and are experiencing semaphore lock Answer CWhy are more frequent snapshots of EBS Volumes faster A Blocks in EBS Volumes are allocated lazily since while logically separated from other EBS Volumes Volumes often share the same physical hardware Snapshotting the first time forces full block range allocation so the second snapshot doesn t need to perform the allocation phase and is faster B The snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot C AWS provides more disk throughput for burst capacity during snapshots if the drive has been pre warmed by snapshotting and reading all blocks D The drive is pre warmed so block access is more rapid for volumes when every block on the device has already been read at least one time Answer BFor AWS CloudFormation which stack state refuses UpdateStack calls A UPDATE ROLLBACK FAILEDB UPDATE ROLLBACK COMPLETEC UPDATE COMPLETED CREATE COMPLETEAnswer AYou need to migrate million records in one hour into DynamoDB All records are KB in size The data is evenly distributed across the partition key How many write capacity units should you provision during this batch load A B C D Answer CYour CTO thinks your AWS account was hacked What is the only way to know for certain if there was unauthorized access and what they did assuming your hackers are very sophisticated AWS 
engineers and doing everything they can to cover their tracks A Use CloudTrail Log File Integrity Validation B Use AWS Config SNS Subscriptions and process events in real time C Use CloudTrail backed up to AWS S and Glacier D Use AWS Config Timeline forensics Answer AWhich of these is not a Pseudo Parameter in AWS CloudFormation A AWS StackNameB AWS AccountIdC AWS StackArnD AWS NotificationARNsAnswer CWhat is the scope of an EBS volume A VPCB RegionC Placement GroupD Availability ZoneAnswer DYou are experiencing performance issues writing to a DynamoDB table Your system tracks high scores for video games on a marketplace Your most popular game experiences all of the performance issues What is the most likely problem A DynamoDB s vector clock is out of sync because of the rapid growth in request for the most popular game B You selected the Game ID or equivalent identifier as the primary partition key for the table C Users of the most popular video game each perform more read and write requests than average D You did not provision enough read or write throughput to the table Answer BYou meet once per month with your operations team to review the past month s data During the meeting you realize that weeks ago your monitoring system which pings over HTTP from outside AWS recorded a large spike in latency on your tier web service API You use DynamoDB for the database layer ELB EBS and EC for the business logic tier and SQS ELB and EC for the presentation layer Which of the following techniques will NOT help you figure out what happened A Check your CloudTrail log history around the spike s time for any API calls that caused slowness B Review CloudWatch Metrics graphs to determine which component s slowed the system down C Review your ELB access logs in S to see if any ELBs in your system saw the latency D Analyze your logs to detect bursts in traffic at that time Answer BWhich of these is not an intrinsic function in AWS CloudFormation A Fn SplitB Fn FindInMapC Fn 
SelectD Fn GetAZsAnswer AFor AWS CloudFormation which is true A Custom resources using SNS have a default timeout of minutes B Custom resources using SNS do not need a ServiceToken property C Custom resources using Lambda and Code ZipFile allow inline node js resource composition D Custom resources using Lambda do not need a ServiceTokenpropertyAnswer CYour API requires the ability to stay online during AWS regional failures Your API does not store any state it only aggregates data from other sources you do not have a database What is a simple but effective way to achieve this uptime goal A Use a CloudFront distribution to serve up your API Even if the region your API is in goes down the edge locations CloudFront uses will be fine B Use an ELB and a cross zone ELB deployment to create redundancy across data centers Even if a region fails the other AZ will stay online C Create a Route Weighted Round Robin record and if one region goes down have that region redirect to the other region D Create a Route Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions Make sure both regions use Auto Scaling Groups behind ELBs Answer DYou are designing an enterprise data storage system Your data management software system requires mountable disks and a real filesystem so you cannot use S for storage You need persistence so you will be using AWS EBS Volumes for your system The system needs as lowcost storage as possible and access is not frequent or high throughput and is mostly sequential reads Which is the most appropriate EBS Volume Type for this scenario A gpB ioC standardD gpAnswer CYou need to deploy an AWS stack in a repeatable manner across multiple environments You have selected CloudFormation as the right tool to accomplish this but have found that there is a resource type you need to create and model but is unsupported by CloudFormation How should you overcome this challenge A Use a 
CloudFormation Custom Resource Template by selecting an API call to proxy for create, update, and delete actions. CloudFormation will use the AWS SDK, CLI, or API method of your choosing as the state transition function for the resource type you are modeling.
B. Submit a ticket to the AWS Forums. AWS extends CloudFormation Resource Types by releasing tooling to the AWS Labs organization on GitHub. Their response time is usually a day, and they complete requests within a week or two.
C. Instead of depending on CloudFormation, use Chef, Puppet, or Ansible to author Heat templates, which are declarative stack resource definitions that operate over the OpenStack hypervisor and cloud environment.
D. Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic or by implementing the logic in AWS Lambda.
Answer: D

You run a large engineering organization. You are about to begin using AWS at a large scale for the first time. You want to integrate with your existing identity management system running on Microsoft Active Directory, because your organization is a power user of Active Directory. How should you manage your AWS identities in the most simple manner?
A. Use a large AWS Directory Service Simple AD.
B. Use a large AWS Directory Service AD Connector.
C. Use a Sync Domain running on AWS Directory Service.
D. Use an AWS Directory Sync Domain running on AWS Lambda.
Answer: B

When thinking of AWS OpsWorks, which of the following is not an instance type you can allocate in a stack layer?
A. 24/7 instances
B. Spot instances
C. Time-based instances
D. Load-based instances
Answer: B

Which of these is not a CloudFormation Helper Script?
A. cfn-signal
B. cfn-hup
C. cfn-request
D. cfn-get-metadata
Answer: C

Your team wants to begin practicing continuous delivery, using CloudFormation to enable automated builds and deploys of whole, versioned stacks or stack layers. You have a multi-tier, mission-critical system. Which of the following is NOT a best practice for using CloudFormation in a continuous delivery environment?
A. Use the AWS CloudFormation ValidateTemplate call before publishing changes to AWS.
B. Model your stack in one template, so you can leverage CloudFormation's state management and dependency resolution to propagate all changes.
C. Use CloudFormation to create brand-new infrastructure for all stateless resources on each push, and run integration tests on that set of infrastructure.
D. Parameterize the template and use Mappings to ensure your template works in multiple Regions.
Answer: B

You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport mechanism for API call events?
A. AWS SQS
B. AWS Lambda
C. AWS Kinesis
D. AWS SNS
Answer: C

You are building a Ruby on Rails application for internal, non-production use which uses MySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command-line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?
A. AWS CloudFormation
B. AWS OpsWorks
C. AWS ELB + EC2 with CLI Push
D. AWS Elastic Beanstalk
Answer: D 2021-08-19 15:32:04
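The CloudFormation custom resource questions above (ServiceToken, SNS- or Lambda-backed create/update/delete) all revolve around the same protocol: CloudFormation sends the resource provider an event and waits for a response to be PUT back to a pre-signed URL. A minimal Python sketch of that contract follows; the event and response field names are from the CloudFormation custom resource protocol, but the handler itself is an illustrative skeleton, not a complete provider (the resource-management logic is omitted, and the physical resource ID shown is a placeholder).

```python
import json
import urllib.request


def build_response(event, status, physical_id, data=None):
    """Build the JSON body a custom resource must PUT back to the
    pre-signed ResponseURL that CloudFormation includes in the event."""
    return {
        "Status": status,                 # "SUCCESS" or "FAILED"
        "PhysicalResourceId": physical_id,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},               # values readable via Fn::GetAtt
    }


def handler(event, context):
    """Skeleton Lambda handler: dispatch on RequestType, then report back."""
    request_type = event["RequestType"]   # "Create", "Update", or "Delete"
    # ... create, update, or delete the unsupported resource here ...
    body = build_response(event, "SUCCESS", "my-custom-resource-id")
    req = urllib.request.Request(
        event["ResponseURL"],
        data=json.dumps(body).encode(),
        method="PUT",
        headers={"Content-Type": ""},
    )
    # Until this PUT succeeds, CloudFormation keeps the stack operation
    # in progress (and eventually times out the resource).
    urllib.request.urlopen(req)
```

Failing to send this response on the Delete path is a classic way to hang a stack deletion, which is why the response step matters as much as the resource logic.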
海外TECH DEV Community AWS Certified DevOps Engineer DOP-C01 Exam Questions Part 3 https://dev.to/iam_awslagi/aws-certified-devops-engineer-dop-c01-exam-questions-part-3-1ccb AWS Certified DevOps Engineer DOP-C01 Exam Questions Part 3. Source: For AWS | For GCP.

Due to compliance regulations, management has asked you to provide a system that allows for cost-effective, long-term storage of your application logs and provides a way for support staff to view the logs more quickly. Currently, your log system archives logs automatically to Amazon S3 every hour, and support staff must wait for these logs to appear in Amazon S3, because they do not currently have access to the systems to view live logs. What method should you use to become compliant while also providing a faster way for support staff to have access to logs?
A. Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier, and add a new policy to push all log entries to Amazon SQS for ingestion by the support team.
B. Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier, and use or write a service to also stream your application logs to CloudWatch Logs.
C. Update Amazon Glacier lifecycle policies to pull new logs from Amazon S3, and in the Amazon EC2 console, enable the CloudWatch Logs Agent on all of your application servers.
D. Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier. Enable Amazon S3 partial uploads on your Amazon S3 bucket, and trigger an Amazon SNS notification when a partial upload occurs.
E. Use or write a service to stream your application logs to CloudWatch Logs. Use an Amazon Elastic MapReduce cluster to live stream your logs from CloudWatch Logs for ingestion by the support team, and create a Hadoop job to push the logs to S3 in five-minute chunks.
Answer: B

You want to securely distribute credentials for your Amazon RDS instance to your fleet of web server instances. The credentials are stored in a file that is controlled by a configuration management system. How do you securely deploy the credentials in an automated manner across the fleet of web server instances, which can number in the hundreds, while retaining the ability to roll back if needed?
A. Store your credential files in an Amazon S3 bucket. Use Amazon S3 server-side encryption on the credential files. Have a scheduled job that pulls down the credential files into the instances at a regular interval.
B. Store the credential files in your version-controlled repository with the rest of your code. Have a post-commit action in version control that kicks off a job in your continuous integration system which securely copies the new credential files to all web server instances.
C. Insert credential files into user data, and use an instance lifecycle policy to periodically refresh the file from the user data.
D. Keep credential files as a binary blob in an Amazon RDS MySQL DB instance, and have a script on each Amazon EC2 instance that pulls the files down from the RDS instance.
E. Store the credential files in your version-controlled repository with the rest of your code. Use a parallel file copy program to send the credential files from your local machine to the Amazon EC2 instances.
Answer: A

You are using a configuration management system to manage your Amazon EC2 instances. On your Amazon EC2 instances, you want to store credentials for connecting to an Amazon RDS DB instance. How should you securely store these credentials?
A. Give the Amazon EC2 instances an IAM role that allows read access to a private Amazon S3 bucket. Store a file with database credentials in the Amazon S3 bucket. Have your configuration management system pull the file from the bucket when it is needed.
B. Launch an Amazon EC2 instance and use the configuration management system to bootstrap the instance with the Amazon RDS DB credentials. Create an AMI from this instance.
C. Store the Amazon RDS DB credentials in Amazon EC2 user data. Import the credentials into the instance on boot.
D. Assign an IAM role to your Amazon RDS
instance, and use this IAM role to access the Amazon RDS DB from your Amazon EC2 instances.
E. Store your credentials in your version control system in plaintext. Check out a copy of your credentials from the version control system on boot. Use Amazon EBS encryption on the volume storing the Amazon RDS DB credentials.
Answer: A

Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?
A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website.
B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website, so that the application can query them for access.
C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials.
D. Store AWS keys in global variables within your application, and configure the application to use these credentials when making requests.
Answer: C

You need to implement A/B deployments for several multi-tier web applications. Each of them has its individual infrastructure: Amazon Elastic Compute Cloud (EC2) front-end servers, Amazon ElastiCache clusters, Amazon Simple Queue Service (SQS) queues, and Amazon Relational Database Service (RDS) instances. Which combination of services would give you the ability to control traffic between different deployed versions of your application?
A. Create one AWS Elastic Beanstalk application, and all AWS resources, using configuration files inside the application source bundle for each web application. New versions would be deployed by creating new Elastic Beanstalk environments and using the Swap URLs feature.
B. Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources in the same template for each web application. New versions would be deployed using AWS CloudFormation templates to create new Elastic Beanstalk environments, and traffic would be balanced between them using Weighted Round Robin (WRR) records in Amazon Route 53.
C. Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources in the same template for each web application. New versions would be deployed by updating a parameter on the CloudFormation template and passing it to the cfn-hup helper daemon, and traffic would be balanced between them using Weighted Round Robin (WRR) records in Amazon Route 53.
D. Create one Elastic Beanstalk application, and all AWS resources, using configuration files inside the application source bundle for each web application. New versions would be deployed by updating the Elastic Beanstalk application version for the current Elastic Beanstalk environment.
Answer: B

You work for an insurance company and are responsible for the day-to-day operations of your company's online quote system, used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future. You have been tasked with designing a log management system with the following requirements: all log entries must be retained by the system, even during unplanned instance failure; the customer insight team requires immediate access to the logs from the past seven days; the fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available. How would you meet these requirements in a cost-effective manner? (Choose three.)
A. Configure your application to write logs to the instance's ephemeral disk, because this storage is free and has good write performance. Create a script that moves the logs from the instance to
Amazon S3 once an hour.
B. Write a script that is configured to be executed when the instance is stopped or terminated, and that will upload any remaining logs on the instance to Amazon S3.
C. Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days.
D. Configure your application to write logs to the instance's default Amazon EBS boot volume, because this storage already exists. Create a script that moves the logs from the instance to Amazon S3 once an hour.
E. Configure your application to write logs to a separate Amazon EBS volume with the "delete on termination" field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour.
F. Create a housekeeping script that runs on a T1 micro instance managed by an Auto Scaling group for high availability. The script uses the AWS API to identify any unattached Amazon EBS volumes containing log files. Your housekeeping script will mount the Amazon EBS volume, upload all logs to Amazon S3, and then delete the volume.
Answer: C, E, F

You have an application running on Amazon EC2 in an Auto Scaling group. Instances are being bootstrapped dynamically, and the bootstrapping takes over 15 minutes to complete. You find that instances are reported by Auto Scaling as being In Service before bootstrapping has completed. You are receiving application alarms related to new instances before they have completed bootstrapping, which is causing confusion. You find the cause: your application monitoring tool is polling the Auto Scaling Service API for instances that are In Service, and creating alarms for new, previously unknown instances. Which of the following will ensure that new instances are not added to your application monitoring tool before bootstrapping is completed?
A. Create an Auto Scaling group lifecycle hook to hold the instance in a Pending:Wait state until your bootstrapping is complete. Once bootstrapping is complete, notify Auto Scaling to complete the lifecycle hook and move the instance into a Pending:Complete state.
B. Use the default Amazon CloudWatch application metrics to monitor your application's health. Configure an Amazon SNS topic to send these CloudWatch alarms to the correct recipients.
C. Tag all instances on launch to identify that they are in a pending state. Change your application monitoring tool to look for this tag before adding new instances, and then use the Amazon API to set the instance state to pending until bootstrapping is complete.
D. Increase the desired number of instances in your Auto Scaling group configuration to reduce the time it takes to bootstrap future instances.
Answer: A

You have been given a business requirement to retain log files for your application for a period of years. You need to regularly retrieve the most recent logs for troubleshooting. Your logging system must be cost-effective, given the large volume of logs. What technique should you use to meet these requirements?
A. Store your logs in Amazon CloudWatch Logs.
B. Store your logs in Amazon Glacier.
C. Store your logs in Amazon S3, and use lifecycle policies to archive to Amazon Glacier.
D. Store your logs in HDFS on an Amazon EMR cluster.
E. Store your logs on Amazon EBS, and use Amazon EBS snapshots to archive them.
Answer: C

You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months, your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load. Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance, initially deployed with AWS CloudFormation. Your Auto Scaling group has minimum and maximum sizes configured, and the desired capacity is now at its maximum because of the high CPU utilization of the instances. After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, although memory utilization remains low. You therefore decide to move from the general-purpose M
instances to the compute-optimized C instances. How would you deploy this change while minimizing any interruption to your end users?
A. Sign into the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances.
B. Sign into the AWS Management Console, and update the existing launch configuration with the new C instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate.
C. Update the launch configuration specified in the AWS CloudFormation template with the new C instance type. Run a stack update with the new template. Auto Scaling will then update the instances with the new instance type.
D. Update the launch configuration specified in the AWS CloudFormation template with the new C instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate. Run a stack update with the new template.
Answer: D

You have been tasked with implementing an automated data backup solution for your application servers that run on Amazon EC2 with Amazon EBS volumes. You want to use a distributed data store for your backups to avoid single points of failure and to increase the durability of the data. Daily backups should be retained for 30 days, so that you can restore data within an hour. How can you implement this through a script that a scheduling daemon runs daily on the application servers?
A. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and copy backup data to a second Amazon EBS volume. Use the ec2-describe-volumes API to enumerate existing backup volumes. Call the ec2-delete-volume API to prune backup volumes that are tagged with a date-time group older than 30 days.
B. Write the script to call the Amazon Glacier upload archive API, and tag the backup archive with the current date-time group. Use the list vaults API to enumerate existing backup archives. Call the delete vault API to prune backup archives that are tagged with a date-time group older than 30 days.
C. Write the script to call the ec2-create-snapshot API, and tag the Amazon EBS snapshot with the current date-time group. Use the ec2-describe-snapshots API to enumerate existing Amazon EBS snapshots. Call the ec2-delete-snapshot API to prune Amazon EBS snapshots that are tagged with a date-time group older than 30 days.
D. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and use the ec2-copy-snapshot API to back up data to the new Amazon EBS volume. Use the ec2-describe-snapshots API to enumerate existing backup volumes. Call the ec2-delete-snapshot API to prune backup Amazon EBS volumes that are tagged with a date-time group older than 30 days.
Answer: C

Your application uses CloudFormation to orchestrate your application's resources. During your testing phase before the application went live, your Amazon RDS instance type was changed, and this caused the instance to be re-created, resulting in the loss of test data. How should you prevent this from occurring in the future?
A. Within the AWS CloudFormation parameter with which users can select the Amazon RDS instance type, set AllowedValues to only contain the current instance type.
B. Use an AWS CloudFormation stack policy to deny updates to the instance. Only allow UpdateStack permission to IAM principals that are denied SetStackPolicy.
C. In the AWS CloudFormation template, set the AWS::RDS::DBInstance's DBInstanceClass property to be read-only.
D. Subscribe to the AWS CloudFormation notification "BeforeResourceUpdate", and call CancelStackUpdate if the resource identified is the Amazon RDS instance.
E. In the AWS CloudFormation template, set the AWS::RDS::DBInstance's DeletionPolicy property to "Retain".
Answer: E

Your company develops a variety of web applications, using many
platforms and programming languages with different application dependencies. Each application must be developed and deployed quickly and be highly available to satisfy your business requirements. Which of the following methods should you use to deploy these applications rapidly?
A. Develop the applications in Docker containers, and then deploy them to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing.
B. Use the AWS CloudFormation Docker import service to build and deploy the applications with high availability in multiple Availability Zones.
C. Develop each application's code in DynamoDB, and then use hooks to deploy it to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing.
D. Store each application's code in a Git repository, develop custom package repository managers for each application's dependencies, and deploy to AWS OpsWorks in multiple Availability Zones.
Answer: A

You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. What two approaches will meet these requirements? (Choose two.)
A. Install an Amazon CloudWatch Logs Agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics.
B. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use Amazon Data Pipeline to process the data in Amazon Glacier and run reports every hour.
C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour.
D. Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define Metric Filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.
Answer: A, C

You have been tasked with deploying a scalable distributed system using AWS OpsWorks. Your distributed system is required to scale on demand. As it is distributed, each node must hold a configuration file that includes the hostnames of the other instances within the layer. How should you configure AWS OpsWorks to manage scaling this application dynamically?
A. Create a Chef recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to the Configure lifecycle event of the specific layer.
B. Update this configuration file by writing a script to poll the AWS OpsWorks service API for new instances. Configure your base AMI to execute this script on operating system startup.
C. Create a Chef recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to execute when instances are launched.
D. Configure your AWS OpsWorks layer to use the AWS-provided recipe for distributed host configuration, and configure the instance hostname and file path parameters in your recipe's settings.
Answer: A

You have an application running on an Amazon
EC2 instance, and you are using IAM roles to securely access AWS Service APIs. How can you configure your application running on that instance to retrieve the API keys for use with the AWS SDKs?
A. When assigning an EC2 IAM role to your instance in the console, in the "Chosen SDK" dropdown list, select the SDK that you are using, and the instance will configure the correct SDK on launch with the API keys.
B. Within your application code, make a GET request to the IAM Service API to retrieve credentials for your user.
C. When using AWS SDKs and Amazon EC2 roles, you do not have to explicitly retrieve API keys, because the SDK handles retrieving them from the Amazon EC2 metadata service.
D. Within your application code, configure the AWS SDK to get the API keys from environment variables, because assigning an Amazon EC2 role stores keys in environment variables on launch.
Answer: C

When an Auto Scaling group is running in Amazon Elastic Compute Cloud (EC2), your application rapidly scales up and down in response to load within a matter of minutes; however, after the load peaks, you begin to see problems in your configuration management system, where previously terminated Amazon EC2 resources are still showing as active. What would be a reliable and efficient way to handle the cleanup of Amazon EC2 resources within your configuration management system? (Choose two.)
A. Write a script that is run by a daily cron job on an Amazon EC2 instance, and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system.
B. Configure an Amazon Simple Queue Service (SQS) queue for Auto Scaling actions that has a script that listens for new messages and removes terminated instances from the configuration management system.
C. Use your existing configuration management system to control the launching and bootstrapping of instances to reduce the number of moving parts in the automation.
D. Write a small script that is run during Amazon EC2 instance shutdown to deregister the resource from the configuration management system.
E. Use Amazon Simple Workflow Service (SWF) to maintain an Amazon DynamoDB database that contains a whitelist of instances that have been previously launched, and allow the Amazon SWF worker to remove information from the configuration management system.
Answer: A, D

You have enabled Elastic Load Balancing HTTP health checking. After looking at the AWS Management Console, you see that all instances are passing health checks, but your customers are reporting that your site is not responding. What is the cause?
A. The HTTP health checking system is misreporting due to latency in inter-instance metadata synchronization.
B. The health check in place is not sufficiently evaluating the application function.
C. The application is returning a positive health check too quickly for the AWS Management Console to respond.
D. Latency in DNS resolution is interfering with Amazon EC2 metadata retrieval.
Answer: B

You use Amazon CloudWatch as your primary monitoring system for your web application. After a recent software deployment, your users are getting intermittent Internal Server Errors when using the web application. You want to create a CloudWatch alarm and notify an on-call engineer when these occur. How can you accomplish this using AWS services? (Choose three.)
A. Deploy your web application as an AWS Elastic Beanstalk application. Use the default Elastic Beanstalk CloudWatch metrics to capture Internal Server Errors. Set a CloudWatch alarm on that metric.
B. Install a CloudWatch Logs Agent on your servers to stream web application logs to CloudWatch.
C. Use Amazon Simple Email Service to notify an on-call engineer when a CloudWatch alarm is triggered.
D. Create a CloudWatch Logs group and define metric filters that capture Internal Server Errors. Set a CloudWatch alarm on that metric.
E. Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered.
F. Use AWS Data Pipeline to stream web application logs from
your servers to CloudWatch.
Answer: B, D, E

After a daily scrum with your development teams, you've agreed that using Blue/Green-style deployments would benefit the team. Which technique should you use to deliver this new requirement?
A. Re-deploy your application on AWS Elastic Beanstalk, and take advantage of Elastic Beanstalk deployment types.
B. Using an AWS CloudFormation template, re-deploy your application behind a load balancer; launch a new AWS CloudFormation stack during each deployment; update your load balancer to send half your traffic to the new stack while you test; after verification, update the load balancer to send 100% of traffic to the new stack, and then terminate the old stack.
C. Re-deploy your application behind a load balancer that uses Auto Scaling groups. Create a new, identical Auto Scaling group and associate it to the load balancer. During deployment, set the desired number of instances on the old Auto Scaling group to zero, and when all instances have terminated, delete the old Auto Scaling group.
D. Using an AWS OpsWorks stack, re-deploy your application behind an Elastic Load Balancing load balancer, and take advantage of OpsWorks stack versioning. During deployment, create a new version of your application, tell OpsWorks to launch the new version behind your load balancer, and when the new version is launched, terminate the old OpsWorks stack.
Answer: C

Your development team wants account-level access to production instances in order to do live debugging of a highly secure environment. Which of the following should you do?
A. Place the credentials provided by Amazon Elastic Compute Cloud (EC2) into a secure Amazon Simple Storage Service (S3) bucket with encryption enabled. Assign AWS Identity and Access Management (IAM) users to each developer so they can download the credentials file.
B. Place an internally created private key into a secure S3 bucket with server-side encryption using customer keys; with configuration management, create a service account on all the instances using this private key; and assign IAM users to each developer so they can download the file.
C. Place each developer's own public key into a private S3 bucket; use instance profiles and configuration management to create a user account for each developer on all instances; and place the users' public keys into the appropriate account.
D. Place the credentials provided by Amazon EC2 onto an MFA-encrypted USB drive, and physically share it with each developer so that the private key never leaves the office.
Answer: C

As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance. Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?
A. Ensure that the I/O block sizes for the test are randomly selected.
B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.
C. Ensure that snapshots of the Amazon EBS volumes are created as a backup.
D. Ensure that the Amazon EBS volume is encrypted.
E. Ensure that the Amazon EBS volume has been pre-warmed by creating a snapshot of the volume before the test.
Answer: B

After reviewing the last quarter's monthly bills, management has noticed an increase in the overall bill from Amazon. After researching this increase in cost, you discovered that one of your new services is doing a lot of GET Bucket API calls to Amazon S3 to build a metadata cache of all objects in the application's bucket. Your boss has asked you to come up with a new, cost-effective way to help reduce the amount of these new GET Bucket API calls. What process should you use to help mitigate the cost?
A. Update your Amazon S3 bucket's lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view objects associated with the application's bucket.
B.
Create a new DynamoDB table. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3. Any time a new object is uploaded, update the application's internal Amazon S3 object metadata cache from DynamoDB.
C. Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table.
D. Upload all images to Amazon SQS, set up SQS lifecycles to move all images to Amazon S3, and initiate an Amazon SNS notification to your application to update the application's internal Amazon S3 object metadata cache.
E. Upload all images to an ElastiCache filecache server. Update your application to now read all file metadata from the ElastiCache filecache server, and configure the ElastiCache policies to push all files to Amazon S3 for long-term storage.
Answer: C

Your current log analysis application takes more than four hours to generate a report of the top users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.
A. Publish your data to CloudWatch Logs, and configure your application to autoscale to handle the load on demand.
B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D. Configure an Auto Scaling group to increase the size of your Amazon EMR cluster.
E. Create a Multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map-reduce job to retrieve the required information on user counts.
Answer: C

You are using Elastic Beanstalk to manage your e-commerce store. The store is based on an open-source e-commerce platform and is deployed across multiple instances in an Auto Scaling group. Your development team often creates new "extensions" for the e-commerce store. These extensions include PHP source code as well as an SQL upgrade script used to make any necessary updates to the database schema. You have noticed that some extension deployments fail due to an error when running the SQL upgrade script. After further investigation, you realize that this is because the SQL script is being executed on all of your Amazon EC2 instances. How would you ensure that the SQL script is only executed once per deployment, regardless of how many Amazon EC2 instances are running at the time?
A. Use a "container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader_only" flag is set to true.
B. Make use of the Amazon EC2 metadata service to query whether the instance is marked as the leader in the Auto Scaling group. Only execute the script if "true" is returned.
C. Use a "solo command" within an Elastic Beanstalk configuration file to execute the script. The Elastic Beanstalk service will ensure that the command is only executed once.
D. Update the Amazon RDS security group to only allow write access from a single instance in the Auto Scaling group; that way, only one instance will successfully execute the script on the database.
Answer: A

You are administering a continuous integration application that polls version control for changes and then launches new Amazon EC2 instances for a full suite of build tests. What should you do to ensure the lowest overall cost while being able to run as many tests in parallel as possible?
A. Perform syntax checking on the continuous integration system before launching a new Amazon EC2 instance for build, unit, and integration tests.
B. Perform syntax and build tests on the continuous integration system before launching the new Amazon EC2 instance for unit and integration tests.
C. Perform all tests on the continuous integration system, using AWS OpsWorks for unit, integration, and build tests.
D. Perform syntax checking on the continuous integration system before launching a new AWS Data Pipeline for coordinating the output of unit, integration, and build tests.
Answer: B

You are doing a load-testing exercise on your application hosted on AWS. While testing your Amazon RDS MySQL DB instance, you notice that when you hit 100% CPU utilization on it, your application becomes non-responsive. Your application is read-heavy. What are methods to scale your data tier to meet the application's needs? (Choose three.)
A. Add Amazon RDS DB read replicas, and have your application direct read queries to them.
B. Add your Amazon RDS DB instance to an Auto Scaling group, and configure your CloudWatch metric based on CPU utilization.
C. Use an Amazon SQS queue to throttle data going to the Amazon RDS DB instance.
D. Use ElastiCache in front of your Amazon RDS DB to cache common queries.
E. Shard your data set among multiple Amazon RDS DB instances.
F. Enable Multi-AZ for your Amazon RDS DB instance.
Answer: A, D, E

Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? (Choose two.)
A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URI for the appropriate content.
B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
C. Authenticate your users at the application level, and use AWS Security
Token Service STS to grant token based authorization to S objects D Authenticate your users at the application level and send an SMS token message to the user Create an Amazon S bucket with the same name as the SMS message token and move the user s objects to that bucket E Use a key based naming scheme composed from the user IDs for all user objects in a single Amazon S bucket Answer C EYou have an Auto Sealing group of Instances that processes messages from an Amazon Simple Queue Service SQS queue The group scales on the size of the queue Processing Involves calling a third party web service The web service is complaining about the number of failed and repeated calls it is receiving from you You have noticed that when the group scales in instances are being terminated while they are processing What cost effective solution can you use to reduce the number of incomplete process attempts A Create a new Auto Scaling group with minimum and maximum of and instances running web proxy software Configure the VPC route table to route HTTP traffic to these web proxies B Modify the application running on the instances to enable termination protection while it processes a task and disable it when the processing is complete C Increase the minimum and maximum size for the Auto Scaling group and change the scaling policies so they scale less dynamically D Modify the application running on the instances to put itself into an Autoscaling Standby state while it processes a task and returns itself to InService when the processing is complete Answer BThe operations team and the development team want a single place to view both operating system and application logs How should you implement this using AWS services Choose two A Using AWS CloudFormation create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs Agent B Using AWS CloudFormation and configuration management set up remote logging to send events via UDP packets 
to CloudTrail C Using configuration management set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift depending on available analytic tools D Using AWS CloudFormation create a CloudWatch Logs LogGroup Because the Cloudwatch Log agent automatically sends all operating system logs you only have to configure the application logs for sending off machine E Using AWS CloudFormation merge the application logs with the operating system logs and use IAM Roles to allow both teams to have access to view console output from Amazon EC Answer A CThe project you are working on currently uses a single AWS CloudFormation template to deploy its AWS infrastructure which supports a multi tier web application You have been tasked with organizing the AWS CloudFormation resources so that they can be maintained in the future and so that different departments such as Networking and Security can review the architecture before it goes to Production How should you do this in a way that accommodates each department using their existing workflows A Organize the AWS CloudFormation template so that related resources are next to each other in the template such as VPC subnets and routing rules for Networking and security groups and IAM information for Security B Separate the AWS CloudFormation template into a nested structure that has individual templates for the resources that are to be governed by different departments and use the outputs from the networking and security stacks for the application template that you controlC Organize the AWS CloudFormation template so that related resources are next to each other in the template for each department s use leverage your existing continuous integration tool to constantly deploy changes from all parties to the Production environment and then run tests for validation D Use a custom application and the AWS SDK to replicate the resources defined in the current AWS CloudFormation template and use 
the existing code review system to allow other departments to approve changes before altering the application for future deployments Answer BYou currently run your infrastructure on Amazon EC instances behind an Auto Scaling group All logs for you application are currently written to ephemeral storage Recently your company experienced a major bug in code that made it through testing and was ultimately deployed to your fleet This bug triggered your Auto Scaling group to scale up and back down before you could successfully retrieve the logs off your server to better assist you in troubleshooting the bug Which technique should you use to make sure you are able to review your logs after your instances have shut down A Configure the ephemeral policies on your Auto Scaling group to back up on terminate B Configure your Auto Scaling policies to create a snapshot of all ephemeral storage on terminate C Install the CloudWatch Logs Agent on your AMI and configure CloudWatch Logs Agent to stream your logs D Install the CloudWatch monitoring agent on your AMI and set up a new SNS alert for CloudWatch metrics that triggers the CloudWatch monitoring agent to backup all logs on the ephemeral drive E Install the CloudWatch monitoring agent on your AMI Update your Auto Scaling policy to enable automated CloudWatch Log copy Answer CManagement has reported an increase in the monthly bill from Amazon web services and they are extremely concerned with this increased cost Management has asked you to determine the exact cause of this increase After reviewing the billing report you notice an increase in the data transfer cost How can you provide management with a better insight into data transfer use A Update your Amazon CloudWatch metrics to use five second granularity which will give better detailed metrics that can be combined with your billing data to pinpoint anomalies B Use Amazon CloudWatch Logs to run a map reduce on your logs to determine high usage and data transfer C Deliver 
custom metrics to Amazon CloudWatch per application that breaks down application data transfer into multiple more specific data points D Using Amazon CloudWatch metrics pull your Elastic Load Balancing outbound data transfer metrics monthly and include them with your billing report to show which application is causing higher bandwidth usage Answer CDuring metric analysis your team has determined that the company s website is experiencing response times during peak hours that are higher than anticipated You currently rely on Auto Scaling to make sure that you are scaling your environment during peak windows How can you improve your Auto Scaling policy to reduce this high response time Choose two A Push custom metrics to CloudWatch to monitor your CPU and network bandwidth from your servers which will allow your Auto Scaling policy to have better fine grain insight B Increase your Autoscaling group s number of max servers C Create a script that runs and monitors your servers when it detects an anomaly in load it posts to an Amazon SNS topic that triggers Elastic Load Balancing to add more servers to the load balancer D Push custom metrics to CloudWatch for your application that include more detailed information about your web application such as how many requests it is handling and how many are waiting to be processed E Update the CloudWatch metric used for your Auto Scaling policy and enable sub minute granularity to allow auto scaling to trigger faster Answer B DYou are responsible for your company s large multi tiered Windows based web application running on Amazon EC instances situated behind a load balancer While reviewing metrics you have started noticing an upwards trend for slow customer page load time Your manager has asked you to come up with a solution to ensure that customer load time is not affected by too many requests per second Which technique would you use to solve this issue A Re deploy your infrastructure using an AWS CloudFormation template 
Configure Elastic Load Balancing health checks to initiate a new AWS CloudFormation stack when health checks return failed B Re deploy your infrastructure using an AWS CloudFormation template Spin up a second AWS CloudFormation stack Configure Elastic Load Balancing SpillOver functionality to spill over any slow connections to the second AWS CloudFormation stack C Re deploy your infrastructure using AWS CloudFormation Elastic Beanstalk and Auto Scaling Set up your Auto Scaling group policies to scale based on the number of requests per second as well as the current customer load time D Re deploy your application using an Auto Scaling template Configure the Auto Scaling template to spin up a new Elastic Beanstalk application when the customer load time surpasses your threshold Answer CYour company has multiple applications running on AWS Your company wants to develop a tool that notifies on call teams immediately via email when an alarm is triggered in your environment You have multiple on call teams that work different shifts and the tool should handle notifying the correct teams at the correct times How should you implement this solution A Create an Amazon SNS topic and an Amazon SQS queue Configure the Amazon SQS queue as a subscriber to the Amazon SNS topic Configure CloudWatch alarms to notify this topic when an alarm is triggered Create an Amazon EC Auto Scaling group with both minimum and desired Instances configured to Worker nodes in this group spawn when messages are added to the queue Workers then use Amazon Simple Email Service to send messages to your on call teams B Create an Amazon SNS topic and configure your on call team email addresses as subscribers Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to this new topic Notifications will be sent to on call users when a CloudWatch alarm is triggered C Create an Amazon SNS topic and configure your on call team email addresses as subscribers Create a secondary Amazon 
SNS topic for alarms and configure your CloudWatch alarms to notify this topic when triggered Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the first topic so that on call engineers receive alerts D Create an Amazon SNS topic for each on call group and configure each of these with the team member emails as subscribers Create another Amazon SNS topic and configure your CloudWatch alarms to notify this topic when triggered Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the correct team topic when on shift Answer DYour company releases new features with high frequency while demanding high application availability As part of the application s A B testing logs from each updated Amazon EC instance of the application need to be analyzed in near real time to ensure that the application is working flawlessly after each deployment If the logs show arty anomalous behavior then the application version of the instance is changed to a more stable one Which of the following methods should you use for shipping and analyzing the logs in a highly available manner A Ship the logs to Amazon S for durability and use Amazon EMR to analyze the logs in a batch manner each hour B Ship the logs to Amazon CloudWatch Logs and use Amazon EMR to analyze the logs in a batch manner each hour C Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner D Ship the logs to a large Amazon EC instance and analyze the logs in a live manner E Store the logs locally on each instance and then have an Amazon Kinesis stream pull the logs for live analysis Answer CYou have a code repository that uses Amazon S as a data store During a recent audit of your security 
controls some concerns were raised about maintaining the integrity of the data in the Amazon S bucket Another concern was raised around securely deploying code from Amazon S to applications running on Amazon EC in a virtual private cloud What are some measures that you can implement to mitigate these concerns Choose two A Add an Amazon S bucket policy with a condition statement to allow access only from Amazon EC instances with RFC IP addresses and enable bucket versioning B Add an Amazon S bucket policy with a condition statement that requires multi factor authentication in order to delete objects and enable bucket versioning C Use a configuration management service to deploy AWS Identity and Access Management user credentials to the Amazon EC instances Use these credentials to securely access the Amazon S bucket when deploying code D Create an Amazon Identity and Access Management role with authorization to access the Amazon bucket and launch all of your application s Amazon EC instances with this role E Use AWS Data Pipeline to lifecycle the data in your Amazon S bucket to Amazon Glacier on a weekly basis F Use AWS Data Pipeline with multi factor authentication to securely deploy code from the Amazon S bucket to your Amazon EC instances Answer B DYou have an application consisting of a stateless web server tier running on Amazon EC instances behind load balancer and are using Amazon RDS with read replicas Which of the following methods should you use to implement a self healing and cost effective architecture Choose two A Set up a third party monitoring solution on a cluster of Amazon EC instances in order to emit custom CloudWatch metrics to trigger the termination of unhealthy Amazon EC instances B Set up scripts on each Amazon EC instance to frequently send ICMP pings to the load balancer in order to determine which instance is unhealthy and replace it C Set up an Auto Scaling group for the web server tier along with an Auto Scaling policy that uses the 
Amazon RDS DB CPU utilization CloudWatch metric to scale the instances D Set up an Auto Scaling group for the web server tier along with an Auto Scaling policy that uses the Amazon EC CPU utilization CloudWatch metric to scale the instances E Use a larger Amazon EC instance type for the web server tier and a larger DB instance type for the data storage layer to ensure that they don t become unhealthy F Set up an Auto Scaling group for the database tier along with an Auto Scaling policy that uses the Amazon RDS read replica lag CloudWatch metric to scale out the Amazon RDS read replicas G Use an Amazon RDS Multi AZ deployment Answer D GYour application is currently running on Amazon EC instances behind a load balancer Your management has decided to use a Blue Green deployment strategy How should you implement this for each deployment A Set up Amazon Route health checks to fail over from any Amazon EC instance that is currently being deployed to B Using AWS CloudFormation create a test stack for validating the code and then deploy the code to each production Amazon EC instance C Create a new load balancer with new Amazon EC instances carry out the deployment and then switch DNS over to the new load balancer using Amazon Route after testing D Launch more Amazon EC instances to ensure high availability de register each Amazon EC instance from the load balancer upgrade it and test it and then register it again with the load balancer Answer CYour company currently runs a large multi tier web application One component is an API service that all other components of your application rely on to perform read write operations This service must have high availability and zero downtime during deployments Which technique should you use to provide cost effective zero downtime deployments for this component A Use an AWS CloudFormation template to re deploy your application behind a load balancer and launch a new AWS CloudFormation stack during each deployment Update your load 
balancer to send traffic to the new stack and then deploy your software Leave your old stacks running and tag their resources with the version for rollback B Re deploy your application on Elastic Beanstalk During deployment create a new version of your application and create a new environment running that version in Elastic BeanStalk Finally take advantage of the Elastic Beanstalk Swap CNAME operation to switch to the new environment C Re deploy your application behind a load balancer that uses Auto Scaling groups Create a new identical Auto Scaling group and associate it to your Amazon Route zone Configure Amazon Route to auto weight traffic over to the new Auto Scaling group when all instances are marked as healthy D Re deploy your application behind a load balancer using an AWS OpsWorks stack and use AWS OpsWorks stack versioning during deployment create a new version of your application tell AWS OpsWorks to launch the new version behind your load balancer and when the new version is launched terminate the old AWS OpsWorks stack Answer B 2021-08-19 15:25:18
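The event-driven metadata-cache pattern from the GET Bucket question above (notifications on new objects keep a cache current, so the application stops listing the bucket) can be sketched in plain JavaScript. This is an illustrative, in-memory stand-in for the DynamoDB-backed cache; the event shape follows Amazon S3's notification format, and the function and key names are hypothetical:

```javascript
// Illustrative sketch only: an in-memory object-metadata cache kept up to
// date from S3-style notification events, instead of repeated GET Bucket
// (list) calls. In the real pattern the cache would live in DynamoDB and
// this handler would run in response to an SNS-delivered notification.
const metadataCache = new Map();

function handleS3Event(event) {
  for (const record of event.Records) {
    const key = record.s3.object.key;
    if (record.eventName.startsWith('ObjectCreated')) {
      // Cache the metadata carried in the notification itself.
      metadataCache.set(key, { size: record.s3.object.size });
    } else if (record.eventName.startsWith('ObjectRemoved')) {
      metadataCache.delete(key);
    }
  }
}

// Simulated notification for a newly uploaded object:
handleS3Event({
  Records: [
    { eventName: 'ObjectCreated:Put', s3: { object: { key: 'img/cat.png', size: 1024 } } },
  ],
});
console.log(metadataCache.get('img/cat.png')); // { size: 1024 }
```

Each notification carries enough metadata to update the cache incrementally, which is why this design eliminates the per-request bucket listing that was driving up the bill.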
海外TECH DEV Community Markdown Linting https://dev.to/adamgordonbell/markdown-linting-5a3 Markdown Linting: Many linting, code formatting, and static analysis tools exist for code. You can use eslint, gofmt, or many other static analysis tools, combined with a great continuous integration process, to ensure that your code stays in good shape. But what about markdown files and documentation? How do you ensure you aren't committing spelling and grammar mistakes? How do you ensure your files are valid markdown, and that the language you are using is clear and correct? You can do this and more with a documentation linter. Many tools exist for finding problems in text files; you can use this list as a starting point for finding the markdown and prose linting tools that best fit your needs.
Docs as Code: The movement behind testing and linting prose is known as Docs as Code, and the Writing The Docs website is a great place to learn more.
Criteria: For ease of skimming, I'll rate each tool based on this criteria. Formatting: the ability to find errors in the formatting of text files (markdown, txt, asciidoc). Spelling: the ability to find spelling mistakes. Grammar: the ability to detect grammar errors. Clarity: the ability to suggest changes that can improve writing clarity. Additionally, I will rate tools based on their feature set. Remediation: the ability to fix errors without manual intervention. Customization: how well the tool can be customized to fit your use case (if you can't exclude a rule or disable a warning, CI usage may be challenging; the most robust tools support custom rules and documentation style guides). Integrated Development Environment (IDE) support: the ability to use it in common code editors. Continuous Integration (CI) / Command Line Interface (CLI) usage: the ability to be used at the command line and in a continuous integration environment.
markdownlint: markdownlint is a Node.js markdown linter that is easy to install and easy to customize. It is based on an earlier Ruby tool, also called markdownlint. Both are great, but the Node.js tool is easier to install and customize. You can disable specific rules inline (with an HTML comment such as <!-- markdownlint-disable-file -->) and set up a per-project config in a .markdownlintrc file. It also supports writing custom rules in JavaScript, and can remediate many problems itself with the fix option: markdownlint --fix posts.md. It doesn't handle spelling, grammar, or sentence structure, but it can't be beaten for dealing with markdown structure, and it has a great online demo site. Coverage: Formatting, Spelling, Grammar, Clarity. Features: Ease of Use, Remediation, Customization, IDE support, CI/CLI support.
mdspell: mdspell is a tool specifically for spell-checking markdown documents. Install it like this: npm i markdown-spellcheck -g. You can run it on markdown files in an interactive mode that builds up a custom dictionary of exceptions, and you can then use that list later in a continuous integration process: mdspell -n -a --en-us blog/posts/mitmproxy.md. The downsides of mdspell are that the dictionary will likely complain about lots of words that are quite common, and it may take some time to build up a list of exceptions. As a shortcut, you might be able to find some more spelling files on GitHub. Coverage: Formatting, Spelling, Grammar, Clarity. Features: Ease of Use, Remediation, Customization, IDE support, CI/CLI support.
alex: alex does one thing: it catches insensitive and inconsiderate writing. It supports markdown files, works via the command line, and has various IDE integrations. The specificity of alex is its strength. For my rubric, I am scoring it under clarity, as catching insensitive writing certainly improves clarity. Coverage: Formatting, Spelling, Grammar, Clarity. Features: Ease of Use, Remediation, Customization, IDE support, CI/CLI support.
write-good: write-good is "designed for developers who can't write good and wanna learn to do other stuff good too." The tool's focus is on improving the clarity of writing and helping developers write well. Install: npm install -g write-good. Run: write-good blog/posts/mitmproxy.md. In its sample output, "accomplish" in "There are several ways to accomplish this" is flagged as wordy or unneeded, and "excellent" in "ca-certificates is an excellent proof of concept" is flagged as a weasel word, each reported with a line and column number. write-good has many exciting suggestions: it will highlight passive voice, cliches, weak adverbs, and much more. Unfortunately, it's not easy to exclude items or configure rules. It might be helpful as a writing suggestion tool, but this lack of configurability means you will have difficulty using it in a continuous integration process. Coverage: Formatting, Spelling, Grammar, Clarity. Features: Ease of Use, Remediation, Customization, IDE support, CI/CLI support.
textlint: textlint is a pluggable linting tool that supports markdown, plain text, and HTML. The plug-in architecture means that it can offer the features of some of the previous items by wrapping them up as plug-ins: it has a plug-in for alex, one for write-good, and plug-ins for many spell checkers and grammar checkers. The downside of this flexibility is that it is a bit harder to set up and configure; you have to install each plug-in separately. Install: npm install --global textlint, then install each plugin, for example npm install --global textlint-rule-no-todo. Run: textlint docs/. textlint is configurable via a .textlintrc file and has inline exclude rules (<!-- textlint-disable ruleA,ruleB -->), which may make it a possible way to use write-good or other tools that lack this functionality. Coverage: Formatting, Spelling, Grammar, Clarity. Features: Ease of Use, Remediation, Customization, IDE support, CI/CLI support.
proselint: proselint goes deep on writing-clarity improvements in the same way that alex goes deep on inclusive writing. "proselint places the world's greatest writers and editors by your side, where they whisper suggestions on how to improve your prose. You'll be guided by advice inspired by Bryan Garner, David Foster Wallace, Chuck Palahniuk, Steven Pinker, Mary Norris, Mark Twain, Elmore Leonard, George Orwell, Matthew Butterick, William Strunk, E. B. White, Philip Corbett, Ernest Gowers, and the editorial staff of the world's finest literary magazines and newspapers, among others. Our goal is to aggregate knowledge about best practices in writing and to make that knowledge immediately accessible to all authors in the form of a linter for prose." Some of the writing advice included is great. Piping "The very first thing you'll see at the top of every well-written bash script" to proselint reports weasel_words.very: "Substitute 'damn' every time you're inclined to write 'very'; your editor will delete it and the writing will be just as it should be." Piping "Thankfully, not all the advice I received was bad" reports skunked_terms.misc: "'Thankfully' is a bit of a skunked term, impossible to use without issue. Find some other way to say it." Piping "it is worth noting that both for CI and CD the operating principles and coding philosophy are equally as important as the technical aspect of the implementation" reports after_the_deadline.redundancy: "Redundancy. Use 'as' instead of 'equally as'." This one is awesome, considering the context of the original article: piping "thought leaders" reports cliches.garner: "'thought leaders' is cliche," and piping "One elephant in the room with ngrok is" reports corporate_speak.misc: "Minimize your use of corporate catchphrases like this one." Learning from all the best writers is a very lofty objective, and proselint has accumulated some valuable rules, but it falls short of its goal of collecting all the world's writing advice in a parsable form. Ignoring and excluding rules are also not fully supported. Coverage: Formatting, Spelling, Grammar, Clarity. Features: Ease of Use, Remediation, Customization, IDE support, CI/CLI support.
Vale: Vale, created by Joseph Kato, supports spelling, grammar, and clarity checks. It is extendable using a YAML rule format and is designed around the idea of a style guide: a specific house style that you put together and Vale enforces. It has an implementation of most of proselint as a style guide, most of write-good, as well as the Microsoft Writing Style Guide and the Google developer documentation style guide. Vale is targeted directly at the Docs as Code community and documentation teams who take the writing style of documents very seriously. Vale is fast and configurable, but not necessarily easy to get started with. Initially I couldn't get it to find any problems, until I realized that it needs a config file (a .vale.ini with settings such as MinAlertLevel = suggestion and BasedOnStyles = Vale) to run. Additionally, to use it effectively, you will need to copy an existing style guide into your repository. Separating the styles from the tool is Vale's biggest strength. It could also be a weakness, as the rules you build up are specific to your repository: it is easy to write and customize rules, but hard to share them back, as they need to live in your source code repository. Besides the official Vale style guides, Buildkite, Linode, and Write The Docs have rules online that you can copy into your repo or use as inspiration for your own rules. If you are taking linting documentation seriously and can take the time to set up a style that works for you, then Vale is the way to go. The rules of most other tools can be implemented inside Vale, and many already are. Coverage: Formatting, Spelling, Grammar, Clarity. Features: Ease of Use, Remediation, Customization, IDE support, CI/CLI support. Vale Styles: Official Styles, Write The Docs Styles, Grammarly Clone in Vale.
Summary: Many tools exist for testing and linting English prose. You can start as simply as just spell-checking your readme before you commit it, or go as complex as a full style guide running on every change to your software documentation. If you are willing to invest the time, then Vale, with its flexible rules, is the clear leader. Combining Vale with markdownlint and running both in a continuous integration build should ensure that documents are spelled correctly, grammatically correct, and written in a properly formatted and inclusive way. If you're looking for a more accessible place to start, or don't need the grammar and clarity suggestions, then mdspell and markdownlint make a great combination. Once you have decided on what tools will work best for you, make sure you find a way to automate their usage. This blog uses Vale and markdownlint inside an Earthfile that is run on every commit; this helps us prevent mistakes from getting into the blog. 2021-08-19 15:22:34
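The kind of clarity rule that write-good and proselint apply can be sketched in a few lines of JavaScript. This is illustrative only: it is not any real tool's implementation, and the word list is a small invented sample:

```javascript
// Illustrative sketch of a write-good-style clarity rule: flag "weasel
// words" along with their column so an editor or CI job can report them.
// The word list is a tiny invented sample, not any real tool's list.
const WEASEL_WORDS = new Set(['very', 'fairly', 'extremely', 'remarkably']);

function findWeaselWords(text) {
  const findings = [];
  const pattern = /[a-z]+/gi; // scan word by word, tracking position
  let match;
  while ((match = pattern.exec(text)) !== null) {
    const word = match[0].toLowerCase();
    if (WEASEL_WORDS.has(word)) {
      findings.push({ word, column: match.index + 1 }); // 1-based column
    }
  }
  return findings;
}

console.log(findWeaselWords('This tool is very useful and fairly fast.'));
// → [ { word: 'very', column: 14 }, { word: 'fairly', column: 30 } ]
```

Reporting positions rather than just matches is what makes such rules usable in CI output and IDE integrations, which is exactly the customization axis the reviews above rate the tools on.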
海外TECH DEV Community 7 ES6 Features all JavaScript Programmers Should Learn to Use https://dev.to/ubahthebuilder/7-es6-features-all-javascript-programmers-should-learn-to-use-4cpg 7 ES6 Features All JavaScript Programmers Should Learn to Use. ECMAScript 6 (ES6) came with a whole new set of features and syntax; in this article we take a look at some very useful ones. 1. Destructuring assignment (objects and arrays): access and store multiple elements from an array or object in just one line of code, reducing several assignment statements to a single line. 2. Default parameters: set a default value for a function parameter, which will be used when an argument is not supplied. 3. Modules: share code across multiple files, exporting a function from one file (e.g. capitalize.js) and importing it in another. 4. Enhanced object literals: create an object and supply its properties and methods in a very short and dynamic way, using shorthand property and method syntax. 5. Promises: chain asynchronous continuations in a simple and clean way, with then for the resolved value and catch for errors. 6. Template literals: dynamically construct strings from variables using backticks and ${} interpolation. 7. Arrow functions: write shorter function syntax; the braces and the return keyword can be dropped for a single expression, and a parameter list of more than one argument must be wrapped in parentheses. YOU MAY ALSO LIKE: User Authentication vs User Authorization: The Difference; Prototypal Inheritance Explained 2021-08-19 15:05:54
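The seven features listed above can be sketched in one minimal, self-contained snippet; the variable names and values here are illustrative stand-ins, not the article's original code (modules are omitted because export/import needs separate files):

```javascript
// 1. Destructuring assignment: pull multiple values out in one line
const [first, second] = ['a', 'b'];
const { name, age } = { name: 'kingsley', age: 28 };

// 2. Default parameter, using 6. a template literal for interpolation
function greet(who = 'world') {
  return `hello ${who}`;
}

// 4. Enhanced object literal: shorthand property and shorthand method
const me = {
  name, // same as name: name
  sayName() {
    return `I am ${this.name}`;
  },
};

// 5. Promise: resolve asynchronously, consume with then()
const successPromise = new Promise((resolve) => resolve('successful'));

// 7. Arrow function: implicit return for a single expression
const add = (a, b) => a + b;

successPromise.then((value) => console.log(value)); // logs "successful"
console.log(first, second); // a b
console.log(greet('dev')); // hello dev
console.log(me.sayName(), age); // I am kingsley 28
console.log(add(2, 3)); // 5
```

Each construct degrades gracefully to its ES5 equivalent (indexed access, `arguments` checks, `function` expressions), which is why these seven are usually the first ES6 features worth adopting.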
海外TECH DEV Community So, You Want to Get a Job as a React Developer (Here Are 4 Not-So-Obvious Ways to Land It) https://dev.to/michaelmangial1/so-you-get-a-job-as-a-react-developer-here-are-4-not-so-obvious-ways-to-land-it-423e So You Want to Get a Job as a React Developer (Here Are 4 Not-So-Obvious Ways to Land It). So you want to get a job as a React developer. Great. Now, you've likely done, or are doing, the following things to land it: learn JavaScript fundamentals; learn React fundamentals; become comfortable with layouts and styling using CSS/SCSS; learn how to interact with APIs; make an application that shows off your work. This is great work, and I do think it meets the threshold of what is required for a React job. However, I'd like to empower you with some not-so-obvious ways to stand out from a crowd of applicants and make the transition into a React job seamless. These ways are not so obvious because they are the things you end up doing day-to-day in a typical role that aren't talked about as much in the blogosphere. 1. Master copying designs from existing applications. Using an existing UI component library like Material UI for a project is totally fine; in fact, that's the route I went to make an application where I got practice interacting with APIs. However, the real-world workflow is much different. On a product team there will be a UX designer who creates mockups of a new experience that will have to be coded, meaning you have to look at something and copy it. Well, you don't need a UX designer to start practicing that skill. Here's a fun idea: try to replicate the look and feel of products that come out on Product Hunt. If you want to go a step further, you can try to replicate entire experiences, such as their workflow for loading a screen with data from an API. The more comfortable you get with "monkey see, monkey do", the more comfortable you will be in interviews and, ultimately, when transitioning into a new role. If you follow this step, even on a smaller scale like creating components from a UI library from scratch, you will aggregate plenty of material for a portfolio. 2. Write as you learn. I mentioned in a previous article how this is a major key to bursting out of the tutorial phase. I emphasized that forcing yourself to write as you are learning has several advantages. It forces you to learn what you are trying to learn: if you don't get it, you can't write about it; if you can write about it, you must have learned it. It provides incentive to your learning: you get to see people like, comment on, and share your posts, and if you see that you are helping others, it will boost your confidence and love for the subject and incentivize writing more. It makes you explain technical concepts in a way that less technically experienced people can understand; it turns out this is vital not only if you become a senior dev, but when you work closely with a product team, which is a big part of the role that is often understated. You will also have more than just a resume to verify that you know what you're talking about. Even senior developers can stumble in interviews due to nervousness; if you have articles showcasing your understanding of technical concepts, you can rest assured that the articles speak for themselves, since the proof rests in the pudding. You can most definitely include these articles to sharpen your portfolio. 3. Mimic a real-world workflow. A major part of the real-world workflow of a React developer is being able to break down mockups into prioritized, estimated chunks. At least once, fight the urge to treat your side projects like a hackathon; fight the urge to just pump out a bunch of code as you build something. Instead, try to write down how the entire project can be broken down into chunks. A chunk is an implementation of a feature or piece of functionality required to complete the project. Chunks should be recorded in the logical order in which they will have to be done. Lastly, chunks should be the equivalent of days' worth of coding, assuming a full-time schedule where hours of undistracted work constitute a day. Use GitHub Projects to record your progress on these chunks. Now, if you really want to impress, do this: let every chunk be implemented through a single pull request. Reach out to a developer friend and have them review your changes; respond to feedback and move on to the next chunk when all feedback has been addressed. This sounds like a lot of work to do in your spare time. It is. However, even if you just did this process for a single chunk and talked about it in an interview or showcased it in your portfolio, I can guarantee you will stand out. 4. Don't try to over-impress developers. When you get to an interview, don't try to over-impress the developers who interview you. Let's face it: even if you had the same amount of knowledge and experience as the interviewer, you are bound to be at a disadvantage when it comes to impressing them; nerves and on-the-spot questions are tough. Now, if you are new to the whole field of being a developer, or even just new as a React developer, you'll have to admit that it's very unlikely you can outdo the developer interviewing you in technical knowledge. So what are you supposed to do? Remember that getting a React job and doing well in it is mostly about impressing the product team (the non-technical people), not the developers (the technical people). With time, you're bound to learn technical skills that will eventually impress your developers, and that's important. However, if you can showcase value as a team member who can get work done in a real-world workflow, then you will be valuable, and therefore hireable, to the product team. Still try to impress the developers, just not by being technically superior. Don't try to impress by talking technical trivia; instead, you just need to show that you are competent in the skills required for the role. Beyond that, impress with your portfolio, i.e. how you've gone through a real-world workflow as you built a project. If you can talk about an interesting project, demonstrate that you would be easy to teach and work with, and highlight the things you have done to stand above other candidates, then you will impress in the way that counts. 2021-08-19 15:04:27
海外TECH DEV Community The Modern Tech Stack to Build a SaaS in 2021 as a Team of One-Man with Next JS and AWS https://dev.to/ixartz/the-modern-tech-stack-to-build-a-saas-in-2021-as-a-team-of-one-man-with-next-js-and-aws-2in2 The Modern Tech Stack to Build a SaaS in 2021 as a One-Man Team with Next.js and AWS. As someone who loves cutting-edge technology, I chose to build my first SaaS with a modern tech stack. With the rise of JAMstack and serverless architecture, I created PostMage with Next.js static generation for the frontend and a Node.js backend deployed to AWS. Because I'm a solo full-stack developer, my time and resources are extremely limited. In this article I'll share all the technologies I used to build my SaaS product, from programming language to development tools, and you'll find out how I overcame this challenge as a solo developer. I hope my story gives you inspiration to create your own SaaS products. TypeScript everywhere: I wrote every line of code in TypeScript. Yes, all the code: frontend, backend, and also infrastructure as code. The whole project uses one single programming language, so there is no time spent learning new languages, and the code stays easy to maintain. Why did I choose TypeScript? It makes development much more pleasant with strong typing and better IDE integration, so if you are still a plain JavaScript developer, you should give it a try. Frontend framework: for the frontend I use Next.js, a React framework for building complex applications. The good news: Next.js supports TypeScript out of the box. I use Tailwind CSS for styling the React components; as a developer you usually build an ugly interface, but with Tailwind CSS you can build a not-so-ugly interface even if you aren't a designer. As a true believer in JAMstack, I had previously taken some time to try Jekyll, Hexo, and 11ty for different projects. I chose to build my SaaS in static-generated mode using Next.js, so at build time all the pages are generated and pre-rendered: perfect for SEO, cheap hosting, fast, secure, and highly scalable. Static hosting: I use Cloudflare Pages as the hosting service for the frontend; it's a brand-new alternative to Netlify or Vercel. Cloudflare announced it in beta in December and released it to the public in April. There are some small missing features in Pages, nothing big; until the Cloudflare team fixes them I've found temporary workarounds, so it isn't a big deal. The good thing about Cloudflare Pages is its generous free tier: unlimited bandwidth (Vercel and Netlify are limited to a set amount of GB per month), and you can set up a password-protected website for free (not included for free in Vercel or Netlify). Serverless REST API: on the backend side, I've built a REST API with Express.js and the Serverless Framework. To support TypeScript in the Serverless Framework I use the serverless-bundle plugin, and Express.js needs another plugin to work with the Serverless Framework, named serverless-http. For a better developer experience I also use two other plugins, serverless-dotenv-plugin and serverless-offline: the first adds support for dotenv files and the second runs the Serverless Framework on your local computer. As a solo developer, I chose a serverless architecture to make my life easier, with easy deployment, low maintenance, and a scalable backend. No need to become a DevOps engineer: no SSH, no OS updates, no configuring a proxy, web server, load balancer, firewall, etc. Authentication: the REST API is protected by IAM authentication, AWS's built-in feature to secure any AWS resource, in our case API Gateway and AWS Lambda. It denies API invocations when the user isn't signed in to the SaaS application, so external actors won't be able to invoke your resource. Because the API is deployed to AWS, I chose AWS Cognito for authentication. The good thing is that Cognito saves a lot of time by providing everything you need to implement authentication for your SaaS: without any effort you get email authentication and social sign-in (Facebook, Google, Apple, and Amazon). The connection between AWS Cognito and the React frontend is done through AWS Amplify, which provides React components and code to make your frontend integration with AWS easier and faster. NoSQL database: major, well-known databases like PostgreSQL and MySQL don't fit very well in a serverless architecture. Due to the nature of serverless, it can open a lot of connections to the database and exhaust the database connection limit. On most providers, even if you have no traffic on your SaaS you still need to pay for your DB instance, and when your application starts to grow, the database can quickly become the bottleneck. As a solo full-stack developer I wanted something extremely easy to manage and compatible with serverless, so I chose DynamoDB as the primary database. DynamoDB is a NoSQL database fully managed by AWS, and I use it to store user states; AWS handles almost everything, and I just need to focus on my code. Infrastructure as code: as you can see, I use several AWS services for my SaaS app. It's extremely painful to set up cloud resources manually in each environment (development, staging, or production), and hard to maintain consistency between them. AWS gives developers access to AWS CDK, where you can define your cloud resources in TypeScript; with one command you can deploy to your AWS account and get everything provisioned. Deployment: like many developers, I use Git and GitHub for version control. Many modern hosting services like Vercel, Netlify, and Cloudflare Pages automatically build and deploy your code at each commit, and if you work with Git branches, you can also live-preview the results without pushing to production. For the backend and the infrastructure, I use a third-party service named Seed.run to deploy automatically at each commit; like the frontend pipeline, it builds and deploys the backend resources on AWS. DNS and CDN: as you can guess, I use Cloudflare for DNS and CDN, without any surprise. Cloudflare Pages automatically deploys your code on the Cloudflare network; I only need to point my domain to the Cloudflare DNS servers and they handle the rest. Using Cloudflare, you get plenty of security features, like a firewall and DDoS protection, for your SaaS products. Error tracking: I use Sentry as the error-tracking solution. It automatically reports when something goes wrong, with useful information like the stack trace, breadcrumbs (a trail of events that happened before an issue), browser information, OS information, etc., which makes debugging in production much easier with enriched data. Sentry is only set up for the frontend and not for the REST API, where I keep using the native solution: Sentry with AWS Lambda creates a lot of overhead and the setup wasn't straightforward, so in the next section you'll find the solution I use for error tracking in the backend. Logging, monitoring, and alerting: AWS Lambda automatically sends logs to AWS CloudWatch, so there is no need for Sentry there. You also get access to your Lambda metrics, perfect for understanding how your serverless functions behave and detecting any errors. I also use Lumigo for additional logging and monitoring information; its interface is easier to use than CloudWatch. You can also enable tracing in Lumigo, where you can visualize your AWS services and external API calls, which makes debugging sessions easier by telling you whether an error is in your code or comes from an external service. Payment and subscription: the last piece of a SaaS, and the most important thing for a business, is accepting payment. Accepting a one-time payment is hard, but recurring payment is much more complex, and unfortunately a SaaS business needs to handle the second case. Your customers need to choose a plan and enter their personal information when they subscribe for the first time. After that, your users should have a self-service portal where they can manage their plan (upgrade, downgrade, cancel, pause, or resume their subscription), update their personal information, and access their invoice history when needed. Stripe can manage everything I mention in this section: it hides all these complexities and makes payment integration easier. Conclusion: it took me months of development to build this full-stack React SaaS template. Instead of focusing on my business, I was solving these technical details, when building the first version of your SaaS should only take a month. By going through this long journey I've learned so many things and made tons of mistakes. I hope other developers won't make the same mistakes, so I built Nextless.js, a React boilerplate for SaaS products. With Nextless.js you get everything I mentioned in this article without writing a single line of code. Save time, focus on the things that matter, and launch your SaaS faster. Find more information at Nextless JS. 2021-08-19 15:03:24
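The serverless REST API described in that stack routes API Gateway events into Lambda functions. Below is a minimal sketch of the shape of one such handler; the route, field names, and response message are hypothetical, and the real project wraps a full Express.js app with serverless-http rather than writing bare handlers like this:

```javascript
// Hypothetical API Gateway proxy-integration handler: the unit that the
// Serverless Framework packages and deploys as an AWS Lambda function.
const handler = async (event) => {
  // API Gateway passes query-string parameters on the event object
  const params = event.queryStringParameters || {};
  const user = params.user || 'anonymous';

  // A Lambda behind API Gateway proxy integration returns a status code,
  // headers, and a stringified body
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `hello ${user}` }),
  };
};

module.exports = { handler };
```

In a serverless.yml this function would be attached to an HTTP event (for example a GET route), and with the serverless-offline plugin the same handler can be exercised locally before being deployed to AWS.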
Apple AppleInsider - Frontpage News Steve Jobs email reveals Apple was evaluating an 'iPhone nano' in 2010 https://appleinsider.com/articles/21/08/19/steve-jobs-email-reveals-apple-was-evaluating-an-iphone-nano-in-2010?utm_medium=rss Steve Jobs email reveals Apple was evaluating an 'iPhone nano' in 2010. An email sent by late Apple cofounder and CEO Steve Jobs in 2010 confirms that the company was working on, or at least thinking about, a so-called "iPhone nano". Credit: Andrew O'Hara, AppleInsider. At the time, Apple was rumored to be developing a smaller and cheaper iPhone that could carry the "nano" moniker used for iPod models. Now, emails collected during the Epic Games v. Apple case and seen by The Verge confirm that those rumors were based on actual Apple plans. Read more 2021-08-19 15:06:01
Apple AppleInsider - Frontpage News Best Deals August 19 - Free 240GB SSD, $50 off LG UltraWide monitor, 20% off Otterbox cases, and more! https://appleinsider.com/articles/21/08/19/best-deals-august-19---free-240gb-ssd-50-off-lg-ultrawide-monitor-20-off-otterbox-cases-and-more?utm_medium=rss Best Deals August 19: a free 240GB SSD, $50 off an LG UltraWide monitor, 20% off Otterbox cases, and more. Thursday's best deals include a completely free 240GB SSD, 20% off Otterbox cases, a discount on Apple's Smart Keyboard Folio, and more. Deals, Thursday, August 19: shopping online for the best discounts and deals can be an annoying and challenging task, so rather than sifting through miles of advertisements, check out this list of sales we've hand-picked just for the AppleInsider audience. Read more 2021-08-19 15:02:06
海外TECH Engadget Apple's latest 'Foundation' trailer features an enormous space elevator https://www.engadget.com/apple-tv-foundation-trailer-isaac-asimov-154330992.html?src=rss Apple's latest 'Foundation' trailer features an enormous space elevator. Ahead of the show's Apple TV+ premiere in September, Apple has offered another look at its latest sci-fi saga, Foundation. The latest trailer doesn't reveal too much about the story, but it has some impressive visuals. The clip features a space elevator that, according to showrunner David S. Goyer, stretches miles into space. There's also a floating visualization of a supercomputer that takes design cues from a Möbius strip. Goyer told IGN that he challenged his production team to find a look that didn't remind viewers of Star Wars or Star Trek, perhaps the two biggest linchpins of science fiction. In any case, it's clear Apple hasn't skimped on the budget, though the show seems to be much more about humanity than eye-popping visual effects. Based on a series of Isaac Asimov novels, Foundation centers around a group of exiles who try to protect the future of civilization after leader Dr. Hari Seldon (Jared Harris) uses data to predict the fall of the Galactic Empire. What's left of the Empire isn't too thrilled about that, and it tries to suppress Seldon's group; the story plays out over the course of a millennium. The first season's first two episodes will drop at the same time, with the remainder hitting Apple TV+ on a weekly basis. Apple TV+ has more sci-fi projects on the way: Invasion, oddly enough, is a series about an alien invasion, debuting in October. A couple of weeks later, in November, Apple will release Finch, a movie starring Tom Hanks as an inventor who hits the road with his dog and a robot. A third season of For All Mankind is also in the works. 2021-08-19 15:43:30
Cisco Cisco Blog Delivering Worry-Free Network as-a-Service: A Cisco Partner Story https://blogs.cisco.com/partner/delivering-worry-free-network-as-a-service-a-cisco-partner-story Delivering Worry-Free Network as-a-Service: A Cisco Partner Story. Organizations want to focus their limited resources on core competencies, whether in construction, healthcare, or accounting. Increasingly, this means they want to outsource their IT infrastructure to experts whose core expertise is in deploying and managing it. Simply put, businesses want the infrastructure they rely on, such as their local network, to just work. 2021-08-19 15:00:48
海外科学 NYT > Science When Sea Snakes Attack, Scientists Blame Sex Drive https://www.nytimes.com/2021/08/19/science/sea-snake-sex.html divers 2021-08-19 15:54:26
海外科学 BBC News - Science & Environment Nature: Rattlesnakes' sound 'trick' fools human ears https://www.bbc.co.uk/news/science-environment-58270599 clever 2021-08-19 15:04:34
金融 RSS FILE - 日本証券業協会 Stock lending and borrowing transactions (weekly) https://www.jsda.or.jp/shiryoshitsu/toukei/kabu-taiw/index.html lending/borrowing 2021-08-19 15:30:00
ニュース BBC News - Home Sheffield hotel fall: Boy who died was Afghan refugee https://www.bbc.co.uk/news/uk-england-south-yorkshire-58269533 afghan 2021-08-19 15:25:28
ニュース BBC News - Home Ex-MP Jared O'Mara charged with seven counts of fraud https://www.bbc.co.uk/news/uk-england-south-yorkshire-58272878 claims 2021-08-19 15:52:54
ニュース BBC News - Home Foreign secretary Dominic Raab rejects calls to quit over Afghan interpreters https://www.bbc.co.uk/news/uk-58265160 foreign 2021-08-19 15:54:52
ニュース BBC News - Home Nature: Rattlesnakes' sound 'trick' fools human ears https://www.bbc.co.uk/news/science-environment-58270599 clever 2021-08-19 15:04:34
ニュース BBC News - Home Chip shortage: Toyota to cut global production by 40% https://www.bbc.co.uk/news/business-58266794 global 2021-08-19 15:22:19
ニュース BBC News - Home Lucy Bronze: Manchester City and England defender undergoes knee surgery https://www.bbc.co.uk/sport/football/58250277 bronze 2021-08-19 15:33:31
サブカルネタ ラーブロ 舎鈴 田町駅前店@田町(三田) http://feedproxy.google.com/~r/rablo/~3/RAafC1rh0f0/single_feed.php 香味油 2021-08-19 16:00:57
北海道 北海道新聞 State of emergency expanded to 13 prefectures; infection-level criteria for lifting it also to be reviewed https://www.hokkaido-np.co.jp/article/580026/ novel coronavirus 2021-08-20 00:16:00
北海道 北海道新聞 Giant Porites coral found in Australia: 10m in diameter, estimated 438 years old https://www.hokkaido-np.co.jp/article/580024/ diameter 2021-08-20 00:16:00
北海道 北海道新聞 Toyota to cut North American production by 60,000-90,000 vehicles in August amid chip shortage https://www.hokkaido-np.co.jp/article/580016/ production 2021-08-20 00:10:00
北海道 北海道新聞 Clay figurine head returns to Matsumae for exhibition after 62 years, a precious artifact from the Late Jomon period https://www.hokkaido-np.co.jp/article/579921/ 市立函館博物館 2021-08-20 00:07:12
Azure Azure の更新情報 NV-series and NV_Promo Azure Virtual Machines will be retired by 31 August 2022 https://azure.microsoft.com/ja-jp/updates/nvseries-and-nvpromo-azure-virtual-machines-will-be-retired-by-31-august-2022/ august 2021-08-19 15:37:21

