Posted: 2022-01-23 03:28:46. RSS feed digest for 2022-01-23 03:00 (34 items)

Category / Site / Article title or trend word / Link URL / Frequent words and summary (or search volume) / Date registered
python New posts tagged "Python" - Qiita I bought a MacBook Pro (M1 Pro/Max), so I'm switching the default from Python 2 to the 3.x line https://qiita.com/TaitoOtani/items/1e11cde6cca7b6449744 After running pyenv global, Python should now have switched from the 2.x line to the 3.x line, but checking the version with python -V... huh, it's unchanged. 2022-01-23 02:25:06
js New posts tagged "JavaScript" - Qiita Getting multiple elements in JS whose name attribute matches a prefix https://qiita.com/m-kawakami/items/20f2d21529e5d9a77924 Getting multiple elements by prefix match on the name attribute in JS. When calling a JS function from onclick and passing it the form, this approach works if you only need a single element (markup reconstructed from the garbled digest; the attribute and property names are approximate):

    <form onsubmit="return false">
      <input type="text" name="AAAa">
      <input type="submit" onclick="func(this.form)">
    </form>
    <script>
      function func(form) { console.log(form.AAAa); }
    </script>

But when several matching elements are needed, this approach no longer works. 2022-01-23 02:12:24
Linux New posts tagged "Ubuntu" - Qiita Installing the driver for a wired LAN adapter (Ethernet controller: Intel Corporation Device 15fa (rev 11)) on Ubuntu https://qiita.com/seigot/items/dacf640aade53b9b784b Use Wi-Fi, or another PC, to fetch the driver. Then run tar zxvf e1000e-<version>.tar.gz and cd e1000e-<version>/src (archive name reconstructed; the version digits were lost in this digest). Here is the important part: if you build it as-is, you will hit a checksum error. 2022-01-23 02:42:37
Docker New posts tagged "docker" - Qiita [03] Using the docker_image module in a Docker container serving as an Ansible test environment https://qiita.com/robozushi10/items/1fc7d34ed44f62257be2 2022-01-23 02:33:02
Docker New posts tagged "docker" - Qiita [Series] Using Docker containers as an Ansible implementation-testing environment https://qiita.com/robozushi10/items/1790fdf5ba220b00a2a2 2022-01-23 02:06:44
Azure New posts tagged "Azure" - Qiita I tried the hands-on from "Understanding How Kubernetes Works: Concepts and Practical Knowledge Learned by Running It on Azure" [final chapter] https://qiita.com/sk_130/items/3f99776e8fa39f6d84d2 The book named in the title includes hands-on exercises; I actually ran them and wrote this article to record where I got stuck, whether everything really works, and my working steps. 2022-01-23 02:41:11
海外TECH MakeUseOf The 6 Best Password Managers for Android https://www.makeuseof.com/tag/best-password-managers-android-compared/ insecure 2022-01-22 17:45:12
海外TECH MakeUseOf How to Fix Low Game FPS in Windows https://www.makeuseof.com/tag/fix-low-game-fps-windows/ windows 2022-01-22 17:30:43
海外TECH MakeUseOf Garuda Linux: An Arch-Based Linux Distro Built for Speed and Beauty https://www.makeuseof.com/what-is-garuda-linux/ Garuda Linux is not your regular Arch-based distro: it transforms Linux, especially Arch, into a user-friendly operating system for newcomers. 2022-01-22 17:30:42
海外TECH MakeUseOf What Is a Ring Chime and What Does It Do? https://www.makeuseof.com/what-is-ring-chime/ camera 2022-01-22 17:00:43
海外TECH DEV Community Language Interoperability https://dev.to/ifihan/language-interoperability-1eik

Language Interoperability

INTRODUCTION: Client-server architectures and many distributed computing systems use multiple languages and technologies for different parts of the system. The differences between these languages or technologies can pose a serious barrier to building a workable system. That's where interoperability comes in.

WHAT IS INTEROPERABILITY? Interoperability in programming can be defined as the ability of two or more languages to interact or communicate for effective information exchange in a system. Studying interoperability is advantageous because it enables language and technology designs to be improved or enhanced to perform specific tasks. It also allows languages to communicate effectively, which helps build a better system. Interoperability is essential in building workable systems; for instance, it is what allows HTML, CSS and JavaScript to work together when used to make web pages.

TYPES OF INTEROPERABILITY: The degree of interoperability among interconnected systems varies. Syntactic interoperability (sometimes referred to as structural interoperability) is when two or more systems can communicate and exchange information using compatible protocols. Recognized formatting standards such as XML and SQL help with syntactic compatibility: standards in this area determine the structure and design of the data used in the exchange of information. Semantic interoperability is the ability of systems to automatically share and comprehend information. Machine-computable logic, inferencing, knowledge discovery and data federation among information systems all require semantic interoperability. As a result, semantic interoperability concerns not only data packaging (syntax) but the simultaneous transfer of meaning with the data (semantics). This is accomplished by adding metadata (information about the data) and tying each element to a standardized or shared vocabulary. Syntactic interoperability is a prerequisite for semantic interoperability: with syntactic interoperability, two or more systems can communicate and share data provided the interfaces and formats are compatible; with semantic interoperability, each system also understands the data it receives.

CONDITION FOR INTEROPERABILITY: To be interoperable, two or more systems must be capable of exchanging, interpreting and displaying shared data in a way that each can understand. This requires establishing syntactic interoperability, which necessitates standard data formats and data-structure protocols. Then there is semantic interoperability, which necessitates metadata to link each data element to a standardized, shared vocabulary. This shared vocabulary contains associated links to an ontology: a data model representing a set of concepts within a domain, along with the relationships among those concepts.

BENEFITS OF INTEROPERABILITY: All appropriate parties can have access to data; the time it takes to process data is reduced, enhancing organizational efficiency; and interoperable systems have lower costs, since they demand fewer resources and less upkeep.

INTEROPERABILITY VS INTEGRATION: Interoperability isn't the same as integration. While the words are used interchangeably, the two concepts differ. Integration also ensures that two or more applications can work together, so that data from one can be accessed by the other, but it requires a third party (middleware, in software terms) to translate the data and make it usable by the receiving system. Interoperability, on the other hand, ensures a direct link between the two different systems.

COMPATIBILITY VS INTEROPERABILITY: These are quite similar but differ on one point. Two or more programs in the same environment that run side by side, each conducting its duties without interfering with the other, are said to be compatible. Compatibility differs from interoperability in that the two apps or systems do not have to communicate; they merely reside in the same place.

CONCLUSION: The concept of interoperability has revolutionized the way systems interact, and the more it is taken into consideration when building applications, the more efficient a product can be. 2022-01-22 17:43:34
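To ground the article's point about syntactic interoperability in something concrete: two components written in different languages can cooperate as long as they agree on a standard wire format such as JSON. The sketch below is an illustration added here, not taken from the article, and the field names are hypothetical.

    import json

    # A record produced by one component of a system.
    record = {"sensor_id": 7, "reading_celsius": 21.4, "ok": True}

    # Serialize to JSON, a recognized formatting standard. This is the
    # syntactic-interoperability layer: any consumer that speaks JSON can
    # reconstruct the same structure, regardless of its implementation language.
    payload = json.dumps(record)

    # A second component (possibly written in another language) parses it back.
    received = json.loads(payload)
    assert received["reading_celsius"] == 21.4

    # Semantic interoperability goes one step further: both sides must also
    # agree on what "reading_celsius" *means*, e.g. via a shared vocabulary.

Agreeing on the format gets the bytes across; agreeing on the vocabulary is what makes the exchange meaningful, which is exactly the syntactic/semantic split the article draws.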
海外TECH DEV Community Expressando (Part 1): A real-time sign language detection system https://dev.to/nitdgplug/expressando-part-1-a-real-time-sign-language-detection-system-2l15

Expressando (Part 1): A real-time sign language detection system

Expressando is a real-time sign language detection system made using OpenCV, Keras, TensorFlow and SciPy. It is a beginner-friendly project suitable for enthusiasts in the fields of Machine Learning and Deep Learning. The project primarily aims to highlight the basic use of OpenCV in image processing and manipulation, the training of models with Keras using TensorFlow as backend, and finally the detection of our customised sign language after constructing a Convolutional Neural Network on it. So without any further ado, let us begin with the tutorial.

Expressando has been written in Python, so before we start you can have a quick recapitulation of the basics from the following resources, which will be beneficial for a better understanding of the concepts: Python Beginner's Guide, W3Schools, Tutorials Point. Also make sure that you have Python installed on your system; in case you do not, download the latest version of Python from here. You will also have to create a virtual environment for this project.

What is a Virtual Environment? A virtual environment is an isolated Python environment where a project's dependencies and packages are installed in a directory separate from the packages installed in the system's default Python path (known as the base environment) and from other virtual environments. It is analogous to a container holding all the dependencies your project requires, ready for use.

For Linux/UNIX:

Step 1: To create virtual environments you need a Python package called virtualenv. To install virtualenv on your system, type the following in your terminal (the article distinguishes the commands by Python version, but the version digits were lost in this digest; on systems where Python 3 is not the default, use the pip3 form):

    pip install virtualenv
    # or, for Python 3 and above:
    pip3 install virtualenv

Now check whether virtualenv has been installed successfully on your system:

    virtualenv --version

A version string is displayed on successful installation.

Step 2: Create a directory called TDoC, the main directory for the project, where you will create and save your Python files. We will also create our virtual environment inside this directory. Open your terminal at the desired location and type:

    mkdir TDoC

Now create the virtual environment inside this directory:

    cd TDoC
    python -m venv <NAME_OF_THE_ENVIRONMENT>
    # use python3 where Python 3 is not the default interpreter

Substitute <NAME_OF_THE_ENVIRONMENT> with the string you want to name the environment, for example:

    python -m venv env

It is preferable to name your environment env or venv, as these directories are already included under .gitignore; you then do not need to make any further changes to .gitignore while committing your files to GitHub/GitLab. Here we will use env as the name of our environment.

Step 3: After creating the virtual environment named env, you will notice that a directory called env has been created; this directory serves as your virtual environment. Now activate it with the following command:

    source <NAME_OF_THE_ENVIRONMENT>/bin/activate

In our case:

    source env/bin/activate
The name of the environment then appears in parentheses in your terminal, indicating that the virtual environment has been activated, for example:

    (env) ┌──[ritabrata@ritabrata-VivoBook-ASUSLaptop]─[~/Desktop/TDoC]
    └──╼

Step 4: Download the requirements.txt file from the given link (requirements.txt), copy it, and store it under the TDoC directory. You will have the following folder structure:

    TDoC
    ├── env
    └── requirements.txt

Now type the following command in your terminal window:

    pip install -r requirements.txt

You will now have all the required dependencies and Python packages, at their appropriate versions, installed in the virtual environment env. You can check that the installed dependencies match requirements.txt with:

    pip list

This command lists all the dependencies installed in your environment. You can also deactivate the environment when it is not in use by typing:

    deactivate

The virtual environment is deactivated, and the environment name in parentheses ceases to appear.

For Windows:

Step 1: To create virtual environments you need a Python package called virtualenv. To install it, type the following command in Windows PowerShell (in case you do not have PowerShell, you can download it from here):

    pip install virtualenv

Now check whether virtualenv has been installed successfully:

    virtualenv --version

Output similar to the following is displayed on successful installation:

    virtualenv <version> from c:\users\administrator\appdata\local\programs\python\...\lib\site-packages\virtualenv\__init__.py

Step 2: Create a directory called TDoC, the main directory for the project, where you will create and save your Python files; the virtual environment is created inside it. Open PowerShell at the desired location and type:

    mkdir TDoC
    cd TDoC
    python -m venv <NAME_OF_THE_ENVIRONMENT>

Substitute <NAME_OF_THE_ENVIRONMENT> with the name you want, for example:

    python -m venv env

As before, it is preferable to name the environment env or venv, since these directories are already included under .gitignore; no further .gitignore changes are then needed when committing your files to GitHub/GitLab. Here we use env.

Step 3: After creating the virtual environment named env, a directory called env appears; it serves as your virtual environment. Activate it with:

    <NAME_OF_THE_ENVIRONMENT>\Scripts\activate

In our case:

    env\Scripts\activate

The environment name appears in parentheses in your terminal, indicating that the virtual environment has been activated, for example:

    (env) PS C:\Users\Administrator\Desktop\Expressando-TDoC>

Step 4: Download requirements.txt from the given link, copy it, and store it under the TDoC directory (same folder structure as above). Then type the following command in your terminal window:

    pip install -r requirements.txt

All required dependencies and Python packages, at their appropriate versions, are now installed in the virtual environment env.
You can check whether the installed dependencies match requirements.txt with:

    pip list

This command lists all the dependencies installed in your environment. You can also deactivate the environment when it is not in use by typing:

    deactivate

The virtual environment is deactivated, and the environment name in parentheses ceases to appear.

Configuring Input through Webcam using OpenCV

After setting up your virtual environment, it is time to configure your digital input. The first step of any image-manipulation project is the configuration of digital image input using OpenCV, so let us first configure the basic webcam input.

Step 1: Create a file named check.py inside the TDoC directory. As the name suggests, we are checking the input coming through the webcam using the OpenCV library. Open the file in your code editor/IDE. The folder structure now looks like the following:

    TDoC
    ├── env
    ├── check.py
    └── requirements.txt

Step 2: First, import OpenCV into the check.py file:

    import cv2

After importing cv2, we need to create a VideoCapture object, which initiates the process of retrieving input through the webcam:

    cap = cv2.VideoCapture(0)

Here cap refers to the object created with OpenCV to capture the video; it returns the video from the first webcam on your computer. If you are using more than one webcam, the value 0 indicates that input is configured through the first webcam. For example, if you want to configure the input through your 2nd webcam, pass 1 instead of 0 as the parameter. In simple words, to configure input through the nth webcam, pass n - 1 to the VideoCapture method.

Step 3: This step renders a while loop for continuous input through the webcam, guarded by a suitable condition. Here we discuss the most common and important methods in the OpenCV library, which are required for making basic projects and for developing a sound understanding of the various OpenCV methods and their uses. OpenCV houses a huge number of methods and functions, so we discuss only the important ones that beginners need to understand. Continue in the code editor as follows:

    while cap.isOpened():
        ret, img = cap.read()
        img = cv2.flip(img, 1)

The function cap.isOpened() checks whether the VideoCapture object (here, cap) is functional, usually by checking the response from the webcam under consideration. This code initiates an infinite loop (to be broken later by a break statement), where ret and img are defined by cap.read(). ret is a boolean indicating whether or not there was a return at all, while img contains each frame being returned, in the form of an image array (vector). This practice avoids unnecessary I/O errors: in case no frame was returned, ret obtains False as its return value, and instead of an I/O error being thrown, None is passed as the frame. The next line introduces the method flip, which inverts the frame laterally, so the input behaves like a mirror; this is beneficial as it eases the orientation of the webcam input.
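If you are unsure which device index your webcam answers to, a quick probe helps. The loop below is an illustrative addition (not part of the original tutorial): it tries the first few indices and reports which ones respond.

    import cv2

    # Probe the first few capture indices and report which ones respond.
    # Purely illustrative; adjust the range if you have many devices.
    for index in range(3):
        cap = cv2.VideoCapture(index)
        opened = cap.isOpened()
        print(f"camera index {index}: {'available' if opened else 'not available'}")
        cap.release()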
Step 4:

    cv2.rectangle(img, (100, 100), (300, 300), (255, 0, 0), 2)
    cv2.imshow("RGB Output", img)
    img1 = img[100:300, 100:300]
    imCopy = img1.copy()
    gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)

(The ROI coordinates and kernel size above are representative values; the original literals were lost in this digest.) In these lines we are introduced to a few other methods in the OpenCV library. The method rectangle enables us to draw a rectangle of our desired shape on the frame. It has the following parameters: img, the frame on which the rectangle is to be drawn; (100, 100), the starting coordinates of the rectangle, represented as a tuple of the X and Y coordinates; (300, 300), the ending coordinates, represented similarly. Together the two tuples define the diagonal of the rectangle drawn; if the x- and y-spans are equal, the result is a square. (255, 0, 0) is the colour of the rectangle's border line, passed as a BGR index. A BGR index comprises Blue, Green and Red colour values, each ranging from 0 to 255, from which other colours are composed; (255, 0, 0) denotes blue. The final 2 is the thickness of the border line in px.

The method imshow shows the image in an independent window; it has two parameters, the name of the window and the image to be displayed. Next, we extract the region covered by the rectangle as an array of pixels named img1, and make a copy of the extracted image named imCopy using the copy() function. Then we are introduced to the method cvtColor, used to convert an image into a different colour space. There are more than a hundred colour-space conversions available in OpenCV, but for now we use cv2.COLOR_BGR2GRAY, which converts the entire BGR image into grayscale; we name the grayscale image gray. [Figure: the original image (left) and its grayscale form (right).] We also use the GaussianBlur method here, an image-smoothening technique (also known as blurring) that reduces the amount of luminance noise in the image; we store the result as blur. [Figure: the original image (left) and its blurred form (right).] Its parameters: gray, the frame to which the method is applied; (5, 5), the Gaussian kernel size along the X and Y axes, passed as a tuple; and 0, sigmaX, the Gaussian standard deviation along X (0 lets OpenCV derive it from the kernel size).

Step 5:

    ret, thresh1 = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    hand_resize = cv2.resize(thresh1, (width, height))
    cv2.imshow("Threshold", thresh1)

Thresholding is a technique in OpenCV that assigns pixel values in relation to a provided threshold value: each pixel value is compared with the threshold. It is one of the most common and basic segmentation techniques in computer vision, and it allows us to separate the foreground (the objects we are interested in) from the background of the image. A threshold is a value with two regions on either side of it: if a pixel value is smaller than the threshold it is set to 0, otherwise it is set to a maximum value. Here ret performs the same function as before, while thresh1 contains our thresholded image. Note that width and height (used by cv2.resize) are defined before the initialisation of the cap object, e.g. width, height = 300, 300 (representative values).

There are mainly three types of thresholding techniques. Simple thresholding: we manually supply the parameters used to segment the image; this works extremely well in controlled lighting conditions, where we can ensure high contrast between the foreground and background of the image. Adaptive thresholding: instead of trying to threshold the image globally with a single value, it breaks the image down into smaller pieces and thresholds each of these pieces separately and individually; it is better in limited lighting conditions. Otsu thresholding: the threshold value is not defined manually but determined automatically; this works well when we are not sure of the lighting conditions. It is an additive module, i.e. it is applied in addition to simple or adaptive thresholding, and it works well with grayscale images.

The function threshold has the following parameters: blur, the input image array; 0, the threshold value below and above which pixel values change accordingly (ignored here, since THRESH_OTSU computes the value automatically); 255, the maximum value that can be assigned to a pixel (in general, the intensity of a colour ranges from 0 to 255); and cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU, the type of thresholding applied to the image. There are other thresholding types as well: cv2.THRESH_BINARY, where a pixel whose intensity is greater than the set threshold is set to white (255) and every other pixel to black (0); cv2.THRESH_BINARY_INV, the inverse, where a pixel brighter than the threshold is set to black and darker pixels to white; cv2.THRESH_TRUNC, where pixel intensities greater than the threshold are truncated to the threshold and all other values remain the same; cv2.THRESH_TOZERO, where pixel intensity is set to 0 for all pixels below the threshold; and cv2.THRESH_TOZERO_INV, where pixel intensity is set to 0 for all pixels above the threshold. The thresholded image of the region under consideration is displayed using the imshow function. [Figure: examples of the thresholding modules.]
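To make the three families concrete, the short sketch below (an illustrative addition, not from the original article) applies simple, adaptive and Otsu thresholding to the same synthetic grayscale image, so the output can be compared without a webcam.

    import cv2
    import numpy as np

    # Synthetic grayscale gradient stands in for a camera frame.
    gray = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
    blur = cv2.GaussianBlur(gray, (5, 5), 0)

    # Simple: fixed, manually chosen threshold of 127.
    _, simple = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY)

    # Adaptive: per-neighbourhood thresholds (11x11 blocks, constant C = 2).
    adaptive = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY, 11, 2)

    # Otsu: threshold chosen automatically; the supplied 0 is ignored.
    otsu_val, otsu = cv2.threshold(blur, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    print("Otsu chose threshold:", otsu_val)

The printed Otsu value shows that the algorithm really does pick the cut-off itself, which is why it suits the uncontrolled lighting of a webcam feed.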
Step 6:

    contours, hierarchy = cv2.findContours(thresh1, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(imCopy, contours, -1, (0, 255, 0))
    cv2.imshow("Draw Contours", imCopy)

Contours are defined as the lines joining all the points along the boundary of an image that have the same intensity. Contours come in handy in shape analysis, finding the size of the object of interest, and object detection; a contour is defined by the minimum number of edges required to define the shape under consideration. This works best with thresholded and grayscale images, and is done by the function findContours. Normally we use cv2.findContours to detect objects in an image. Sometimes the objects are in different locations, and in some cases some shapes lie inside other shapes, like nested or concentric figures; in this case we call the outer one the parent and the inner one the child. In this way, contours in an image bear a relationship to each other, and we can specify how one contour is connected to another: is it the child of some other contour, is it a parent, and so on. The representation of this relationship is called the hierarchy. [Figure: the hierarchy of the contours; contours sharing the same integer share the same hierarchy level.]

The function has the following parameters: thresh1, the input image array from which the contours are to be detected; cv2.RETR_TREE, known as the contour-retrieval method; and cv2.CHAIN_APPROX_SIMPLE, known as the contour-approximation method.

The contour-retrieval methods are of the following types. cv2.RETR_EXTERNAL retrieves only the extreme outer contours, setting the child and parent entries of hierarchy[i] to -1 for all contours; it gives outer contours only, so if you have, say, one contour enclosing another (like concentric circles), only the outermost is given. cv2.RETR_LIST retrieves all of the contours without establishing any hierarchical relationships; this is applied when the hierarchy and topology of the object cannot be determined beforehand. cv2.RETR_CCOMP retrieves all of the contours and organizes them into a two-level hierarchy: at the top level are the external boundaries of the components, at the second level the boundaries of the holes; if there is another contour inside a hole of a connected component, it is still put at the top level. (ADVANCED) cv2.RETR_TREE retrieves all of the contours and reconstructs the full hierarchy of nested contours; the complete hierarchy is built, establishing full hierarchical relations among the contours.

The contour-approximation methods are of the following types. cv2.CHAIN_APPROX_NONE stores all the points of the boundary of the shape under consideration; it requires a huge amount of memory to store each unit, and while exact, it greatly reduces the speed of execution. cv2.CHAIN_APPROX_SIMPLE removes all redundant points and compresses the contour, thereby saving memory: it stores only the key turning points of the shape, reducing the number of points and hence increasing the speed of execution. [Figure: examples of the approximation methods.]

The function drawContours is used to draw the traced contours superimposed on top of an image; in case we do not want to display them over any image, the default background is black. It has the following parameters: imCopy, the input image array on which the contours are to be displayed; contours, the contour array declared and initialised by the findContours function; -1, the parameter to show all the contours in the array (to display a specific contour according to the hierarchy, pass its index instead; for example, to get the 3rd contour, pass 2 as the parameter); and (0, 255, 0), the colour of the drawn contour in BGR index, here denoting green. We then display the contours superimposed on the imCopy image using the imshow function.
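To see the retrieval modes and the hierarchy in practice, the sketch below (an illustrative addition, not from the article) draws two nested squares on a synthetic canvas and compares how many contours cv2.RETR_EXTERNAL and cv2.RETR_TREE report.

    import cv2
    import numpy as np

    # Synthetic image: one stroked square nested inside another,
    # i.e. contours nested inside contours.
    canvas = np.zeros((200, 200), dtype=np.uint8)
    cv2.rectangle(canvas, (20, 20), (180, 180), 255, 3)
    cv2.rectangle(canvas, (60, 60), (140, 140), 255, 3)

    for name, mode in (("RETR_EXTERNAL", cv2.RETR_EXTERNAL),
                       ("RETR_TREE", cv2.RETR_TREE)):
        # OpenCV 4.x return signature: (contours, hierarchy).
        contours, hierarchy = cv2.findContours(canvas, mode,
                                               cv2.CHAIN_APPROX_SIMPLE)
        print(name, "->", len(contours), "contours")
    # RETR_EXTERNAL keeps only the outermost boundary; RETR_TREE also returns
    # the nested ones, with parent/child links recorded in 'hierarchy'.

Running it shows RETR_EXTERNAL collapsing the nested structure to a single outer boundary, while RETR_TREE preserves every nested contour along with its parent/child relations.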
Step 7: Now, after checking the input, it is time to terminate the while loop, close all the windows, and release our VideoCapture object. To exit the program on a specified keyboard interrupt, type the following code:

    k = cv2.waitKey(10) & 0xFF   # 10 ms delay; the original delay value was lost in this digest
    if k == 27:                  # 27 is the code for the Escape key
        break

    cap.release()
    cv2.destroyAllWindows()

The cv2.waitKey function returns -1 when no input is made whatsoever; as soon as the event occurs (a button is pressed), it returns a 32-bit integer. Here 27 is the Unicode value for the Escape key. (ADVANCED) The 0xFF in this scenario is a hexadecimal bit mask representing 11111111 in binary (8 bits); since we only require 8 bits to represent a character, we AND the waitKey result with 0xFF. As a result, an integer between 0 and 255 is obtained. ord(char) returns the ASCII value of the character, which is again at most 255; we often use q as the key binding to quit. Hence, by comparing the masked integer to the ord(char) value, we can check for a key-press event and break the loop. (32, similarly, is the value for the space character made by the Space Bar.) Now the loop breaks when the Esc key is pressed and control exits the loop.
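As a complement to the Esc-based exit above, here is the ord('q') variant the text mentions, wrapped in a minimal self-contained loop; this is an illustrative addition, not part of the original tutorial code.

    import cv2

    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ret, img = cap.read()
        if not ret:
            break
        cv2.imshow("RGB Output", img)
        # Mask waitKey to its low 8 bits and compare against ord('q') (113),
        # the common alternative key binding described above.
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()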
The function cap.release() closes the webcam input and prevents any resource errors. The function cv2.destroyAllWindows() destroys all the opened windows rendered by the imshow functions, deallocates the memory used by the image arrays, and frees it. Now your check.py should look like the following (ROI coordinates, kernel size and other literal values are representative, as the originals were lost in this digest):

    import cv2

    width, height = 300, 300
    cap = cv2.VideoCapture(0)

    while cap.isOpened():
        ret, img = cap.read()
        img = cv2.flip(img, 1)
        cv2.rectangle(img, (100, 100), (300, 300), (255, 0, 0), 2)
        cv2.imshow("RGB Output", img)
        img1 = img[100:300, 100:300]
        imCopy = img1.copy()
        gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        ret, thresh1 = cv2.threshold(blur, 0, 255,
                                     cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        hand_resize = cv2.resize(thresh1, (width, height))
        cv2.imshow("Threshold", thresh1)
        contours, hierarchy = cv2.findContours(thresh1, cv2.RETR_TREE,
                                               cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(imCopy, contours, -1, (0, 255, 0))
        cv2.imshow("Draw Contours", imCopy)
        k = cv2.waitKey(10) & 0xFF
        if k == 27:   # Esc
            break

    cap.release()
    cv2.destroyAllWindows()

Run the code in your terminal using:

    python check.py

To take input from a static image instead, use the following code (file name and size are placeholders):

    import cv2

    img = cv2.imread("abc.jpg", 1)
    img = cv2.resize(img, (400, 400))
    cv2.imshow("Image", img)
    interrupt = cv2.waitKey(0) & 0xFF
    if interrupt == 27:   # Esc
        cv2.destroyAllWindows()

Run it the same way with python check.py.

ASSIGNMENT: Use any OpenCV functions/modules of your choice and commit the code to the official Expressando-TDoC repository. The GitHub repository where you all shall be pushing your code for the respective assigned projects is ready. Link to the repo: Github Repo. You are also asked to follow the tutorial video attached below, which clearly describes how you should push your code specifically for the TDoC event. Link to the video: TDoC Instruction Video. Here is also an introductory video on the basics of Git and GitHub, so that you are well versed with the git system. Link to the video: Git and Github.

Checking for Convexity Defects in the Camera Input

Since the initial input has been configured through the webcam, it becomes important to understand the concept of a defect, a basic and fundamental notion in the domain of detection. In this session we are going to learn about defects and detect them in our digital video input.

Step 1: Create a file named defects.py inside the TDoC directory. As the name suggests, we are checking for defects in the images taken by the webcam using the OpenCV library. Open the file in your code editor/IDE. The folder structure now looks like the following:

    TDoC
    ├── env
    ├── check.py
    ├── defects.py
    └── requirements.txt

Step 2: First, import OpenCV, NumPy (as np) and math into the defects.py file. math is present in the standard Python library and need not be installed separately:

    import cv2
    import numpy as np
    import math

After importing the packages, we need to create a VideoCapture object, which initiates the process of retrieving input through the webcam:

    cap = cv2.VideoCapture(0)

Step 3: The next step renders a while loop for continuous input through the webcam, guarded by a suitable condition, just as before (ROI coordinates and kernel size are representative values):

    while cap.isOpened():
        ret, img = cap.read()
        img = cv2.flip(img, 1)
        cv2.rectangle(img, (100, 100), (300, 300), (0, 255, 0), 2)
        crop_img = img[100:300, 100:300]
        cv2.imshow("Gesture", img)
        grey = cv2.cvtColor(crop_img, cv2.COLOR_BGR2GRAY)
        value = (35, 35)
        blurred = cv2.GaussianBlur(grey, value, 0)
        ret, thresh1 = cv2.threshold(blurred, 0, 255,
                                     cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        cv2.imshow("Binary Image", thresh1)
The above lines are a recap of what we did earlier (refer to the check.py walkthrough above). We initialise a while loop which iterates as long as the webcam input returns a frame, i.e. cap.isOpened() returns True. cap.read() takes the input frame in the form of an image array (vector), and the flip function returns the laterally inverted image of the frame under consideration. We define a region by means of the rectangle function, extract the region, and name it crop_img; it is shown under the name "Gesture" using the imshow function. Then we apply the cvtColor function with cv2.COLOR_BGR2GRAY to convert the image into its grayscale equivalent, named grey. Next we declare the tuple value = (35, 35) (a representative kernel size; the original was lost in this digest), which holds the Gaussian kernel size for the x and y directions; this tuple is later used as a parameter of GaussianBlur, and the blurred image is named blurred. Then we apply thresholding using the modules cv2.THRESH_BINARY_INV and cv2.THRESH_OTSU, naming the result thresh1; it is shown under the name "Binary Image" using imshow.

Step 4:

    contours, hierarchy = cv2.findContours(thresh1.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=lambda x: cv2.contourArea(x))
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(crop_img, (x, y), (x + w, y + h), (0, 0, 255), 2)

Next we derive the contours from the thresholded image, using cv2.RETR_TREE as the retrieval method and cv2.CHAIN_APPROX_NONE as the approximation method. We store the contours in the array named contours, while the hierarchy order is stored in hierarchy. We define cnt as the external contour enclosing the maximum area; this corresponds to the area of the object, since the object under consideration encloses the largest area in the frame. The function contourArea returns the area enclosed by a contour, and max returns the contour with the greatest such area; the key used here is a lambda function that maps each contour to its area, so that max can compare the contours by area. (ADVANCED) Go through the following resources to know more about thresholding and contours: Image Thresholding Tutorial; Contours and Hierarchy Tutorial.

cv2.boundingRect is a function used to create an approximate rectangle of minimum area which encloses the object contour passed into it as a parameter. Its primary use is to highlight the area of interest after obtaining the external contour of the image; with proper markings, users can easily highlight the desired aspect of an image, ensuring clear focus and a better understanding of the operations. cv2.boundingRect returns 4 numeric values when the contour is passed as an argument, corresponding to x, y, w, h respectively: x is the X coordinate of the contour closest to the origin (top left of the window); y is the Y coordinate of the contour closest to the origin; w is the width of the rectangle which will enclose the contour; h is the height of that rectangle. [Figure: boundingRect enclosing all the shapes in the figure.] Next we draw a rectangle with the rectangle function along the diagonal from (x, y) to (x + w, y + h) over crop_img; this serves as an enclosure for the contour.

Step 5:

    hull = cv2.convexHull(cnt)
    drawing = np.zeros(crop_img.shape, np.uint8)
    cv2.drawContours(drawing, [cnt], 0, (0, 255, 0), 2)
    cv2.drawContours(drawing, [hull], 0, (0, 0, 255), 2)
    cv2.imshow("Contours", drawing)

What is a Convex Hull?
A convex object is one with no interior angles greater than 180 degrees; a shape that is not convex is called non-convex, or concave. "Hull" means the exterior or the shape of the object. Therefore the convex hull of a shape, or of a group of points, is a tight-fitting convex boundary around the points or the shape, and any deviation of the object from this hull can be considered a convexity defect. [Figure: an example of a convex hull.]

How to display the convex hull: OpenCV provides a function convexHull, which stores all the points of the hull in the form of a list/array of points when cnt is passed as the contour array. The next line of the program makes use of a NumPy array of the same shape as crop_img: using the function np.zeros, it creates an entirely black image (np.uint8 is an 8-bit unsigned integer, used here to define the image's data type); we use the black background to clearly visualise the contours. Then we use the drawContours function to draw the contour and the hull, using green and red colours respectively, over the image drawing, the black background of the same size as crop_img. Then we show the output under the name "Contours" using the imshow function.

Step 6: Next we have to detect the defects by making use of the convex hull. What are convexity defects? Any deviation of the contour from its convex hull is known as a convexity defect. OpenCV provides a function, cv2.convexityDefects, for finding the convexity defects of a contour. It takes as input the contour and its corresponding hull indices, and returns an array containing the convexity defects as output. [Figure: depiction of the hull, the contours and a defect.]

    hull = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull)

We redeclare hull with an extra parameter, returnPoints=False. This gives us the indices of the contour points that make up the hull. The function convexityDefects is then used to find the defects directly, by passing the contour array cnt and the hull. convexityDefects returns an array where each row contains these values: start point (s), end point (e), farthest point (f), and approximate distance to the farthest point (d).

Step 7: Now we use some mathematical expressions to determine the number of convexity defects in the hull and count them accordingly:

    for i in range(defects.shape[0]):
        s, e, f, d = defects[i, 0]
        start = tuple(cnt[s][0])
        end = tuple(cnt[e][0])
        far = tuple(cnt[f][0])
        a = math.sqrt((end[0] - start[0]) ** 2 + (end[1] - start[1]) ** 2)
        b = math.sqrt((far[0] - start[0]) ** 2 + (far[1] - start[1]) ** 2)
        c = math.sqrt((end[0] - far[0]) ** 2 + (end[1] - far[1]) ** 2)
        angle = math.degrees(math.acos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c)))
        if angle <= 90:
            count_defects += 1
            cv2.circle(crop_img, far, 1, (0, 0, 255), -1)
        cv2.line(crop_img, start, end, (0, 255, 0), 2)

Here, defects returns an array where each row contains the start point, end point, farthest point, and approximate distance to the farthest point, i.e. s, e, f, d. We index into cnt with these values and convert the resulting single-point arrays into tuples of coordinates named start, end and far. The start and end points mark where the defect meets the hull, whereas far is the contour point farthest from the hull (the bottom of the defect). We then use the basic distance formula to calculate the lengths a, b and c.

Now this is math time: let's understand the cosine theorem. In trigonometry, the law of cosines relates the lengths of the sides of a triangle to the cosine of one of its angles. Using the notation of the figure, the law of cosines states c^2 = a^2 + b^2 - 2ab cos(gamma), where gamma denotes the angle contained between sides of lengths a and b and opposite the side of length c. By seeing this formula, we understand that if we have the parameters a, b and c, then we can find gamma, the angle between the sides a and b. For finding gamma, the formula is rearranged to gamma = arccos((a^2 + b^2 - c^2) / (2ab)). (In the code the roles are relabelled: the angle is taken at far, between the sides b and c, with a opposite, hence acos((b^2 + c^2 - a^2) / (2bc)).)
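Since the three side lengths are all the cosine rule needs, the angle computation can be shown in isolation. The helper below is an illustrative addition (not from the original article) that mirrors the loop body above; the sample points are hypothetical.

    import math

    def angle_between(start, end, far):
        """Angle in degrees at 'far' between the segments far-start and
        far-end, computed with the law of cosines. Points are (x, y) tuples."""
        a = math.dist(start, end)   # side opposite the angle at 'far'
        b = math.dist(start, far)
        c = math.dist(end, far)
        gamma = math.acos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c))
        return math.degrees(gamma)

    # A defect whose angle is at most 90 degrees counts as a finger gap.
    print(angle_between((0, 0), (4, 0), (2, 2)))  # prints 90.0

Checking the sample by hand: the vectors from far to start and from far to end are perpendicular, so the cosine rule correctly reports 90 degrees.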
[Figure: pictorial depiction of the triangle used in the cosine rule.] Now, gamma is always at most 180 degrees, so we can say: if gamma is less than 90 degrees (pi/2 radians), we consider the defect as a finger gap. By this point we can easily derive the three sides a, b, c (see the code), and from the cosine theorem we can derive gamma, the angle between two fingers. As you read earlier, if gamma is at most 90 degrees we treat it as a finger. We convert gamma into degrees (multiplying by 180/pi, here via math.degrees), as the acos function returns the angle in radians. We then check whether the angle is less than or equal to 90 degrees; if it is, we increase the value of count_defects by 1. The existence of an angle of at most 90 degrees denotes the presence of a defect. Knowing gamma, we draw a small circle of radius 1 at the farthest point; each defect's start and end points are joined by the line drawn with cv2.line. The circles drawn will not be uniform, as the farthest points do not lie on a straight line. Next we display the number of defects using the function cv2.putText.

The parameters of cv2.circle are: crop_img, the image on which the circle is to be drawn; far, the centre coordinates of the circle, represented as a tuple of the X and Y coordinate values; 1, the radius of the circle; (0, 0, 255), the colour of the circle in BGR index (red here); and -1, the thickness of the circle's border line in px (a thickness of -1 fills the circle with the specified colour).

The parameters of cv2.line are: crop_img, the image on which the line is to be drawn; start, the starting coordinates of the line as an (X, Y) tuple; end, the ending coordinates as an (X, Y) tuple; (0, 255, 0), the colour of the line in BGR index; and 2, the thickness of the line in px.

The parameters of cv2.putText are: img, the image on which text is to be drawn; "Number ...", the text string to be drawn on the image; (50, 50), the coordinates of the bottom-left corner of the text string in the image, as an (X, Y) tuple (representative values); cv2.FONT_HERSHEY_SIMPLEX, the font type used; 2, the fontScale factor, which is multiplied by the font-specific base size; (255, 255, 255), the colour of the text string in BGR (white here); and 2, the thickness of the stroke in px.

The fonts available in OpenCV are: FONT_HERSHEY_SIMPLEX, FONT_HERSHEY_PLAIN, FONT_HERSHEY_DUPLEX, FONT_HERSHEY_COMPLEX, FONT_HERSHEY_TRIPLEX, FONT_HERSHEY_COMPLEX_SMALL, FONT_HERSHEY_SCRIPT_SIMPLEX, FONT_HERSHEY_SCRIPT_COMPLEX.

If there are n defects, then n + 1 fingers are under detection (the defect-to-label mapping and literal values below are reconstructed; the originals were lost in this digest):

    if count_defects == 0:
        cv2.putText(img, "Number 1", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
    elif count_defects == 1:
        cv2.putText(img, "Number 2", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
    elif count_defects == 2:
        cv2.putText(img, "Number 3", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
    elif count_defects == 3:
        cv2.putText(img, "Number 4", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
    elif count_defects == 4:
        cv2.putText(img, "Number 5", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
    else:
        cv2.putText(img, "Number", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
    cv2.imshow("Defects", crop_img)

The number of defects is displayed in a window named "Defects", rendered by imshow.

Step 8: Now, after checking for the defects, it is time to terminate the while loop, close all the windows, and release our VideoCapture object. To exit the program on the specified keyboard interrupt, type the
following code:

    k = cv2.waitKey(10) & 0xFF
    if k == 27:
        break

    cap.release()
    cv2.destroyAllWindows()

Now the loop breaks when the Esc key (27) is pressed and control exits the loop. The function cap.release() closes the webcam input and prevents any resource errors, and cv2.destroyAllWindows() destroys all the opened windows rendered by the imshow functions, deallocating the memory used by the image arrays and freeing it. To know more about convexity defects, go here: Convexity Defects.

Now your defects.py should look like the following (ROI coordinates, kernel size, colours and text positions are representative values; the originals were lost in this digest):

    import cv2
    import numpy as np
    import math

    cap = cv2.VideoCapture(0)

    while cap.isOpened():
        ret, img = cap.read()
        img = cv2.flip(img, 1)
        cv2.rectangle(img, (100, 100), (300, 300), (0, 255, 0), 2)
        crop_img = img[100:300, 100:300]
        cv2.imshow("Gesture", img)
        grey = cv2.cvtColor(crop_img, cv2.COLOR_BGR2GRAY)
        value = (35, 35)
        blurred = cv2.GaussianBlur(grey, value, 0)
        ret, thresh1 = cv2.threshold(blurred, 0, 255,
                                     cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        cv2.imshow("Binary Image", thresh1)
        contours, hierarchy = cv2.findContours(thresh1.copy(), cv2.RETR_TREE,
                                               cv2.CHAIN_APPROX_NONE)
        cnt = max(contours, key=lambda x: cv2.contourArea(x))
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(crop_img, (x, y), (x + w, y + h), (0, 0, 255), 2)
        hull = cv2.convexHull(cnt)
        drawing = np.zeros(crop_img.shape, np.uint8)
        cv2.drawContours(drawing, [cnt], 0, (0, 255, 0), 2)
        cv2.drawContours(drawing, [hull], 0, (0, 0, 255), 2)
        cv2.imshow("Contours", drawing)
        hull = cv2.convexHull(cnt, returnPoints=False)
        defects = cv2.convexityDefects(cnt, hull)
        count_defects = 0
        cv2.drawContours(thresh1, contours, -1, (0, 255, 0), 3)
        for i in range(defects.shape[0]):
            s, e, f, d = defects[i, 0]
            start = tuple(cnt[s][0])
            end = tuple(cnt[e][0])
            far = tuple(cnt[f][0])
            a = math.sqrt((end[0] - start[0]) ** 2 + (end[1] - start[1]) ** 2)
            b = math.sqrt((far[0] - start[0]) ** 2 + (far[1] - start[1]) ** 2)
            c = math.sqrt((end[0] - far[0]) ** 2 + (end[1] - far[1]) ** 2)
            angle = math.degrees(math.acos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c)))
            if angle <= 90:
                count_defects += 1
                cv2.circle(crop_img, far, 1, (0, 0, 255), -1)
            cv2.line(crop_img, start, end, (0, 255, 0), 2)
        if count_defects == 0:
            cv2.putText(img, "Number 1", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
        elif count_defects == 1:
            cv2.putText(img, "Number 2", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
        elif count_defects == 2:
            cv2.putText(img, "Number 3", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
        elif count_defects == 3:
            cv2.putText(img, "Number 4", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
        elif count_defects == 4:
            cv2.putText(img, "Number 5", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
        else:
            cv2.putText(img, "Number", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2)
        cv2.imshow("Defects", crop_img)
        k = cv2.waitKey(10) & 0xFF
        if k == 27:
            break

    cap.release()
    cv2.destroyAllWindows()

Run the code in your terminal using:

    python defects.py

In part 2 of this post you will learn about collecting data through OpenCV, a demonstration of data collection, TensorFlow, Convolutional Neural Networks (CNNs), and many more things. You will also learn how live prediction works. Stay tuned for part 2!

Project collaborators: Pritam (full-stack web developer, network security, admin of editechstudio, member of nitdgplug, B.Tech NIT Durgapur); Ritabrata Das (open-source fanatic, full-stack developer, graphic designer, CP enthusiast, CTF player); Amool kk (full-stack web developer, member of nitdgplug, B.Tech NIT Durgapur). May the source be with you! 2022-01-22 17:31:10
海外TECH DEV Community Change Management in the Cloud | AWS White Paper Summary https://dev.to/awsmenacommunity/change-management-in-the-cloud-aws-white-paper-summary-2o47

Change Management in the Cloud | AWS White Paper Summary

Like every business function, change management should act as an enabler for your organization to succeed. Just as every business has some kind of finance function to ensure that it optimizes spending, change management is essential for optimizing business risk. You will benefit from change management whether you have migrated to the cloud, have a hybrid environment, or are born in the cloud. An effective change management process enables agility and reduces time to market; it ensures that resources deliver business value, reduces failed changes, and helps ensure delivery to the business. An effective record of change should also act as one of your first troubleshooting references when an incident occurs. This paper discusses change management, the Cloud Adoption Framework, and the Well-Architected Framework in the context of applying governance to deploying changes to your Amazon Web Services (AWS) environment.

Introduction

In a cloud computing environment, new IT resources are only a click away, which means you reduce the time to make those resources available to your developers from weeks to minutes. This results in a dramatic increase in agility for the organization, because the cost and time it takes to experiment and develop is significantly lower. (Refer to "Six Advantages of Cloud Computing".) The more successful an organization is at increasing its agility in the cloud, the more difficult it can become to manage change. Stakeholders may have become accustomed to long release cycles using waterfall methodologies, and the transition to new ways of working that increase the frequency of releases can cause challenges. These challenges may result in increased stakeholder engagement, the introduction of unnecessary gates that hinder development progress, or unmanaged change.

Make frequent, small, reversible changes: design workloads to allow components to be updated regularly, and make changes in small increments that can be reversed if they fail, without affecting customers when possible. (For more information, refer to the AWS Well-Architected Framework.) Making frequent, small, reversible changes is essential to achieving agility and is aligned to AWS best practices and strategies for designing and operating a cloud workload. Deployment of new services, software patches and configuration changes can all be automated, and they should still be governed by a change process. In the cloud, you can enable this governance through policy and automation, with a complete audit trail of the deployment steps. You can also preserve agility by "de-penalizing" the rollback of failed changes. In fact, to achieve agility, organizations must be willing to roll back changes that have adverse business consequences, and build the automation to make this happen. Regular, scheduled and unscheduled changes should flow through an unchanged pipeline that ensures all of your best practices are met before implementing a change in production. Different policies and procedures should exist for emergency changes, or changes that require manual processes during deployment.

What is ITIL? The framework, managed by AXELOS Limited, defines an internationally recognized best-practice approach to IT Service Management (ITSM). It builds on ISO/IEC 20000, which "provides a formal and universal standard for organizations seeking to have their Service Management capabilities audited
and certified" (ITIL Service Operation, AXELOS), but ITIL goes one step further to propose operational processes to achieve the standard. As defined by AXELOS, ITIL comprises five volumes that describe the ITSM lifecycle.

Table 1: ITIL volumes
Service strategy: The service strategy stage of the ITIL service lifecycle is crucial for defining an IT service strategy that operates effectively within its business context.
Service design: Through the service design stage of the service lifecycle, you can turn your strategy into a cost-effective plan that meets both current and future business needs.
Service transition: Best-practice guidance on managing service change in a timely, cost-effective manner, with minimal disruption to operations, customers, users and the business.
Service operation: Best-practice guidance on efficiently and effectively delivering these services for the benefit of the business, customers and users.
Continual service improvement: Best-practice guidance on identifying and introducing a cycle of service-management improvements, as well as a structured approach for assessing and measuring services.

AWS Cloud Adoption Framework

ITIL and the AWS Cloud Adoption Framework (AWS CAF) are compatible. Like ITIL, AWS CAF organizes and describes all of the activities and processes involved in planning, creating, managing and supporting modern IT services. It offers practical guidance and comprehensive guidelines for establishing, developing and running cloud-based IT capabilities. The AWS CAF organizes guidance into six areas of focus, called perspectives.

Table 2: AWS CAF perspectives
Business: Helps you move from separate strategies for business and IT to a business model that integrates IT strategy. Agile IT strategies are aligned to support your business outcomes.
People: Helps Human Resources (HR) and personnel management prepare their teams for cloud adoption by updating staff skills and organizational processes to include cloud-based competencies.
Governance: Integrates IT governance and organizational governance. It provides guidance on identifying and implementing best practices for IT governance, and on supporting business processes with technology.
Platform: Helps you design, implement and optimize the architecture of AWS technology based on business goals and objectives. It helps provide strategic guidance for the design principles, tools and policies you use.
Security: Helps you structure the selection and implementation of controls. Following this guidance can make it easier to identify areas of non-compliance and plan ongoing security initiatives.
Operations: Helps you run, use, operate and recover IT workloads to levels that meet the requirements of your business stakeholders.

AWS CAF is an important supplement to the enterprise ITSM frameworks used today, because it provides enterprises with practical operational advice for implementing and operating ITSM in a cloud-based IT infrastructure. For more information, refer to the AWS Cloud Adoption Framework.

Change management in ITIL

In ITIL, change management is part of Service Transition: the transitioning of an update, or something new, from Service Design to Service Operation. Change management aims to standardize the process for the efficient handling of all changes; in terms of a system or process, efficiency means maximizing productivity while minimizing wasted effort or cost. A service change is defined by ITIL as "the addition, modification or removal of authorized, planned or supported service or service component and its associated documentation"
(ITIL Service Transition, AXELOS). Change management is defined as "the Process responsible for controlling the Lifecycle of all Changes. The primary objective of Change Management is to enable beneficial Changes to be made, with minimum disruption to IT Services" (ITIL Service Transition, AXELOS). Change management is not designed to minimize business risk: the process should ensure that "overall business risk is optimized" (ITIL Service Transition, AXELOS). Assessing risk from the business perspective can produce a correct course of action very different from that which would have been chosen from an IT perspective, especially within high-risk industries (ITIL Service Transition, AXELOS). Every change should deliver business value, and the change management processes should be geared towards enabling that delivery. ITIL states a number of benefits of effective change management, including "reducing failed changes and therefore service disruption, defects and re-work" and "delivering change promptly to meet business timescales" (ITIL Service Transition, AXELOS). In any environment, a good change management process should enable the delivery of business value while protecting the business by balancing risk against business value, and it should do so in a way that maximizes productivity and minimizes wasted effort or cost for all participants in the process.

Change management in the cloud

It's key to remember that all changes should deliver business value, and that change management should focus on optimizing business risk in a way that maximizes productivity while minimizing wasted effort or cost. The AWS Cloud enables automation that optimizes this business risk by: minimizing the possibility of human error; enabling the creation of identical environments for predictable and testable outcomes to changes; removing the requirement to submit changes in order to scale infrastructure to meet business demand; and automatically recovering from failure and rolling back failed changes. The benefits of automation can dramatically reduce the business risk associated with change and increase business agility, ultimately delivering more business value, which is what change is about. The key concepts of change management remain the same in the AWS Cloud: change delivers business value, and it should be efficient. Agile methodologies and the automation capabilities of the AWS Cloud go hand in hand with the core principles of change management, as they too are designed to deliver business value quickly and efficiently. There are some key areas that may require existing change processes to be modified to adapt to new methods of delivering change.

AWS Systems Manager Change Manager is an enterprise change management framework for requesting, approving, implementing and reporting on operational changes to your application configuration and infrastructure. From a single delegated administrator account, if you use AWS Organizations, you can manage changes across multiple AWS accounts and across AWS Regions; alternatively, using a local account, you can manage changes for a single AWS account. Use Change Manager for managing changes to both AWS resources and on-premises resources.

Configuration items in the cloud

For example, consider an application that suffers a fault in a traditional IT environment, where application updates and operating system patches are installed or deployed on a server: an engineer may be tasked to investigate and either apply a fix or deploy a new server. Either of these tasks would at least require an
emergency change, and could put the business at risk for a significant amount of time. In the AWS Cloud, you can use Auto Scaling groups to automate this process: failures can be automatically detected using predefined health checks, and servers can be automatically replaced with exactly the same configuration. This simple scenario shows the clear benefits of automation: human error is eliminated, configuration drift is eliminated, and business risk is minimized as the time to recover is dramatically reduced. Auto Scaling groups can also be used to automatically provision additional resources to meet business demand. Again, in a traditional environment the addition of servers may have required several business processes before approaching change management, and then a standard or normal change to implement the requirement. In the best-case scenario, a lot of work was done to increase capacity; in the worst-case scenario, the business was impacted and put at risk by all of the business processes required to introduce additional capacity, and it may not have been possible to meet the business demand in the timescales required.

With this example in mind, any manual approval steps required to recover from failure, or to scale capacity to meet business demand, inherently introduce risk to the business. A change is considered to be the addition, modification, replacement or removal of a configuration item. When approval is required to make a change to a configuration item, existing change management processes may forbid these automated scenarios. This is where it may help to redefine which items are considered configuration items. In the previous example, it is not the servers themselves that are the configuration items when they are in an Auto Scaling group, because they are transient and potentially non-configurable items. The Auto Scaling group, and the image that is used to create the servers, should be considered the configuration items, because they are the items that may put the business at risk if they are configured incorrectly.

To manage configuration items in the AWS Cloud, AWS Config can be used to assess, audit and evaluate the configuration of AWS resources, allowing you to continuously monitor and record AWS resource configurations. With AWS Config you can track the relationships among resources and review resource dependencies prior to making changes. Once a change occurs, you can quickly review the history of the resource's configuration and determine what the configuration looked like at any point in the past. AWS Config provides you with information to assess how a change to one resource's configuration would affect your other resources, which minimizes the impact of change-related incidents.

AWS CloudFormation change sets enable you to preview how proposed changes to a stack might affect your running resources, for example to check whether your changes will delete or replace any critical resources. AWS CloudFormation makes the changes to your stack only after you decide to deploy the change set.
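As a sketch of how that preview step can be scripted (an illustration added here, not taken from the white paper; the stack name and template path are hypothetical), the boto3 calls below create a change set, wait for it, and print each proposed resource change before anything is executed:

    import boto3

    cf = boto3.client("cloudformation")

    # Create a change set instead of updating the stack directly.
    with open("template.yaml") as f:          # hypothetical template file
        template_body = f.read()
    cf.create_change_set(
        StackName="my-stack",                 # hypothetical stack name
        ChangeSetName="preview-update",
        TemplateBody=template_body,
        ChangeSetType="UPDATE",
    )
    cf.get_waiter("change_set_create_complete").wait(
        StackName="my-stack", ChangeSetName="preview-update"
    )

    # Review what would change; nothing is modified until execution.
    described = cf.describe_change_set(
        StackName="my-stack", ChangeSetName="preview-update"
    )
    for change in described["Changes"]:
        rc = change["ResourceChange"]
        print(rc["Action"], rc["LogicalResourceId"], rc.get("Replacement", "-"))

    # Only deploy once the preview has been approved:
    # cf.execute_change_set(StackName="my-stack", ChangeSetName="preview-update")

Separating the preview from the execution is what lets a change record capture exactly which resources would be replaced before approval is given.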
Automation

Another key consideration is understanding the business risk when deploying in the AWS Cloud. Regardless of whether a deployment is an application, a patch, or a configuration change, an optimized cloud configuration can automate the deployment process through an unchanged pipeline. This ensures repeatability and consistency across multiple environments, as well as enabling automation of software testing, compliance testing, security testing, and functional testing. Although this does not guarantee against a change having an adverse impact, it does allow risks to be reduced, and those automated processes should not need to be reconsidered for every change. It is the actual configuration change itself that should have the focus. For example, if an automated security test is approved for deployment purposes, the security review during the change approval process can be dramatically reduced, or even eliminated entirely in the appropriate circumstances. Repeatability and consistency throughout the lifecycle of a workload and its deployment should reduce the burden on the examination of changes by the Change Approval Board. The focus should be on how changes are delivered (the pipeline) and the automation of tests that can reduce manual testing and scrutiny by the Board, both of which are prone to human error.

AWS CodePipeline automates your software release process, enabling you to rapidly release new features to your users. With CodePipeline, you can quickly iterate on feedback and get new features to your users faster. Automating your build, test, and release process enables you to quickly and easily test each code change and catch bugs while they are small and simple to fix. You can ensure the quality of your application or infrastructure code by running each change through your staging and release process.

EC2 Image Builder significantly reduces the effort of keeping images up to date and secure by providing a simple graphical interface, built-in automation, and AWS-provided security settings. With Image Builder, there are no manual steps for updating an image, nor do you have to build your own automation pipeline. Creating a golden image using EC2 Image Builder significantly reduces the risk of non-compliant images being used, as well as improving security, consistency, and compliance.

Using AWS Systems Manager, you can automate operational tasks to help make your teams more efficient. With automated approval workflows and runbooks with rich text descriptions, you can reduce human error and simplify maintenance and deployment tasks on AWS resources. You can use predefined automation runbooks, or build your own to share, for common operational tasks such as stopping and restarting an EC2 instance. Systems Manager also has built-in safety controls, enabling you to incrementally roll out new changes and automatically halt the roll-out if errors occur.
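For instance, a predefined runbook can be started and observed with a couple of boto3 calls. AWS-RestartEC2Instance is one of the AWS-managed automation runbooks; the instance ID below is an illustrative placeholder.

```python
# Minimal sketch: run an AWS-managed automation runbook and check its status.
# The instance ID is an illustrative placeholder.
import boto3

ssm = boto3.client("ssm")

execution_id = ssm.start_automation_execution(
    DocumentName="AWS-RestartEC2Instance",
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},
)["AutomationExecutionId"]

execution = ssm.get_automation_execution(AutomationExecutionId=execution_id)
print(execution["AutomationExecution"]["AutomationExecutionStatus"])
```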
Remediation

Changes should not be approved without considering the consequences of a failure. "Ideally, there will be a back-out plan, which will restore the organization to its initial situation" (ITIL Service Transition, AXELOS). The AWS Cloud enables back-out plans to be fully automated using repeatable processes. Not all changes are reversible, and "remediation may require a revisiting of the change itself in the event of failure" (ITIL Service Transition, AXELOS). Deployments in the AWS Cloud that use an automated pipeline allow changes to be redeployed quickly and safely, minimizing risk and reducing business impact. In certain scenarios, it may not be possible to back out changes or redeploy, in which case it might be that "it requires invoking the organization's business continuity plan" (ITIL Service Transition, AXELOS). Even in the most severe cases, using continuous data protection in the cloud can enable sub-second recovery point objectives (RPOs), and recovery time objectives (RTOs) can be measured in minutes. Refer to CloudEndure Disaster Recovery for more information. Crucially, where it's not possible to back out changes, the AWS Cloud provides methods to significantly reduce the business risk and impact of a failed change by making it quicker and easier to redeploy or invoke disaster recovery plans.

Modern deployment methods in the cloud allow for fast or instant rollback. For example, with blue/green deployments, you can make a change to a workload by deploying an identical copy (green) of the live environment (blue) with the configuration change. Users can then be switched to the new environment (green) while the old live environment (blue) remains available, but idle. In this scenario, if a failure is discovered, users can be instantly redirected back to the blue environment, and the business impact is greatly reduced. It is also possible to combine this approach with a canary release, which is easily enabled in the cloud. With this approach, you can redirect a subset of users to the new deployment, assess its efficacy, and gradually increase the number of users on the new deployment until all users are using it. There are other considerations when choosing a method of deployment, but the key for change management is that the risk to the business of a change deployed in a manner like this is greatly reduced.

AWS CodeDeploy helps maximize your application availability during the software deployment process. It introduces changes incrementally and tracks application health according to configurable rules. Software deployments can easily be stopped and rolled back if there are errors.

AWS CloudFormation rollback triggers monitor the state of your application during stack creation and updating, and enable you to roll back that operation if the application breaches the threshold of any of the alarms you've specified. For each rollback trigger you create, you specify the CloudWatch alarm that AWS CloudFormation should monitor. AWS CloudFormation monitors the specified alarms during the stack create or update operation, and for the specified amount of time after all resources have been deployed. If any of the alarms goes to ALARM state during the stack operation or the monitoring period, AWS CloudFormation rolls back the entire stack operation.

AWS AppConfig supports best practices by rolling out configuration changes instantly or gradually. The configuration change is monitored over a time period that customers define. If you configure alarms in Amazon CloudWatch, AWS AppConfig can automatically roll back configuration changes in the event that those alarms are triggered.
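A gradual AppConfig rollout can be started with a single boto3 call. In this sketch, the IDs are placeholders for an application, environment, and configuration profile created beforehand, and the strategy shown is one of AppConfig's predefined deployment strategies.

```python
# Minimal sketch: start a gradual AWS AppConfig deployment. All IDs are
# illustrative placeholders for resources created beforehand.
import boto3

appconfig = boto3.client("appconfig")

deployment = appconfig.start_deployment(
    ApplicationId="abc1234",
    EnvironmentId="def5678",
    ConfigurationProfileId="ghi9012",
    ConfigurationVersion="1",
    # Predefined strategy: shift traffic linearly; CloudWatch alarms attached
    # to the environment trigger an automatic rollback if they fire.
    DeploymentStrategyId="AppConfig.Linear50PercentEvery30Seconds",
)
print(deployment["State"])
```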
Adapting change management to the cloud

There are two areas in which the change process may need to be adapted. First, because the risk and impact to the business of a failed change are greatly reduced, changes can be made more frequently and with more confidence in the rollback plan. As a result, the second area for consideration is the acceptance of rolling back changes. If failed changes have a much lower impact due to the speed and consistency of rollback, activating rollbacks should be considered part of the normal process. This is particularly true if it is possible to quickly remediate the issue and push it through the same automated pipelines to quickly deliver the original intended business value of the change.

With these considerations in mind, if automation pipelines and deployment methods are in place, it may be possible to reconsider the approach to standard changes. A standard change is where there is a defined trigger to initiate the change request. In addition, in a standard change, actions are well known, documented, and proven; authority is given in advance (or pre-authorized); and the risk is usually low. If the appropriate automation, testing, and deployment strategies are put in place, it should result in a scenario where large, infrequent, and risky changes are transformed into small, frequent, low-risk changes. By understanding the risk-reduction strategies that are enabled by the AWS Cloud, it should be possible (and it may even be necessary) to widen the scope of a standard change to include deployments that would have previously been considered normal changes, due to the risks associated with them in traditional IT environments.

As changes become more frequent due to agile methodologies and increased automation, there is a risk that change management becomes overburdened with normal changes, which can lead to changes being delayed due to bandwidth constraints, or to important details being missed as changes are not properly scrutinized due to resource constraints. Both of these scenarios introduce business risk, which change management aims to optimize. In an environment of small, frequent changes, standard changes should become the new normal, so proper scrutiny can be given to normal changes, optimizing business risk and enabling the delivery of business value. "A reduction in the size of a change reduces the risk of disruption. Smaller changes also mean that change can happen more frequently. By changing more frequently, the organization's capability of changing is improved. Increased capability of change leads in turn to lower risk of disruption" (ITIL High Velocity IT, AXELOS).

You can use deployment tools such as AWS CodePipeline, a continuous integration and continuous delivery service, to define your release process workflow. CodePipeline can pull source code directly from a repository, run builds and unit tests in AWS CodeBuild, and then deploy changes into a staging or production environment using AWS CodeDeploy if the build and test cases are successful. CodePipeline automates the entire release process, which reduces manual errors, increasing agility and business value and allowing you to optimize business risk.
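Assuming such a pipeline already exists (here under the illustrative name "release-pipeline"), an execution can be started and its stages inspected with boto3:

```python
# Minimal sketch: trigger a CodePipeline execution and print per-stage status.
# The pipeline name is an illustrative assumption.
import boto3

codepipeline = boto3.client("codepipeline")

execution_id = codepipeline.start_pipeline_execution(
    name="release-pipeline"
)["pipelineExecutionId"]
print(f"Started execution {execution_id}")

state = codepipeline.get_pipeline_state(name="release-pipeline")
for stage in state["stageStates"]:
    print(stage["stageName"], stage.get("latestExecution", {}).get("status"))
```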
Service transition

After a release has been approved via the change management process, and all the appropriate project management, release, and deployment management steps have been followed, the release is deployed and enters into a process of service validation and testing. It's worth pausing here to determine the scope of service validation and testing within the AWS Cloud. This is best illustrated by understanding the AWS Shared Responsibility Model for security. The validation and testing of a service should be limited to the areas of that model that are in scope for the customer. However, it is critical that operations have an operational understanding of any managed services before acceptance into service.

[Figure: AWS Shared Responsibility Model]

As previously stated, automation, integration, and deployment tools in the AWS Cloud allow the business to make small, frequent changes that reduce business risk and introduce business value at an increased rate. The introduction of the cloud should not change the process of service validation and testing, but the rate of change will lead to an increased requirement for validation and testing that may require changes to the implementation of the process and the focus of the stakeholders.

Changes introduce business value. It is important that releases meet customer expectations and that IT operations teams are able to support this new added business value. The criteria for assessing this value in the cloud should not change from what already exists, but the organization must be prepared for the increase in releases and adapt the implementation of these processes by introducing automation to the processes.

A new service requires consent from the customer that the new service meets agreed service level requirements. The current best practices of tracking your service level objectives and tracking service level agreement (SLA) breaches still apply. This can be done by a third-party monitoring service for external-facing services. For internal services, this must be tracked with monitoring and metrics on the primary business function of the services. Separate service level requirements may exist for different aspects of services, and additional dimensions may be required as metrics to indicate which aspect is being measured. Indeed, it is often this monitoring that drives an automated rollback, if it indicates that there is a trend towards violating an SLA.

Operations must be able to support a new release or service before it is made available to the customer. With the correct tooling, this process can be largely automated by automating the creation of documentation, provisioning automated runbooks and playbooks, and building predefined and automated patching plans. This process can be made even more robust by using the correct tooling to ensure that only pre-approved services are used. The focus of a test manager should be to automate service acceptance testing as much as is possible. This is made easier in the cloud, with a wide variety of tools available for both validation and testing.

In the AWS Cloud, Amazon CloudWatch provides you with data and actionable insights to monitor your applications running on AWS or on-premises, respond to system-wide performance changes, and get a unified view of operational health. You can set alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly. CloudWatch provides different features, including Dashboards, Synthetic monitoring, CloudWatch Application Insights, and ServiceLens, which can be used during and after service transition to ensure that actionable alarms are present to prevent or remediate service degradation or failure. Access to metrics, logs, and dashboards to monitor the health of an application helps teams resolve problems faster, reducing business risk.
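As one concrete example, an SLA-oriented latency alarm might be defined like this with boto3; the load balancer dimension and the threshold are illustrative and would be tuned to the agreed service level objective.

```python
# Minimal sketch: a CloudWatch alarm on p99 response time, the kind of KPI
# signal that can drive notifications or automated rollback. The load
# balancer value and threshold are illustrative placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="api-p99-latency-high",
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"}],
    ExtendedStatistic="p99",
    Period=60,
    EvaluationPeriods=5,
    Threshold=0.5,  # seconds; tune to the service level objective
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[],  # e.g. an SNS topic ARN, or a rollback automation hook
)
```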
Reliability

Change implementation has a direct impact on the availability of workloads and the ability to recover from logical disasters. There is detailed information in the AWS Well-Architected Framework Reliability Pillar whitepaper, specifically in the Operational Considerations for Availability section. Automation of change is foremost in maximizing availability: if you have any manual processes, you lose critical time awaiting those manual actions. Use deployment patterns that reduce risk, such as blue/green or canary deployments. Ensure that there is comprehensive testing in pipelines, including load testing, performance under load, and resiliency testing. Effective monitoring of the key performance indicators (KPIs) is a requirement, and automated rollback should be triggered if those KPIs indicate that thresholds are likely to be exceeded. Test disaster recovery thoroughly to ensure that recovery objectives are met. All backing up of data must be done through automation. Regularly restore and recover in order to ensure that your recovery process and procedures are valid.

These considerations improve the reliability of workloads and decrease risk. Change management processes should reflect this reduction in risk, and organizations should consider that, because the "risk is usually low and always well understood" (ITIL Service Transition, AXELOS), automated, frequent, small, and reversible changes can be processed as standard changes.

Conclusion

Automation, integration, and deployment tools in the cloud allow the business to make small, frequent changes that reduce business risk and introduce business value at an increased rate. Change processes should be adapted to reflect what is actually being changed, the increase in the amount of change, and the reduced risk associated with these changes. For changes that do not take advantage of automation, consistency, or rollback, the change process should remain as-is. Finally, it's always worth considering the business impact and risk of not implementing a change or introducing delay, and remembering that the purpose of managing change is to optimize business risk.

Reference: Original paper 2022-01-22 17:25:33
海外TECH DEV Community Enable feature-rich Logging for FastAPI on Google Cloud Logging https://dev.to/floflock/enable-feature-rich-logging-for-fastapi-on-google-cloud-logging-j3i Enable feature-rich Logging for FastAPI on Google Cloud Logging

Introduction

I recently evaluated some of the typical Python web frameworks for a small REST API project. Previously I worked with Flask, but something caught me to try another framework this time. To put it very briefly: this time I decided to use FastAPI, a modern and high-speed framework for use with REST APIs. FastAPI also has other advantages. For example, the documentation of the endpoints is automatically generated in OpenAPI format and output as a Swagger interface. Smart! Many voices and experts also see FastAPI as an emerging framework, so it's time to give it a try.

Chakit Arora (@arorachakit): "Frameworks to know if you code in Python: Flask, Django, FastAPI (getting popular). Use these to build your APIs. Which Python framework do you like the most?"

Since many services are already running in the Google Cloud, it was evident for this project to fall back on the hosting services of the Google Cloud; a new framework and new infrastructure at the same time would feel overwhelming. My project in this case processes financial data, and accordingly I wanted to make sure that the logging of activities is clean and traceable. Sometimes I need to reproduce the steps of the processors one by one in order to understand where adaptions to the business logic are needed.

Google Cloud Logging

Google Cloud offers a product called Google Cloud Logging for logging and tracing. All log entries are collected centrally and retrieved using a custom query language. With most hosting options in the Google Cloud, the log entries are collected and processed via Fluentd. Google provides documentation for a JSON object of the optimal log entry; one of our goals would be to meet the requirements of this object. In addition, Google provides clients for the most common programming languages that convert log entries into that format:

pip install google-cloud-logging

In the case of Python, the Django and Flask frameworks are natively supported by the client. For both frameworks, the client converts a lot of data into the proper format, especially the severity level and the trace of the request. Django and Flask benefit from built-in framework recognition by the Google client; have a look at the specific code piece. However, FastAPI is a quite new framework and does not have any built-in support from the logging client. The log entries look loosely coupled, and even the severity of the log entries is not correctly shown by Google Cloud Logging. I tried to modify my logging accordingly to ensure better traceability of the log entries.

Enable Log Tracing

One of the main objectives was to include the so-called trace ID for each log entry. This ID enables the log entries to be linked to the respective request. The additional information makes it possible to trace which request caused the log entry. It is particularly advantageous when the REST API is under heavy traffic and errors or warnings should be reproduced. I know that there are tools like OpenTelemetry; even Google suggests OpenTelemetry in their docs. But for most applications, in my opinion, this is over-engineering, especially for stand-alone applications not chained in a micro-service architecture. However, if applications are hosted with the App Engine, Cloud Functions, or Cloud Run on Google Cloud, the GCP proxy load balancer technology adds a special incoming request header:
X-Cloud-Trace-Context: TRACE_ID/SPAN_ID;o=TRACE_TRUE

We will use this header information to link the log entries, as already discussed.

File Structure

I decided to create a directory called cloud_logging. At this point it should be mentioned that the folder is deliberately not just called logging, as there may be complications with the built-in packages of the same name. The structure after setup of FastAPI and my logging adaptation looks like this:

```
├── Dockerfile
├── cloud_logging
│   ├── __init__.py
│   ├── filter.py
│   ├── middleware.py
│   └── setup.py
├── main.py
├── readme.md
└── requirements.txt
```

Middleware

Let us start with the middleware. This class implements the BaseHTTPMiddleware class provided by Starlette and is mainly used to provide additional logic for all incoming requests. Starlette is shipped with FastAPI, and it is the underlying ASGI framework. Essentially, two important things are done here. Firstly, essential information about the request is written into a Python dictionary. Secondly, it checks whether the previously mentioned header is available. Both pieces of information are processed and written into context variables; we use these variables later in the filter. The finished middleware then looks like this:

```python
import contextvars
import sys

from fastapi.logger import logger
from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.requests import Request
from starlette.responses import Response, JSONResponse

cloud_trace_context = contextvars.ContextVar('cloud_trace_context', default='')
http_request_context = contextvars.ContextVar('http_request_context', default=dict())


class LoggingMiddleware(BaseHTTPMiddleware):
    async def dispatch(
        self, request: Request, call_next: RequestResponseEndpoint
    ) -> Response:
        if 'x-cloud-trace-context' in request.headers:
            cloud_trace_context.set(request.headers.get('x-cloud-trace-context'))

        http_request = {
            'requestMethod': request.method,
            'requestUrl': request.url.path,
            'requestSize': sys.getsizeof(request),
            'remoteIp': request.client.host,
            'protocol': request.url.scheme,
        }

        if 'referrer' in request.headers:
            http_request['referrer'] = request.headers.get('referrer')

        if 'user-agent' in request.headers:
            http_request['userAgent'] = request.headers.get('user-agent')

        http_request_context.set(http_request)

        try:
            return await call_next(request)
        except Exception as ex:
            logger.debug(f'Request failed: {ex}')
            return JSONResponse(
                status_code=500,  # assuming an internal server error response
                content={'success': False, 'message': str(ex)},
            )
```

Logging Filter

After that, it is necessary to implement a logging filter. The following filter extends the CloudLoggingFilter provided in the Google Cloud Logging library. We overwrite the filter method, which is called for each log entry. The main task of the filter is to append the information previously fetched by the middleware to the record, following the Google Cloud Logging format. The trace header still needs a little processing. The finished filter looks like this:

```python
import logging
import re

from google.cloud.logging_v2.handlers import CloudLoggingFilter

from cloud_logging.middleware import http_request_context, cloud_trace_context


class GoogleCloudLogFilter(CloudLoggingFilter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.http_request = http_request_context.get()

        trace = cloud_trace_context.get()
        split_header = trace.split('/', 1)

        record.trace = f"projects/{self.project}/traces/{split_header[0]}"

        header_suffix = split_header[1]
        record.span_id = re.findall(r'\w+', header_suffix)[0]

        super().filter(record)
        return True
```

Setup

Last but not least, we have to make sure that the Python logger executes the desired logic. For this purpose, I have written a setup method that can be called when the FastAPI application is started.
Essentially, this connects the logger provided by Python to Google Cloud Logging and our filter:

```python
import logging

import google.cloud.logging
from fastapi.logger import logger

from cloud_logging.filter import GoogleCloudLogFilter


def setup_logging():
    client = google.cloud.logging.Client()
    handler = client.get_default_handler()
    handler.setLevel(logging.DEBUG)
    handler.filters = []
    handler.addFilter(GoogleCloudLogFilter(project=client.project))
    logger.handlers = []
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
```

Main Application

When starting the FastAPI application, it is still necessary to add the middleware and execute the setup method. I have decided to use Google Cloud Logging only in the production environment; locally, the standard logger is used. The main application (main.py) could then look something like this:

```python
import logging

from fastapi import FastAPI
from fastapi.logger import logger
from pydantic import BaseSettings

from cloud_logging.middleware import LoggingMiddleware
from cloud_logging.setup import setup_logging


class Settings(BaseSettings):
    environment: str = 'development'


settings = Settings()
app = FastAPI()

if settings.environment == 'production':
    setup_logging()
    app.add_middleware(LoggingMiddleware)
else:
    logger.setLevel(logging.DEBUG)


@app.get('/')
async def root():
    logger.debug('DEBUG LOG')
    logger.error('ERROR LOG')
    logger.warning('WARNING LOG')
    logger.info('INFO LOG')
    return {'message': 'Hello World'}
```

After re-deploying the application to Google Cloud Run, the log entries look much better.

Conclusion

Clean log entries are one of the most important things for me when it comes to programming robust applications. Even though the process described here is not yet perfect, the log entries in the Google Cloud interface look great. And with the help of the trace, the log entries that belong together can now be analysed.

Dockerfile

Just to make things complete in the blog post, I will share my Dockerfile as well. The image is optimized for usage on Google Cloud Run. The environment variable PORT is always set when using Cloud Run.

```dockerfile
# Assuming typical Cloud Run values for the Python tag and worker/thread counts.
FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY . /app

CMD exec gunicorn --bind :$PORT --workers 1 --worker-class uvicorn.workers.UvicornWorker --threads 8 main:app
```

Cover Photo by Chris Ried on Unsplash 2022-01-22 17:19:29
海外TECH DEV Community How to create database MySQL with Docker https://dev.to/jessicanathany/how-to-create-database-mysql-with-docker-1ph3 How to create database MySQL with Docker

I'll show how to create a local MySQL database with Docker on Windows. It is necessary to have the latest version of Docker installed on your machine.

1st step: Download the MySQL image

Let's download the latest MySQL image from Docker Hub. Type the following command:

docker pull mysql/mysql-server

2nd step: Execute the container

Let's execute a container from the image we downloaded. At first we don't configure environment variables; for the moment, we just execute MySQL with the default user and without a password. Before executing the command, you can create a directory with the name mysql. Note: create the directory C:\mysql; by default, MySQL needs the folder /var/lib/mysql, which is the directory where MySQL will save your database. Write the command below in your terminal:

docker run -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -v C:\mysql:/var/lib/mysql mysql

3rd step: Getting the container's IP

To discover the container IP, write the command docker container ls to verify the containers that are running, copy the ID of the container, and use the command:

docker container inspect <CONTAINER_ID>

4th step: Access the database from the container

To access the database through the container, we need to go inside the container, so write the command below:

docker exec -it <CONTAINER_ID> bash

We now have a MySQL database in a Docker container, and you can create your databases and tables.

Conclusion

This is a simple example of how to create a database with Docker. In this example I created a sample database, but you can also configure the user and password for your database. I write so I don't forget! 2022-01-22 17:03:43
海外TECH DEV Community The price of free time: programmer’s guide to helping a Non-profit https://dev.to/artpi/the-price-of-free-time-programmers-guide-to-helping-a-non-profit-31a4 The price of free time: a programmer's guide to helping a Non-profit

Congratulations! You have decided to help out a Non-profit. Full of energy and good intentions, you have embarked on a journey to use your professional skills to help a cause. It's a win-win: surely, with a better website, CRM, and tech, they will be able to help a few more people. You, on the other hand, will meet interesting folk, do something purposeful (as opposed to optimizing button colors at your day job), and learn a few things. Here is what you need to know to not go insane.

The benefits of helping a Non-profit

You probably have personal reasons to help a Non-profit. Working on hard problems with friends is one of the most fulfilling things you can do with your life. If you are not working on a world-changing startup and you need a respite from the drudgery of corporate existence, a Non-profit may be your best next bet: the purpose and mission are plentiful. Non-profits are also a great place to meet interesting, like-minded people. Working side by side, you can make real friends and create deeper connections than you would build by exchanging the latest plots of TV shows over coffee at work.

But there also are powerful benefits directly translating to your career. My entire programming journey started from helping a Non-profit: a scout team I was a part of. I made my first website, graduated to building one for dad's business, and later launched a WordPress web agency. Now I work at WordPress.com, periodically reporting to the creator of WordPress himself. During that journey, I helped my high school, a local TEDx chapter, and a non-profit supporting remote work.

Working on projects is the best way to learn: you get to experiment with real-world problems, and you get to try out different approaches and fail, building that tacit knowledge that makes one an expert. Since you are not paid for your contributions, there is a shared understanding of what can be expected of you in a Non-profit. You have a mandate to play a little, try out things your way, and goof off. To further boost learning, it feels more like play than work, encoding the knowledge much more effectively.

The traps

As with everything in life, the downsides are directly correlated to the upsides. Yes, in a Non-profit you can be a bit unpredictable and inexperienced. It does not feel like work, and you get a breather from the corporate feel of a professional workplace. But guess what: other people get to do that too. If you have just reserved a weekend to finish that signup page and the people preparing the copy texts decided to be unprofessional, it suddenly becomes a problem.

The Hero's (that's you) Journey

Let's assume you volunteered to create a website for your favorite Non-profit. Don't be surprised if the whole process goes like this:

- You start full of energy and ideas. The non-profit is eager to launch a new website because they have project X coming up. Project X is the most important thing, and the website (meaning you) is a blocker.
- You jump straight into work. You cannot be a blocker, right?
- You ramp up and are ready to implement the most important piece. The texts and promotional materials are not ready, despite previous promises.
- You try to work around these requirements; project X is most important, right?
- You get a call. It seems that the "About the Team" page is most important now. Let's do a photoshoot for the Team. You still don't have materials for
project X, but you got pages of UI corrections, including a bigger logo, different button colors, and some creative ideas about the slider.
- You start implementing those changes; you still have no materials about project X.
- Wait, there are changes to the changes now. Can you revert to the old button color?
- Sometime after a few weeks, we finally got the project X page to work. The placeholder photos you chose are still there.
- The "About the Team" page, which got meetings, a photoshoot, and hours of your time, has gotten hardly any visitors this month.

Things to watch out for in a Non-profit

The price of free time

Professional environments learned a long time ago that time is money. If everybody is salaried, the easiest way to turn a profit is to stop wasting people's time. The correlation is clear and obvious. I do realize that corporate environments waste mind-blowing eons of their employees' time; this is due to scale, since big organisms being less nimble is a law of physics called inertia. Non-profits, however, have a peculiar relationship with money. They are called Non-profits, duh. They get funded through donations, grants, and sometimes sales, but they are incentivized not to run a tight operation. Volunteers' time is treated as free, so wastefulness is not controlled. It's up to you to say no, which is hard, because non-profits attract precisely the people least likely to defend their time.

It's everybody else's side gig too

As I mentioned, you can learn and experiment with new techniques and approaches. But other people do too. If your work depends on graphic design, don't be surprised when the designer comes up with something out of the box, which naturally will be harder for you to implement than the run-of-the-mill website. Other people, like you, will cut corners. The designer has a family to feed, probably a day job, and the thing called life. She can't check every resolution, think about the dimensions of headlines when you cram too many characters into a title, and give the proper attention to everything.

Last but not least, without salary, recognition becomes the currency. Don't be surprised that "About the Team" is treated as the most important page on the entire website even if the visitors don't care; this is the equity paid to volunteers. Being paid with recognition also drives some folk to seek more of that compensation: they will contribute to discussions where they have not much expertise nor understanding. These are perfect bikeshedding conditions. Beware!

Non-profits are passion-driven

Most non-profits have a mission to fix a particular problem in the world. Hunger, poor education, lack of equality, climate change: these are all areas society is failing at, and non-profits are stepping in to help. Many people are driven to work on these problems because they feel strongly about putting up with the collective screwups of society. Non-profits tend to attract people who approach most of the problems with passion and purpose, with no patience for tedious reasoning. This leads to:

Passion-driven project management

- Urgency is the sole method of prioritization. Things are made urgent to ensure their completion, not because they actually are time-sensitive.
- Since urgency equals priority, the priorities are fluid over time. Yesterday's priority is forgotten today, because somebody who feels more strongly comes in with more passion.
- Flashy things are more important than fulfilling the initial purpose. If you are working on a website, prepare for multiple CTAs.

Non-profit survival techniques

These techniques helped me stay sane while working within a few organizations:

- Find a senior member of the
organization to "report to". Ideally somebody with corporate experience and some tenure inside the Non-profit. You don't want to report to a committee.
- Never agree to do anything ASAP. Chances are that before you get to it, the original request will change or be forgotten. Save yourself the revert. Bonus points for batching change requests into sprints.
- They will promise you texts, materials, and whatever else you'll need. You WILL NOT get them on time. Plan accordingly.
- Record yourself changing stuff in the interface; this will be a good v1 for documentation, so that everyone else can implement tiny changes themselves.
- If you are creating a website, for goodness sake use WordPress. It will save you from reinventing the wheel. With WP, you have ready tutorials to send people to, so you don't have to fix every typo yourself. Chances are that other folks have WP experience too, and the next person dealing with the system will know what to do with it.
- Use a ready template; don't work with an empty canvas. Yes, it will be less original than a custom-made design, but you will be able to get off the ground and focus on what's important: content and functionality. You have no idea how many tiny details come together to make a template work. Implementing a custom design without an hourly rate will lead to an endless back and forth on every detail; it costs them nothing to throw in another change. The constraints of an existing template work in your favor.
- Every statement you hear will be over-hyped; it's a function of passion-driven project management. You have to do the mental math of halving the emotional charge of all statements.
- Remember to have fun. Despite unreasonable requests, the people you are working with are probably quite awesome. Don't forget that, and schedule some time to meet them as people, not as vendors of website updates.

Working in a Non-profit is a process of realizing that the corporate environment has its advantages and lessons to teach you as well. Coming to work on Monday to a well-oiled machine, where every cog (including you) is humming nicely, where the work flows seamlessly through the paths of well-established processes, where everything has its place, is a refreshing experience. Of course, sometime around Wednesday you are sick of it all, yearning for the freedom and creativity you get to enjoy in your organization.

This post was first published on Piszek.com. Check out my site, where you can find more articles like this one. 2022-01-22 17:03:15
Apple AppleInsider - Frontpage News iPhone 13 'pink screen' issue affects small number of users https://appleinsider.com/articles/22/01/22/iphone-13-pink-screen-issue-affects-small-number-of-users?utm_medium=rss iPhone 13 'pink screen' issue affects a small number of users. A so-called pink screen issue is affecting a small number of iPhone 13 units, with a brightly colored display and crashes thought by Apple support teams to be a software problem, not hardware. (An iPhone with a pink display, via Apple Support Forums user Jphon.) Shortly after the release of the iPhone 13 and iPhone 13 Pro lines in late 2021, a small number of users encountered a frustrating situation with their smartphones: their iPhones would lock up and display a pink screen. 2022-01-22 17:56:29
海外TECH Engadget 'Dying Light 2' will include free PS5 and Xbox Series X/S upgrades https://www.engadget.com/techland-dying-light-2-ps5-xbox-series-x-upgrade-free-174713239.html?src=rss 'Dying Light 2' will include free PS5 and Xbox Series X/S upgrades. After multiple delays, Dying Light 2 will finally arrive on February 4th. If you haven't had a chance to purchase a PlayStation 5 or Xbox Series X/S yet, developer Techland is making the decision of whether to buy the game now or later easy. In an announcement spotted by Eurogamer, the studio shared this week that it will provide free current-gen upgrades to those who buy Dying Light 2 on PlayStation 4 or Xbox One. What that means is that you'll have the chance to play the game with improved graphics at a later date. Like many recent PS5 and Xbox Series X/S releases, Dying Light 2 will ship with multiple rendering modes, thereby allowing you to configure the game to prioritize either graphical fidelity or better performance. If you want the best possible graphics, you can choose between separate "Quality" and "Resolution" modes. As you can probably tell from the name, the latter will attempt to render the game at 4K. Less obvious is the Quality mode, which adds raytracing to the experience. And if all you want is a smooth framerate, the included "Performance" mode will render Dying Light 2 at 60 frames per second or greater. You can see the different modes in action in the video above. The news comes in the same week Techland announced that the cloud version of Dying Light 2 for Switch will be delayed by up to half a year. The studio said it made the decision to push back the release to ensure it could provide the best possible experience to Nintendo fans. 2022-01-22 17:47:13
海外TECH Engadget Sony will release a movie made using the PlayStation game-builder ‘Dreams’ https://www.engadget.com/play-station-game-builder-dreams-to-be-used-in-feature-film-picked-up-by-sony-170550385.html?src=rss Sony will release a movie made using the PlayStation game-builder 'Dreams'. Sony Pictures Classics has picked up the rights to an animated movie entitled A Winter's Journey, which will be made in part using the PlayStation game-creation tool Dreams. According to Deadline, the film will blend live actors with CG and hand-painted animation, and is an adaptation of Franz Schubert's set of songs for voice and piano called Winterreise. It tells the story of a lovelorn poet who embarks on a dangerous journey that takes him across mountains and snow in Bavaria. Dreams was originally created by Media Molecule, the studio behind LittleBigPlanet, for the PS4. The studio pitched it as a way to create "art, movies and video games" from the start, and we once described it as "an engine, learning suite and distribution platform rolled into one". Since then, people have been using it to create their own games, realistic renders of nature, and immersive experiences of their favorite movies, among other things. A Winter's Journey, however, will reportedly be the first time Dreams is used on a feature film. The movie has yet to get a release date, but shooting is expected to start in June in Wrocław, Poland, with actors that include John Malkovich and Jason Isaacs. It'll likely take some time before it's ready to premiere. As for Dreams itself, it's currently on sale in the US PlayStation Store, and it includes a rotating list of the most creative games made using the tool. 2022-01-22 17:05:50
ニュース BBC News - Home Call to delay compulsory Covid vaccines for NHS staff https://www.bbc.co.uk/news/uk-60096735?at_medium=RSS&at_campaign=KARANGA college 2022-01-22 17:48:20
ニュース BBC News - Home 'Unspeakable heartache' caused by Belfast murder https://www.bbc.co.uk/news/uk-northern-ireland-60096966?at_medium=RSS&at_campaign=KARANGA belfast 2022-01-22 17:03:27
ニュース BBC News - Home Autograph book found in dusty garage fetches £11k at auction https://www.bbc.co.uk/news/uk-england-somerset-60096804?at_medium=RSS&at_campaign=KARANGA grace 2022-01-22 17:05:16
ニュース BBC News - Home Rashford hits stoppage-time winner against West Ham to move Man Utd into top four https://www.bbc.co.uk/sport/football/60005896?at_medium=RSS&at_campaign=KARANGA Rashford hits stoppage time winner against West Ham to move Man Utd into top fourMarcus Rashford scores a stoppage time winner as Manchester United snatch victory against West Ham to move into the top four of the Premier League 2022-01-22 17:25:08
ニュース BBC News - Home Shelvey gives Newcastle vital win at Leeds https://www.bbc.co.uk/sport/football/60005895?at_medium=RSS&at_campaign=KARANGA jonjo 2022-01-22 17:29:30
ニュース BBC News - Home Fourth-tier Kelty stun Scottish Cup holders St Johnstone https://www.bbc.co.uk/sport/football/60005931?at_medium=RSS&at_campaign=KARANGA hearts 2022-01-22 17:53:47
ニュース BBC News - Home Drone hovering over stadium delays Wolves win over Brentford https://www.bbc.co.uk/sport/football/60005893?at_medium=RSS&at_campaign=KARANGA brentford 2022-01-22 17:26:46
ビジネス ダイヤモンド・オンライン - 新着記事 The worst kind of advice, according to a psychiatrist - Psychiatrist Tomy's guide to letting go of your emotional baggage https://diamond.jp/articles/-/292582 voicy 2022-01-23 03:00:00
ビジネス ダイヤモンド・オンライン - 新着記事 [President Deguchi's weekend re-learning special lecture] Why Socrates is still singled out in philosophy textbooks, as revealed by a modern giant of knowledge - A Complete History of Philosophy and Religion https://diamond.jp/articles/-/292833 2022-01-23 02:55:00
ビジネス ダイヤモンド・オンライン - 新着記事 The difference between people who can thrive even in unfamiliar jobs and people who pay for courses but still fail to grow - Changing jobs will save us https://diamond.jp/articles/-/290459 Summary: "Is it okay to keep working at my current company?" "But changing jobs is scary..." 2022-01-23 02:50:00
ビジネス ダイヤモンド・オンライン - 新着記事 Suffering because you hate yourself... A convincing reason why the more deeply you worry, the more the gods and buddhas love you - Things the gods and buddhas secretly taught me https://diamond.jp/articles/-/293420 gods and buddhas 2022-01-23 02:45:00
ビジネス ダイヤモンド・オンライン - 新着記事 Why Kiritani-san, famous as the "shareholder-perks uncle," started investing in US stocks - Kiritani-san's introduction to Japanese and US stocks, made with ZAi, the best-selling monthly money magazine https://diamond.jp/articles/-/292101 2022-01-23 02:40:00
ビジネス ダイヤモンド・オンライン - 新着記事 What you should know to survive a society of 100 million knee-jerk "reactions" - That's why, this book. https://diamond.jp/articles/-/293711 2022-01-23 02:35:00
ビジネス ダイヤモンド・オンライン - 新着記事 [Why we find life so hard (Part 2)] Keeping the right distance from people is difficult! A few small tricks will ease your mind - How very easily hurt people can live without being tormented by insensitive people https://diamond.jp/articles/-/293445 2022-01-23 02:30:00
ビジネス ダイヤモンド・オンライン - 新着記事 When someone close to you dies, never leave their car sitting unattended - The no-nonsense guide to inheritance procedures https://diamond.jp/articles/-/294059 close to you 2022-01-23 02:25:00
