Posted: 2022-04-06 03:34:17 RSS feed digest as of 2022-04-06 03:00 (44 items)

Category  Site  Article title / trending keyword  Link URL  Frequent words and summary / search volume  Date registered
AWS AWS Big Data Blog Introducing Protocol buffers (protobuf) schema support in Amazon Glue Schema Registry https://aws.amazon.com/blogs/big-data/introducing-protocol-buffers-protobuf-schema-support-in-amazon-glue-schema-registry/ AWS Glue Schema Registry now supports Protocol buffers (protobuf) schemas in addition to JSON and Avro schemas. This allows application teams to use protobuf schemas to govern the evolution of streaming data and centrally control data quality, from data streams to data lake. AWS Glue Schema Registry provides an open-source library that includes Apache-licensed serializers… 2022-04-05 17:21:53
AWS AWS Big Data Blog Run AWS Glue crawlers using Amazon S3 event notifications https://aws.amazon.com/blogs/big-data/run-aws-glue-crawlers-using-amazon-s3-event-notifications/ The AWS Well-Architected Data Analytics Lens provides a set of guiding principles for analytics applications on AWS. One of the best practices it describes is to build a central Data Catalog to store, share, and track metadata changes. AWS Glue provides a Data Catalog to fulfill this requirement, and it also provides crawlers that automatically… 2022-04-05 17:17:00
AWS AWS Machine Learning Blog Customize the Amazon SageMaker XGBoost algorithm container https://aws.amazon.com/blogs/machine-learning/customize-the-amazon-sagemaker-xgboost-algorithm-container/ The built-in Amazon SageMaker XGBoost algorithm provides a managed container to run the popular XGBoost machine learning (ML) framework, with the added convenience of supporting advanced training and inference features like distributed training, dataset sharding for large-scale datasets, A/B model testing, and multi-model inference endpoints. You can also extend this powerful algorithm to accommodate different requirements… 2022-04-05 17:24:58
AWS AWS Machine Learning Blog Detect adversarial inputs using Amazon SageMaker Model Monitor and Amazon SageMaker Debugger https://aws.amazon.com/blogs/machine-learning/detect-adversarial-inputs-using-amazon-sagemaker-model-monitor-and-amazon-sagemaker-debugger/ Research over the past few years has shown that machine learning (ML) models are vulnerable to adversarial inputs, where an adversary can craft inputs to strategically alter the model's output, whether in image classification, speech recognition, or fraud detection. For example, imagine you have deployed a model that identifies your employees based on images of their… 2022-04-05 17:19:32
Google Official Google Blog Make Google Maps your copilot with these new updates https://blog.google/products/maps/make-google-maps-your-copilot-these-new-updates/ Say goodbye to road trip and vacation planning woes with new updates to Google Maps. Whether you're driving around a new city or heading out on a weekend road trip, we're launching new improvements, including toll prices, a more detailed navigation map, and iOS updates to help you plan your drive, save money, and explore a new place. To toll or not to toll? Pick the best route with new toll prices. Long-distance drives, poor road conditions, and heavy traffic can dampen the mood of any road trip. In those moments you might want to take a toll road. To help make the choice between toll roads and regular roads easier, we're rolling out toll prices on Google Maps for the first time. Soon you'll see the estimated toll price to your destination before you start navigating, thanks to trusted information from local tolling authorities. We look at factors like the cost of using a toll pass or other payment methods, what day of the week it is, and how much the toll is expected to cost at the specific time you'll be crossing it. Not a fan of toll roads? No problem. When a toll-free route is available, we'll still show you that route as an option. Like always, you can choose to avoid seeing routes with toll roads completely: simply tap the three dots at the top right corner of your directions in Google Maps to see your route options and select "Avoid tolls". You'll start seeing toll prices on Android and iOS this month for toll roads in the U.S., India, Japan, and Indonesia, with more countries coming soon. New toll prices in Google Maps will help you decide the best route for you. A more detailed map so you can navigate new roads with ease: driving on unfamiliar roads can be stressful, especially when you're driving at night or with a car full of people. We're adding rich new details to Google Maps' navigation experience so you can explore with confidence. You'll soon see traffic lights and stop signs along your route, along with enhanced details like building outlines and areas of interest. And in select cities you'll see even more detailed information, like the shape and width of a road, including medians and islands, so you can better understand where you are and decrease the odds of making last-minute lane changes or missing a turn. The new navigation map starts rolling out to select countries in the coming weeks on Android, iOS, Android Auto, and CarPlay. Google Maps will soon show traffic lights and stop signs along your route, as well as other enhanced details. Easier ways to explore on iOS: when you're out and about, efficiency matters, whether you want to be unattached to your iPhone, use Siri to look up directions while behind the wheel, or quickly search within Google Maps. We're rolling out new iOS updates that make Google Maps easier to use on the go. Access Google Maps from your home screen with new widgets: our new pinned trip widget lets you access trips you've pinned in your Go Tab right from your iOS home screen, making it even easier to get directions. You can see your arrival time, the next departure for your transit trip, and even a suggested route if you're driving. And because good things come in small packages, we're also making the existing Google Maps search widget smaller, so you can search for your favorite places or navigate to frequent destinations with one tiny tap. Make sure you have the latest version of the Google Maps app downloaded to see these widgets in the coming weeks. Navigate from your Apple Watch: if you have an Apple Watch and constantly find yourself away from home (and away from your phone), you'll soon be able to get directions on Google Maps directly from your Watch. Starting in a few weeks, you'll no longer need to begin navigation from your iPhone. Simply tap the Google Maps shortcut in your Apple Watch app and navigation will start automatically on your Apple Watch. You can also add the "Take me home" complication to your watch and tap it to start navigating home on Google Maps. Search and get directions with Siri and Spotlight: Google Maps is integrating directly into Spotlight, Siri, and the Shortcuts app on iOS. Once you've set up the shortcuts, just say "Hey Siri, get directions" or "Hey Siri, search in Google Maps" to access Google Maps' helpful information instantly. You'll start seeing this feature in the coming months, with enhanced Siri search functionality coming later this summer. We're always looking for more ways to bring new information to Maps to help you explore. For more on how to use Maps as your copilot for road trip travel, check out these tips. 2022-04-05 17:08:00
技術ブログ Developers.IO Visualizing IoT device data in SiteWise Monitor without writing any code https://dev.classmethod.jp/articles/introduce-sitewise-monitor/ awsiotsitewisemonitor 2022-04-05 17:25:13
海外TECH Ars Technica Canoo wins NASA’s Artemis crew transport vehicle contract https://arstechnica.com/?p=1845901 vehicle 2022-04-05 17:23:39
海外TECH MakeUseOf How to Use Your Workplace Messaging Apps More Effectively https://www.makeuseof.com/how-to-use-workplace-messaging-apps/ communication 2022-04-05 17:15:14
海外TECH MakeUseOf Sick of Compiling Gentoo Linux? Then Try the New LiveGUI Distro! https://www.makeuseof.com/gentoo-reintroduces-livegui-distro/ Gentoo's LiveGUI image lets users test the technical distro before installing. Artists have also been invited to submit artwork for a rebranding. 2022-04-05 17:06:42
海外TECH DEV Community Disabling all the fields in a form (Formik) https://dev.to/atosh502/disabling-all-the-fields-in-a-form-formik-2ec2 I needed a way to disable the entire form in Formik until data fetching or submission was complete. It turns out there is a tag that can enable or disable a whole form: simply wrap the form components inside a <fieldset> tag and set its disabled attribute to true, and all the elements inside the form will be disabled. (Original comment and reference code from MDN.) 2022-04-05 17:31:09
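As a rough sketch of the trick that entry describes (the component name, field names, and submit handler below are illustrative, not taken from the original post), wrapping the Formik fields in a <fieldset> that is disabled while isSubmitting is true disables everything at once:

```jsx
import React from "react";
import { Formik, Form, Field } from "formik";

// Hypothetical two-field form: while Formik's `isSubmitting` flag is true,
// the surrounding <fieldset disabled> disables every control inside it.
export function ProfileForm() {
  return (
    <Formik
      initialValues={{ name: "", email: "" }}
      onSubmit={async (values) => {
        // Simulate a slow submission; the fields stay disabled until it resolves.
        await new Promise((resolve) => setTimeout(resolve, 2000));
        console.log(values);
      }}
    >
      {({ isSubmitting }) => (
        <Form>
          {/* Disabling the fieldset disables all form controls it contains. */}
          <fieldset disabled={isSubmitting}>
            <Field name="name" placeholder="Name" />
            <Field name="email" type="email" placeholder="Email" />
            <button type="submit">Save</button>
          </fieldset>
        </Form>
      )}
    </Formik>
  );
}
```

The same wrapper works for the "still fetching data" case by driving the disabled prop from a loading flag instead of isSubmitting.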
海外TECH DEV Community Add autocomplete search to your Strapi blog https://dev.to/algolia/add-autocomplete-search-to-your-strapi-blog-263h Add autocomplete search to your Strapi blogStrapi is an open source headless CMS that builds API abstractions to retrieve your content regardless of where it s stored It s a great tool for building content driven applications like blogs and other media sites In this post you re going to add interactive search to a Strapi blog with a Next js front end using Algolia s Autocomplete js library and the community built Search plugin for Strapi PrerequisitesStrapi and Algolia Autcomplete js are both build using Javascript so you ll want node v installed You ll build on the basic blog application from this guide using Strapi and Next js You should familiarize yourself with the steps in that post before building your search experience on top of it You ll also need an Algolia account for search You can use your existing Algolia account or sign up for a free account Building the back endStart by creating a directory to hold your project s front and back ends mkdir blog strapi algolia amp amp cd blog strapi algoliaStrapi has a set of pre baked templates you can use to get a CMS up and running quickly These all include Strapi itself with pre defined content types and sample data If you don t have a Strapi blog already you can use the blog template to quickly set one up npx create strapi app backend quickstart template strapi template blog blogAfter the script finishes installing add an admin user at http localhost so you can log in to the Strapi dashboard This script sets up most of back end for us including a few demo blog posts You can read more about everything that was setup in the quick start Next you need to index your demo content Fortunately the Strapi community has you covered with a Search plugin and Algolia indexing provider built by community member Mattias van den Belt You can read more about Mattie s plugin in the documentation but getting up and running only requires a couple of pieces of configuration Go ahead and stop your Strapi server so you can install the plugin using npm or yarn cd backend amp amp npm install mattie bundle strapi plugin search mattie bundle strapi provider search algoliai saveYou ll need to add an Algolia API key and App ID to your Strapi environment You can manage your keys by navigating the Algolia Dashboard under Settings gt Team and Access gt API Keys or go directly to Since Strapi is modifying your Algolia index you ll need to provide either the admin API key for demos or create a key with appropriate access for your production project env ALGOLIA PROVIDER APPLICATION ID XXXXXXXXXXALGOLIA PROVIDER ADMIN API KEY XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXWith your credentials in place the last step is to configure the plugin Create or modify the config plugins js file in your backend directory You want to tell the plugin to index the article and category content types for your blog use strict module exports env gt search enabled true config provider algolia providerOptions apiKey env ALGOLIA PROVIDER ADMIN API KEY applicationId env ALGOLIA PROVIDER APPLICATION ID contentTypes name api article article name api category category Restart your Strapi server to pick up these environment variables and the new plugin npm run develop The Search plugin triggers when new content is Published so you ll need to either Unpublish and Publish the demo articles or create a new one Load the Strapi admin panel http localhost admin and navigate to Content 
Manager gt COLLECTION TYPES gt Article and either click on an existing article to Unpublish or click on Create new Entry Click Publish on your article to index this entry into your Algolia application You can do the same for the category content type if you d like to index those as well Building the front endNow that you ve built your back end and populated your index it s time to build the front end for you users Strapi has a great blog post walking you through building a Next js powered front end You ll build on top of those steps here You can either walk through their quick start yourself or you can just clone this repo if you want to jump directly to adding search git clone single branch branch no search version git github com chuckmeyer blog strapi frontend git frontend Don t forget to run cd frontend amp amp npm install if you cloned the front end from the repo This is enough to get the basic blog site up and running You can test it out by running npm run dev in the frontend directory The only thing missing is search You ll be using the Algolia Autocomplete js library to add an autocomplete search experience to your blog s navigation bar When a user types into the field the autocomplete completes their thought by providing full terms or results The Autocomplete library is source agnostic so you ll also need the Algolia InstantSearch library to connect to your index on the back end npm install algolia autocomplete js algoliasearch algolia autocomplete theme classic saveTo use the autocomplete library in a React project you first need to create an Autocomplete component to wrap the library You can find the boilerplate for this in the autocomplete documentation frontend components autocomplete jsimport autocomplete from algolia autocomplete js import React createElement Fragment useEffect useRef from react import render from react dom export function Autocomplete props const containerRef useRef null useEffect gt if containerRef current return undefined const search autocomplete container containerRef current renderer createElement Fragment render children root render children root props return gt search destroy props return lt div ref containerRef gt Just like in the back end you ll need your Algolia credentials to connect to the API Since the front end only needs to read from the index you can use your search key for the application Create a frontend env local file to store your credentials NEXT PUBLIC ALGOLIA APP ID XXXXXXXXXXNEXT PUBLIC ALGOLIA SEARCH API KEY XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXNow you can initialize your connection to Algolia and add your new Autocomplete component by updating the code in frontend components nav js import React from react import Link from next link import getAlgoliaResults from algolia autocomplete js import algoliasearch from algoliasearch import Autocomplete from autocomplete import SearchItem from searchItem import algolia autocomplete theme classic const searchClient algoliasearch process env NEXT PUBLIC ALGOLIA APP ID process env NEXT PUBLIC ALGOLIA SEARCH API KEY const Nav categories gt return lt div gt lt nav className uk navbar container data uk navbar gt lt div className uk navbar left gt lt ul className uk navbar nav gt lt li gt lt Link href gt lt a gt Strapi Blog lt a gt lt Link gt lt li gt lt ul gt lt div gt lt div className uk navbar center gt lt Autocomplete openOnFocus false detachedMediaQuery placeholder Search for articles getSources query gt sourceId articles getItemUrl item return article item slug getItems return getAlgoliaResults 
searchClient queries indexName development api article article query templates item item components return lt SearchItem hit item components components gt gt lt div gt lt div className uk navbar right gt lt ul className uk navbar nav gt categories map category gt return lt li key category id gt lt Link href category category attributes slug gt lt a className uk link reset gt category attributes name lt a gt lt Link gt lt li gt lt ul gt lt div gt lt nav gt lt div gt export default Nav As you can see you re passing a few parameters to the Autocomplete component openOnFocus false tells your search not to populate results until a user starts typingdetachedMediaQuery opens search in a detached modal providing more room for your resultsplaceholder Search for articles the text that appears in the searchbox before a searchgetSources query gt where you define your data sources for your autocomplete experienceRemember that Autocomplete is source agnostic You define sources based on APIs libraries or static content within your application Here you re binding a source called articles to your Algolia index using the getAlgoliaResults function from the autocomplete js library sourceId articles getItemUrl item return article item slug getItems return getAlgoliaResults searchClient queries indexName development api article article query templates item item components return lt SearchItem hit item components components gt The development api article article is the index generated by the Strapi Search plugin above for your article content type When you move to production the plugin will create a separate production api article article index in the same application The getItemUrl section sets up keyboard navigation while getItems handles retrieving articles from your index using the query term s from the searchbox Notice the code above references a SearchItem component This is the template you ll use to tell Autocomplete how to render your search results Add a new component called frontend components searchItem js with the following code import React from react function SearchItem hit components return lt a className aa ItemLink href article hit slug gt lt div className aa ItemContent gt lt div className ItemCategory gt hit category name lt div gt lt div className aa ItemContentBody gt lt div className aa ItemContentTitle gt lt components Highlight hit hit attribute title gt lt div gt lt div className aa ItemContentDescription gt lt components Highlight hit hit attribute description gt lt div gt lt div gt lt div gt lt a gt export default SearchItem With this code you re displaying the category associated with the article the title and the description Use the components Highlight component to emphasize the part of the attribute that matched the user s query And with that you re done Start your front end server with npm run dev You should now see the autocomplete searchbox at the top of the page Clicking on it opens the modal search interface where you can start typing your search term You can see a hosted version of this front end on codesandbox although it may take some time for the back end container to start The before and after versions of the front end code are both available on Github as well If you build something cool using this blog share it with us on Twitter algollia 2022-04-05 17:26:20
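For readability, here is the config/plugins.js that the Strapi/Algolia entry above describes, reconstructed from the flattened text of the post; treat it as a sketch and verify the exact shape against the search plugin's documentation before using it:

```js
// config/plugins.js — reconstruction of the plugin configuration described above.
// The env() keys and content-type names are the ones quoted in the post.
"use strict";

module.exports = ({ env }) => ({
  search: {
    enabled: true,
    config: {
      provider: "algolia",
      providerOptions: {
        apiKey: env("ALGOLIA_PROVIDER_ADMIN_API_KEY"),
        applicationId: env("ALGOLIA_PROVIDER_APPLICATION_ID"),
      },
      // Index the article and category content types of the blog template.
      contentTypes: [
        { name: "api::article.article" },
        { name: "api::category.category" },
      ],
    },
  },
});
```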
海外TECH DEV Community On motivation, getting started, and consistency (Sobre motivação, início e constância) https://dev.to/carvalhodanielg/sobre-motivacao-inicio-e-constancia-30hj A few months ago I made a professional decision that has had a big overall impact on my daily life. I have been in contact with programming and technology for years; in fact, it started with my first contact with computers, at the LAN houses in town, and from then on all the little money I had available went toward buying more computer time. Later I took a technical course in computing, where I approached the field in a more technical and professional way, and although I really enjoyed the area, I had always wanted to study engineering, so I went on to an Interdisciplinary Bachelor's degree in Science and Technology, which, in short, has a curriculum that serves as the foundation for most STEM programs, especially engineering. After finishing it I transitioned to Civil Engineering, graduating some years later. During my degree, in my internship, I ended up in contact with programming again, this time applied to automating and optimizing repetitive tasks. After that, the most enjoyable activities in my working day were the ones that involved solving a problem that would be more interesting if solved with a routine that could be automated; I would spend a few hours wrestling with the problem and building a routine to solve it. One fine day, listening to the story of someone who had made a career transition into technology, I decided to make that change too. When I made the decision I dove in head first: I invested in a mentorship and some courses, and almost two months have passed since then. In the meantime I have been fighting to keep an efficient daily routine. It has been almost two months of programming every single day, including Saturdays, Sundays, and holidays, reading books on the subject and talking to people in the field. Although I have managed to keep up that consistency and discipline, I can say it is not easy at all. People often say that the hardest part is getting started. I don't believe that, because starting anything is always very easy: almost every time we start something we are highly motivated, and everything is new and exciting. At the beginning, progress is fast, since our knowledge is small or nonexistent and any piece of knowledge is a huge step forward; the first Hello World in JavaScript is a giant leap for someone who has never typed a line of code before, but from then on the gains get smaller as we accumulate experience. To illustrate: someone on a low salary who receives a raise of a given amount is extremely happy, while someone on a much higher salary who receives the same raise will be extremely frustrated. Why, if the amount is the same in either situation? Simple: it is a matter of perception. For the first person the raise represents a large percentage increase; for the second it barely registers, and the same holds for knowledge. Coming back to the point: starting is easy and gratifying, because we are motivated, full of adrenaline, dreams, and ideas. But after a while that motivation fades: the gains become less noticeable, difficulties and excuses appear, tiredness sets in, and the body defends itself by trying to stop: it's just one day, no harm done. And that is where the danger lies. If we break the routine once, the second time becomes much easier, and the tenth will feel even more natural. So even if you follow a very well-defined routine to the letter for years, the day you break it all that work can be lost, because if you skipped one day, one more won't hurt, right? That is why consistency is hard to maintain: we cannot sustain it on motivation alone, because motivation does not last long. After a few days, all that is left is discipline. The secret is to do it every day; it will be hard, but after a while we get used to keeping the discipline simply because it is fun to challenge yourself and keep completing the task, whatever it is. After a while it becomes a game between you and yourself, and from then on any activity that demands that discipline gets easier, because it has turned into a game. In the end, before things work out they will go wrong many times, but that is part of the process; what matters is to keep trying. 2022-04-05 17:19:23
海外TECH DEV Community How To Find Element By Text In Selenium WebDriver https://dev.to/lambdatest/how-to-find-element-by-text-in-selenium-webdriver-1l58 How To Find Element By Text In Selenium WebDriverFind element by Text in Selenium is used to locate a web element using its text attribute The text value is used mostly when the basic element identification properties such as ID or Class are dynamic in nature making it hard to locate the web element Sometimes developers tend to group similar web elements with the same ID or the same Class together For example if you have an element with the tag as Button which has a dynamic ID and Class name where ID is getting changed from ID text to ID textE and the Class name gets changed from “ID Class text to “ID Class text on every new session In such cases it becomes very difficult to locate the web elements using ID or Class attribute and this is when the Text attribute comes to the rescue while performing Selenium automation testing The text value can be fully matched or partially matched to locate the element In this article on how to find element by text in Selenium WebDriver you will read about how to use the Text attribute in order to find any element Let s get started What is Find Element In Selenium When you start writing your Selenium automation script interaction with web elements becomes your first and a very vital step because it s the WebElements that you play around with within your test script Now interaction with these web elements can only happen if you identify them using the right approach Find Element method in Selenium is a command which helps you identify a web element There are multiple ways that Find Element provides to uniquely identify a web element within the web page using Web locators in Selenium like ID Name Class Name etc Here is the syntax of Find Element In Selenium The syntax of Find Element isWebElement elementName driver findElement By lt LocatorStrategy gt LocatorValue As shown in the above syntax this command accepts the “By object as the argument and returns a WebElement object The “By is a locator or query object and accepts the locator strategy The Locator Strategy can assume the below values IDNameClass NameTag NameLink TextPartial Link TextXPathCSS Selector What is Find Element By Text in Selenium We saw in the previous section about find element in Selenium and its syntax Now you must be wondering how to find element by text in Selenium WebDriver The answer is by making use of XPath in Selenium Wondering how Let s look at the sections below In order to use Text you will need to make use of XPath as your Locator Strategy and the text attribute of the element in the Locator Value The basic format of XPath in Selenium is as below XPath tagname Attribute Value However before we get started it s important to understand two built in methods of Selenium which will ultimately be used in findElement text ーThis is a built in method in Selenium that is used with XPath in order to locate an element based on its exact text value The syntax of using text with findElement is WebElement ele driver findElement By xpath “ lt tagName gt text text value contains ーSimilar to the text method contains is another built in method which is used with XPath However this is used when we want to write the locator based on a partial text match The syntax of using text amp contains with findElement is WebElement ele driver findElement By xpath “ lt tagName gt contains text textvalue Let us now read about these in detail Note The 
ConnectionClosedException occured whenever connection between driver and client has been lost and client is sending the request to the driver after disconnecting the driver Find Element by Text in Selenium for Complete Text matchNow that you saw the syntax for using text in case of complete text match In this section on how to find element by text in Selenium let us see it using an example We will use Selenium Playground offered by LambdaTest for understanding the same LambdaTest is a cloud based cross browser testing platform that supports Selenium Grid providing a solution to every obstacle you face while performing automation testing using your local machine Test Automation Platforms like LambdaTest offer a Selenium Grid consisting of online browsers for you to perform Selenium automation testing effortlessly Use CaseLog in to Selenium Playground Identify the Web Element for the Checkbox Demo link on the above web page using the text method Click on it and print the page header ImplementationWe will implement the case using Selenium with Java and use the cloud Selenium Grid offered by LambdaTest for executing our test case Selenium Grid refers to a software testing setup that enables QAs to perform parallel testing across multiple browsers and devices with unique operating systems When the entire setup of Selenium Grid is accessible using cloud based servers it is called Selenium Grid Cloud An online Selenium Grid helps you focus on writing better Selenium test scripts rather than worrying about infrastructure maintenance Let us now inspect the locator for the Checkbox Demo page In order to inspect you can simply right click on the Web Element and click on Inspect On the Elements tab you can start writing your locator As shown in the above picture we use the Checkbox Demo text with its tag a for a complete match and hence the correct implementation here would be WebElement checkbox driver findElement By xpath “ a text Checkbox Demo Let us now use the same and write our test case You can refer to the below testcase package LambdaTest import org openqa Selenium By import org openqa Selenium WebElement import org openqa Selenium remote DesiredCapabilities import org openqa Selenium remote RemoteWebDriver import org testng Assert import org testng annotations AfterTest import org testng annotations BeforeTest import org testng annotations Listeners import org testng annotations Test import java net MalformedURLException import java net URL import java util List Listeners util Listener class class AutomationUsingFindElementByText public String username YOUR USERNAME public String accesskey YOUR ACCESSKEY public static RemoteWebDriver driver null public String gridURL hub lambdatest com wd hub BeforeTest public void setUp throws Exception DesiredCapabilities capabilities new DesiredCapabilities capabilities setCapability browserName chrome capabilities setCapability version capabilities setCapability platform win If this cap isn t specified it will just get the any available one capabilities setCapability build AutomationUsingFindElement capabilities setCapability name AutomationUsingFindElementSuite try driver new RemoteWebDriver new URL https username accesskey gridURL capabilities catch MalformedURLException e System out println Invalid grid URL catch Exception e System out println e getMessage Test public void findElementByCompleteTextMatch try System out println Logging into Lambda Test Selenium Playground driver get WebElement checkBoxDemoPage driver findElement By xpath a text Checkbox 
Demo checkBoxDemoPage click System out println Clicked on the Checkbox Demo Page WebElement header driver findElement By xpath h System out println The header of the page is header getText catch Exception e AfterTest public void closeBrowser driver close System out println The driver has been closed You can use the below testng xml file for running the above testcase lt xml version encoding UTF gt lt DOCTYPE suite SYSTEM gt lt suite name AutomationUsingFindElementSuite gt lt test name AutomationUsingFindElementTest gt lt classes gt lt class name LambdaTest AutomationUsingFindElementByText gt lt class gt lt classes gt lt test gt lt suite gt And the below pom xml file for installing all the necessary dependencies lt xml version encoding UTF gt lt project xmlns xmlns xsi xsi schemaLocation gt lt modelVersion gt lt modelVersion gt lt groupId gt org example lt groupId gt lt artifactId gt LambdaTest lt artifactId gt lt version gt SNAPSHOT lt version gt lt dependencies gt lt dependency gt lt groupId gt org Seleniumhq Selenium lt groupId gt lt artifactId gt Selenium api lt artifactId gt lt version gt alpha lt version gt lt dependency gt lt dependency gt lt groupId gt org Seleniumhq Selenium lt groupId gt lt artifactId gt Selenium remote driver lt artifactId gt lt version gt alpha lt version gt lt dependency gt lt dependency gt lt groupId gt org Seleniumhq Selenium lt groupId gt lt artifactId gt Selenium chrome driver lt artifactId gt lt version gt alpha lt version gt lt dependency gt lt dependency gt lt groupId gt org testng lt groupId gt lt artifactId gt testng lt artifactId gt lt version gt lt version gt lt dependency gt lt dependency gt lt groupId gt io github bonigarcia lt groupId gt lt artifactId gt webdrivermanager lt artifactId gt lt version gt lt version gt lt dependency gt lt dependencies gt lt properties gt lt maven compiler source gt lt maven compiler source gt lt maven compiler target gt lt maven compiler target gt lt properties gt lt project gt Code WalkthroughIn this section on how to find element by text in Selenium let s look at the different areas of code in detail Imported Dependencies Here we have imported all the necessary classes of Selenium WebDriver WebDriverWait Desired Capabilities and RemoteWebDriver to set the respective browser capabilities and run the test cases on the grid Global Variables As we have used a Selenium Grid Cloud like LambdaTest to perform our test execution we are using the below shown variables Here you can populate the values for your corresponding username and access key which can be collected by logging into your LambdaTest Profile Section You can copy the Username and the Access Token to be used in the code However the grid URL will remain the same as shown below We have also used the Listener class here in order to customize the TestNG Report TestNG provides us with a lot of Listeners e g IAnnotationTransformer IReporter etc These interfaces are used while performing Selenium automation testing mainly to generate logs and customize the TestNG reports To implement the Listener class you can simply add an annotation in your test class just above your class name Syntax Listeners PackageName ClassName class BeforeTest Setup Method Here we have used the LambdaTest Desired Capabilities Generator and have set the necessary capabilities of browser name version platform etc for our Selenium Remote WebDriver After that we are opening the website in the launched browser test findElementByCompleteTextMatch In this case we are first logging into the Selenium 
Playground web page After that we locate the Checkbox Demo button using a complete text match and click on the same In the end we are printing the header of the web page AfterTest closeBrowser Here we are just closing the launched browser Once the tests are completed you can also view your test results logs and the test recording as well in your LambdaTest Automation Dashboard Console OutputOnce you run the test case the console output will look something like the below You can also Subscribe to the LambdaTest YouTube Channel and stay updated with the latest tutorials around Selenium testing Cypress EE testing CI CD and more Note The ConnectionFailedException occured when client is not able to establish connection with selenium hub or webdriver endpoints Find Element by Text in Selenium for Partial Text matchIn the previous example of this article on how to find element by text in Selenium WebDriver you saw how you could use findElement by Text for a complete text match In this section we will understand how we can use partial Text match in order to locate web elements Use CaseLog in to Selenium Playground Identify all the Web Elements which have a table in their names Print the text of all such Web Elements Let us see the locator for the above test case ImplementationAs shown in the above picture we use the Table text with its tag a for a partial match and as a result we get a total of Web Elements using the above locator Since there are more than Web Element in this case we will use FindElements FindElements in Selenium returns you the list of web elements that match the locator value unlike FindElement which returns only a single web element In case there are no matching elements within the web page FindElements returns an empty list The syntax of FindElements in Selenium is List lt WebElement gt listName driver findElements By lt LocatorStrategy gt “LocatorValue Hence the correct implementation using FindElements with partial text match here would be List lt WebElement gt tableOptions driver findElements By xpath “ a contains text Table Let us now use the same and write our test case You can refer to the below testcase package LambdaTest import org openqa Selenium By import org openqa Selenium WebElement import org openqa Selenium remote DesiredCapabilities import org openqa Selenium remote RemoteWebDriver import org testng Assert import org testng annotations AfterTest import org testng annotations BeforeTest import org testng annotations Listeners import org testng annotations Test import java net MalformedURLException import java net URL import java util List Listeners util Listener class class AutomationUsingFindElementByText public String username YOUR USERNAME public String accesskey YOUR ACCESSKEY public static RemoteWebDriver driver null public String gridURL hub lambdatest com wd hub BeforeTest public void setUp throws Exception DesiredCapabilities capabilities new DesiredCapabilities capabilities setCapability browserName chrome capabilities setCapability version capabilities setCapability platform win If this cap isn t specified it will just get the any available one capabilities setCapability build AutomationUsingFindElement capabilities setCapability name AutomationUsingFindElementSuite try driver new RemoteWebDriver new URL https username accesskey gridURL capabilities catch MalformedURLException e System out println Invalid grid URL catch Exception e System out println e getMessage Test public void findElementByPartialTextMatch try System out println Logging into Lambda 
Test Selenium Playground driver get List lt WebElement gt tableOptions driver findElements By xpath a contains text Table for WebElement e tableOptions System out println The different options with table in name are e getText catch Exception e AfterTest public void closeBrowser driver close System out println The driver has been closed You can use the below testng xml file for running the above testcase lt xml version encoding UTF gt lt DOCTYPE suite SYSTEM gt lt suite name AutomationUsingFindElementSuite gt lt test name AutomationUsingFindElementTest gt lt classes gt lt class name LambdaTest AutomationUsingFindElementByText gt lt class gt lt classes gt lt test gt lt suite gt And the below pom xml file for installing all the necessary dependencies lt xml version encoding UTF gt lt project xmlns xmlns xsi xsi schemaLocation gt lt modelVersion gt lt modelVersion gt amp lt groupId amp gt org example amp lt groupId amp gt amp lt artifactId amp gt LambdaTest amp lt artifactId amp gt amp lt version amp gt SNAPSHOT amp lt version amp gt amp lt dependencies amp gt amp lt dependency amp gt amp lt groupId amp gt org Seleniumhq Selenium amp lt groupId amp gt amp lt artifactId amp gt Selenium api amp lt artifactId amp gt amp lt version amp gt alpha amp lt version amp gt amp lt dependency amp gt amp lt dependency amp gt amp lt groupId amp gt org Seleniumhq Selenium amp lt groupId amp gt amp lt artifactId amp gt Selenium remote driver amp lt artifactId amp gt amp lt version amp gt alpha amp lt version amp gt amp lt dependency amp gt amp lt dependency amp gt amp lt groupId amp gt org Seleniumhq Selenium amp lt groupId amp gt amp lt artifactId amp gt Selenium chrome driver amp lt artifactId amp gt amp lt version amp gt alpha amp lt version amp gt amp lt dependency amp gt amp lt dependency amp gt amp lt groupId amp gt org testng amp lt groupId amp gt amp lt artifactId amp gt testng amp lt artifactId amp gt amp lt version amp gt amp lt version amp gt amp lt dependency amp gt amp lt dependency amp gt amp lt groupId amp gt io github bonigarcia amp lt groupId amp gt amp lt artifactId amp gt webdrivermanager amp lt artifactId amp gt amp lt version amp gt amp lt version amp gt amp lt dependency amp gt amp lt dependencies amp gt amp lt properties amp gt amp lt maven compiler source amp gt amp lt maven compiler source amp gt amp lt maven compiler target amp gt amp lt maven compiler target amp gt amp lt properties amp gt lt project gt Code WalkthroughIn this section on how to find element by text in Selenium let us now check the test case walkthrough in detail The BeforeTest and import statements remain the same as we saw in our previous example test findElementByPartialTextMatch In this case we are first logging into the Selenium Playground web page After that we locate all the Web Elements which have Table in their text and store them in a list Later we iterate over the list and print their text Once the tests are completed you can also view your test results logs and the test recording as well in your LambdaTest Automation Dashboard You can also see the test results on the LambdaTest Analytics Dashboard The dashboard shows all the details and metrics related to your tests Navigate to the LambdaTest Analytics Dashboard to view the metrics of your tests You can quickly assess test performance and overall health from Test Overview The Test Summary will show how many passed and failed tests your team has run and the overall efficiency of these tests Console OutputOnce you run the test case the console output will 
look something like the below If you re a developer or a tester and want to take your skills to the next level this Selenium certification from LambdaTest can help you reach that goal Here s a short glimpse of the Selenium certification from LambdaTest ConclusionIn this Selenium Java tutorial on how to find element by text in Selenium WebDriver we explored finding an element using text in Selenium We saw how we could use the text method in case of both complete and partial text matches We also saw how we could use it in the case of FindElements and get a list of Web Elements through text match In the end we also implemented the cases using Selenium with Java on a cloud Selenium Grid Honestly using text is one of my personal favorite methods in Selenium when it comes to locating Web Elements as it s very easy to implement and can be tweaked in any way to match our use case I hope you enjoyed reading this article on how to find element by text in Selenium learned some more about FindElement By Text and I believe this method will become your personal favorite too Happy Testing 2022-04-05 17:18:10
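The entry above shows its examples in Java; purely as a compact illustration of the two locator strategies it covers (an exact text() match and a partial contains() match), here is a hedged sketch using the Node selenium-webdriver bindings instead. The playground URL is an assumption, this is not code from the original article, and it expects a local chromedriver to be available:

```js
// Sketch only: the same XPath text()/contains() locators as the Java examples above,
// written with the Node selenium-webdriver package.
const { Builder, By } = require("selenium-webdriver");

// Assumed URL of LambdaTest's Selenium Playground; replace with your target page.
const PLAYGROUND_URL = "https://www.lambdatest.com/selenium-playground/";

(async () => {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get(PLAYGROUND_URL);

    // Exact text match: locate and click the "Checkbox Demo" link.
    const checkboxDemo = await driver.findElement(
      By.xpath("//a[text()='Checkbox Demo']")
    );
    await checkboxDemo.click();

    // Partial text match: list every link whose text contains "Table".
    await driver.get(PLAYGROUND_URL);
    const tableLinks = await driver.findElements(
      By.xpath("//a[contains(text(),'Table')]")
    );
    for (const link of tableLinks) {
      console.log(await link.getText());
    }
  } finally {
    await driver.quit();
  }
})();
```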
海外TECH DEV Community Job Queues and Workers In Laravel Apps https://dev.to/honeybadger/job-queues-and-workers-in-laravel-apps-46gc Job Queues and Workers In Laravel AppsThis article was originally written by Darius D on the Honeybadger Developer Blog We would like to start this article with a real life example Imagine an ice cream van It is a very hot day and people want to refresh themselves with delicious ice cream The buyers are all screaming about the kind of ice cream they want one buyer just wants a single portion while another has a big family and wants a whole box This situation doesn t seem difficult buyers are sending requests and sellers are receiving money preparing orders and providing ice cream to the customers However if there are many buyers the orders are big or several people are working in the ice cream van simultaneously it could become chaotic Many angry buyers could be waiting on their big orders and there s only one frantic seller who might not be able to fulfill an order due to tiredness This is likely one of the reasons queues were invented In this article we will learn how to use queues define a worker and analyze some examples using different situations including how to add queues for one worker how the program will behave when we have several workers and how to combine tasks into groups What are Queues in Laravel The most common situation where we want to use queues is heavy processes such as importing or synchronizing big data which takes more time and happens quite often When we run a big process on page loading it means the user has to wait until the process finishes to access the content If we want to improve the user experience we could make this process run in the background allow the user the continue working with other stuff and when all the tasks are finished we can send the user a notification or message via email The main idea of Laravel queues is to create PHP methods register them in a database and then tell the server to call these methods sequentially by reading the database This explanation is perhaps over simplified because as well will see later the registration of these methods or jobs can be more complicated if we want to get more information about executed processes and create for example a progress bar to show the user his or her current status Configuring and Installing QueuesOur installation will start with an artisan command php artisan queue tableThis command will create a migration file containing information about the new Jobs database table So let s migrate this file php artisan migrateWhen we have prepared a database table we can create our first job Let s name it CalculateDataJob and by running the following command we will create a new PHP file app Jobs CalculateDataJob phpphp artisan make job CalculateDataJobThe task we want to run has to be described in a handle public method such as the following public function handle for x x lt x sleep do calculation In this example we added a very small job that can be executed in milliseconds If we want to see slower progress let s add a sleep function that delays execution by two seconds The handle method should control your task s main logic or if you have a more complicated situation you can write a method that calls other methods NOTE Every time you use jobs and make changes in the method you need to clean the cache Otherwise it could cost you additional time for looking bugs in the code the problem is that the server is running your old code because it was cached php artisan cache 
clearAnother new term is Dispatching Jobs It means we need to queue this job Let s create a Livewire component which is a simple button that will activate the job with one click php artisan make livewire JobButtonIt will create two files a class file app Http Livewire JobButton php and a view file resources views livewire job button blade php In job button blade php let s add this simple code lt div wire click runJob gt Run job lt div gt Anywhere in your website template insert the following component tag lt livewire job button gt or blade directive livewire job button In the JobButton php file add this new method use App Jobs CalculateDataJob public function runJob CalculateDataJob dispatch Additionally before triggering this method we to configure the queue driver In Laravel there are several options to choose from Database Information about jobs will be saved in a database table Redis Good for big applications and when more flexibility is needed Other Three dependencies which can be installed by using a Composer package manager Amazon SQS Beanstalkd or Redis phpredis PHP extension For this example we will use the database as the queue driver Therefore in the env file let s find the QUEUE CONNECTION sync row by default the QUEUE CONNECTION value is synched which means that we want to execute the job immediately and change it to QUEUE CONNECTION database Finally if we will click our Run job button information about the job will be saved in the database table jobs To process this job we just need to write a simple command in the console php artisan queue workAfter running this commend we will see messages about processing jobs in the console Thus when users click a button they won t need to wait while the task finishes The process will be executed in the background Regardless of whether a process is executed by a user or the server we could encounter issues with memory especially if we are working with big data Instead of performing one big job we can split it into several smaller jobs In the JobButton php file make the following change public function runJob for x x lt x CalculateDataJob dispatch x With this change we will create jobs by sending parameters Thus in CalculateDataJob php we need to add a new public variable assign it in the constructor and use it in the handle method public x public function construct x this gt x x public function handle sleep do calculation with this gt x Now we can see ten registered jobs in the database These jobs are quite small to help avoid server memory problems Laravel Queue WorkersAll these examples explore the basics of using queues in Laravel When we run the artisan command queue work we activate workers When we changed from one job to several smaller jobs we created ten independent workers which means that these ten jobs will be executed like there are ten different invisible users in the background After activation queue workers will live in the server indefinitely and hold job information in memory Therefore when we change something in the code we need to both clean the cache and restart the workers php artisan queue restartNOTE For example in Linux after closing the console window workers will be stopped Thus if we want to continue our processes after closing the console window we need to run the same queue work but with extra commands nohup php artisan queue work amp As mentioned previously it is possible to send parameters to a job and split a big task into smaller ones such as reading a big csv file and splitting it into smaller files 
However a major disadvantage of this approach is that we need to create many small files read the data and then delete the files The best way to resolve this issue is to read the file chunk the data and send the chunks of data as parameters Thus the data will be saved in a database and when workers are activated the data will be read from the jobs table and imported into specific tables Other important aspects of queues are handling errors and notifying users When we only have a few jobs it may not be crucial but if we are talking about a big system performing many tasks per day it is very important to register failed jobs Hence in the same CalculateDataJob php file we can add a new public method failed public function failed Throwable exception Send user notification of failed job Laravel Queue Workers Process Progress InformationTo run processes in the background and avoid having to wait it is a very good solution However what if the user wants to know what s going on in the background The best way is to present live information or simply a progress bar From Laravel version we have Job Batching This functionality requires new database tables to save detailed information about jobs php artisan queue batches tablephp artisan migrateTherefore in our JobButton php file we need to change the old code to the following use Illuminate Bus Batch use Illuminate Support Facades Bus use Throwable public function runJob this gt batchHolder Bus batch gt then function Batch batch All jobs completed successfully gt catch function Batch batch Throwable e First batch job failure detected gt finally function Batch batch The batch has finished executing gt name data calculation gt dispatch for x x lt x CalculateDataJob dispatch x this gt batchHolder gt add new CalculateDataJob x return this gt batchHolder If we have different groups of jobs we can name them In this example it s data calculation so it will be easier to separate individual jobs of a particular task and only receive information about these jobs When you are using Job batching add Batchable to the CalculateDataJob php file use Illuminate Bus Batchable class ApiUpdateJob implements ShouldQueue use Batchable Dispatchable InteractsWithQueue Queueable SerializesModels To create a progress bar we need to add code to job button blade php lt div wire poll ms checkStatus gt lt div style background color black padding px gt lt div style background color red text align center height px width loaded value gt loaded value lt div gt lt div gt lt div gt The main html tag will call the checkStatus method each second and refresh the loaded value value In JobButton php we will add a public checkStatus method public function checkStatus batches DB table job batches gt where pending jobs gt name data calculation gt orderBy created at desc gt limit gt get if count batches gt job status Bus findBatch batches gt id gt toArray this gt loaded value round job status progress else this gt loaded value In this method we will send requests to the database to get for example the last ten created jobs with the name data calculation and pick from the first item to provide job status information If we print the variable job status we will see different elements totalJobs pendingJobs processedJobs progress and failedJobs For now we are only interested in progress but other elements could be very important too if we want to give user more detailed information Bonus ExampleOne of the most popular use cases for queue workers is sending email messages to users These messages could 
be a welcome letter sent to users after registration or after users perform a specific action First add the credentials of your mailbox to the env file If you don t want to use your private email you can easily register for a free account at which is very good tool for sending and testing email functionality We need to create a mailable class php artisan make mail SendEmailIt will automatically create a default email template app Mail SendEmail php which can be changed to meet your needs As in previous examples we need to create a job for this purpose php artisan make job SendWelcomeEmailJobIn the newly created app Jobs SendWelcomeEmailJob php file a handle method needs to be added use App Mail SendEmail public function handle test email test test com Mail to test email gt send new SendEmail Finally we can create a simple route to call this job in the web php file add a new route send emailuse App Jobs SendWelcomeEmailJob Route get send email function dispatch new SendWelcomeEmailJob This is another good example where we can use queues NOTE If we want to delay our job process we can use the delay method job new SendWelcomeEmailJob gt delay Carbon now gt addMinutes dispatch job Applying this method will cause the job to be dispatched after ten minutes Queue Workers on a Live ServerAll the examples we ve shown so far should work and on either a local host or a live server However on a live server the administrator can t constantly check whether the queue workers are active and run the php artisan queue work command when necessary especially if the project is international and involves users from different time zones What happens when a worker encounters an error or an execution timeout Your application queue workers must be active hours per day and react every time a user performs a specific action On a live server a process monitor is needed This monitor controls the queue workers and automatically restarts processes if they fail or are impacted by other processes A supervisor is commonly used on Linux server The first step of installing a supervisor on a live server is to run the following command sudo apt get install supervisorAfter installation in the etc supervisor conf d directory prepare a configuration laravel worker conf file with the following content program laravel worker process name program name s process num dcommand php var www app com artisan queue work sqs sleep tries max time autostart trueautorestart truestopasgroup truekillasgroup trueuser rootnumprocs redirect stderr truestdout logfile var www app com worker logstopwaitsecs All directories depend on your server structure After the configuration files are created you ll need to activate the supervisor using the following commands sudo supervisorctl rereadsudo supervisorctl updatesudo supervisorctl start laravel worker If configuring queue workers on a live server seems too complicated there are many hosting providers with special tools that perform these functions in a more user friendly manner such as Laravel Forge or Digitalocean ConclusionLaravel queues is a very powerful tool to improve your application s performance especially when the application has many heavy frequently executed tasks We don t want to make our users wait until a heavy job is finished Thus after one click the user should still be free to navigate to other pages or perform other actions However keep in mind that background processes can take down a server if not properly configured or tested It is advisable to split big jobs into smaller ones to 
Queue Workers on a Live Server

All the examples we've shown so far should work on either a local host or a live server. However, on a live server the administrator can't constantly check whether the queue workers are active and run the php artisan queue:work command when necessary, especially if the project is international and involves users from different time zones. And what happens when a worker encounters an error or an execution timeout? Your application's queue workers must be active around the clock and react every time a user performs a specific action.

On a live server, a process monitor is needed. This monitor controls the queue workers and automatically restarts processes if they fail or are impacted by other processes. Supervisor is commonly used on Linux servers. The first step is to install Supervisor:

sudo apt-get install supervisor

After installation, prepare a configuration file laravel-worker.conf in the /etc/supervisor/conf.d directory with content along the following lines (all directories depend on your server structure, and the numeric values shown here are typical examples to adjust):

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app.com/artisan queue:work sqs --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=root
numprocs=8
redirect_stderr=true
stdout_logfile=/var/www/app.com/worker.log
stopwaitsecs=3600

After the configuration file is created, activate Supervisor using the following commands:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*

If configuring queue workers on a live server seems too complicated, there are many hosting providers with special tools that perform these functions in a more user-friendly manner, such as Laravel Forge or DigitalOcean.

Conclusion

Laravel queues are a very powerful tool for improving your application's performance, especially when the application has many heavy, frequently executed tasks. We don't want to make users wait until a heavy job is finished; after one click, the user should still be free to navigate to other pages or perform other actions. However, keep in mind that background processes can take down a server if not properly configured or tested. It is advisable to split big jobs into smaller ones to avoid process execution timeouts. 2022-04-05 17:11:59
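The conclusion of the queues entry above recommends splitting big jobs into smaller ones to avoid execution timeouts. A minimal sketch of what that could look like with the batching API from that entry (the chunk size, file path, and the assumption that CalculateDataJob can accept a chunk of rows are all illustrative, not from the article):

// Hypothetical dispatcher: split a large data file into many small batched jobs.
use App\Jobs\CalculateDataJob;
use Illuminate\Support\Facades\Bus;
use Illuminate\Support\LazyCollection;

$batch = Bus::batch([])->name('data calculation')->dispatch();

// Stream the source file and queue one small job per chunk of 500 rows,
// so no single job runs long enough to hit the worker timeout.
LazyCollection::make(function () {
    $handle = fopen(storage_path('app/data.csv'), 'r'); // illustrative path
    while (($line = fgets($handle)) !== false) {
        yield trim($line);
    }
    fclose($handle);
})->chunk(500)->each(function ($chunk) use ($batch) {
    $batch->add(new CalculateDataJob($chunk->values()->all()));
});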
Overseas TECH DEV Community Okta for Kubernetes – A Step-by-Step Guide https://dev.to/loft/okta-for-kubernetes-a-step-by-step-guide-2ojo Okta for Kubernetes: A Step-by-Step Guide. By Lukonde Mwila.

Single Sign-On (SSO) has become a prominent part of enterprise strategy to bolster the security posture of user identification across different applications. SSO is an authentication system that enables users to use a single set of credentials when accessing a number of autonomous software systems. With this security model, companies can create federated identities for users across their different platforms. This is common for businesses granting internal teams access to tools like Google Workspace, Slack, Asana, etc. However, SSO is not restricted to applications of this nature; it can also be used with a platform like Kubernetes.

As a system, Kubernetes separates the processes of authentication (authn) and authorization (authz) for users and applications. There are several approaches that can be used to authenticate users; among these options, OpenID Connect (OIDC) can be used to set up an SSO authentication model for your clusters. Having SSO for Kubernetes in place allows organizations to strengthen, consolidate, and simplify identity management for cluster users at scale. Furthermore, it empowers software teams with the necessary parameters to best execute cluster operations. This article will delve into the what and why of SSO in the context of Kubernetes, and will also provide a step-by-step guide to configuring SSO for your clusters with Okta and Loft.

What Is SSO for Kubernetes?

At this point in time, Kubernetes doesn't maintain an internal system for the storage and management of user accounts. Instead, users have to be created and managed outside of the cluster, which begs the question: how exactly does user authentication work? For starters, authentication is the process of validating that a user or entity is who they claim to be. In the context of Kubernetes, any user attempting to interact with a cluster must have a certain set of credentials attached to their client request. These credentials are passed off and validated against an external authn module. If your organization uses an identity provider (IdP) with OIDC support, you can extend this identity solution to fulfill the role of authentication for your Kubernetes clusters.

OIDC is an authentication protocol based on the OAuth 2.0 specifications, with an additional layer on top of OAuth that adds login and profile information about the logged-in identity. An IdP that supports OIDC, like Okta, allows you to extend the user lifecycle management of creating, enabling, and disabling user accounts for your organization to Kubernetes. SSO forms the basis for mapping centralized identity management to your Kubernetes clusters. This is important because your company's operators, administrators, developers, and testers will each require different levels of self-service access to your cluster resources. With SSO you can maintain a single source of truth for team members who have designated roles that map to RBAC definitions in Kubernetes, dictating what operations they can perform. For example, OIDC tokens carry role information that can be used with the Kubernetes RBAC API to grant or deny access to cluster resources. As such, any validated cluster users will be able to independently carry out the necessary actions within the predefined permissions.
Implementing Okta SSO for Kubernetes Using Loft

Now let's get started with the tutorial. Before you begin, be sure you have: an AWS account, a registered domain for your Loft instance, and Terraform (optional).

Provision Your Kubernetes Cluster

As you would expect, the first requirement is a remote Kubernetes cluster. In this demonstration you will use Terraform to create an Amazon EKS cluster using this project; it contains the modules needed to create all the necessary infrastructure in your AWS account, and the repository README.md details the steps to execute the creation of the infrastructure. Alternatively, you can provision an Amazon EKS cluster manually using the AWS Console; however, this approach is time-consuming and more prone to misconfigurations in the underlying infrastructure for your cluster. When the cluster has been provisioned, you can update your local kube configuration (~/.kube/config) to point to the newly created cluster and verify the connection using the following commands:

aws eks --region <cluster region> update-kubeconfig --name <cluster name>
kubectl config current-context

Install Loft CLI

To install the Loft CLI binary from GitHub, run the install command for your operating system (Mac Terminal, Linux Bash, or Windows PowerShell). Each variant downloads the loft binary for your platform from the GitHub releases page, makes it executable, and moves it onto your PATH (for example /usr/local/bin on Mac and Linux); the exact one-liners are listed in the Loft documentation, and the binary can also be downloaded directly from the GitHub releases page.

Create an Okta Account

Next, if you don't already have an Okta account, you'll need to create one. The free trial account will suffice; just make sure you have a company email address to complete registration.

Deploy Loft to Your Kubernetes Cluster

After installing the Loft CLI and creating an Okta account, you can proceed to install Loft on your Kubernetes cluster by running the following command:

loft start

You will be prompted to add an email address that will be used to create the administrator of the Loft account. After that, Loft will be deployed to your cluster. Once the deployment is complete, you will be presented with your login credentials in the terminal to access your Loft account.

Configure Your Domain for Loft

Now that you have Loft running, you need to configure your instance with a registered domain in order for SSO to work with Okta. As mentioned at the beginning of the tutorial, you'll need a domain name that you can associate with your Loft instance. The first step is to install the NGINX Ingress Controller:

helm upgrade --install ingress-nginx ingress-nginx --repository-config='' -n ingress-nginx --create-namespace --repo <ingress-nginx Helm repo URL> --set-string controller.config.hsts=false --wait

Once the NGINX Ingress Controller has rolled out successfully, you can re-run the loft start command with the --host flag specifying your domain name. For your instance to work with the domain name, ensure that your registered domain has an A record with simple routing configured and that traffic is forwarded to the load balancer created by the ingress controller. To do so, re-run the loft start command with your domain name provided as the value of the --host flag:

loft start --host=yourdomainname.com
At this stage the Loft CLI will prompt you with questions about whether you're using a remote cluster and whether you have an ingress controller installed. After verification, your Loft instance will be re-configured.

Configure Single Sign-On for Loft

Once the domain has been configured for your Loft instance, you can proceed to configure SSO.

Create App Integration in Okta: Sign in to your Okta account, navigate to the Applications section in the side menu, and click the Create App Integration button. Select the OpenID Connect (OIDC) option for the sign-in method and Web Application for the application type. As you configure the integration, make sure you specify the correct domain name for your instance in the sign-in redirect URI. Okta will then generate a Client ID and a Client Secret for your application.

Update Auth Configuration in Loft: In your Loft administrator account, navigate to Admin > Config and update your configuration with the following details:

auth:
  oidc:
    issuerUrl: https://MY-OKTA-SUBDOMAIN.okta.com
    clientId: CLIENT_ID
    clientSecret: CLIENT_SECRET
    groupsClaim: groups
    # This is needed because Okta uses thin id tokens that do not contain the groups directly
    getUserInfo: true

Optionally, you can disable the default password-based authentication system managed by Loft by appending the following lines:

auth:
  password:
    disabled: true # Disable password-based authentication

This approach can be useful for organizations wanting to enforce an SSO model for all their personnel accessing Loft. In addition, end users get a better experience by managing only one set of credentials, for Okta.

Assign Users in Okta: After applying the above changes, you can assign one or more users to your application in Okta. To carry this out, make sure you are signed in to the Admin Console of your Okta account as before. Then proceed to the Loft application you added and navigate to the Assignments tab. You will be presented with the option to assign either people or groups to the application, which grants those users the ability to sign in to Loft with their Okta account. With Okta configured for SSO, the Loft landing page shows the Okta sign-in option alongside the default password-based authentication; if you disabled password-based authentication, only the SSO option remains. Lastly, once an end user is signed in, they will be presented with the Loft space management screen.

Conclusion

In this post you learned about SSO as a security model for identity management and how it can be extended to Kubernetes. Furthermore, this article detailed how to set up SSO for your Kubernetes cluster with Okta and Loft. Loft is a platform designed to empower software teams and enhance the developer experience when working with Kubernetes, so be sure to check it out. Photo by Alp Duran on Unsplash. 2022-04-05 17:08:57
Overseas TECH DEV Community Scheduled Cron Jobs with Render https://dev.to/alvinslee/scheduled-cron-jobs-with-render-f0l Scheduled Cron Jobs with Render.

Programmers often need to run some recurring process automatically, at fixed intervals or at specific times. A common solution for this problem is to use a cron job. When you have full access to your own server, configuring cron jobs is quite straightforward. However, how hard is it to configure cron jobs when you use an application hosting service? Some services thankfully provide a way for you to do this. In this article we'll walk through a sample mini-project that shows how to easily set up and deploy a cron job on Render.

Core Concepts

What is a cron job? A cron job is a Unix command that cron runs as a background process on a schedule determined by a cron expression. Generally, cron determines the jobs to run via crontab configuration files, which consist of pairs of cron expressions and corresponding commands.

What is Render? Render is a cloud application hosting service that offers a variety of web service hosting solutions such as static sites, web servers, databases, and, yes, even cron jobs. Render handles the hassle of hosting and deployment for you so that you can spend all of your time focusing on building out your projects.

What are Render cron jobs? Render offers a cron job hosting service that simplifies the process of deploying and maintaining a cron job in the cloud. To set up a Render cron job service, simply link a GitHub repo, choose a runtime, and provide the command to run and the cron expression that determines the schedule.

Overview of Our Mini-Project

Our project will be a simple service that lets us create and store notes. The service also runs an hourly cron job to email us all the notes created in the last hour. The application consists of three parts: an Express web server that handles requests to create the notes, a PostgreSQL database to store the notes, and a cron job that sends the notes digest email. We'll use Render services for each of these components, and Mailjet as the service for sending out emails. For our Node.js application we'll add the following dependency packages: pg to interact with the database, express-async-handler as a quality-of-life upgrade that allows us to use async functions as our Express handlers, and node-mailjet, the official client library for the Mailjet API. We'll assume that you have Node.js installed on your development machine; in our demo code we'll use Yarn as our package manager.

Setting Up the Project Repo

Let's start by setting up our project repo and our web service on Render. We can fork Render's Express Hello World repo for our initial Express server boilerplate code. In Render, we create a web service that uses the forked repo: enter a name for the web service and proceed with all of the default values. After Render finishes deploying, we see a service URL; we can visit that URL in the browser to verify that everything was set up correctly. Now we can clone the forked repo to our development machine and add our dependencies:

yarn add pg express-async-handler node-mailjet

With our initial project repo set up, let's move on to setting up our database.

Setting Up the Database

Our database is very simple, consisting of just one table called notes. The table has a column to store the note text and another column to store the timestamp when the note was created. We'll create a PostgreSQL database service on Render: provide a name for the database service and use the default values for all other options.
After creating the database, we can connect to it from our local machine and create the notes table. Copy the external connection string from the database dashboard and then start up a Node REPL in your local project directory. We'll use a connection pool to query our database, so we need to import the Pool class and create a Pool object with our external connection string:

const { Pool } = require('pg');
const pool = new Pool({ connectionString: '<External Connection String>?ssl=true' });

Note that since we are connecting through SSL in the Node REPL, we need to append ssl=true to the end of the connection string. With our pool object created, we can execute the query to create the table:

pool.query('CREATE TABLE notes (text text, created timestamp);').then(() => console.log('Voila!'));

Our database is set up with our notes table.

Setting Up an Environment Group in Render

Before we add the functionality to our web service to start populating the table, let's make sure that our web service has access to our database. In fact, because both our web service and our cron job will need to connect to the database, we can take advantage of Render's environment groups to create a shared group of environment variables that we can use for both services. For this we want the internal connection string from the database dashboard, since both the web service and the cron job will communicate with the database through Render's internal network. Click on Env Groups in the main Render navigation, then click New Environment Group. Choose a name for your environment group, then add a new variable with a key of CONNECTION_STRING and paste the internal connection string as the value (no need for ssl=true this time). Once you've created the group, go back to the Environment settings for the web service; in the Linked Environment Groups section, select the environment group you just created and click Link. Now our Node.js code can access any variables we define in this group through the global process.env object. We'll see an example of this as we start to build out our Express app. Let's do that now.

Creating the Express App

Our Express app will only have one endpoint, /notes, where we'll handle POST and GET requests. When we receive a POST request, we create a new note row in the database; we expect the Content-Type of the request to be application/json and the body to be formatted as { "note": "<note text>" }. We also note the time of the request and store that timestamp as the note's created value. When we receive a GET request, we query the database for all the notes and return them as a JSON response.

Let's start by getting rid of all the unnecessary code from the boilerplate. We only need to keep the following lines, and we change the app.listen callback slightly:

const express = require('express');
const app = express();
const port = process.env.PORT;

app.listen(port, () => console.log(`Notes server listening on port ${port}`));

Next, let's add all the imports we'll need. Again, we'll use a connection Pool to connect to the database:

const { Pool } = require('pg');

Additionally, we'll make use of the express-async-handler package:

const asyncHandler = require('express-async-handler');

We instantiate our Pool with the CONNECTION_STRING environment variable:

const connectionString = process.env.CONNECTION_STRING;
const pool = new Pool({ connectionString });

Since we're expecting JSON POST requests, let's also use the JSON middleware from Express, which parses the request body into a JavaScript object that we can access at req.body:

app.use(express.json());
Handling GET /notes Requests

Now we can get into the meat of our app: the request handlers. We'll start with our GET handler since it's a bit simpler. Here is the code, followed by an explanation:

app.get('/notes', asyncHandler(async (req, res) => {
  const result = await pool.query('SELECT * FROM notes');
  res.json({ notes: result.rows });
}));

First, we register an async function with asyncHandler at the /notes endpoint using app.get. In the body of the callback, we select all the notes in the database using pool.query and return a JSON response with all of the rows we received from the database. That's all we need for the GET handler. At this point we can commit and push these changes; Render automatically builds and redeploys our updated application. We can verify that our GET handler works, but for now all we see is a sad, empty notes object.

Handling POST /notes Requests

Let's move on to our POST handler so that we can start populating our database with some notes. The code looks like this:

app.post('/notes', asyncHandler(async (req, res) => {
  const query = {
    text: 'INSERT INTO notes VALUES ($1, $2)',
    values: [req.body.note, new Date()],
  };
  await pool.query(query);
  res.sendStatus(201); // the exact status code is not shown in the source; 201 Created is a reasonable choice
}));

First, we insert a new row into our database with our note text and creation timestamp. We get the note text from req.body.note and use new Date() to get the current time; the Date object is converted into a PostgreSQL data type through our use of parameterized queries. We send the insert query and then return a response.

Deploy and Test

After pushing our code and letting Render redeploy, we can test our server by sending some requests. At the command line we use curl:

curl -X POST <INSERT WEB SERVICE URL>/notes -H 'Content-Type: application/json' -d '{"note":"<INSERT NOTE TEXT>"}'

You can then visit the /notes endpoint in your browser to see all of your newly created notes.

Creating the Cron Job

The last component that ties our project together is the cron job. This cron job will run at the top of every hour, emailing us all the notes created in the last hour.

Set Up Mailjet

We'll use Mailjet as our email delivery service. You can sign up for a free account; you'll need your Mailjet API key and secret key from the API key management page. Let's add these keys to the environment group we created earlier, along with the recipient details, as the following environment variables: MAILJET_APIKEY, MAILJET_SECRET, USER_NAME (the name of the email recipient, i.e. your name), and USER_EMAIL (the email address of the recipient, i.e. your email address).

Implement the Cron Job Script

Now let's write the script we'll run as the cron job, which we can call mail-latest-notes.js. Again we'll use a Pool to query our database, and we'll also initialize our Mailjet client with our environment variables:

const { Pool } = require('pg');
const mailjet = require('node-mailjet').connect(process.env.MAILJET_APIKEY, process.env.MAILJET_SECRET);
const connectionString = process.env.CONNECTION_STRING;
const pool = new Pool({ connectionString });

Next, let's query the database for all notes created in the last hour. Since this is an asynchronous operation, we can wrap the rest of the script in an async IIFE, which allows us to use the await keyword:

(async () => {
  // all remaining code will go here
})();

We use another parameterized query with new Date() to capture the current time and use it to filter the notes. This time, however, we want the time an hour before the current time, which we can get using the setHours and getHours Date methods, so that we can filter for all the notes created after that timestamp:

const timestamp = new Date();
timestamp.setHours(timestamp.getHours() - 1);
const query = {
  text: 'SELECT * FROM notes WHERE created > $1',
  values: [timestamp],
};
const result = await pool.query(query);
We check how many rows were returned, and we won't send the email if there aren't any notes to send:

if (result.rows.length === 0) {
  console.log('No latest notes');
  process.exit();
}

If there are rows, we create the email message from the retrieved notes. We pull out the text from each note row with a map and use HTML for some easy formatting, joining all the note texts with <br> tags:

const emailMessage = result.rows.map((note) => note.text).join('<br>');

Finally, we use the Mailjet client to send an email with the message we just created and the environment variables we set up earlier. We also log the response we get back from Mailjet, just to make sure that our email was sent:

const mailjetResponse = await mailjet.post('send', { version: 'v3.1' }).request({
  Messages: [
    {
      From: { Email: process.env.USER_EMAIL, Name: process.env.USER_NAME },
      To: [{ Email: process.env.USER_EMAIL, Name: process.env.USER_NAME }],
      Subject: 'Latest Notes',
      HTMLPart: `<p>${emailMessage}</p>`,
    },
  ],
});
console.log(mailjetResponse);

That's all we need for our script.

Set Up the Render Cron Job Service

Lastly, let's create the cron job service on Render. We give our cron job service a name and set the environment to Node. Then we set the command field to node mail-latest-notes.js. To run the script every hour, we set the schedule field to the cron expression 0 * * * * (at minute zero of every hour); Render has a nifty label under the input which shows what the cron expression translates to in plain English. We create the cron job. Next, we go to the Environment tab for the cron job service and link the environment group that we created earlier. All that's left to do is wait for Render to finish building our cron job service; then we can test it. Before the build finishes, you can create more notes to make sure the script sends an email. Finally, you can click the Trigger Run button on the cron dashboard to manually run the script and check your inbox to make sure you receive the email. And with that, we've finished our notes project.

Conclusion

Job schedulers like cron are powerful tools that provide a simple interface for running automated processes on strict schedules. Some application hosting services, like Render, make it easy for you to set up cron job services alongside your web and database services. In this article we walked through how to do just that by building a mini-project that saves notes and then sends an email digest triggered hourly by a cron job. With Render coordinating communication between our various components, setting up the cron job was straightforward and simple. Happy coding! 2022-04-05 17:05:39
Overseas TECH DEV Community How To Navigate In The OpenBB Terminal https://dev.to/danglewood/how-to-navigate-in-the-openbb-terminal-jl2 How To Navigate In The OpenBB Terminal. After a successful installation, it's a good idea to quickly review how to navigate through the various menus; the documentation is available for reference. Two helpful commands to know right away are help and home: home takes you to the home menu from any location in the Terminal. Navigating is easier than you can say MS-DOS. When you see > beside a menu item, that indicates a submenu with a group of related functions, e.g. > economy. To get into the economy submenu from home, simply enter economy; entering .. returns you to the parent menu. What if you want to jump to another section from a different submenu? This is easily accomplished by prefixing a / to any absolute path in the Terminal, for example /stocks/disc or /portfolio/po. In the same way that you navigate the Terminal, commands can be sandwiched into these moves, like so: /stocks/load TSLA -i 1 -p -s <yesterday's date>/ta/obv. The sequence above navigates to the stocks menu, loads the ticker TSLA with a one-minute interval starting yesterday and pre/post-market prices turned on, enters the technical analysis menu, and displays a chart of on-balance volume. That's all you need to know to comfortably and efficiently navigate through the OpenBB Terminal. If you require assistance or have any questions, join the community Discord server and someone will gladly help you out. Happy trails! 2022-04-05 17:04:52
Overseas TECH DEV Community Eloquent Tips and Tricks - Laravel https://dev.to/morcosgad/eloquent-tips-and-tricks-laravel-3n94 Eloquent Tips and Tricks (Laravel). I found a wonderful article containing many tips, specifically ones that will help you when dealing with databases and make your next project more robust and accurate; here are some of the points that interested me (values in the snippets below are illustrative).

Increments and decrements. Instead of:

$article = Article::find($article_id);
$article->read_count++;
$article->save();

you can do:

$article = Article::find($article_id);
$article->increment('read_count');

Article::find($article_id)->increment('read_count');
Product::find($produce_id)->decrement('stock');

(Both methods also accept an amount as a second argument.)

Model boot() method:

public static function boot()
{
    parent::boot();
    self::creating(function ($model) {
        $model->uuid = (string) Uuid::generate(); // e.g. from a UUID package
    });
}

Model properties (timestamps, appends, etc.):

class User extends Model
{
    protected $table = 'users';
    protected $fillable = ['email', 'password'];     // which fields can be filled with User::create()
    protected $dates = ['created_at', 'deleted_at']; // which fields will be Carbon-ized
    protected $appends = ['field1', 'field2'];       // additional values returned in JSON
    protected $primaryKey = 'uuid';                  // it doesn't have to be "id"
    public $incrementing = false;                    // and it doesn't even have to be auto-incrementing
    protected $perPage = 25;                         // yes, you can override the pagination count per model
    const CREATED_AT = 'created_at';
    const UPDATED_AT = 'updated_at';                 // yes, even those names can be overridden
    public $timestamps = false;                      // or not used at all
}

WhereX:

$users = User::where('approved', 1)->get();
$users = User::whereApproved(1)->get();
User::whereDate('created_at', date('Y-m-d'));
User::whereDay('created_at', date('d'));
User::whereMonth('created_at', date('m'));
User::whereYear('created_at', date('Y'));

BelongsTo default models ($post->author->name without null checks): we can assign default property values to that default model:

public function author()
{
    return $this->belongsTo(App\Author::class)->withDefault(['name' => 'Guest Author']);
}

Order by mutator. Imagine you have this:

function getFullNameAttribute()
{
    return $this->attributes['first_name'] . ' ' . $this->attributes['last_name'];
}

$clients = Client::orderBy('full_name')->get(); // doesn't work
$clients = Client::get()->sortBy('full_name');  // works

Default ordering in a global scope:

protected static function boot()
{
    parent::boot();
    // Order by name ASC
    static::addGlobalScope('order', function (Builder $builder) {
        $builder->orderBy('name', 'asc');
    });
}

Raw query methods:

$orders = DB::table('orders')->whereRaw('price > IF(state = "TX", ?, 100)', [200])->get();
Product::groupBy('category_id')->havingRaw('COUNT(*) > 1')->get();
User::where('created_at', '>', $someDate)->orderByRaw('(updated_at - created_at) desc')->get();

Chunk() method for big tables. Instead of:

$users = User::all();
foreach ($users as $user) { /* ... */ }

you can do:

User::chunk(500, function ($users) {
    foreach ($users as $user) { /* ... */ }
});

Override updated_at when saving:

$product = Product::find($id);
$product->updated_at = '2022-01-01 10:00:00';
$product->save(['timestamps' => false]);

What is the result of an update()?

$result = $products->whereNull('category_id')->update(['category_id' => 2]);

orWhere with multiple parameters: you can pass an array of parameters to orWhere. The usual way:

$q->where('a', 1);
$q->orWhere('b', 2);
$q->orWhere('c', 3);

You can do it this way:

$q->where('a', 1);
$q->orWhere(['b' => 2, 'c' => 3]);

I tried to present some basic points, but to go deeper, visit the source. I hope you enjoyed it, and keep searching for everything new. 2022-04-05 17:00:36
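The boot()/creating() tip in the Eloquent entry above relies on an external UUID package. As a minimal self-contained variant (my own sketch, not from the article, with a hypothetical model name), Laravel's built-in Str::uuid() helper can serve the same purpose:

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Str;

class Document extends Model // hypothetical model
{
    public $incrementing = false;   // UUID primary key, not auto-incrementing
    protected $primaryKey = 'uuid';
    protected $keyType = 'string';

    protected static function boot()
    {
        parent::boot();

        // Assign a UUID right before the model is first persisted.
        static::creating(function ($model) {
            $model->uuid = (string) Str::uuid();
        });
    }
}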
Apple AppleInsider - Frontpage News Google Maps gets better iOS navigation, standalone Apple Watch support https://appleinsider.com/articles/22/04/05/google-maps-gets-better-ios-navigation-standalone-apple-watch-support?utm_medium=rss Google Maps gets better iOS navigation, standalone Apple Watch support. Google is rolling out new features and updates to its Google Maps for iOS app, including ones that will make navigation easier on iPhone and Apple Watch. The changes include a more detailed navigation map, with streetlight data and enhanced building outlines, for users traveling in unfamiliar locations. Google is also rolling out even more detail in select cities for iOS and CarPlay. 2022-04-05 17:50:58
Apple AppleInsider - Frontpage News Apple seeds first developer betas for iOS 15.5, iPadOS 15.5, watchOS 8.6, tvOS 15.5 https://appleinsider.com/articles/22/04/05/apple-seeds-first-developer-betas-for-ios-155-ipados-155-watchos-86-tvos-155?utm_medium=rss Apple has resumed the beta testing process by releasing the first developer betas of iOS 15.5, iPadOS 15.5, tvOS 15.5, and watchOS 8.6. The newest builds can be downloaded via the Apple Developer Center for those enrolled in the test program, or via an over-the-air update on devices running the beta software. Public betas typically arrive within a few days of the developer versions, via the Apple Beta Software Program website. 2022-04-05 17:18:34
Apple AppleInsider - Frontpage News Apple issues first developer beta of macOS Monterey 12.4 https://appleinsider.com/articles/22/04/05/apple-issues-first-developer-beta-of-macos-monterey-124?utm_medium=rss The beta cycle has been reset, with Apple making the first build of macOS Monterey 12.4 available for developer testing. The latest builds can be downloaded from the Apple Developer Center for participants in the Developer Beta program, as well as via an over-the-air update for hardware already enrolled in beta software. Public beta versions of the developer builds are usually issued within a few days of their counterparts and can be acquired from the Apple Beta Software Program site. 2022-04-05 17:11:07
Overseas TECH Engadget A new Tomb Raider game is on the way, powered by Unreal Engine 5 https://www.engadget.com/tomb-raider-unreal-engine-5-annoucement-174930410.html?src=rss Crystal Dynamics has "just started development" on a new Tomb Raider game, the studio announced today on Twitter. It didn't share what the game will be called, nor when fans can expect to play it, but it did note that it will run on Epic's new Unreal Engine 5. After working on the mediocre Marvel's Avengers, the project will see Crystal Dynamics return to the franchise it spent more than a decade making popular again. "Crystal Dynamics is incredibly excited about the future of Unreal and how it will help us take our storytelling to the next level. That's why we're proud to announce that our next Tomb Raider game is being built on Unreal Engine 5," the studio tweeted. The move would also appear to signal the end of the studio's in-house Foundation engine, which powered Rise of the Tomb Raider and the most recent mainline entry in the series, Shadow of the Tomb Raider. The next Tomb Raider joins a handful of games already announced for UE5, including the next Witcher game and Black Myth: Wukong; of that group, only the latter has a target release window from Chinese developer Game Science Studio. 2022-04-05 17:49:30
Overseas TECH Engadget The entire 'Next Generation' cast will appear in 'Star Trek: Picard' season three https://www.engadget.com/star-trek-picard-tng-cast-reunion-172119529.html?src=rss The entire principal cast of Star Trek: The Next Generation will appear in the third and final season of Picard. Jonathan Frakes, Marina Sirtis, and Brent Spiner, who have already featured in the series, will be joined by LeVar Burton, Gates McFadden, and Michael Dorn. In a statement, executive producer Terry Matalas said that "it's most fitting that the story of Jean-Luc Picard ends honoring the beginning, with his dearest and most loyal friends from the USS Enterprise." Matalas added that the final season will offer a "final, high-stakes, starship-bound adventure," which at a guess nods at the predominantly planet-bound series so far. Of course, long-time fans might be nervous about what the show's writers have cooked up for our beloved crew, especially after giving Riker and Troi a minor-key postscript to their Star Trek tenure. Maybe Dr. Crusher is now pushing medical misinformation over subspace, while Geordi spends his retirement as a crypto evangelist. Sadly, there is no in-series return for Ready Room host Wil Wheaton (despite the fact he was in almost half the episodes) or Diana Muldaur. 2022-04-05 17:21:19
Overseas Science NYT > Science Russians' Flight Suits Weren't a Political Statement, NASA Astronaut Says https://www.nytimes.com/2022/04/05/science/russian-astronauts-flight-suits-ukraine-flag.html The Russians were "kind of blindsided" that people thought they were making a political statement, Mark Vande Hei said. But the suits matched the colors of the university they all attended. 2022-04-05 17:41:30
Finance FSA (Financial Services Agency) website Announcement published about the holding of the EU-Asia Pacific Forum. https://www.fsa.go.jp/inter/etc/20220405/20220405.html forum held 2022-04-05 19:00:00
Finance FSA (Financial Services Agency) website Posted State Minister Kikawada's speech at the "FinCity Global Forum: The Future of Tokyo as a Green International Financial City". https://www.fsa.go.jp/common/conference/danwa/index_kouen.html#StateMinister fincityglobalforum 2022-04-05 17:50:00
Finance FSA (Financial Services Agency) website Published a summary of the post-cabinet-meeting press conference by Finance Minister and Minister of State Suzuki (April 1, 2022). https://www.fsa.go.jp/common/conference/minister/2022a/20220401-1.html Minister of State for Special Missions (Cabinet Office) 2022-04-05 17:50:00
News BBC News - Home Ukraine war: Zelensky tells UN of horrors of Russian invasion https://www.bbc.co.uk/news/world-europe-61002914?at_medium=RSS&at_campaign=KARANGA ukraine 2022-04-05 17:34:03
News BBC News - Home Can Arslan found guilty of murdering neighbour https://www.bbc.co.uk/news/uk-england-gloucestershire-60995894?at_medium=RSS&at_campaign=KARANGA boorman 2022-04-05 17:37:09
News BBC News - Home Hypersonic missiles: UK, US, and Australia to boost defence co-operation https://www.bbc.co.uk/news/uk-61000416?at_medium=RSS&at_campaign=KARANGA super 2022-04-05 17:43:33
News BBC News - Home Harry Billinge: Cornwall D-Day veteran dies aged 96 https://www.bbc.co.uk/news/uk-england-cornwall-60998335?at_medium=RSS&at_campaign=KARANGA normandy 2022-04-05 17:21:34
News BBC News - Home Mother 'begged for life' of IS hostage, court hears https://www.bbc.co.uk/news/world-us-canada-61003674?at_medium=RSS&at_campaign=KARANGA federal 2022-04-05 17:29:00
News BBC News - Home Bucha killings: Satellite image of bodies site contradicts Russian claims https://www.bbc.co.uk/news/60981238?at_medium=RSS&at_campaign=KARANGA bucha 2022-04-05 17:13:56
Business Diamond Online - New Articles [The math notebook that moved 7 million people] A super-introduction to "square roots" as studied by American junior high schoolers - Math from Age 14, as Studied by American Junior High Schoolers https://diamond.jp/articles/-/300793 The Japanese edition of the "Big Fat Notebook" series is being published: an overwhelmingly long-selling "re-learning" series that drew major attention immediately after its release and spread to China, Germany, South Korea, Brazil, Russia, Vietnam, and other countries around the world. 2022-04-06 02:55:00
Business Diamond Online - New Articles A data-collection lesson as studied by American junior high schoolers [the programming notebook that moved 7 million people worldwide] - Programming from Age 14, as Studied by American Junior High Schoolers https://diamond.jp/articles/-/301058 The Japanese edition of the "Big Fat Notebook" series, an overwhelmingly long-selling "re-learning" series that drew major attention immediately after its release and spread to China, Germany, South Korea, Brazil, Russia, Vietnam, and other countries, has been published. 2022-04-06 02:50:00
Business Diamond Online - New Articles The surprisingly little-known Ukraine neighbor: what kind of country is Hungary? - A Book That Puts the World Map in Your Head Just by Reading https://diamond.jp/articles/-/299654 2022-04-06 02:45:00
Business Diamond Online - New Articles Everything gadget blogger "Makurin" did to earn 1,000,000 yen a month from side work - The New Salaryman Who Gains True "Stability" https://diamond.jp/articles/-/299440 An exceptional pre-release reprint. Can't get the work done, too busy, a power-harassing boss, want to change jobs, no dreams, no savings, worried about old age... this one book solves all of a salaryman's worries. What the salaryman of the future needs is, in conclusion, not promotion but "weapons": salaryman skills, side-job skills, and money skills. 2022-04-06 02:40:00
Business Diamond Online - New Articles A 100,000-yen raise or 100,000 yen in side income: which is the better deal? - From Your 40s, Have Two Sources of Income https://diamond.jp/articles/-/301072 The new book "From Your 40s, Have Two Sources of Income: A Way of Working That Brings a Higher Income and Freedom" lays out these gem-like methods in full. 2022-04-06 02:35:00
Business Diamond Online - New Articles [Given before sexual activity] The HPV vaccine, effective for cancer prevention - Preventive Medicine from Age 40 https://diamond.jp/articles/-/301063 preventive medicine 2022-04-06 02:30:00
Business Diamond Online - New Articles The "vaccine" lesson as studied by American junior high schoolers [the science notebook that moved 7 million people worldwide] - Science from Age 14, as Studied by American Junior High Schoolers https://diamond.jp/articles/-/300961 military 2022-04-06 02:25:00
Business Diamond Online - New Articles The decisive difference in thinking that separates people who act right away from those who don't - The Demon of Quantification https://diamond.jp/articles/-/300983 From Kodai Ando's latest book "Suchika no Oni" (The Demon of Quantification), a bestseller immediately upon release. 2022-04-06 02:20:00
Business Diamond Online - New Articles [A psychiatrist explains] The one and only way to deal with people who speak ill of others - Psychiatrist Tomy Teaches How to Let Go of Your Mental Baggage https://diamond.jp/articles/-/300782 voicy 2022-04-06 02:15:00
Business Diamond Online - New Articles [Become someone the gods and buddhas love] Living in connection with the gods and buddhas is actually surprisingly simple and something you can start right now - Things the Gods and Buddhas Secretly Taught Me https://diamond.jp/articles/-/300713 gods and buddhas 2022-04-06 02:10:00
Business Diamond Online - New Articles Robo-advisor assets under management are expected to grow substantially - Buy These 7 ETFs https://diamond.jp/articles/-/299612 2022-04-06 02:05:00
