IT |
ITmedia All Articles |
[ITmedia News] au PAY migrates its point platform to Oracle Cloud: payments five times faster, and a move away from COBOL |
https://www.itmedia.co.jp/news/articles/2303/01/news141.html
|
aupay |
2023-03-01 14:45:00 |
IT |
ITmedia All Articles |
[ITmedia PC USER] NVIDIA releases the latest Game Ready driver for GeForce, adding support for "RTX Video Super Resolution," which applies super-resolution upscaling to web video |
https://www.itmedia.co.jp/pcuser/articles/2303/01/news138.html
|
gameready |
2023-03-01 14:21:00 |
IT |
ITmedia All Articles |
[ITmedia News] Yamazen upgrades its water-cooled clothing that uses frozen PET bottles; an added cooling sheet keeps it cold longer |
https://www.itmedia.co.jp/news/articles/2303/01/news134.html
|
itmedia |
2023-03-01 14:11:00 |
IT |
ITmedia All Articles |
[ITmedia Mobile] SoftBank network outage left some users unable to make calls; service has since been restored |
https://www.itmedia.co.jp/mobile/articles/2303/01/news130.html
|
itmediamobile |
2023-03-01 14:01:00 |
TECH |
Techable |
NTT QONOQ joins the Metaverse Standards Forum, an international organization working on standards for metaverse interoperability |
https://techable.jp/archives/198410
|
metaversestandardsforum |
2023-03-01 05:00:48 |
python |
New posts tagged Python - Qiita |
Introduction to machine learning: automation techniques for machine learning, AutoML (TPOT edition) |
https://qiita.com/ksonoda/items/965bcb072984a1fb3cbb
|
automltpot |
2023-03-01 14:16:02 |
Linux |
New posts tagged Ubuntu - Qiita |
Ubuntu 22.04 initial setup notes |
https://qiita.com/strawbeRinMilk/items/e19655e69f1f9d3ec2f7
|
tryorinstallubuntu |
2023-03-01 14:11:26 |
AWS |
New posts tagged AWS - Qiita |
[Route53] Detecting that failover and failback have occurred with failover routing |
https://qiita.com/K5K/items/2f4021af0ec32db5a1b9
|
route |
2023-03-01 14:06:17 |
Tech Blog |
Developers.IO |
The new opswitch architecture: jobs |
https://dev.classmethod.jp/articles/opswitch-renewal-architecture-job/
|
opswitch |
2023-03-01 05:27:53 |
Tech Blog |
Developers.IO |
[3/29 (Wed), remote] Classmethod will hold a company information session |
https://dev.classmethod.jp/news/jobfair-230329/
|
company information session |
2023-03-01 05:09:02 |
Overseas TECH |
DEV Community |
Learn How to Setup a CI/CD Pipeline from Scratch |
https://dev.to/pavanbelagatti/learn-how-to-setup-a-cicd-pipeline-from-scratch-for-a-go-application-4m69
|
Learn How to Setup a CI/CD Pipeline from Scratch. In this tutorial we will take an example of a Go application and set up a CI/CD pipeline for it. Go is becoming increasingly popular amongst developers for its ability to simplify and secure the building of modern applications. The language was created by Google and has gained traction due to its open-source nature. Go gives users the freedom to build their own front-end websites and applications, is easy to develop, maintain and use, and enterprises can rely on it to build and scale cloud computing systems while enjoying its powerful concurrency features. Furthermore, Go offers high performance without using too many resources. Today we will create a simple Go application and set up a CI/CD pipeline for it. Let's Go!

Prerequisites: create a free Harness cloud account to set up CI/CD; download and install Go; Kubernetes cluster access from any cloud provider to deploy the application (you can also use Minikube or Kind to create a single-node cluster); and Docker, preferably Docker Desktop.

Tutorial. The example repository is accessible from the article; feel free to fork it or just follow along. The application code itself is a sample "Hello World" app that prints the text "Hello World" on localhost. In main.go, a homePage handler writes "Home Page" to the response, a wsEndpoint handler writes "Hello World", setupRoutes registers both with http.HandleFunc (the root path and /ws), and main prints "Hello World", calls setupRoutes and starts the server with log.Fatal(http.ListenAndServe(...)). The application also has a test file, main_test.go, with simple test cases: TestHomePage and TestWsEndpoint build GET requests with http.NewRequest, serve them through an httptest.NewRecorder, and fail with t.Errorf if the recorded status code is not http.StatusOK or the body is not "Home Page" and "Hello World" respectively.

The Dockerfile in the repo is used to build and push the application as an image to Docker Hub: a golang (buster) builder stage accepts a VERSION build argument, copies main.go into /go/src/app and runs go build -o main with -ldflags setting main.version, and a debian buster-slim runtime stage copies the binary to /go/bin/main, adds it to PATH and sets it as the CMD. Next, we build the image and push it to Docker Hub with docker buildx build, passing --platform flags for the linux/arm and linux/amd variants, -t docker.io/<docker hub username>/<image name>:<tag>, --push and -f Dockerfile. Once the build and push are successful, you can confirm it by going to your Docker Hub account. (A minimal command sketch follows this entry.)

We will be deploying the application on a Kubernetes cluster, so at this point make sure your cluster is up and running. The deployment.yaml and service.yaml files in the forked repo define the Deployment and Service that deploy and expose the application. deployment.yaml declares an apps/v1 Deployment named go-app-deployment whose selector and pod template use the label app: go-app and whose single container, go-app, runs the image pavansa/golang-hello-world:latest, exposes a container port and sets a PORT environment variable. service.yaml declares a Service named go-app-service that selects app: go-app, exposes an http TCP port mapped to the container's target port, and uses type LoadBalancer.

It is time to set up a Harness account to do CI/CD. Harness is a modern CI/CD platform with AI/ML capabilities. Create a free Harness account and your first project; once you sign up you will be presented with the new CI/CD experience and capabilities. Add the required connectors, such as the Harness Delegate, your GitHub repo, your Docker Hub account and secrets. A Delegate in Harness is a service (software) you install and run on the target cluster (a Kubernetes cluster in our case) to connect your artifacts, infrastructure, collaboration, verification and other providers with the Harness Manager; when you set up Harness for the first time, you install a Harness Delegate.

Continuous Integration. After signing in to Harness it will first ask you to create a project; invite collaborators if you want. Select the Continuous Integration module and create your first pipeline. Connect it to your source control management (GitHub, where the application code is present), then configure the pipeline with the proper language for the project. When you save and continue, you see the default setup in the pipeline studio, and clicking the "Go Build App" step under execution shows its setup details. Modify that step: name it "Test Go App", add the test commands from the article in the command tab, then save and run the pipeline. You should see a successful run. We have created a CI pipeline in which the code is built and tested; let's extend it by deploying this error-free application code to our target environment, i.e. Kubernetes.

Continuous Delivery and Deployment. It is time to deploy the Go application, so create the deployment stage: add a name, select Kubernetes as the deployment type, and click "Set up Stage". We then need a service, so click "Add service", give it a name and add the manifest details. Click "Add manifest", select K8s manifest and continue, then specify the manifest store. Our manifest files are in GitHub, so select GitHub and add a new GitHub connector to connect your manifest files, specifying the details step by step, adding credentials through built-in secrets and connecting it with your Delegate. Make sure the connection to your manifests through the Delegate is successful, then add the manifest details from your GitHub repo, save everything and continue. You should see the service with its manifest details, and the pipeline studio will show the added service. Add a new environment, save and continue, then similarly add new infrastructure: select Kubernetes as the infrastructure type, add the cluster details, and save and continue. In the next step, select the deployment strategy; we are selecting Rolling. With that, the CD pipeline is complete; save everything and run the pipeline, and you should see a successful execution running CI and then CD step by step.

Automate CI/CD. The last step is to automate the CI/CD pipeline by creating triggers. In the pipeline studio, open the Triggers tab and create a new trigger: add a GitHub trigger so that whenever someone pushes new code to the main branch, the pipeline triggers automatically. Save everything and create the trigger; it appears under the Triggers tab. To confirm that CI/CD is automated and working properly, add a readme change to the GitHub repo and watch the pipeline trigger on the new commit. We have successfully automated the CI/CD process for our Go application using Harness. Also check out the author's other articles on continuous integration and deployment: "A Step-by-Step Guide to Continuous Integration for Your Node.js Application" and "Deploying an Application on Kubernetes: A Complete Guide" (Pavan Belagatti). |
2023-03-01 05:34:03 |
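The Harness pipeline in the entry above ultimately drives the same build, test, push and deploy steps you could run by hand. As a point of reference only, here is a minimal command sketch under the assumptions stated in the entry (a repository root containing main.go, Dockerfile, deployment.yaml and service.yaml; the image coordinates and platform list are illustrative placeholders, not values from the article):

# Test and build the Go application locally
go test ./...
go build -o main .

# Build a multi-architecture image and push it to Docker Hub
docker buildx build --platform linux/amd64,linux/arm64 \
  -t docker.io/<docker-hub-username>/<image-name>:<tag> --push -f Dockerfile .

# Deploy and expose the application on the Kubernetes cluster
kubectl apply -f deployment.yaml -f service.yaml
kubectl get service go-app-service

Multi-architecture builds with docker buildx typically require an active buildx builder; a plain single-architecture docker build is a simpler fallback.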
Overseas TECH |
DEV Community |
Git Work Trees |
https://dev.to/bsrz/git-work-trees-fg9
|
Git Work Trees.

Overview. The git worktree command allows you to use and/or manage multiple work trees at the same time. So what is a work tree? You are already using one, you just might not know it, or you might call it a working copy. When you clone a repository the classic way, or create a new repository using git init, git creates what is called the main work tree: it clones the bare repository into the .git folder, and it creates the main work tree one level above the bare repository, in what you might already know as simply the folder in which the repository was cloned. For example, running git clone git@github.com:bsrz/mvvm.git clones into "mvvm", and an ls -la inside that folder shows the .git directory and, alongside it in the current directory, all of the files committed to the repository (.gitignore, img, LICENSE, MVVM, MVVM.xcodeproj, MVVMTests, MVVMUITests, README.md), a.k.a. the work tree or working copy.

Why? Ever had a situation where you had modified files and someone, maybe your boss, asked you to look into a bug in production code? Or maybe you have a major refactoring effort in progress, with hundreds or thousands of modified files, but you were asked to quickly complete another task. If so, then you will be familiar with the "I'll just commit everything I have in a WIP commit" approach, or with stash management hell. Wouldn't it be a dream to have more than one branch checked out at the same time? This is why.

Work trees. Let's try the same example as before, but using work trees. First run mkdir mvvm, cd mvvm, and git clone --bare git@github.com:bsrz/mvvm.git bare: start by creating the directory that will contain all of your branches and the bare repository, change into the newly created directory, and, the key part here, clone a bare version of the repository, which is more or less cloning only the .git folder that is normally created automatically by the classic clone method. Next you create the main work tree. This is done with git worktree add, which registers a new work tree with the bare repository: cd bare, then git worktree add ../main main, which reports "Preparing worktree (checking out 'main')" and the commit that HEAD now points at. Change into the bare repository, add the main work tree to the mvvm folder one level above the bare repository (this structure is purely a personal choice; you can clone the repository anywhere you want and check out branches anywhere you want), change into your newly created work tree, and an ls -la shows the committed files of your repository appearing in the same way as before.

You can work in the main directory in the exact same way as you were before: you can check out other branches, stash modified files, commit files, rebase, merge, etc. The power of bare repositories lies in their ability to add a 2nd, a 3rd, an Nth work tree and check out another branch inside each of them: cd back into the bare repository, then git worktree add --track ../my-awesome-branch -b my-awesome-branch, which reports "Preparing worktree (new branch 'my-awesome-branch')", sets the branch up to track main and leaves HEAD at the same commit; then cd ../my-awesome-branch. Go back to the bare repository, add a new work tree (this time creating a new branch and tracking it), and change into the new work tree. Now you can make modifications to the main work tree, using the main branch, at the same time as making modifications to the my-awesome-branch work tree, using the my-awesome-branch branch. The same capabilities apply to the new work tree: you can check out other branches, stash modified files, commit files, rebase, merge, etc. The only caveat is that a branch can only be checked out in a single work tree at a time; if you try to check out a branch that is already checked out in a different work tree, you will receive this error: "fatal: 'main' is already checked out at '/Users/bsrz/Developer/mvvm/main'". (A condensed command sketch follows this entry.)

Conclusion. Although I don't always use this method, I'm starting to use it more and more. The ability to leave my work in progress as-is and start new work in a separate folder has given me a ton of flexibility and has caused a lot less git management work. I no longer have to constantly manage stashes or save patches for later, and it allows me to pivot onto a new problem pretty quickly. Hope this helped you learn something. Cheers! |
2023-03-01 05:26:35 |
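For quick reference, the worktree workflow from the entry above condenses to a few commands. This is a minimal sketch: the bare directory name and branch names are taken from the entry, while the list and remove commands at the end are standard git worktree subcommands not shown in the article:

# Clone only the Git database, with no checked-out files
git clone --bare git@github.com:bsrz/mvvm.git bare

# Check out the main branch into its own directory, one level above the bare repo
cd bare
git worktree add ../main main

# Create and track a new branch in a second, independent directory
git worktree add ../my-awesome-branch -b my-awesome-branch

# Inspect and, later, clean up registered work trees
git worktree list
git worktree remove ../my-awesome-branch

Each directory behaves like a normal checkout; the only restriction, as noted above, is that a given branch can be checked out in only one work tree at a time.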
Overseas TECH |
DEV Community |
Kubernetes 101, part II, pods |
https://dev.to/leandronsp/kubernetes-101-part-ii-pods-19pb
|
Kubernetes 101, part II: pods. In the previous post we've seen the fundamentals of Kubernetes as well as an introduction to its main architecture. Now that we've been introduced, it's time to explore how we can run an application in Kubernetes.

A wrapper for containers. In Kubernetes we aren't able to create single containers directly. Instead, for the better, we wrap containers into a single unit which comprises: a specification, so multiple containers can use the same specification as a deployable unit; a shared storage, so the same volumes are mounted across multiple containers; and a single network, so containers under the same wrapper can communicate with each other. Compared to Docker, such a wrapper is similar to docker-compose.yml, where different services (containers) can share a common specification, volumes and network. Yes, we are talking about Pods.

Pods. Pods are the smallest deployable unit you can create and manage in Kubernetes. Within Pods we can group multiple containers that should communicate with each other somehow, either using the same network or through shared volumes. Let's create some Pods.

Using YAML for good. Up to this point we've used kubectl in order to create pods, for instance: kubectl run <container name> --image <some image>. It works pretty well for running experimental Pods, creating temporary resources and other workloads in k8s (we'll talk about workloads later). We could create multiple Pods using kubectl run, but what if we want to share with other people, a team, or even the open-source community how we declared our Pods? How about sharing in a VCS repository like Git the representation of the desired state of our application in k8s, using a standard serialization format? Kubernetes brings a serialization format which can be used to represent our Pods and, like it or not, it's the well-known YAML.

Creating a Pod. With YAML we can declare Kubernetes objects using the kind attribute. K8s employs many different kinds of objects, which we'll explore in later posts, but at this moment we'll start with the most common and smallest unit in Kubernetes: a Pod. Our Pod specification is composed of a container called server, backed by the ubuntu image, that shares a volume with the Pod; this container creates a UNIX named pipe (a.k.a. FIFO) in the shared volume and listens for a message coming into the FIFO. There is also a container called client, also backed by an ubuntu image and sharing the same volume; this container writes a simple message, "Hey", into the shared volume. Expectation: when the server is started, the FIFO is created in the shared volume and the server keeps waiting for a message to arrive in the FIFO; when the client is started, it writes the message "Hey" into the shared volume; afterwards we look at the server container's logs, which should print the message "Hey" that was sent by the client. Let's declare the YAML file, fifo-pod.yml:

kind: Pod
apiVersion: v1
metadata:
  name: fifo-pod
spec:
  volumes:
    - name: queue
      emptyDir: {}
  containers:
    - name: server
      image: ubuntu
      volumeMounts:
        - name: queue
          mountPath: /var/lib/queue
      command: ["/bin/sh"]
      args: ["-c", "mkfifo /var/lib/queue/fifo && cat /var/lib/queue/fifo"]
    - name: client
      image: ubuntu
      volumeMounts:
        - name: queue
          mountPath: /var/lib/queue
      command: ["/bin/sh"]
      args: ["-c", "echo Hey > /var/lib/queue/fifo"]

Here, kind is the object kind, in this case simply Pod; metadata.name is the name of the Pod in the cluster under the current (default) namespace, and we'll talk about namespaces in later posts; volumes is the shared volume of the Pod, using emptyDir, an empty directory in the Pod's filesystem; volumeMounts mounts the Pod's shared volume into some directory of the container's filesystem; and command is the command to be executed in the container. After this declaration we can share the YAML file with our friends, co-workers etc. using Git, but the object is yet to be created in our cluster. We do this with kubectl apply -f fifo-pod.yml, which answers "pod/fifo-pod created". Let's check the logs of the server container. We could use kubectl logs <pod> to get the logs of every container in the Pod, but we want logs from the server container only, so we run kubectl logs fifo-pod -c server and it prints "Hey". Yay, it works!

Getting the list of Pods. Using kubectl we can get the list of Pods in our cluster with kubectl get pods. It shows a Pod called nginx that has been Running for days, which is quite comprehensible since I ran kubectl run nginx --image nginx days ago and NGINX is a web server that keeps running, listening for TCP connections; that's why that Pod is still in a Running status. But the Pod fifo-pod we just created is returning a Completed status. Why?

Pod lifecycle. Pods follow a lifecycle in Kubernetes. Like containers in Docker, Pods are designed to be ephemeral: once a Pod is scheduled (assigned to a Node), the Pod runs on that Node until it stops or is terminated. A Pod lifecycle works by phases. Pending: the Pod is accepted by the cluster but its containers are not ready yet, and the Pod is not yet scheduled to any Node. Running: all containers are created and the Pod has been scheduled to a Node, and at least one of the containers is still running or being started. Succeeded / Failed: if all containers terminated in success, the Pod status is Succeeded; if all containers have terminated but at least one terminated in failure, the Pod status is Failed. Terminated / Completed: indicates that all the containers were terminated internally by Kubernetes, or completed. The Pod lifecycle is a quite big topic in Kubernetes, covering Pod conditions, readiness, liveness and so on; we'll dig into further details in later posts.

Wrapping up. This post showed a bit more about Pods, which are the smallest and main deployable unit in Kubernetes. On top of that, we created a Pod with two containers communicating with each other using a FIFO and a shared volume, and we've seen a bit about the Pod lifecycle. The Pod lifecycle and its lifetime will be crucial to understanding the subject of the upcoming post: self-healing capabilities in Kubernetes. Stay tuned, and cheers! (A condensed kubectl sketch follows this entry.) |
2023-03-01 05:14:08 |
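Condensing the walkthrough above into the kubectl commands involved, as a minimal sketch: fifo-pod.yml, the fifo-pod name and the server container come from the entry, while describe and delete are standard kubectl verbs added here for completeness:

# Create the Pod from the manifest
kubectl apply -f fifo-pod.yml

# Watch the Pod move through its lifecycle phases
kubectl get pods

# Read logs from the server container only
kubectl logs fifo-pod -c server

# Inspect scheduling, volumes and container states, then clean up
kubectl describe pod fifo-pod
kubectl delete pod fifo-pod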
Finance |
NLI Research Institute |
CO2 emissions and lifestyle: whether people take environmentally friendly action depends on how others see them |
https://www.nli-research.co.jp/topics_detail1/id=74046?site=nli
|
Table of contents: Introduction; CO2 emissions by country (where Japan ranks in CO2 emissions; oil-producing countries dominate the top of per-capita emissions); Comparison of emission sources (in China, housing accounts for a sizable fraction of total emissions; in the United States, transport accounts for a substantial share; in Japan, housing accounts for a substantial share); How lifestyle affects emissions (gender: men have higher annual emissions than women; age and cohort: aging both increases and decreases emissions; income: the higher the income, the larger the emissions, with top-income households accounting for a large share of the total; place of residence: urban areas emit more through energy use, rural areas more through food and transport; activities: sleep and rest have the smallest emissions; education and environmental knowledge: whether acquiring environmental knowledge leads to emission cuts cannot be stated categorically; social status: if status signals are steered well, they can be channeled into emission cuts; inequality: as inequality grows, motivation to cut emissions fades); Conclusion (personal view). Activity around the climate change problem is intensifying across the world. |
2023-03-01 14:57:15 |
Overseas News |
Japan Times latest articles |
Early detection of postpartum depression? Japanese researchers may have found a way. |
https://www.japantimes.co.jp/news/2023/03/01/national/science-health/postpartum-depression-detection-research/
|
depression |
2023-03-01 14:32:19 |
Overseas News |
Japan Times latest articles |
U.S. experts have high expectations for Bank of Japan nominee |
https://www.japantimes.co.jp/news/2023/03/01/business/boj-nominee-us-experts/
|
U.S. experts have high expectations for the Bank of Japan nominee. Ueda is an ideal candidate to meet the challenges facing the BOJ, such as achieving its inflation target, said a former head of the Japan-Korea ... |
2023-03-01 14:13:42 |
News |
BBC News - Home |
Covid: FBI chief Christopher Wray says China lab leak 'most likely' |
https://www.bbc.co.uk/news/world-us-canada-64806903?at_medium=RSS&at_campaign=KARANGA
|
wuhan |
2023-03-01 05:55:49 |
Marketing |
MarkeZine |
SUPER STUDIO to offer "ecforce ma", automating CRM measures for e-commerce and D2C |
http://markezine.jp/article/detail/41524
|
ecforcema |
2023-03-01 14:15:00 |
IT |
Weekly ASCII |
Two fairs held at once! Kappa Sushi's "Kappa's Last Crab Festival of the Season" and "Kappa's Spring Topping Festival" |
https://weekly.ascii.jp/elem/000/004/126/4126821/
|
held simultaneously |
2023-03-01 14:40:00 |
IT |
Weekly ASCII |
Keio Plaza Hotel to hold the second edition of its popular mystery-solving stay plan, starting March 27 |
https://weekly.ascii.jp/elem/000/004/126/4126812/
|
Keio Plaza Hotel |
2023-03-01 14:20:00 |
IT |
Weekly ASCII |
Squid yakisoba with squid tempura for a double hit of squid! Acecook's "Super Cup W Squid Yakisoba" |
https://weekly.ascii.jp/elem/000/004/126/4126826/
|
yakisoba |
2023-03-01 14:20:00 |
IT |
Weekly ASCII |
New episode released for the monster-breeding RPG Monster Universe / Volzerk! |
https://weekly.ascii.jp/elem/000/004/126/4126835/
|
monsteruniverse |
2023-03-01 14:15:00 |
IT |
Weekly ASCII |
Cute shiumai you can eat as a snack! The cherry-blossom-season-limited "Shiumai Man & Sakura Man" buns |
https://weekly.ascii.jp/elem/000/004/126/4126809/
|
limited |
2023-03-01 14:10:00 |
IT |
Weekly ASCII |
Burger King holds the "One Pounder Challenge 2023" at 10 stores: all you can eat of the four-patty One Pounder! |
https://weekly.ascii.jp/elem/000/004/126/4126825/
|
月日 |
2023-03-01 14:10:00 |