IT |
ITmedia General Article List |
[ITmedia News] "ICOCA" to support Apple Pay within 2023 |
https://www.itmedia.co.jp/news/articles/2304/17/news142.html
|
applepay |
2023-04-17 16:40:00 |
IT |
ITmedia General Article List |
[ITmedia Business Online] Sega to acquire "Angry Birds" maker Rovio for 103.6 billion yen, accelerating global expansion |
https://www.itmedia.co.jp/business/articles/2304/17/news128.html
|
[ITmedia Business Online] Sega to acquire "Angry Birds" maker Rovio for 103.6 billion yen, accelerating global expansion: Gaming giant Sega announced on April 17 that it will acquire Rovio Entertainment, the Finland-headquartered company behind the mobile game "Angry Birds," for approximately 103.6 billion yen. |
2023-04-17 16:25:00 |
IT |
ITmedia General Article List |
[ITmedia Business Online] About 40% of parents report "smartphone-supervision fatigue"; what measures exist besides filtering? |
https://www.itmedia.co.jp/business/articles/2304/17/news114.html
|
cyberowl |
2023-04-17 16:25:00 |
IT |
ITmedia General Article List |
[ITmedia Mobile] PayPay announces campaigns from June onward for its "Support Your Town Project" |
https://www.itmedia.co.jp/mobile/articles/2304/17/news135.html
|
itmediamobilepaypay |
2023-04-17 16:13:00 |
IT |
ITmedia General Article List |
[ITmedia PC USER] GIGABYTE releases a 27-inch WQHD gaming LCD with a built-in KVM switch |
https://www.itmedia.co.jp/pcuser/articles/2304/17/news129.html
|
gigabyte |
2023-04-17 16:02:00 |
TECH |
Techable |
How Web3 technology opens up international cooperation and regional revitalization: community-driven "town building" in an African farming village |
https://techable.jp/archives/203395
|
savannakidznft |
2023-04-17 07:30:20 |
AWS |
AWS Japan Blog |
Leveraging CNI custom networking and Pod security groups with Amazon EKS |
https://aws.amazon.com/jp/blogs/news/leveraging-cni-custom-networking-alongside-security-groups-for-pods-in-amazon-eks/
|
networkingalongsidesecuri |
2023-04-17 07:38:55 |
Ruby |
New posts tagged with Ruby - Qiita |
Aiven launches free plans for PostgreSQL®, MySQL, and Redis®* |
https://qiita.com/tomozilla/items/d9f72bfb4bac265ef410
|
aiven |
2023-04-17 16:36:25 |
Linux |
New posts tagged with CentOS - Qiita |
Notes on configuring permissions in Laravel |
https://qiita.com/ky-jp16/items/9956ea9dd391f0d69127
|
cdpathtolaravelapplica |
2023-04-17 16:08:56 |
GCP |
New posts tagged with gcp - Qiita |
When a Firebase Extension won't install |
https://qiita.com/zono___zono/items/d031e158743d637fb24d
|
bigquery |
2023-04-17 16:14:07 |
Git |
New posts tagged with Git - Qiita |
[Xcode] Pushing a project to GitHub |
https://qiita.com/spuuuu_uuuups/items/3ca362802cba45bd9130
|
firstcommit |
2023-04-17 16:59:16 |
Git |
New posts tagged with Git - Qiita |
How to delete tags in a repo developed by multiple people |
https://qiita.com/tzxdtc10/items/d8a8b5987b198b2192b7
|
sourcetree |
2023-04-17 16:10:14 |
Tech Blog |
Developers.IO |
Using AWS Secrets Manager to set environment variables for a Lambda function |
https://dev.classmethod.jp/articles/use-secrets-manager-to-set-environment-argument-for-lambda/
|
awslambda |
2023-04-17 07:18:44 |
Tech Blog |
Developers.IO |
Replicating a Cloud9 environment |
https://dev.classmethod.jp/articles/cloud9-replication-using-snapshot/
|
cloud |
2023-04-17 07:17:49 |
Overseas TECH |
DEV Community |
AWS open source newsletter, #153 |
https://dev.to/aws/aws-open-source-newsletter-153-2gim
|
AWS open source newsletter #153

Welcome

Hello and welcome to the AWS open source newsletter, as featured in the latest episode of Build on Open Source. We have lots of great projects for you this week, with a strong ChatGPT influence: pg_gpt, cw-logs-insights-gpt, and aiws all integrate ChatGPT to help you do different things on AWS; semantic-search-aws-docs is a very interesting demo of how to build a more coherent search for your documentation; aws-chime-chat-demo is a very nice demo using the Chime SDK; ckia is an open source AWS Trusted Advisor tool; AWS-ED helps you keep your local IP in sync with external DNS records; cfnctl provides a Terraform-like CLI experience for CloudFormation; and more.

We also have content on some of your favourite open source projects, which this week include Babelfish for Aurora PostgreSQL, MySQL, Apache Airflow, PostgreSQL, gMSA on Linux, Cloud Custodian, Delta Lake, Apache Kafka, Amazon EMR, Streamlit, Crossplane, Flux, Amazon Corretto, and Kubernetes. Make sure you check out the Video and Events section: some great videos this week, as always.

Feedback

Please, please, please take a minute to complete this short survey and bask in the warmth of having helped improve how we support open source.

Celebrating open source contributors

The articles and projects shared in this newsletter are only possible thanks to the many contributors in open source. I would like to shout out and thank those folks who really do power open source and enable us all to learn and build on top of what they have created. So thank you to the following open source heroes: Nithya Ruff, Nick Karpov, Chandrapal Badshah, Walter McCain II, Kyaw Soe Hlaing, Sai Kiran Akula, Samiullah Mohammed, Masudur Rahaman Sayem, Akeef Khan, Bret Edwards, Roger Welin, Brittan DeYoung, Roberson Andrade, Damon Cortesi, Arthur Petitpierre, Andreas Cavagna, Sheetal Joshi, and Sai Vennam.

Latest open source projects

The great thing about open source projects is that you can review the source code. If you like the look of these projects, make sure you take a look at the code, and if it is useful to you, get in touch with the maintainer to provide feedback, suggestions, or even submit a contribution.

Tools

cfnctl
cfnctl (pronounced "cfn control" or "cloudformation control") is a great tool from Roger Welin that provides a CLI that brings the Terraform CLI experience to CloudFormation. With cfnctl you write CloudFormation templates as usual, but use the CLI workflow that you are already used to from Terraform, including apply, plan, destroy, and output. It is still a work in progress, so why not give it a try and provide Roger with feedback, or, even better, contribute back to the project. Very neat indeed.

ckia
ckia is a new tool from Brittan DeYoung that provides an open source alternative to AWS Trusted Advisor: it is intended to run opinionated checks against your cloud environment and provide recommendations. The full suite of AWS Trusted Advisor checks is the inspiration for this project, and the current focus is providing check parity with the current AWS Trusted Advisor offerings, but the README does state that these are early days for the project. It is worth reading the Reddit thread where Brittan discusses why he created this tool and how it compares to others that provide similar capabilities.

AWS-ED
AWS-ED is a new project from Bret Edwards that I seem to have missed when it was first published (sorry, Bret). It is intended to allow you to leverage your own domain name to keep track of your public IP address. If you are a residential customer, or a small or medium business that does not have a static IP at your office, and you want to be able to access resources remotely, this is something you may want to leverage. It uses AWS's Route 53 platform to update your personal IP address dynamically. The tool tracks your IP address locally, and if it detects a change in your IP, it makes a call to Route 53 (using boto) and updates your resource record (your A record) for your domain. This currently assumes that you are using a subdomain instead of the apex record.
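The heart of such a dynamic DNS updater is a single Route 53 API call. Here is a minimal sketch of the idea in Python with boto3; the hosted zone id, record name, and IP address are made-up placeholders, and AWS-ED's actual implementation may differ:

    import boto3

    # All identifiers below are hypothetical example values.
    HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"
    RECORD_NAME = "home.example.com."
    current_ip = "203.0.113.42"  # a real updater would detect this dynamically

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": "keep the A record in sync with the local public IP",
            "Changes": [{
                "Action": "UPSERT",  # creates the record or overwrites a stale IP
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": current_ip}],
                },
            }],
        },
    )

Run on a schedule (cron or a systemd timer), this keeps the subdomain pointing at whatever IP the ISP currently hands out.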
pg_gpt
pg_gpt is an experimental extension from the lovely folks at Cloudquery that brings the OpenAI API to your PostgreSQL, letting you run queries in human language. Check out the details and the short demo they have put together. You will need an OpenAI API key to use this and, as the README states, do not use this in production, as the plugin does send the schema info to OpenAI.

aiws
aiws is another project that has been inspired by ChatGPT. This time Hüseyin BABAL has put together a tool that provides an AI-driven AWS CLI to help you generate and use AWS commands to manage your resources in AWS. You will need an OpenAI API key to use this.

cw-logs-insights-gpt
cw-logs-insights-gpt is the final project this week that looks to integrate ChatGPT. This time it is AWS Community Builder Lars Jacobsson who has put together this Chrome extension that generates CloudWatch Logs Insights queries from ChatGPT prompts. As with the other projects that feature ChatGPT, you need to bring your own OpenAI key to use this. I have to say that I think this is a very useful tool, as I always end up having to google these queries, so this is going to save me a lot of time and effort. Lars is our special guest in the next episode of Build on Open Source, so make sure you put that on your calendar, as he will be talking about some of his open source projects.

transformers-neuronx
transformers-neuronx is an open source library built by the AWS Neuron team that helps run transformer decoder inference workflows using the AWS Neuron SDK. Currently it has examples for the GPT, GPT-J, and OPT model types. Find out more about how you can use this project to optimise how you run those models on AWS silicon innovations such as AWS Inferentia and AWS Trainium by reading the post "Deploy large language models on AWS Inferentia using large model inference containers".

Demos, Samples, Solutions and Workshops

semantic-search-aws-docs
semantic-search-aws-docs is a very promising sample project that demonstrates how to set up AWS infrastructure to perform semantic search and question answering on documents, using transformer machine learning models like BERT, RoBERTa, or GPT via the Haystack open source framework. What do you think of these kinds of tools for helping you find what you are looking for in your documentation? I would love to know, as I am putting together some thoughts in this area.

aws-chime-chat-demo
aws-chime-chat-demo is a side project that Roberson Andrade has been working on that provides an example of how to build a chat application for real-time messaging using the AWS Chime Messaging service. From his Reddit post: "The goal of this project is to use it as a reference for a future developer-friendly guide on how to use this Amazon service. I believe it would be of great use, since there are not many projects created for this purpose for Chime Messaging. The main point of the project is to create a demo of a real application that uses the AWS Chime Messaging service to work with real-time messaging using the AWS infrastructure itself. For this, a simple React frontend was developed using the Chakra UI component library to accelerate development. For message rendering, the React Virtuoso library was used to generate a virtualized list and enable infinite scrolling, loading messages as the user scrolls. To enable communication between the frontend and Chime, temporary AWS credentials need to be generated; for this, the authentication service offered by Cognito and the generation of credentials through the Cognito Identity Pool were used. Finally, to perform the action of creating a user in Chime Messaging, a trigger was configured that invokes a lambda when a Cognito account is confirmed. To create this entire infrastructure, Terraform was used as an IaC (Infrastructure as Code) platform to facilitate the creation, modification, and destruction of all infrastructure created in AWS. To exemplify the application flow, I created a diagram that shows the entire user creation flow and the communication between AWS services, as well as a video that shows the application in action."

emr-cli-examples
emr-cli-examples is a repo from my colleague Damon Cortesi that provides examples of how to use another project he put together, amazon-emr-cli, which we shared in an earlier edition. These examples show how the EMR CLI can be used to easily deploy a variety of different jobs to EMR Serverless and EMR on EC2. So dive in and explore the varied ways that you can deploy PySpark code to EMR, and how the EMR CLI can make it all as easy as a single command.

Bonus: when I put demos together, I often have to work with files that contain data. Those files are very often CSV formatted, and I have always looked out for a good primer on how to optimise working with this kind of source data. Well, I need look no longer, as Damon has put together a very cool tutorial, "Intro to Data Processing on AWS", that shows what happens when you read both raw TSV and optimised Apache Parquet data from S3 using Apache Spark. [hands-on]
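The TSV-versus-Parquet comparison is easy to try for yourself. A rough PySpark sketch of the two read paths (the bucket paths and the column name are made up; the real walkthrough is in Damon's tutorial):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("tsv-vs-parquet").getOrCreate()

    # Raw TSV: Spark has to parse text and scan every column on each read
    trips_tsv = (spark.read
                 .option("sep", "\t")
                 .option("header", "true")
                 .csv("s3://my-demo-bucket/raw-tsv/"))

    # Parquet: columnar and compressed, so Spark reads only the columns it needs
    trips_parquet = spark.read.parquet("s3://my-demo-bucket/optimized-parquet/")

    # The same aggregation typically touches far less data on the Parquet side
    trips_parquet.groupBy("vendor_id").count().show()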
AWS and Community blog posts

Delta Lake
Nick Karpov has put together the post "Introducing Support for Delta Lake Tables in AWS Lambda", which shares news about the AWS SDK for pandas (formerly AWS Data Wrangler) and how it now supports the deltalake package for read and write access to Delta Lake tables. The post provides a nice walkthrough to get you started and shows how you can build your own Lambda layer with deltalake. [hands-on]
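To get a feel for what this enables inside a Lambda function, here is a minimal sketch using the deltalake package directly; the table path and handler wiring are assumptions for illustration, and the post itself walks through building the actual Lambda layer:

    from deltalake import DeltaTable

    def handler(event, context):
        # Hypothetical S3 location of an existing Delta Lake table
        table = DeltaTable("s3://my-demo-bucket/delta/events")
        df = table.to_pandas()  # read the current snapshot into a DataFrame
        return {"rows": len(df)}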
Cloud Custodian
Cloud Custodian is an open source project that enables you to manage your cloud resources by filtering, tagging, and then applying actions to them. Chandrapal Badshah has put together the blog post "My Love Hate Relationship with Cloud Custodian", where he shares his experiences working with this open source tool over the past year. If you are thinking about using this project, or are maybe a seasoned user, take a look and see what you think.

gMSA on Linux
group Managed Service Account (gMSA) support comes via credentials-fetcher, an open sourced daemon that allows Linux users (with initial support on Amazon Linux, Fedora Linux, and Red Hat Enterprise Linux) to integrate with Microsoft Active Directory and use the Microsoft AD group Managed Service Accounts capability. In the post "AWS Now Supports Credentials-fetcher for gMSA on Amazon Linux", Walter McCain II, Kyaw Soe Hlaing, Sai Kiran Akula, and Samiullah Mohammed share more details on the use cases they see and provide an example of using an Active Directory domain-joined Linux server with gMSA. Warning: may contain Windows. [hands-on]

Apache Kafka
In the post "Connect to Amazon MSK Serverless from your on-premises network", Masudur Rahaman Sayem and Akeef Khan speak to my inner hybrid geek and explore the option of on-premises connectivity with MSK Serverless, and how, by establishing this integration, you can gain access to a wide range of real-time analytics use case possibilities and unlock the full potential of your data, whether it is in AWS or within your existing on-premises infrastructure. Nice. [hands-on]

Other posts and quick reads
- "Amazon EMR on EKS widens the performance gap: Run Apache Spark workloads faster and at lower cost" describes the benchmark setup and results of running Amazon EMR on the EKS environment
- "Push Amazon EMR step logs from Amazon EC2 instances to Amazon CloudWatch logs" shares how you can centralise the EMR step logs of your jobs in CloudWatch [hands-on]
- "Build Streamlit apps in Amazon SageMaker Studio" provides a hands-on example of hosting a Streamlit demo for an object detection task using Amazon Rekognition on SageMaker Studio [hands-on]
- "Multi-Cluster GitOps ー Cluster fleet provisioning and bootstrapping", a follow-up post in the series, shows you how to use a decentralised hub-and-spoke model to manage the lifecycle of Amazon EKS clusters using Crossplane and Flux [hands-on]

Quick updates

Kubernetes
The Amazon Elastic Kubernetes Service (Amazon EKS) team announced support for a new Kubernetes version for Amazon EKS and Amazon EKS Distro, just in time for KubeCon next week. You can dive deeper by reading the launch post, in which Leah Tucker looks at some of the key features you need to know about.

Apache Airflow
You can now create environments with the latest minor release of Apache Airflow on Amazon Managed Workflows for Apache Airflow (MWAA). Apache Airflow is the popular open source tool that helps customers author, schedule, and monitor workflows. With the latest Apache Airflow version on Amazon MWAA, customers can enjoy the same scalability, availability, security, and ease of management that Amazon MWAA offers, together with Apache Airflow improvements such as annotations for DAG runs and task instances, auto-refresh for the task log view, and a better dataset user interface. The new version on Amazon MWAA also includes an updated Python version and comes pre-installed with the recently released Amazon Provider Package, enabling access to new AWS integrations such as Amazon SageMaker Pipelines, Amazon SageMaker Model Registry, and Amazon EMR Notebooks.

PostgreSQL
A few updates this week for PostgreSQL users.

Amazon Relational Database Service (Amazon RDS) for PostgreSQL now supports Amazon RDS Optimized Reads for up to two times faster query processing compared to previous-generation instances. Complex queries that utilise temporary tables, such as queries involving sorts, hash aggregations, high-load joins, and Common Table Expressions (CTEs), can now execute up to two times faster with Optimized Reads on RDS for PostgreSQL. Optimized-Reads-enabled instances achieve faster query processing by placing temporary tables generated by PostgreSQL on local NVMe-based SSD block-level storage, thereby reducing your traffic to Elastic Block Storage (EBS) over the network. Refer to our recent blog post to learn more about performance improvements using local disk-based database instances for workloads that have highly concurrent read/write processing. Amazon RDS Optimized Reads is available by default on recent RDS for PostgreSQL versions, on Intel-based M5d and R5d instances and on AWS Graviton-based M6gd and R6gd database (DB) instances, all with NVMe-based SSD block-level storage and high network bandwidth. You can configure these disk-based DB instances as Multi-AZ DB clusters, Multi-AZ DB instances, and Single-AZ DB instances. If you want to dive deeper, make sure you read the blog post from Naga Appani, "Introducing Optimized Reads for Amazon RDS for PostgreSQL".

Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL major version 15. New features in PostgreSQL 15 include the SQL-standard MERGE command for conditional SQL queries, performance improvements for both in-memory and disk-based sorting, and support for two-phase commit and row/column filtering for logical replication. Following the open source community announcement of updates to the PostgreSQL database, we also updated Amazon Aurora PostgreSQL-Compatible Edition to support the latest minor versions; these releases contain bug fixes and improvements by the PostgreSQL community. Refer to the Aurora version policy to help you decide how often and how to plan your upgrade, and remember that if you are running an older major version of Amazon Aurora PostgreSQL, you must upgrade to a newer major version by the announced end-of-support date. This release also contains optimisations that improve database availability during patching or minor version upgrades (the optimisations are used when applying patches or newer minor version upgrades to recent Aurora PostgreSQL versions), as well as logical replication performance improvements and new features for Babelfish for Aurora PostgreSQL.
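The MERGE command mentioned above is the headline feature, replacing the usual separate UPDATE-then-INSERT dance. Here is a small sketch of running it from Python via psycopg2; the connection string and the inventory/stock_updates tables are hypothetical:

    import psycopg2

    conn = psycopg2.connect("dbname=app user=app")  # placeholder connection details

    with conn, conn.cursor() as cur:
        # Upsert in a single statement: update matching rows, insert the rest
        cur.execute("""
            MERGE INTO inventory AS t
            USING stock_updates AS s ON t.sku = s.sku
            WHEN MATCHED THEN
                UPDATE SET qty = t.qty + s.qty
            WHEN NOT MATCHED THEN
                INSERT (sku, qty) VALUES (s.sku, s.qty)
        """)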
MySQL
A couple of important updates from last week.

Amazon Relational Database Service (Amazon RDS) for MySQL now supports inbound replication from Amazon RDS Single-AZ database (DB) instances and Amazon RDS Multi-AZ DB instances with one standby to Amazon RDS Multi-AZ deployments with two readable standby DB instances. You can use this inbound replication to help migrate your existing Amazon RDS MySQL deployments within minutes to Amazon RDS Multi-AZ deployments with two readable standby DB instances, which have one writer instance and two readable standby instances across three availability zones. By creating a Multi-AZ deployment with two readable standby DB instances as a read replica of your existing RDS MySQL database instance, you can promote the read replica to be your new primary, typically within minutes. Amazon RDS Multi-AZ deployments provide enhanced availability and durability, making them a natural fit for production database workloads. Deployments of Amazon RDS Multi-AZ with two readable standby database instances support faster transaction commit latencies than a Multi-AZ deployment with one standby instance, and in this configuration automated failovers typically complete in well under a minute. In addition, the two readable standbys can also serve read traffic without needing to attach additional read replicas.

Amazon RDS for MySQL now also supports additional asynchronous read replicas for Amazon RDS Multi-AZ deployments with two readable standbys, delivering a multiple of the previous read capacity. Amazon RDS Multi-AZ deployments with two readable standbys have one writer instance and two readable standby instances across three availability zones; you can now create additional asynchronous read replicas outside the cluster, in addition to the two reader instances, thereby scaling up your read capacity. Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. For read-heavy database workloads, the additional read replicas provide the option to elastically scale out beyond the capacity constraints of the two readable instances inside the Multi-AZ deployment with two readable standbys. You can create one or more read replicas and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted and modified to be a Multi-AZ deployment with two readable standbys when needed.
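The promotion step of such a migration can be scripted. A hedged boto3 sketch follows; the instance identifier is made up, and the exact flow for Multi-AZ deployments with two readable standbys should be checked against the RDS documentation:

    import boto3

    rds = boto3.client("rds")

    # Promote an existing read replica to a standalone primary
    rds.promote_read_replica(DBInstanceIdentifier="myapp-multiaz-replica")

    # Block until the promoted instance is available again
    waiter = rds.get_waiter("db_instance_available")
    waiter.wait(DBInstanceIdentifier="myapp-multiaz-replica")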
Amazon EMR
Last week we announced support for running Apache Spark with a newer Java runtime on EMR on EKS. Amazon EMR on EKS enables customers to run open source big data frameworks such as Apache Spark on Amazon EKS, and customers can now select the newer Java version as a supported Java runtime for their Spark workloads. Until now, Amazon EMR on EKS ran Spark with the default Java runtime, and in order to run Spark with a newer Java version, customers needed to create a custom image and install the newer Java runtime to replace the default. This required additional engineering effort when a customer first started to use Amazon EMR on EKS, and again whenever they upgraded to a new release version. With this new feature, Amazon EMR on EKS supports launching Spark with the newer Java runtime by simply passing in a new release label.
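Submitting a job against a specific release label looks roughly like this with boto3. Everything here is a placeholder: the cluster id, role ARN, script location, and in particular the release label, for which the EMR on EKS release notes list the values that select a given runtime:

    import boto3

    emr = boto3.client("emr-containers")

    emr.start_job_run(
        name="spark-pi",
        virtualClusterId="abc123examplecluster",  # placeholder
        executionRoleArn="arn:aws:iam::111122223333:role/emr-eks-job-role",  # placeholder
        releaseLabel="emr-6.10.0-latest",  # the release label selects the runtime
        jobDriver={
            "sparkSubmitJobDriver": {
                "entryPoint": "s3://my-demo-bucket/jobs/pi.py",
            }
        },
    )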
Videos of the week

Hands on with EKS Networking (Amazon EKS Workshop)
Join Sheetal Joshi and Sai Vennam as they dive into a hands-on demo focusing on the Networking module in the all-new Amazon EKS Workshop.

Java on Graviton: How to Use Amazon Corretto
Java is one of the most popular languages when running applications on AWS. Did you know about Amazon Corretto, a no-cost, multi-platform, production-ready distribution of the Open Java Development Kit? Did you know that it runs on Graviton-based Amazon EC2 instances? During this session, Arthur Petitpierre shares more info about Amazon Corretto and how to install it on Graviton-based Amazon EC2 instances.

Nithya Ruff from Amazon on Building Successful Open Source Businesses
Nithya Ruff heads up Amazon's Open Source Program Office, and in this video podcast she talks about various aspects of open source with hosts Avi Press and Matt Yonkovit. They cover a broad range of topics, including the challenges facing open source today, how to evaluate new open source projects, the importance of Open Source Program Offices (OSPOs) for startups, building successful and sustainable open source businesses, reducing friction for developers, open source diversity, managing diverse talent, competing ideals in open source governance, and more.

Build on Open Source
If you missed the fifth episode, which we streamed live last Friday: special guest and AWS Community Builder Andreas Cavagna came on the show to talk about Leapp, a tool that anyone who develops on AWS needs to know about. Andreas shares his passion for automation and open source, and how they combined to create this project. Catch the replay here.

For those unfamiliar with this show, Build on Open Source is where we go over this newsletter and then invite special guests to dive deep into their open source project. Expect plenty of code, demos, and hopefully laughs. We have put together a playlist so that you can easily access all eight of the episodes of the Build on Open Source show: the Build on Open Source playlist.

Events for your diary
If you are planning any events, whether virtual, in person, or hybrid, get in touch, as I would love to share details of your event with readers.

AWS at KubeCon + CloudNativeCon Europe (April, live on twitch.tv/aws)
AWS Container Day ft. Kubernetes at KubeCon + CloudNativeCon Europe is a day-long virtual event dedicated to helping Kubernetes practitioners optimise their workloads and reduce their Ops burden. AWS and guest speakers will dive deep into the latest trends, techniques, and best practices for deploying, managing, securing, and scaling with Kubernetes. The day will feature new solution demos and interactive challenges designed to provide hands-on experience and practical insights. Attendees will walk away with new tools, mental models, and resources to innovate, optimise, and scale their applications. Check out the amazing schedule that has been published on the event page, AWS at KubeCon + CloudNativeCon Europe, and register to set yourself a reminder.

AWS Community Day Nordics (April, Helsinki)
The AWS Community Day Nordics is a free, full-day event for AWS users to come together to network, learn from each other, and get inspired. The event is organised by the community, for the community. The CFP is currently open, so if you are in the area and want to talk, here is your chance. Check out the full event details and save your space via the AWS Community Nordics registration page.

Reducing the costs of your openCypher applications (May, online)
openCypher is an open source project for creating graph applications. Neptune supports the openCypher graph query language, and in this webinar you will learn more about the cost benefits of moving openCypher workloads to Neptune serverless; with Neptune serverless, customers can see significant cost savings compared to provisioning for peak capacity. A demo of Neptune in action will be included in this session. Head over to the YouTube holding page: "Reducing the costs of your openCypher applications".

Cortex (every other Thursday)
The Cortex community call happens every two weeks on Thursday, at alternating UTC-friendly times. You can check out the GitHub project for more details (go to the Community Meetings section). The community calls keep a rolling doc of previous meetings, so you can catch up on the previous discussions; check the Cortex Community Meetings Notes for more info.

OpenSearch (every other Tuesday, GMT)
This regular meetup is for anyone interested in OpenSearch & Open Distro. All skill levels are welcome, and they cover and welcome talks on topics including search, logging, log analytics, and data visualisation. Sign up for the next session via the OpenSearch Community Meeting page.

Stay in touch with open source at AWS
Remember to check out the Open Source homepage to keep up to date with all our activity in open source, and follow us on @AWSOpen |
2023-04-17 07:57:54 |
Overseas TECH |
DEV Community |
CI Pipelines for dockerized PHP Apps with Github & Gitlab [Tutorial Part 7] |
https://dev.to/pascallandau/ci-pipelines-for-dockerized-php-apps-with-github-gitlab-tutorial-part-7-5gc2
|
CI Pipelines for dockerized PHP Apps with Github amp Gitlab Tutorial Part How to setup CI Continuous Integration pipelines for dockerized PHP applications with Github Actions and Gitlab PipelinesThis article appeared first on at CI Pipelines for dockerized PHP Apps with Github amp Gitlab Tutorial Part In the seventh part of this tutorial series on developing PHP on Docker we will setup a CI Continuous Integration pipeline to run code quality tools and tests on Github Actions and Gitlab Pipelines All code samples are publicly available in my Docker PHP Tutorial repository on Github You find the branch for this tutorial at part ci pipeline docker php gitlab github All published parts of the Docker PHP Tutorial are collected under a dedicated page at Docker PHP Tutorial The previous part was Use git secret to encrypt secrets in the repository and the following one is A primer on GCP Compute Instance VMs for dockerized Apps If you want to follow along please subscribe to the RSS feed or via email to get automatic notifications when the next part comes out Table of contentsIntroductionRecommended readingApproachTry it yourselfCI setupGeneral CI notesInitialize make for CIwait for service shSetup for a local CI runRun detailsExecution exampleSetup for Github ActionsThe Workflow fileSetup for Gitlab PipelinesThe gitlab ci yml pipeline filePerformanceThe caching problem on CIDocker changesCompose file updatesdocker compose local ymldocker compose ci ymlAdding a health check for mysqlBuild target ciBuild stage ci in the php base imageUse the whole codebase as build contextBuild the dependenciesCreate the final imageBuild stage ci in the application image dockerignoreMakefile changesInitialize the shared variablesENV based docker compose configCodebase changesAdd a test for encrypted filesAdd a password protected secret gpg keyCreate a JUnit report from PhpUnitWrapping up IntroductionCI is short for Continuous Integration and to me mostly means running the code quality tools and tests of a codebase in an isolated environment preferably automatically This is particularly important when working in a team because the CI system acts as the final gatekeeper before features or bugfixes are merged into the main branch I initially learned about CI systems when I stubbed my toes into the open source water Back in the day I used Travis CI for my own projects and replaced it with Github Actions at some point At ABOUT YOU we started out with a self hosted Jenkins server and then moved on to Gitlab CI as a fully managed solution though we use custom runners Recommended readingThis tutorial builds on top of the previous parts I ll do my best to cross reference the corresponding articles when necessary but I would still recommend to do some upfront reading on the general folder structure the update of the docker directory and the introduction of a make directorythe general usage of make and it s evolution as well as the connection to docker compose commandsthe concepts of the docker containers and the docker compose setupAnd as a nice to know the setup of PhpUnit for the test make target as well as the qa make targetthe usage of git secret to handle secret values ApproachIn this tutorial I m going to explain how to make our existing docker setup work with Github Actions and Gitlab CI CD Pipelines As I m a big fan of a progressive enhancement approach we will ensure that all necessary steps can be performed locally through make This has the additional benefit of keeping a single source of truth the Makefile which 
will come in handy when we set up the CI system on two different providers Github and Gitlab The general process will look very similar to the one for local development build the docker setupstart the docker setuprun the qa toolsrun the testsYou can see the final results in the CI setup section including the concrete yml files and links to the repositories seeSetup for a local CI runSetup for Github ActionsSetup for Gitlab PipelinesOn a code level we will treat CI as an environment configured through the env variable ENV So far we only used ENV local and we will extend that to also use ENV ci The necessary changes are explained after the concrete CI setup instructions in the sectionsDocker changesMakefile changesCodebase changes Try it yourselfTo get a feeling for what s going on you can start by executing the local CI run checkout branch part ci pipeline docker php gitlab githubinitialize makerun the local ci sh script This should give you a similar output as presented in the Execution example git checkout part ci pipeline docker php gitlab github Initialize makemake make init Execute the local CI runbash local ci sh CI setup General CI notes Initialize make for CIAs a very first step we need to configure the codebase to operate for the ci environment This is done through the make init target as explained later in more detail in the Makefile changes section viamake make init ENVS ENV ci TAG latest EXECUTE IN CONTAINER true GPG PASSWORD make make init ENVS ENV ci TAG latest EXECUTE IN CONTAINER true GPG PASSWORD Created a local make env fileENV ci ensures that weuse the correct docker compose config filesuse the ci build targetTAG latest is just a simplification for now because we don t do anything with the images yet In an upcoming tutorial we will push them to a container registry for later usage in production deployments and then set the TAG to something more meaningful like the build number EXECUTE IN CONTAINER true forces every make command that uses a RUN IN CONTAINER setup to run in a container This is important because the Gitlab runner will actually run in a docker container itself However this would cause any affected target to omit the DOCKER COMPOSER exec prefix GPG PASSWORD is the password for the secret gpg key as mentioned in Add a password protected secret gpg key wait for service shI ll explain the container is up and running but the underlying service is not problem for the mysql service and how we can solve it with a health check later in this article at Adding a health check for mysql On purpose we don t want docker compose to take care of the waiting because we can make better use of the waiting time and will instead implement it ourselves with a simple bash script located at docker scripts wait for service sh bin bashname max interval z amp amp echo Usage example bash wait for service sh mysql z amp amp max z amp amp interval echo Waiting for service name to become healthy checking every interval second s for max max times while true do i echo i max status docker inspect format json State Health Status docker ps filter name name q if echo status grep q healthy then echo SUCCESS break fi if i max then echo FAIL exit fi sleep interval doneThis script waits for a docker service to become healthy by checking the State Health Status info of the docker inspect command CAUTION The script uses docker ps filter name name q to determine the id of the container i e it will match all running containers against the name this would fail if there is more than one matching container 
I e you must ensure that name is specific enough to identify one single container uniquely The script will check up to max times in a interval of interval seconds See these answers on the How do I write a retry logic in script to keep retrying to run it up to times question for the implementation of the retry logic To check the health of the mysql service for times with seconds between each try it can be called viabash wait for service sh mysql Output bash wait for service sh mysql Waiting for service mysql to become healthy checking every second s for max times FAIL OR bash wait for service sh mysql Waiting for service mysql to become healthy checking every second s for max times SUCCESSThe problem of container dependencies isn t new and there are already some existing solutions out there e g wait forwait for itdockerizedocker compose waitBut unfortunately all of them operate by checking the availability of a host port combination and in the case of mysql that didn t help because the container was up the port was reachable but the mysql service in the container was not Setup for a local CI runAs mentioned under Approach we want to be able to perform all necessary steps locally and I created a corresponding script at local ci sh bin bash fail on any error see set emake docker down ENV ci truestart total date s STORE GPG KEYcp secret protected gpg example secret gpg DEBUGdocker versiondocker compose versioncat etc release true SETUP DOCKERmake make init ENVS ENV ci TAG latest EXECUTE IN CONTAINER true GPG PASSWORD start docker build date s make docker buildend docker build date s mkdir p build amp amp chmod build START DOCKERstart docker up date s make docker upend docker up date s make gpg initmake secret decrypt with password QAstart qa date s make qa FAILED trueend qa date s WAIT FOR CONTAINERSstart wait for containers date s bash docker scripts wait for service sh mysql end wait for containers date s TESTstart test date s make test FAILED trueend test date s end total date s RUNTIMESecho Build docker expr end docker build start docker build echo Start docker expr end docker up start docker up echo QA expr end qa start qa echo Wait for containers expr end wait for containers start wait for containers echo Tests expr end test start test echo echo Total expr end total start total CLEANUP reset the default make variablesmake make initmake docker down ENV ci true EVALUATE RESULTSif FAILED true then echo FAILED exit fiecho SUCCESS Run detailsas a preparation step we first ensure that no outdated ci containers are running this isonly necessary locally because runners on a remote CI system will start from scratch make docker down ENV ci truewe take some time measurements to understand how long certain parts take via start total date s to store the current timestampwe need the secret gpg key in order to decrypt the secrets and simply copy thepassword protected example key in the actual CI systems the key will be configured as a secret value that is injected in the run STORE GPG KEY cp secret protected gpg example secret gpgI like printing some debugging info in order to understand which exact circumstanceswe re dealing with tbh this is mostly relevant when setting the CI system up or makingmodifications to it DEBUG docker version docker compose version cat etc release truefor the docker setup we start withinitializing the environment for ci SETUP DOCKER make make init ENVS ENV ci TAG latest EXECUTE IN CONTAINER true GPG PASSWORD then build the docker setup make docker buildand finally add a 
build directory to collect the build artifacts mkdir p build amp amp chmod buildthen the docker setup is started START DOCKER make docker upand gpg is initialized so that the secrets can be decrypted make gpg init make secret decrypt with passwordWe don t need to pass a GPG PASSWORD to secret decrypt with password because we have set it up in the previous step as a default value via make initonce the application container is running the qa tools are run by invoking theqa make target QA make qa FAILED trueThe FAILED true part makes sure that the script will not be terminated if the checks fail Instead the fact that a failure happened is recorded in the FAILED variable so that we can evaluate it at the end We don t want the script to stop here because we want the following steps to be executed as well e g the tests to mitigate the mysql is not ready problem we will now apply thewait for service sh script WAIT FOR CONTAINERS bash docker scripts wait for service sh mysql once mysql is ready we can execute the tests via the test make target and apply the same FAILED true workaround as for the qa tools TEST make test FAILED truefinally all the timers are printed RUNTIMES echo Build docker expr end docker build start docker build echo Start docker expr end docker up start docker up echo QA expr end qa start qa echo Wait for containers expr end wait for containers start wait for containers echo Tests expr end test start test echo echo Total expr end total start total we clean up the resources this is only necessary when running locally because the runner ofa CI system would be shut down anyway CLEANUP make make init make docker down ENV ci trueand finally evaluate if any error occurred when running the qa tools or the tests EVALUATE RESULTS if FAILED true then echo FAILED exit fi echo SUCCESS Execution exampleExecuting the script viabash local ci shyields the following shortened output bash local ci shContainer dofroscra ci redis Stopping Stopping all other ci containers Client Cloud integration v Version Print more debugging info Created a local make env fileENV ci TAG latest DOCKER REGISTRY docker io DOCKER NAMESPACE dofroscra APP USER NAME application APP GROUP NAME application docker compose p dofroscra ci env file docker env f docker docker compose docker compose php base yml build php base internal load build definition from Dockerfile Output from building the docker containers ENV ci TAG latest DOCKER REGISTRY docker io DOCKER NAMESPACE dofroscra APP USER NAME application APP GROUP NAME application docker compose p dofroscra ci env file docker env f docker docker compose docker compose local ci yml f docker docker compose docker compose ci yml up dNetwork dofroscra ci network Creating Starting all ci containers C Program Files Git mingw bin make s gpg import GPG KEY FILES secret gpg gpg directory home application gnupg createdgpg keybox home application gnupg pubring kbx createdgpg home application gnupg trustdb gpg trustdb createdgpg key DABBBBC public key Alice Doe protected lt alice protected example com gt imported Output of importing the secret and public gpg keys C Program Files Git mingw bin make s git secret ARGS reveal f p git secret done of files are revealed C Program Files Git mingw bin make j k no print directory output sync target qa exec NO PROGRESS truephplint done took sphpcs done took sphpstan done took scomposer require checker done took sWaiting for service mysql to become healthy checking every second s for max times SUCCESSPHPUnit StandWithUkraine Time Memory MBOK 
tests assertions Build docker Start docker QA Wait for containers Tests Total Created a local make env fileContainer dofroscra ci application StoppingContainer dofroscra ci mysql Stopping Stopping all other ci containers SUCCESS Setup for Github ActionsRepository branch part ci pipeline docker php gitlab github CI CD overview Actions Example of a successful jobExample of a failed jobIf you are completely new to Github Actions I recommend to start with the official Quickstart Guide for GitHub Actions and the Understanding GitHub Actions article In short Github Actions are based on so called WorkflowsWorkflows are yaml files that live in the special github workflows directory in therepositorya Workflow can contain multiple Jobseach Job consists of a series of Stepseach Step needs a run element that represents a command that is executed by a new shellmulti line commands that should use the same shell are written as run echo line echo line See also difference between run and multiple runs in github actions The Workflow fileGithub Actions are triggered automatically based on the files in the github workflows directory I have added the file github workflows ci yml with the following content name CI build and teston automatically run for pull request and for pushes to branch part ci pipeline docker php gitlab github see push branches part ci pipeline docker php gitlab github pull request enable to trigger the action manually see CAUTION there is a known bug that makes the button to trigger the run not show up see workflow dispatch jobs build runs on ubuntu latest steps uses actions checkout v name start timer run echo START TOTAL date s gt GITHUB ENV name STORE GPG KEY run Note make sure to wrap the secret in double quotes echo secrets GPG KEY gt secret gpg name SETUP TOOLS run DOCKER CONFIG DOCKER CONFIG HOME docker install docker compose see install on linux see issuecomment mkdir p DOCKER CONFIG cli plugins curl sSL uname m o DOCKER CONFIG cli plugins docker compose chmod x DOCKER CONFIG cli plugins docker compose name DEBUG run docker compose version docker version cat etc release name SETUP DOCKER run make make init ENVS ENV ci TAG latest EXECUTE IN CONTAINER true GPG PASSWORD secrets GPG PASSWORD make docker build mkdir build amp amp chmod build name START DOCKER run make docker up make gpg init make secret decrypt with password name QA run Run the tests and qa tools but only store the error instead of failing immediately see make qa echo FAILED qa gt gt GITHUB ENV name WAIT FOR CONTAINERS run We need to wait until mysql is available bash docker scripts wait for service sh mysql name TEST run make test echo FAILED test FAILED gt gt GITHUB ENV name RUNTIMES run echo expr date s START TOTAL name EVALUATE run Check if FAILED is NOT empty if z FAILED then echo Failed at FAILED amp amp exit fi name upload build artifacts uses actions upload artifact v with name build artifacts path buildThe steps are essentially the same as explained before at Run details for the local run Some additional notes I want the Action to be triggered automatically only when Ipush to branch part ci pipeline docker php gitlab githubOR when a pull request is created via pull request In addition I want to be able totrigger the Action manually on any branch via workflow dispatch on push branches part ci pipeline docker php gitlab github pull request workflow dispatch For a real project I would let the action only run automatically on long living branches like main or develop The manual trigger is helpful if you just want 
to test your current work without putting it up for review CAUTION There is a known issue that hides the Trigger workflow button to trigger the action manually a new shell is started for each run instruction thus we must store our timer in the global environment variable GITHUB ENV name start timer run echo START TOTAL date s gt GITHUB ENV This will be the only timer we use because the job uses multiple steps that are timed automatically so we don t need to take timestamps manually the gpg key is configured as anencrypted secret namedGPG KEY and is stored in secret gpg The value is the content of thesecret protected gpg example file name STORE GPG KEY run echo secrets GPG KEY gt secret gpgSecrets are configured in the Github repository under Settings gt Secrets gt Actions at user repository settings secrets actions e g the ubuntu latest image doesn t contain the docker compose plugin thus we need toinstall it manually name SETUP TOOLS run DOCKER CONFIG DOCKER CONFIG HOME docker mkdir p DOCKER CONFIG cli plugins curl sSL uname m o DOCKER CONFIG cli plugins docker compose chmod x DOCKER CONFIG cli plugins docker composefor the make initialization we need the second secret named GPG PASSWORD which isconfigured as in our case seeAdd a password protected secret gpg key name SETUP DOCKER run make make init ENVS ENV ci TAG latest EXECUTE IN CONTAINER true GPG PASSWORD secrets GPG PASSWORD because the runner will be shutdown after the run we need to move the build artifacts to apermanent location using theactions upload artifact v action name upload build artifacts uses actions upload artifact v with name build artifacts path buildYou can download the artifacts in the Run overview UI Setup for Gitlab PipelinesRepository branch part ci pipeline docker php gitlab github CI CD overview Pipelines Example of a successful jobExample of a failed jobIf you are completely new to Gitlab Pipelines I recommend to start with the official Get started with GitLab CI CD guide In short the core concept of Gitlab Pipelines is the Pipelineit is defined in the yaml file gitlab ci yml that lives in the root of the repositorya Pipeline can contain multiple Stageseach Stage consists of a series of Jobseach Job contains a script sectionthe script section consists of a series of shell commands The gitlab ci yml pipeline fileGitlab Pipelines are triggered automatically based on a gitlab ci yml file located at the root of the repository It has the following content stages build testQA and Tests stage build test rules automatically run for pull request and for pushes to branch part ci pipeline docker php gitlab github if CI PIPELINE SOURCE merge request event CI COMMIT BRANCH part ci pipeline docker php gitlab github see use docker in docker image docker services name docker dind script start total date s STORE GPG KEY cp GPG KEY FILE secret gpg SETUP TOOLS start install tools date s curl is required to download docker compose apk add no cache make bash curl install docker compose see install on linux mkdir p docker cli plugins curl sSL o docker cli plugins docker compose chmod x docker cli plugins docker compose end install tools date s DEBUG docker version docker compose version show linux distro info cat etc release SETUP DOCKER Pass default values to the make init command otherwise we would have to pass those as arguments to every make call make make init ENVS ENV ci TAG latest EXECUTE IN CONTAINER true GPG PASSWORD GPG PASSWORD start docker build date s make docker build end docker build date s mkdir build amp amp chmod 
build START DOCKER start docker up date s make docker up end docker up date s make gpg init make secret decrypt with password QA Run the tests and qa tools but only store the error instead of failing immediately see start qa date s make qa ENV ci FAILED true end qa date s WAIT FOR CONTAINERS We need to wait until mysql is available start wait for containers date s bash docker scripts wait for service sh mysql end wait for containers date s TEST start test date s make test ENV ci FAILED true end test date s end total date s RUNTIMES echo Tools expr end install tools start install tools echo Build docker expr end docker build start docker build echo Start docker expr end docker up start docker up echo QA expr end qa start qa echo Wait for containers expr end wait for containers start wait for containers echo Tests expr end test start test echo Total expr end total start total EVALUATE RESULTS Use if else constructs in Gitlab pipelines see if FAILED true then exit fi Save the build artifact e g the JUNIT report xml file so we can download it later see artifacts when always paths the quotes are required see comment build expire in weekThe steps are essentially the same as explained before under Run details for the local run Some additional notes we start by defining the stages of the pipeline though that s currently just one build test stages build testthen we define the job QA and Tests and assign it to the build test stage QA and Tests stage build testI want the Pipeline to be triggered automatically only when Ipush to branch part ci pipeline docker php gitlab githubOR when a pull request is createdTriggering the Pipeline manually on any branch is possible by default rules if CI PIPELINE SOURCE merge request event CI COMMIT BRANCH part ci pipeline docker php gitlab github since we want to build and run docker images we need to use a docker base image and activate thedocker dind service See Use Docker to build Docker images Use Docker in Docker image docker services name docker dindwe store the secret gpg key as a secret file using the file type in theCI CD variables configuration of the Gitlab repositoryand move it to secret gpg in order to decrypt the secrets later STORE GPG KEY cp GPG KEY FILE secret gpgSecrets can be configured under Settings gt CI CD gt Variables at project repository settings ci cd e g the docker base image doesn t come with all required tools thus we need to install themissing ones make bash curl and docker compose SETUP TOOLS apk add no cache make bash curl mkdir p docker cli plugins curl sSL o docker cli plugins docker compose chmod x docker cli plugins docker composefor the initialization of make we use the GPG PASSWORD variable that we defined in theCI CD settings SETUP DOCKER make make init ENVS ENV ci TAG latest EXECUTE IN CONTAINER true GPG PASSWORD GPG PASSWORD Note I have marked the variable as masked so it won t show up in any logsfinally we store the job artifacts artifacts when always paths build expire in week They can be accessed in the Pipeline overview UI PerformancePerformance isn t an issue right now because the CI runs take only about min Github Actions and min Gitlab Pipelines but that s mostly because we only ship a super minimal application and those times will go up when things get more complex For the local setup I used all cores of my laptop The time breakdown is roughly as follows StepGitlabGithublocal without cachelocal with cached imageslocal with cached images layersSETUP TOOLSSETUP DOCKERSTART DOCKERQAWAIT FOR CONTAINERSTESTStotal excl 
runner startup total incl runner startup Times taken from CI build and test Github Action run Pipeline Gitlab Pipeline run local without cache via bash local ci sh with no local images at all local with cached images via bash local ci sh with cached images for mysql and redis local with cached images layers via bash local ci sh with cached images for mysql andredis and a warm layer cache for the application imageOptimizing the performance is out of scope for this tutorial but I ll at least document my current findings The caching problem on CIA good chunk of time is usually spent on building the docker images We did our best to optimize the process by leveraging the layer cache and using cache mounts see section Build stage ci in the php base image But those steps are futile on CI systems because the corresponding runners will start from scratch for every CI run i e there is no local cache that they could use In consequence the full docker setup is also built from scratch on every run There are ways to mitigate that e g pushing images to a container registry and pulling them before building the images to leveragethe layer cache via the cache from optionof docker composeexporting and importing the images as tar archives viadocker save anddocker load storing them either in the built in cache ofGithubor Gitlabsee also the satackey action docker layer caching v Github Actionand the official actions cache v Github Actionusing the cache fromand cache to options ofbuildxsee also the cache docu of the docker build push action v Github ActionBut None of that worked for me out of the box We will take a closer look in an upcoming tutorial Some reading material that I found valuable so far Caching Docker builds in GitHub Actions Which approach is the fastest A research Caching strategies for CI systemsBuild images on GitHub Actions with Docker layer cachingFaster CI Builds with Docker Layer Caching and BuildKitImage rebase and improved remote cache support in new BuildKit Docker changesAs a first step we need to decide which containers are required and how to provide the codebase Since our goal is running the qa tools and tests we only need the application php container The tests also need a database and a queue i e the mysql and redis containers are required as well whereas nginx php fpm and php worker are not required We ll handle that through dedicated docker compose configuration files that only contain the necessary services This is explained in more detail in section Compose file updates In our local setup we have sheen the host system and docker mainly because we wanted our changes to be reflected immediately in docker This isn t necessary for the CI use case In fact we want our CI images as close as possible to our production images and those should contain everything to run independently I e the codebase should live in the image not on the host system This will be explained in section Use the whole codebase as build context Compose file updatesWe will not only have some differences between the CI docker setup and the local docker setup different containers but also in the configuration of the individual services To accommodate for that we will use the following docker compose config files in the docker docker compose directory docker compose local ci yml holds configuration that is valid for local and ci trying to keep the config files DRYdocker compose ci yml holds configuration that is only valid for cidocker compose local yml holds configuration that is only valid for localWhen using 
docker compose we then need to make sure to include only the required files e g for ci docker compose f docker compose local ci yml f docker compose ci ymlI ll explain the logic for that later in section ENV based docker compose config In short docker compose local ymlWhen comparing ci with local for ciwe don t need to share the codebase with the host system application volumes APP CODE PATH HOST APP CODE PATH CONTAINER we don t need persistent volumes for the redis and mysql data mysql volumes mysql var lib mysql redis volumes redis datawe don t need to share ports with the host system application ports APPLICATION SSH HOST PORT redis ports REDIS HOST PORT we don t need any settings for local dev tools like xdebug or strace application environment PHP IDE CONFIG PHP IDE CONFIG cap add SYS PTRACE security opt seccomp unconfined extra hosts host docker internal host gateway So all of those config values will only live in the docker compose local yml file docker compose ci ymlIn fact there are only two things that ci needs that local doesn t a bind mount to share only the secret gpg key from the host with the application container application volumes APP CODE PATH HOST secret gpg APP CODE PATH CONTAINER secret gpg roThis is required to decrypt the secrets the private key has to be named secret gpg and put in the root of the codebase so that the import can be simplified with make targetsThe secret files themselves are baked into the image but the key to decrypt them will be provided only during runtime and a bind mount to share a build folder for build artifacts with the application container application volumes APP CODE PATH HOST build APP CODE PATH CONTAINER buildThis will be used to collect any files we want to retain from a build e g code coverage information log files etc Adding a health check for mysqlWhen running the tests for the first time on a CI system I noticed some weird errors related to the database Tests Feature App Http Controllers HomeControllerTest test invoke with data set default array lt li gt lt a href dispatch fo gt lt li gt PDOException SQLSTATE HY Connection refusedAs it turned out the mysql container itself was up and running but the mysql process within the container was not yet ready to accept connections Locally this hasn t been a problem because we usually would not run the tests immediately after starting the containers but on CI this is the case Fortunately docker compose has us covered here and provides a healtcheck configuration option healthcheck declares a check that s run to determine whether or not containers for this service are healthy Since this healthcheck is also valid for local I defined it in the combined docker compose local ci yml file mysql healthcheck Only mark the service as healthy if mysql is ready to accept connections Check every seconds for times each check has a timeout of s test mysqladmin ping h u MYSQL USER password MYSQL PASSWORD timeout s retries interval sThe script in test was taken from SO Docker compose check if mysql connection is ready When starting the docker setup docker ps will now add a health info to the STATUS make docker up docker psCONTAINER ID IMAGE STATUS NAMESbebfc dofroscra application ci latest Up seconds dofroscra ci application efde mysql Up seconds health starting dofroscra ci mysql a couple of seconds later docker psCONTAINER ID IMAGE STATUS NAMESbebfc dofroscra application ci latest Up seconds dofroscra ci application efde mysql Up seconds healthy dofroscra ci mysql Note the health starting and healthy infos 
FYI: We could also use the depends_on property with a condition (service_healthy) on the application container, so that docker compose would only start the container once the mysql service is healthy:

```yaml
  application:
    depends_on:
      mysql:
        condition: service_healthy
```

However, this would block the `make docker-up` until mysql is actually up and running. In our case this is not desirable, because we can do other stuff in the meantime (namely run the qa checks, because they don't require a database) and thus save a couple of seconds on each CI run.

Build target ci

We've already introduced build targets in "Environments and build targets (and how to choose them through make)" with the ENV variable defined in a shared .make/.env file. Short recap:

- create a .make/.env file via `make make-init` that contains the ENV, e.g. ENV=ci
- the .make/.env file is included in the main Makefile, making the ENV variable available to make
- configure a DOCKER_COMPOSE variable that passes the ENV as an environment variable, i.e. via `ENV=$(ENV) docker compose`
- use the ENV variable in the docker compose configuration file to determine the `target` build property, e.g. in docker/docker-compose/docker-compose.php-base.yml:

```yaml
  php-base:
    build:
      target: ${ENV}
```

- in the Dockerfile of a service, define the ENV as a build stage, e.g. in docker/images/php-base/Dockerfile:

```dockerfile
FROM base as ci
```

So to enable the new ci environment, we need to modify the Dockerfiles of the php-base and the application image.

Build stage ci in the php-base image

Use the whole codebase as build context

As mentioned in the section "Docker changes", we want to bake the codebase into the ci image of the php-base container. Thus we must change the `context` property in docker/docker-compose/docker-compose.php-base.yml to not only use the docker directory but instead the whole codebase, i.e. don't use `../` but `../../`:

File: docker/docker-compose/docker-compose.php-base.yml

```yaml
  php-base:
    build:
      # pass the full codebase to docker for building the image
      context: ../../
```

Build the dependencies

The composer dependencies must be set up in the image as well, so we introduce a new stage in docker/images/php-base/Dockerfile. The most trivial solution would look like this:

```dockerfile
# copy the whole codebase, run composer install
FROM base as ci
COPY . /codebase
RUN composer install --no-scripts --no-plugins --no-progress -o
```

However, this approach has some downsides:

- if any file in the codebase changes, the `COPY . /codebase` layer will be invalidated, i.e. docker could not use the layer cache
- which also means that every layer afterwards cannot use the cache as well. In consequence, the `composer install` would run every time, even when the composer.json file doesn't change
- composer itself uses a cache for storing dependencies locally, so it doesn't have to download dependencies that haven't changed. But since we run `composer install` in Docker, this cache would be thrown away every time a build finishes

To mitigate that, we can use `--mount=type=cache` to define a directory that docker will re-use between builds:

> Contents of the cache directories persists between builder invocations without invalidating the instruction cache.

Keeping those points in mind, we end up with the following instructions:

File: docker/images/php-base/Dockerfile

```dockerfile
FROM base as ci

# By only copying the composer files required to run "composer install", the layer will be
# cached and only invalidated when the composer dependencies are changed
COPY ./composer.json /dependencies/composer.json
COPY ./composer.lock /dependencies/composer.lock

# use a cache mount to cache the composer dependencies
# (this is essentially a cache that lives in Docker BuildKit, i.e. it has nothing to do with the host system)
RUN --mount=type=cache,target=/tmp/composer \
    cd /dependencies && \
    # COMPOSER_HOME=/tmp/composer sets the home directory of composer that
    # also controls where composer looks for the cache, so we don't have to
    # download dependencies again (if they are cached)
    COMPOSER_HOME=/tmp/composer composer install --no-scripts --no-plugins --no-progress -o

# copy the full codebase
COPY . /codebase

RUN mv /dependencies/vendor /codebase/vendor && \
    cd /codebase && \
    # remove files we don't require in the image to keep the image size small
    rm -rf docker && \
    # we need a git repository for git-secret to work (can be an empty one)
    git init
```

FYI: The `COPY . /codebase` step doesn't actually copy everything in the repository, because we have also introduced a .dockerignore file to exclude some files from being included in the build context; see the section ".dockerignore".

Some notes on the final RUN step:

- `rm -rf docker` doesn't really save that much in the current setup; please take it more as an example for removing any files that shouldn't end up in the final image (e.g. tests in a production image)
- the `git init` part is required because we need to decrypt the secrets later, and git-secret requires a git repository (which can be an empty one). We can't decrypt the secrets during the build, because we do not want decrypted secret files to end up in the image!

When tested locally, the difference between the trivial solution and the one that makes use of layer caching is a matter of seconds (see the results in the Performance section).
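Note that `--mount=type=cache` is a BuildKit feature, so the image has to be built with BuildKit enabled (the default in recent Docker versions; depending on your version you may also need a `# syntax=docker/dockerfile:1` directive at the top of the Dockerfile). The make targets take care of building via docker compose, but as a minimal sketch, a manual build of the ci target from the repository root could look like this (the image tag is an arbitrary pick, and any required build args are omitted for brevity):

```bash
# force-enable BuildKit for older Docker versions (newer ones use it by default)
DOCKER_BUILDKIT=1 docker build \
    --target ci \
    --file docker/images/php-base/Dockerfile \
    --tag dofroscra/php-base-ci:latest \
    .
```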
Create the final image

As a final step, we will rename the current stage to codebase and copy the build artifact from that stage into our final ci build stage:

```dockerfile
FROM base as codebase
# ... build the composer dependencies and clean up the copied files

FROM base as ci
COPY --from=codebase --chown=$APP_USER_NAME:$APP_GROUP_NAME /codebase $APP_CODE_PATH
```

Why are we not just using the previous stage directly as ci? Because using multi-stage builds is a good practice to keep the final layers of an image to a minimum: everything that happened in the previous codebase stage will be "forgotten", i.e. not exported as layers. That not only saves us some layers but also allows us to get rid of files like the docker directory. We needed that directory in the build context, because some files were required in other parts of the Dockerfile (e.g. the php ini files), so we can't exclude it via .dockerignore. But we can remove it in the codebase stage, so it will NOT be copied over and thus not end up in the final image. If we didn't have the codebase stage, the folder would be part of the layer created when COPYing all the files from the build context, and removing it via `rm -rf docker` would have no effect on the image size.

Currently that doesn't really matter, because the building step is super simple (just a composer install), but in a growing and more complex codebase you can easily save a couple of MB.

To be concrete: the multi-stage build has … layers, and the final layer containing the codebase has a size of …MB:

```
$ docker image history -H dofroscra/application-ci
IMAGE        CREATED          CREATED BY                                      SIZE    COMMENT
dceede…      … minutes ago    COPY /codebase /var/www/app # buildkit          …MB    buildkit.dockerfile.v0
<missing>    … minutes ago    WORKDIR /var/www/app                            0B      buildkit.dockerfile.v0
<missing>    … minutes ago    COPY /usr/bin/composer /usr/local/bin/compos…   …MB    buildkit.dockerfile.v0
<missing>    … minutes ago    COPY docker/images/php-base/.bashrc /root…      …B     buildkit.dockerfile.v0
<missing>    … minutes ago    COPY docker/images/php-base/.bashrc /home…      …B     buildkit.dockerfile.v0
<missing>    … minutes ago    COPY docker/images/php-base/conf.d/zz-app…      …B     buildkit.dockerfile.v0
<missing>    … minutes ago    COPY docker/images/php-base/conf.d/zz-app…      …B     buildkit.dockerfile.v0
<missing>    … minutes ago    RUN … APP_USER_ID=… APP_GROUP_ID=… …            …kB    buildkit.dockerfile.v0
<missing>    … minutes ago    RUN … APP_USER_ID=… APP_GROUP_ID=… …            …MB    buildkit.dockerfile.v0
<missing>    … minutes ago    ADD …                                           …B     buildkit.dockerfile.v0
<missing>    … minutes ago    RUN … APP_USER_ID=… APP_GROUP_ID=… …            …MB    buildkit.dockerfile.v0
<missing>    … minutes ago    ADD …                                           …B     buildkit.dockerfile.v0
<missing>    … minutes ago    RUN … APP_USER_ID=… APP_GROUP_ID=… …            …kB    buildkit.dockerfile.v0
<missing>    … minutes ago    ENV ENV=ci                                      0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ENV ALPINE_VERSION=…                            0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ENV TARGET_PHP_VERSION=…                        0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ENV APP_CODE_PATH=/var/www/app                  0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ENV APP_GROUP_NAME=application                  0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ENV APP_USER_NAME=application                   0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ENV APP_GROUP_ID=…                              0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ENV APP_USER_ID=…                               0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ARG ENV                                         0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ARG ALPINE_VERSION                              0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ARG TARGET_PHP_VERSION                          0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ARG APP_CODE_PATH                               0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ARG APP_GROUP_NAME                              0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ARG APP_USER_NAME                               0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ARG APP_GROUP_ID                                0B      buildkit.dockerfile.v0
<missing>    … minutes ago    ARG APP_USER_ID                                 0B      buildkit.dockerfile.v0
<missing>    … days ago       /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>    … days ago       /bin/sh -c #(nop) ADD file:…dddaace…            …MB
```

The non-multi-stage build has … layers, and the final layers containing the codebase have a combined size of …MB (…MB + …MB):

```
$ docker image history -H dofroscra/application-ci
IMAGE        CREATED          CREATED BY                                      SIZE    COMMENT
baca…        … minutes ago    RUN /bin/sh -c COMPOSER_HOME=/tmp/composer …    …MB    buildkit.dockerfile.v0
<missing>    … minutes ago    COPY . /var/www/app # buildkit                  …MB    buildkit.dockerfile.v0
<missing>    … minutes ago    WORKDIR /var/www/app                            0B      buildkit.dockerfile.v0
… (the remaining layers are identical to the multi-stage variant above) …
```

Again: it is expected that the differences aren't big, because the only size savings come from the docker directory with a size of …kB:

```
$ du -hd 0 docker
…K    docker
```

Finally, we are also using the --chown option of the COPY instruction to ensure that the files have the correct permissions.
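As an aside: if the full history output is too noisy, `docker image history` also accepts a --format string, so a more compact view of just the layer sizes and the creating instructions could look like this (a sketch; the .Size and .CreatedBy placeholders are part of Docker's documented format syntax):

```bash
docker image history --format 'table {{.Size}}\t{{.CreatedBy}}' dofroscra/application-ci
```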
Build stage ci in the application image

There is actually nothing to be done here. We don't need SSH any longer, because it was only required for the SSH configuration of PhpStorm. So the build stage is simply "empty":

```dockerfile
ARG BASE_IMAGE
FROM ${BASE_IMAGE} as base

FROM base as ci

FROM base as local
```

Though there is one thing to keep in mind: in the local image we used sshd as the entrypoint, i.e. we had a long-running process that would keep the container running. To keep the ci application container running, we must

- start it via the -d flag of docker compose (already done in the make docker-up target):

```makefile
.PHONY: docker-up
docker-up: validate-docker-variables
	$(DOCKER_COMPOSE) up -d $(DOCKER_SERVICE_NAME)
```

- allocate a tty via tty: true in the docker-compose.local.ci.yml file:

```yaml
  application:
    tty: true
```

.dockerignore

The .dockerignore file is located in the root of the repository and ensures that certain files are kept out of the Docker build context. This will

- speed up the build, because fewer files need to be transmitted to the docker daemon
- keep images smaller, because irrelevant files are kept out of the image

The syntax is quite similar to the .gitignore file; in fact, I've found it to be quite often the case that the contents of the .gitignore file are a subset of the .dockerignore file. This kind of makes sense, because you typically wouldn't want files that are excluded from the repository to end up in a docker image (e.g. unencrypted secret files). This has also been noticed by others, see e.g.

- Reddit: "Any way to copy .gitignore contents to .dockerignore?"
- SO: "Should .dockerignore typically be a superset of .gitignore?"

but to my knowledge there is currently no way to keep the two files in sync (though see the sketch at the end of this section for a crude check).

CAUTION: The behavior between the two files is NOT identical. The documentation says:

> Matching is done using Go's filepath.Match rules. A preprocessing step removes leading and trailing whitespace and eliminates . and .. elements using Go's filepath.Clean. Lines that are blank after preprocessing are ignored.
>
> Beyond Go's filepath.Match rules, Docker also supports a special wildcard string ** that matches any number of directories (including zero). For example, **/*.go will exclude all files that end with .go that are found in all directories, including the root of the build context.
>
> Lines starting with ! (exclamation mark) can be used to make exceptions to exclusions.

Please note the part regarding **/*.go: in .gitignore it would be sufficient to write *.go to match any file ending in .go, regardless of the directory. In .dockerignore, you must specify it as **/*.go.

In our case, the content of the .dockerignore file looks like this:

```
# .gitignore
.env.example
.env
.idea
.phpunit.result.cache
vendor
secret.gpg
.gitsecret/keys/random_seed
.gitsecret/keys/pubring.kbx
secret/passwords.txt
.build

# additionally ignored files
.git
```
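Since there is no built-in way to keep .gitignore and .dockerignore in sync, a crude check can at least warn when a .gitignore entry is missing from .dockerignore. A minimal sketch (my own addition, not part of the repository): it compares plain lines and deliberately ignores the semantic differences between the two formats described above (e.g. *.go vs **/*.go):

```bash
#!/usr/bin/env bash
# Warn about .gitignore entries that don't appear verbatim in .dockerignore.
# Note: a naive line-by-line comparison; it does not translate between
# .gitignore and .dockerignore matching semantics.
missing=0
while IFS= read -r line; do
    # skip blank lines, comments and negated patterns
    case "$line" in ''|'#'*|'!'*) continue ;; esac
    if ! grep -qxF "$line" .dockerignore; then
        echo "missing in .dockerignore: $line"
        missing=1
    fi
done < .gitignore
exit "$missing"
```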
Makefile changes

Initialize the shared variables

We have introduced the concept of shared variables via the .make/.env file previously. It allows us to define variables in one place (single source of truth) that are then used as defaults, so we don't have to define them explicitly when invoking certain make targets like `make docker-build`.

We'll make use of this concept by setting the environment to ci via ENV=ci, thus making sure that all docker commands use ci automatically as well.

In addition, I made a small modification by introducing a second file at .make/variables.env that is also included in the main Makefile and holds the default shared variables. Those are neither secret, nor are they likely to be changed for environment adjustments. The file is NOT ignored by .gitignore and is basically just the previous .make/.env.example file without the environment-specific variables:

File: .make/variables.env

```
DOCKER_REGISTRY=docker.io
DOCKER_NAMESPACE=dofroscra
APP_USER_NAME=application
APP_GROUP_NAME=application
```

The .make/.env file is still gitignored and can be initialized with the make-init target using the ENVS variable:

```bash
make make-init ENVS="ENV=ci SOME_OTHER_DEFAULT_VARIABLE=foo"
```

which would create a .make/.env file with the content:

```
ENV=ci
SOME_OTHER_DEFAULT_VARIABLE=foo
```

If necessary, we could also override variables defined in the .make/variables.env file, because .make/.env is included last in the Makefile:

File: Makefile

```makefile
# include the default variables
include .make/variables.env

# include the local variables
-include .make/.env
```

The default value for ENVS is ENV=local TAG=latest, to retain the same default behavior as before (when ENVS is omitted). The corresponding make-init target is defined in the main Makefile and now looks like this:

```makefile
ENVS?=ENV=local TAG=latest
.PHONY: make-init
make-init: ## Initializes the local .make/.env file with ENV variables for make. Use via ENVS="KEY_1=value1 KEY_2=value2"
	@$(if $(ENVS),,$(error ENVS is undefined))
	@rm -f .make/.env
	@for variable in $(ENVS); do \
		echo $$variable | tee -a .make/.env > /dev/null 2>&1; \
	done
	@echo "Created a local .make/.env file"
```

ENV based docker compose config

As mentioned in the section "Compose file updates", we need to select the correct docker compose configuration files based on the ENV value. This is done in .make/docker.mk:

File: .make/docker.mk

```makefile
DOCKER_COMPOSE_DIR:=...
DOCKER_COMPOSE_COMMAND:=...

DOCKER_COMPOSE_FILE_LOCAL_CI:=$(DOCKER_COMPOSE_DIR)/docker-compose.local.ci.yml
DOCKER_COMPOSE_FILE_CI:=$(DOCKER_COMPOSE_DIR)/docker-compose.ci.yml
DOCKER_COMPOSE_FILE_LOCAL:=$(DOCKER_COMPOSE_DIR)/docker-compose.local.yml

# we need to assemble the correct combination of docker-compose.yml config files
ifeq ($(ENV),ci)
    DOCKER_COMPOSE_FILES:=-f $(DOCKER_COMPOSE_FILE_LOCAL_CI) -f $(DOCKER_COMPOSE_FILE_CI)
else ifeq ($(ENV),local)
    DOCKER_COMPOSE_FILES:=-f $(DOCKER_COMPOSE_FILE_LOCAL_CI) -f $(DOCKER_COMPOSE_FILE_LOCAL)
endif

DOCKER_COMPOSE:=$(DOCKER_COMPOSE_COMMAND) $(DOCKER_COMPOSE_FILES)
```

When we now take a look at a full recipe when using ENV=ci with a docker target (e.g. docker-up), we can see that the correct files are chosen, e.g. via `make docker-up ENV=ci -n`:

```
$ make docker-up ENV=ci -n
ENV=ci TAG=latest DOCKER_REGISTRY=docker.io DOCKER_NAMESPACE=dofroscra APP_USER_NAME=application APP_GROUP_NAME=application \
  docker compose -p dofroscra_ci --env-file ./docker/.env \
  -f ./docker/docker-compose/docker-compose.local.ci.yml \
  -f ./docker/docker-compose/docker-compose.ci.yml \
  up -d
```

Note the two -f options choosing docker-compose.local.ci.yml and docker-compose.ci.yml.
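Because .make/.env is included after .make/variables.env, a value defined in both files resolves to the one from .make/.env (in GNU make, a later plain assignment simply overrides an earlier one). A quick way to convince yourself could be a dry run; DOCKER_NAMESPACE is just an illustrative pick here:

```bash
# append an override to the local .make/.env ...
echo 'DOCKER_NAMESPACE=my-custom-namespace' >> .make/.env

# ... and verify via a dry run that the override is picked up in the recipe
make docker-up ENV=ci -n | grep DOCKER_NAMESPACE
```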
Codebase changes

Add a test for encrypted files

We've introduced git-secret in the previous tutorial "Use git secret to encrypt secrets in the repository" and used it to store the file passwords.txt encrypted in the codebase. To make sure that the decryption works as expected on the CI systems, I've added a test at tests/Feature/EncryptionTest.php to check if the file exists and if the content is correct:

```php
class EncryptionTest extends TestCase
{
    public function test_ensure_that_the_secret_passwords_file_was_decrypted()
    {
        $pathToSecretFile = __DIR__ . '/../../secret/passwords.txt';

        $this->assertFileExists($pathToSecretFile);

        $expected = "my_secret_password\n";
        $actual   = file_get_contents($pathToSecretFile);

        $this->assertEquals($expected, $actual);
    }
}
```

Of course, this doesn't make sense in a real-world scenario, because the secret value would now be exposed in a test, but it suffices for now as proof of a working secret decryption.

Add a password-protected secret gpg key

I've mentioned in "Scenario: Decrypt file" that it is also possible to use a password-protected secret gpg key for an additional layer of security. I have created such a key and stored it in the repository at secret-protected.gpg.example (in a real-world scenario I wouldn't do that, but since this is a public tutorial, I want you to be able to follow along completely). The password for that key is "…". The corresponding public key is located at dev/gpg-keys/alice-protected-public.gpg and belongs to the email address alice.protected@example.com. I've added this email address and re-encrypted the secrets afterwards via:

```bash
make gpg-init
make secret-add-user EMAIL="alice.protected@example.com"
make secret-encrypt
```

When I now import the secret-protected.gpg.example key, I can decrypt the secrets, though I cannot use the usual secret-decrypt target but must instead use secret-decrypt-with-password:

```bash
make secret-decrypt-with-password GPG_PASSWORD=…
```

or store the GPG_PASSWORD in the .make/.env file when it is initialized for CI:

```bash
make make-init ENVS="ENV=ci TAG=latest EXECUTE_IN_CONTAINER=true GPG_PASSWORD=…"
```
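Under the hood, a target like secret-decrypt-with-password boils down to importing the protected key non-interactively and then letting git-secret reveal the files with that password. A minimal sketch of those two steps (my own approximation, assuming the key file from above; the actual make target may differ):

```bash
# import the password-protected private key without interactive prompts
gpg --batch --yes --pinentry-mode loopback \
    --passphrase "$GPG_PASSWORD" \
    --import secret-protected.gpg.example

# decrypt the secrets: -f overwrites existing files, -p passes the key password
git secret reveal -f -p "$GPG_PASSWORD"
```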
Create a JUnit report from PhpUnit

I've added the --log-junit option to the phpunit configuration of the test make target in order to create an XML report in the .build directory, in the .make/application-qa.mk file:

File: .make/application-qa.mk

```makefile
PHPUNIT_CMD=php vendor/bin/phpunit
PHPUNIT_ARGS=-c phpunit.xml --log-junit .build/report.xml
```

I.e. each run of the tests will now create a JUnit XML report at .build/report.xml. The file is used as an example of a build artifact, i.e. something that we would like to keep from a CI run.

Wrapping up

Congratulations, you made it! If some things are not completely clear by now, don't hesitate to leave a comment. You should now have a working CI pipeline for GitHub (via GitHub Actions) and/or GitLab (via GitLab pipelines) that runs automatically on each push.

In the next part of this tutorial, we will create a VM on GCP and provision it to run dockerized applications. Please subscribe to the RSS feed or via email to get automatic notifications when this next part comes out. |
2023-04-17 07:17:19 |
Overseas TECH |
Engadget |
German artist refuses award after his AI image wins prestigious photography prize |
https://www.engadget.com/german-artist-refuses-award-after-his-ai-image-wins-prestigious-photography-prize-071322551.html?src=rss
|
German artist refuses award after his AI image wins prestigious photography prize: There's some controversy in the photography world, as an AI-generated image won a major prize at a prestigious competition, PetaPixel has reported. An AI-generated piece called "The Electrician" by Boris Eldagsen took first prize in the Creative category at the World Photography Organization's Sony World Photography Awards, despite not being taken by a camera. Eldagsen subsequently refused the award, saying "AI is not photography. I applied to find out if the competitions are prepared for AI images to enter. They are not." Eldagsen's image is part of a series called "PSEUDOMNESIA: Fake Memories", designed to evoke a photographic style of the 1940s. However, they are in reality "fake memories of a past, that never existed, that no one photographed." The images were imagined by language and re-edited repeatedly through AI image generators, combining inpainting, outpainting and "prompt whispering" techniques. In a blog, Eldagsen explained that he used his experience as a photographer to create the prize-winning image, acting as a director of the process with the AI generators as co-creators. Although the work is inspired by photography, he said that the point of the submission is that it is not photography: "Participating in open calls, I want to speed up the process of the Award organizers to become aware of this difference and create separate competitions for AI-generated images," he said. Eldagsen subsequently declined the prize: "Thank you for selecting my image and making this a historic moment, as it is the first AI-generated image to win in a prestigious international photography competition. How many of you knew or suspected that it was AI-generated? Something about this doesn't feel right, does it? AI images and photography should not compete with each other in an award like this. They are different entities. AI is not photography. Therefore I will not accept the award." Shortly thereafter, the photo was stripped from the show and the competition website, and organizers have yet to comment on the matter. Eldagsen actually traveled to London to attend the ceremony, and even got up on stage uninvited to read a statement in person. It's not clear if the organizers knew the work was AI-generated or not; Eldagsen said he told them it was. In any case, rather than shrinking from the situation, they should be embracing it. AI-generated art has entered the culture in a huge way over the past year, with AI winning both photo and art competitions over the past few months. Eldagsen's piece is bound to create conversations about how to handle it, particularly when it encroaches into traditional mediums. |
2023-04-17 07:13:22 |
Finance |
Bank of Japan: RSS |
BOJ current account balances by type of institution (March) |
http://www.boj.or.jp/statistics/boj/other/cabs/cabs.xlsx
|
BOJ current account balances |
2023-04-17 17:00:00 |
Overseas News |
Japan Times latest articles |
Explosive device thrown at Kishida could have been lethal, experts say |
https://www.japantimes.co.jp/news/2023/04/17/national/fumio-kishida-bomb-attack-analysis/
|
bodily |
2023-04-17 16:35:16 |
Overseas News |
Japan Times latest articles |
SDF working to recover three bodies after two from missing chopper officially confirmed dead |
https://www.japantimes.co.jp/news/2023/04/17/national/sdf-chopper-recovery-efforts/
|
SDF working to recover three bodies after two from missing chopper officially confirmed dead: The SDF was continuing efforts to locate the others who had been aboard the helicopter, after the first official confirmation of deaths since the chopper … |
2023-04-17 16:27:29 |
Overseas News |
Japan Times latest articles |
Fender amps up Japan presence with high-end retail strategy |
https://www.japantimes.co.jp/news/2023/04/17/business/corporate-business/fender-ceo-harajuku-store-strategy/
|
Fender amps up Japan presence with high-end retail strategy: The guitar maker is emphasizing its long-standing connections to Japan with a flagship store in Harajuku, its first ever, as it seeks to appeal to both … |
2023-04-17 16:13:55 |
News |
BBC News - Home |
'Ignorant' protesters blamed for Grand National death |
https://www.bbc.co.uk/sport/horse-racing/65296693?at_medium=RSS&at_campaign=KARANGA
|
'Ignorant' protesters blamed for Grand National death: Horse trainer Sandy Thomson says the interruption to the Grand National by "ignorant" animal rights activists contributed to Hill Sixteen's death. |
2023-04-17 07:04:00 |
News |
BBC News - Home |
Melbourne overtakes Sydney as Australia's biggest city |
https://www.bbc.co.uk/news/world-australia-65261720?at_medium=RSS&at_campaign=KARANGA
|
country |
2023-04-17 07:19:03 |
News |
BBC News - Home |
Sega to buy Angry Birds maker Rovio |
https://www.bbc.co.uk/news/business-65295724?at_medium=RSS&at_campaign=KARANGA
|
rovio |
2023-04-17 07:44:29 |
News |
BBC News - Home |
Women's Six Nations: Abby Dow and a brave little boy star in round three |
https://www.bbc.co.uk/sport/av/rugby-union/65295470?at_medium=RSS&at_campaign=KARANGA
|
Women's Six Nations: Abby Dow and a brave little boy star in round three: Watch the top five moments from the third round of the Women's Six Nations, with a brave little Wales fan and England's Abby Dow among the stars. |
2023-04-17 07:37:32 |
Marketing |
MarkeZine |
Enigol launches TikTok account management service; a dedicated team provides support from strategy planning through reporting |
http://markezine.jp/article/detail/41984
|
enigol |
2023-04-17 16:30:00 |
Marketing |
MarkeZine |
Osaka Metro Ad Era conducts eye-tracking study of OOH media in a virtual station space |
http://markezine.jp/article/detail/41997
|
Osaka Metro |
2023-04-17 16:15:00 |
IT |
週刊アスキー |
Screening lineup announced! Outdoor film event "SEASIDE CINEMA 2023" to be held at four venues in Yokohama, May 2-7 |
https://weekly.ascii.jp/elem/000/004/133/4133171/
|
seasidecinema |
2023-04-17 16:50:00 |