IT |
気になる、記になる… |
Possibly some of the "iPhone 15" models: Apple's unannounced product "A3094" obtains BIS certification in India |
https://taisy0.com/2023/08/17/175438.html
|
apple |
2023-08-16 15:34:23 |
AWS |
AWS Big Data Blog |
Implement a serverless CDC process with Apache Iceberg using Amazon DynamoDB and Amazon Athena |
https://aws.amazon.com/blogs/big-data/implement-a-serverless-cdc-process-with-apache-iceberg-using-amazon-dynamodb-and-amazon-athena/
|
Apache Iceberg is an open table format for very large analytic datasets. Iceberg manages large collections of files as tables, and it supports modern analytical data lake operations such as record-level insert, update, delete, and time travel queries. The Iceberg specification allows seamless table evolution, such as schema and partition evolution, and its design is … |
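The excerpt highlights record-level operations and time travel queries on Iceberg tables through Athena. Purely as a rough, hypothetical illustration of querying such a table from Node.js (not the blog's actual pipeline; the table, database, and S3 output location are placeholder names, and the FOR TIMESTAMP AS OF clause assumes Athena engine v3), a time-travel query could be issued with the AWS SDK for JavaScript:

```typescript
// Hypothetical sketch: run an Iceberg time-travel query on Athena with the
// AWS SDK for JavaScript v3. Table, database, and output bucket are placeholders.
import {
  AthenaClient,
  StartQueryExecutionCommand,
  GetQueryExecutionCommand,
} from "@aws-sdk/client-athena";

const athena = new AthenaClient({ region: "us-east-1" });

async function queryIcebergSnapshot(): Promise<string> {
  // Athena engine v3 supports FOR TIMESTAMP AS OF on Iceberg tables.
  const { QueryExecutionId } = await athena.send(
    new StartQueryExecutionCommand({
      QueryString: `
        SELECT * FROM orders_iceberg
        FOR TIMESTAMP AS OF TIMESTAMP '2023-08-01 00:00:00 UTC'
        LIMIT 10`,
      QueryExecutionContext: { Database: "analytics_db" },
      ResultConfiguration: { OutputLocation: "s3://my-athena-results/" },
    })
  );

  // Poll until the query finishes (simplified; real code should back off and time out).
  for (;;) {
    const { QueryExecution } = await athena.send(
      new GetQueryExecutionCommand({ QueryExecutionId })
    );
    const state = QueryExecution?.Status?.State;
    if (state === "SUCCEEDED") return QueryExecutionId!;
    if (state === "FAILED" || state === "CANCELLED") {
      throw new Error(`Query ${QueryExecutionId} ended in state ${state}`);
    }
    await new Promise((r) => setTimeout(r, 1000));
  }
}
```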
2023-08-16 15:44:55 |
AWS |
AWS Big Data Blog |
Derive operational insights from application logs using Automated Data Analytics on AWS |
https://aws.amazon.com/blogs/big-data/derive-operational-insights-from-application-logs-using-automated-data-analytics-on-aws/
|
Automated Data Analytics (ADA) on AWS is an AWS solution that enables you to derive meaningful insights from data in a matter of minutes through a simple and intuitive user interface. ADA offers an AWS-native data analytics platform that is ready to use out of the box by data analysts for a variety of use … |
2023-08-16 15:41:49 |
AWS |
AWS Machine Learning Blog |
How Thomson Reuters developed Open Arena, an enterprise-grade large language model playground, in under 6 weeks |
https://aws.amazon.com/blogs/machine-learning/how-thomson-reuters-developed-open-arena-an-enterprise-grade-large-language-model-playground-in-under-6-weeks/
|
In this post, we discuss how Thomson Reuters Labs created Open Arena, Thomson Reuters's enterprise-wide large language model (LLM) playground, which was developed in collaboration with AWS. The original concept came out of an AI/ML Hackathon supported by Simone Zucchet (AWS Solutions Architect) and Tim Precious (AWS Account Manager) and was developed into production using AWS services in under 6 weeks with support from AWS. AWS managed services such as AWS Lambda, Amazon DynamoDB, and Amazon SageMaker, as well as the pre-built Hugging Face Deep Learning Containers (DLCs), contributed to the pace of innovation. |
2023-08-16 15:48:59 |
AWS |
AWS Government, Education, and Nonprofits Blog |
4 steps to launching a successful data literacy program for public sector employees |
https://aws.amazon.com/blogs/publicsector/launch-successful-data-literacy-program-public-sector-employees/
|
Public sector agencies today require digital skills to deliver the functions citizens expect and the innovations necessary to speed up processes and address gaps and inequity. However, digital transformation can only be achieved when workers have the data literacy skills to read, interpret, communicate, and reason with this data. Coursera, an AWS Training Partner, hosts its global online learning platform in the AWS Cloud. Coursera has created a four-step framework for successfully building and managing a data literacy program. Learn how your organization can adopt this framework to empower your workforce to harness data more effectively. |
2023-08-16 15:51:31 |
golang |
New posts tagged Go - Qiita |
[Go] Building a web app with a login feature (11) |
https://qiita.com/kins/items/89fc3d844a73b669f24d
|
authmiddleware |
2023-08-17 00:49:31 |
Azure |
New posts tagged Azure - Qiita |
SAP on Azure Deployment Automation Framework - Creating the SAP system (BOM file preparation & placement) |
https://qiita.com/R3ne7/items/108ab8277c6e4441d205
|
saponazure |
2023-08-17 00:38:38 |
Overseas TECH |
Ars Technica |
Early plate tectonics was surprisingly speedy |
https://arstechnica.com/?p=1961150
|
australia |
2023-08-16 15:30:56 |
Overseas TECH |
Ars Technica |
New Triassic fossil features sharp claws and a nasty beak |
https://arstechnica.com/?p=1961156
|
earth |
2023-08-16 15:08:59 |
Overseas TECH |
DEV Community |
CV-based self-diagnosis telemedicine application |
https://dev.to/abtosoftware/cv-based-self-diagnosis-telemedicine-application-439m
|
This post is a short overview of an Abto Software healthcare project: markerless human pose detection to benefit physical therapy. Our client is a health-focused organization that provides comprehensive products for the healthcare industry. The portfolio of the company comprises solutions that benefit healthcare institutions and patients, from enterprise-level management platforms to user-friendly mobile applications. Our partner was looking for knowledge and experience in delivering human body pose detection and analysis; our former successful cooperation was focused on personal medical device integration. Having expertise in building and implementing advanced solutions for accurate pose estimation and analysis, our company smoothly upgraded another solution: sensorless human body detection to transform exercise sessions. Our approach: our team entered into cooperation to modernize a custom telemedicine application supporting specialists in improving patient outcomes through enhanced personalized treatment and better-tailored exercise guidance. Our engineers quickly designed a solution that processes different exercises, for example cervical flexion. Abto Software's main goals were initial investigation and identification of the best techniques for accurate motion tracking and analysis to enable MSK telerehabilitation, and efficient customization and implementation of the chosen technique for precise movement assessment to empower therapeutics monitoring. At the first stage, we trained the CV algorithm to recognize human motion on pre-captured video materials. During the next stages, we implemented additional CV algorithms to process human movement in real time, allowing end users to receive helpful feedback while exercising. Abto Software took over business logic development, demo product design, computer vision technique implementation (in-depth research and application prototyping), and UI/UX design. Our solution: the application was created to accelerate physical therapy. The system guides patients through exercises, automatically tracking every movement. After analyzing patient performance and adherence, the system transfers processed health indicators to the attending clinicians, who can then adjust the prescribed treatment program. The application is intended to streamline several domains: digital rehabilitation, digital therapeutics, spinal rehab, and sports medicine & orthopedics. The challenges: Measurement points - at the discovery phase, we had to determine the correct measurement points for accurate limb assessment; this challenge was resolved by monitoring the ear-nose line segment. Viewpoint variation - recognized shapes change and alter the determined features, so our engineers used different datasets to train the algorithms. Pose variation - the recognized objects aren't rigid bodies and can be deformed; to handle this, our engineers made sure the datasets included numerous possible pose variations. Summing up: Abto Software joined the project to assist our client, a mature healthcare-focused organization, in providing a user-friendly telemedicine application that empowers physical therapists and patients undergoing rehabilitation. Our experts utilized advanced technology, in particular computer vision, to facilitate remote monitoring and transform healthcare delivery. By leveraging artificial intelligence (ML, DL, data analytics) and computer vision, we assist healthcare businesses that prioritize patient-first care and physical therapists embracing innovation, and we benefit patients undergoing physiotherapy and rehabilitation as well as patients with chronic conditions who prefer accessible services. |
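The project description mentions tracking cervical flexion by monitoring the ear-nose line segment. Purely as a hypothetical sketch of markerless pose estimation in the browser (not Abto Software's actual implementation; the model choice, confidence threshold, and angle definition are assumptions), a TensorFlow.js pose detector can supply the nose and ear keypoints from which such a measurement could be derived:

```typescript
// Hypothetical illustration: estimate a cervical-flexion-like angle from the
// ear-nose segment using TensorFlow.js pose detection (MoveNet).
// Assumes @tensorflow/tfjs peer dependencies are installed.
import * as poseDetection from "@tensorflow-models/pose-detection";
import "@tensorflow/tfjs-backend-webgl";

async function earNoseAngle(video: HTMLVideoElement): Promise<number | null> {
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet
  );
  const [pose] = await detector.estimatePoses(video);
  if (!pose) return null;

  const byName = new Map(pose.keypoints.map((k) => [k.name, k]));
  const nose = byName.get("nose");
  const ear = byName.get("right_ear") ?? byName.get("left_ear");
  if (!nose || !ear || (nose.score ?? 0) < 0.3 || (ear.score ?? 0) < 0.3) {
    return null; // low-confidence keypoints, skip this frame
  }

  // Angle of the ear-to-nose segment relative to the horizontal image axis,
  // in degrees; a therapist-facing app would calibrate this per patient.
  const rad = Math.atan2(nose.y - ear.y, nose.x - ear.x);
  return (rad * 180) / Math.PI;
}
```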
2023-08-16 15:35:04 |
Overseas TECH |
DEV Community |
Extensions Guide for Nebula Oni Color Theme |
https://dev.to/psudo-dev/extensions-guide-for-nebula-oni-color-theme-2hl0
|
Since I've made a color theme, I also ended up customizing the extensions that use color schemes, like Better Comments and Indent Rainbow, and trying to match them with the Nebula Oni Color Theme. So, of course, in order to follow this guide you've got to download them first. Originally I had made a color scheme for Bracket Pair Colorization, but since I originally launched the theme, Bracket Pair Colorization became part of VSCode and, if I'm not mistaken, it's now active by default.
Bracket Pair Colorization: Initially I was trying to make them different from the Nebula Syntax so I wouldn't confuse them, but it's hard to use colors that are different enough, because if they are close and almost the same color it kind of bothers me. These colors have the same hue as the colors from Nebula Syntax, and I combine them in a way that the colors I prefer appear more often and that each pair can work well together within the iteration cycle. It was quite a hassle, but in the end I think it works well and it's aesthetically pleasing: Hourglass, Spirograph, Pegasus, Cerberus.
Better Comments: I customized the tags that trigger the colors as well as the colors themselves. They are not the same as the Nebula Syntax; they have a similar hue but are brighter and more saturated. All triggers need the Shift key except for the strikethrough double slash. For the other tags you basically have triggers in the numbers row (a pair on the far left and another on the far right), and the colors I use the most are near the Right Shift (< and >). But of course, depending on your keyboard layout it may vary, so if you prefer some other character you just have to change it in the settings.json file. You might need to reload the window for the new trigger characters to take effect. These are the settings I use: better-comments.highlightPlainText true; better-comments.tags: backgroundColor transparent, color F, strikethrough false; tag: backgroundColor transparent, color FFA, strikethrough false; tag: backgroundColor transparent, color FC, strikethrough false; tag <: backgroundColor transparent, color EBD, strikethrough false; tag >: backgroundColor transparent, color EE, strikethrough false; tag: backgroundColor transparent, color AFF, strikethrough false; tag: backgroundColor transparent, color FF, strikethrough false; tag: backgroundColor transparent, color EEE, strikethrough false; tag: backgroundColor transparent, strikethrough true.
Indent Rainbow: If you want, you can use the same colors as the Bracket Pair Colorization, or just use colors like the default settings for Indent Rainbow. I tried to use the same colors as the Bracket Pair Colorizer, but I don't know, I didn't think it worked that well, so I've tried a few combinations and came up with this: indentRainbow.colors DFFFF, FFFD, CAFF, DFF, FFE; indentRainbow.errorColor BDA; indentRainbow.tabmixColor AA. All these settings are part of the settings.json file; just remember to use them where needed, but otherwise it's just a matter of copy and paste.
Panel and Terminal: The bottom Panel is one of the few sections that have a border (same as the pinned tabs), but it's almost imperceptible; it's just enough so you can differentiate where it ends and where the editor starts, making it look like they are on top of the editor. I tried to select colors that are easy to read, but it's still compatible with terminal customizations like the ZSH shell. For more information on how to customize it, I've followed this tutorial that has instructions for Windows, Mac, and Linux. My settings: debug.console.fontFamily Liga Meslo LG M DZ; terminal.integrated.cursorStyle line; terminal.integrated.cursorWidth; terminal.integrated.fontFamily Liga Meslo LG M DZ; terminal.integrated.fontWeightBold normal; terminal.integrated.lineHeight. I use these settings, but MesloLGS NF, Hack NF, and FiraCode NF are also good font options; for more information check Nerd Fonts.
Help Support Nebula Oni Color Theme: To learn more about the Nebula Oni Color Theme, or how to further customize it (take charge and change colors for the Semantic Tokens and TextMate Tokens yourself), check out this post. If you want to support this theme, would you consider sharing it with friends and colleagues, rating it on the Visual Studio Code Marketplace and Open VSX Marketplace, or giving it a star on Github? And if you really liked this theme, would you consider buying me a coffee? Thanks, psudo dev |
2023-08-16 15:19:51 |
Overseas TECH |
DEV Community |
Did anyone built online store with Next.JS? |
https://dev.to/shnai0/did-anyone-built-online-store-with-nextjs-3a41
|
I am working on a new e-commerce project and would like to set up an online store - a simple one. I drafted it in Shopify, as it looks like the easiest way, but still, building a good-looking store is kind of a challenge. I am not sure if there are any open-source projects or recommendations for it. |
2023-08-16 15:09:22 |
Overseas TECH |
DEV Community |
Refine & Nest.js boilerplate |
https://dev.to/igolubic/refine-nestjs-boilerplate-4jh6
|
We were in search of a solution that bridged the gap between BaaS platforms such as Appwrite or Supabase and building from the ground up. We needed a system that would offer the flexibility to implement BI, data management, IoT, automation, and similar applications, while also incorporating features like authentication, RBAC, social login, a frontend equipped with CRUD operations, testing, and more. And this is how the Refine & Nest.js boilerplate was born. Source: Refine boilerplate, Nest.js boilerplate. And here is the quick start guide.
Refine + Nest.js Boilerplate. Last edited by Ivan Golubic (August). This document describes the procedure to create a new project using the Poliath Refine and Nest.js templates.
Requirements: Node.js (a minimum version applies, but the latest LTS is highly recommended); npm installed and updated; Docker installed, configured, and running; an IDE (WebStorm is recommended, although VS Code will make it).
Backend: In order to start our Nest.js backend, we have to create our new project based on the template: git clone --depth=1 (the boilerplate repository) my-app; cd my-app; cp env-example .env.
Quick run: In order to start the project without development, for a quick test, run the following command from your app directory: docker compose up -d. This command will wait for the Postgres database container to spin up, run the migrations (create the needed tables in the database), run the seeder (create users and a few random articles), and run the application in production mode.
Frontend: In order to see data from your backend service, we have to run our frontend application, a Refine-based React application. Clone the poliath refine-boilerplate repository into a preferred directory: git clone --depth=1 (the frontend repository) frontend. Now head to your directory and install dependencies: cd frontend && npm install. After installing the dependencies, we have to build our project and run preview mode, basically a local server that will serve our static content: npm run build, then npm run preview. This command will show you where your application is running (e.g. "Local: http://localhost:…"); head to that location and log in. You should see demo users and articles.
RBAC explained: Both the backend and the frontend have RBAC implemented. The Admin user can do all operations on the Users object, while the User user can only see and manage articles. Later in this documentation this feature will be explained in detail, but in order to experience it, log in with admin@example.com (Admin) or john.doe@example.com (User). The password for both users is "secret".
Inferencer explained: One of the important features of refine.dev is the Inferencer; in short, it generates a CRUD dashboard based on data from the database and provides you the code. If you click on Articles you will see the pop-up. This pop-up is not visible on Users, since the raw code generated by the Inferencer was already implemented in this boilerplate, while in Articles it is not, for showcasing purposes. You can read more about the Inferencer here. If everything works, we can proceed with the development setup and develop our application.
Backend development: If the Docker containers from the quick run are still running, shut them down in order to avoid any collisions during the development phase. Head to your backend directory (e.g. my-app) and run docker compose down. If the frontend is running it might also create some confusion, so simply exit the preview server by pressing Ctrl+C (or the appropriate shortcut for your OS). In this step we assume that you have already cloned the nestjs poliath boilerplate project. Open your backend directory in your IDE. In order to start development we need to update two environment variables, since we will run the Nest.js project on our local machine using npm, but our database and some development tools will be in Docker. Open your .env file and change DATABASE_HOST=postgres to DATABASE_HOST=localhost, and change MAIL_HOST=maildev to MAIL_HOST=localhost. Run the Docker containers: docker compose up -d postgres adminer maildev. Notice that this command will not start our api container.
Running the Nest.js project locally: Since we are running our project on our local machine, we have to install dependencies, run the migrations and seeder, and start in the development environment: npm install, npm run migration:run, npm run seed:run, npm run start:dev. This will start our backend, and you should see logs in the terminal.
Frontend development: Running the frontend in development mode is extremely simple. Open your frontend directory in your IDE and run the following command in the terminal: npm run dev. This will start the development server and show you on which port it is currently running. If everything is good, you should be able to log in and see your application up and running.
Backend, adding a new resource: Now we will add a new resource to our backend. For this purpose it will be a simple "Task"; it will have id, title, done, createdAt, updatedAt, author, and assignee. We will use the Nest CLI for most of our work, so let's install it: npm install -g @nestjs/cli. Nest provides a CRUD generator as part of its CLI tool; it generates a starting point for your resource. Let's generate our Tasks resource: nest g resource. This will start a simple wizard. For "What name would you like to use for this resource (plural, e.g. users)?" enter tasks. Next, select "REST API" for the transport layer and select "Yes" for generating CRUD entry points. This will generate all the needed files for our "Tasks" resource under the tasks directory. Nest will not create the DTO by default; it will create only a simple class that we should populate with our data. Note: a database table is not created for each model, but only for those models which are declared as entities. To declare a model as an entity, we just need to add the Entity decorator before the declaration of the class defining our model:
import { Column, CreateDateColumn, Entity, ManyToOne, PrimaryGeneratedColumn, UpdateDateColumn } from 'typeorm';
import { User } from '../users/entities/user.entity';
@Entity()
export class Task {
  @PrimaryGeneratedColumn() id: number;
  @CreateDateColumn() createdAt: Date;
  @UpdateDateColumn() updatedAt: Date;
  @Column({ type: String, nullable: false }) title: string;
  @Column({ default: false }) done: boolean;
  @ManyToOne(() => User, (user) => user.tasks) author: User;
  @ManyToOne(() => User, (user) => user.assignedTasks) assignee: User;
}
In the above code we created our task model. Nest provides multiple decorators, like CreateDateColumn, that will automatically add the date when the entity is created. Additionally, PrimaryGeneratedColumn is used as the primary key. We can pass various Column options like type or default. We also have ManyToOne relationships, since a User can be the author of many Tasks and a User can have many tasks where he is the assignee; in our case a Task can have only one assignee. Additionally, since we have foreign relationships to User, we have to add them to user.entity.ts: @OneToMany(() => Task, (task) => task.author) tasks: Task[]; and @OneToMany(() => Task, (task) => task.assignee) assignedTasks: Task[]. Next we have to create a migration to be able to apply our changes to the database tables: npm run migration:generate src/database/migrations/CreateTasks. This command will generate a new migration file under migrations; its name will be <timestamp>-CreateTasks.ts. Next we have to run our migration: npm run migration:run. If something is wrong with our previous migration, we can simply revert it: npm run migration:revert.
Above our TasksController class we have to add the following decorators: @Controller({ path: 'tasks', version: '…' }), @UseGuards(AuthGuard('jwt')), @ApiTags('Tasks'), @ApiBearerAuth(). The Controller decorator defines that the class acts as a controller which is called through the "tasks" path, along with the version of this API; this will also be visible in Swagger. The UseGuards decorator defines the guards, in this case only jwt, so only authenticated users will be able to access this route. This boilerplate also supports RolesGuard, which is used in conjunction with the Roles decorator in order to restrict access to a specific group of users. ApiTags defines how this route set will be called in Swagger, and ApiBearerAuth defines that this controller needs a Bearer token in order to be accessed.
Methods in the controller: Since we selected "Yes" for generating CRUD entry points during resource creation, each method is defined in the controller and has its own decorator (Post, Get, Delete, Patch). Calling the findAll method: in this case we can use Postman to call http://localhost:…/api/v…/tasks, and it should return the message "This action returns all tasks", since our service is not implemented yet. If it returns an error, that means that you did not provide a bearer token, which you will get by sending a POST request to http://localhost:…/api/v…/auth/email/login with the following JSON body: { "email": "admin@example.com", "password": "secret" }. Then add your Bearer token to the Authorization field in Postman. If you want to restrict a specific route to one or multiple roles, then you have to add the following decorators above the method: @Roles(RoleEnum.user), @UseGuards(AuthGuard('jwt'), RolesGuard). This will restrict access only to logged-in users of type user, and if you try to call the GET method on the previous route while logged in with the Admin account, you will get the following error message: { "message": "Forbidden resource", "error": "Forbidden", "statusCode": … }.
DTOs: A Data Transfer Object basically transforms our data between our service and the database. In TypeORM it can define the structure and validation of our entity, in this case a Task. Two main DTOs are generated by default, "create" and "update"; in some cases they will be the same, so "update" will only partially extend "create", and in some cases it will be different. In our case it will extend our "create" DTO: export class UpdateTaskDto extends PartialType(CreateTaskDto) {}. You can see that this is already generated by default. Back to the "create" DTO: according to our task entity we must provide the title, author id, and assignee id. For simplicity, our Task DTO will look like this: import { IsNotEmpty } from 'class-validator'; export class CreateTaskDto { @IsNotEmpty() title: string; @IsNotEmpty() author: number; @IsNotEmpty() assignee: number; }. It basically defines only that these fields cannot be empty; there are multiple decorators and validators that can be used.
Services: In order to actually do something with our data, we must create some logic. Logic is handled in the tasks.service.ts file. First we will write the code for creating a new task. At the very beginning we must inject the repository in the constructor of our service class: constructor(@InjectRepository(Task) private tasksRepository: Repository<Task>) {}. Repository is a part of TypeORM, and it is basically used for our interaction with the database. In order to insert the author of our Task, we have to pass the current user to the service, so our create method will look like this: create(createTaskDto: CreateTaskDto, user: User) { createTaskDto.author = user; const newTask = this.tasksRepository.save(this.tasksRepository.create(createTaskDto)); return newTask; }. But this will not work out of the box, since we need to pass the User object to our create method. This boilerplate contains a CurrentUser decorator which returns the currently logged-in user, so edit tasks.controller.ts: @Post() create(@Body() createTaskDto: CreateTaskDto, @CurrentUser() user: User) { return this.tasksService.create(createTaskDto, user); }. Now we can send a POST request to the http://localhost:…/api/v…/tasks route with the following JSON body: { "title": "This is a task", "assignee": … }. This will assign the task to the user with that id (John Doe), and the author will be our signed-in user (Admin). Next we will create the service for fetching all tasks. In order to do that we have to edit the findAll method in tasks.service.ts: findAll() { return this.tasksRepository.find(); }. This is a simple method that calls find, which is basically like SELECT * FROM TASKS. Now we can call the GET method on the http://localhost:…/api/v…/tasks endpoint. This will return a JSON array of our tasks, e.g. [{ "id": …, "createdAt": "…", "updatedAt": "…", "title": "This is a task", "done": false }, { "id": …, "createdAt": "…", "updatedAt": "…", "title": "This is a task", "done": false }]. Next we will implement the findOne method in order to fetch the data for a specific task. In the controller: @Get(':id') findOne(@Param('id') id: string) { return this.tasksService.findOne({ id }); }. First we update our findOne method in the controller to pass { id }, since we will use EntityCondition to pass fields. Finally, our findOne method in the service will look like this: findOne(fields: EntityCondition<Task>): Promise<NullableType<Task>> { return this.tasksRepository.findOne({ where: fields }); }. Next we will write our update method in the service: update(id: Task['id'], payload: DeepPartial<Task>): Promise<Task> { return this.tasksRepository.save(this.tasksRepository.create({ id, ...payload })); }. As you can see, we are accepting the task id and a payload, which is basically a partial DTO, and this method returns the Task that we updated. Next we will implement the delete service; this is a pretty simple method: remove(id: number) { return this.tasksRepository.delete(id); }. Please note that there are various implementations of these services; I provided the simple ones. There are various ways of updating and handling errors, responses, etc., but this can be implemented based on a specific use case.
Frontend, adding a new resource: Once we have our backend up and running, we can implement our frontend. We will use the Refine CLI for creating new resources. The Refine CLI is installed when a project is created with the refine create command, but in this boilerplate it is already available. Position your terminal in your frontend directory (e.g. my-app) and run the following command: npm run refine create-resource. This wizard will ask you to define your resource name (tasks in our case); leave all pages selected. This will generate a tasks directory with all the pages needed. It will also update App.tsx with the new resource data. Please note that this command will NOT generate routes, and it will only use the Inferencer in the generated pages. First we will add routes for our tasks: <Route path="tasks"> <Route index element={<TasksList />} /> <Route path="create" element={<TasksCreate />} /> <Route path="edit/:id" element={<TasksEdit />} /> <Route path="show/:id" element={<TasksShow />} /> </Route>. These routes are directly related to our pages, but they won't be visible immediately due to RBAC restrictions. We have to edit the casbin access control: open src/casbin/accessControl.ts and add a policy line to the adapter for each role, p, admin, tasks, (list)|(create)|(edit)|(show)|(delete) and p, user, tasks, (list)|(create)|(edit)|(show)|(delete). This will enable both users and admins to perform all CRUD operations. Now if you refresh your frontend on localhost and click on Tasks, you should see the list of tasks. If it was not added automatically, add the following to the tasks resource in App.tsx: meta: { canDelete: true }. All pages are generated using the Inferencer, a part of refine.dev. If you click on the Inferencer pop-up you will see the generated code for the list of tasks; copy this code and paste it into tasks/list.tsx, replacing the current code. The pop-up will disappear and you will have the flexibility to customize the page. If there is an error in your console or your page is blank, the page export is probably wrong (TaskList instead of TasksList); check for this kind of issue.
Translations: After exporting, we can edit the translation for our page to display real values instead of object fields. Open public/locales/en/common.json and add the following object at the same level as "users": "tasks": { "tasks": "Tasks", "fields": { "id": "ID", "title": "Title", "done": "Done", "createdAt": "Created at", "updatedAt": "Updated at" }, "titles": { "create": "Create task", "edit": "Edit task", "list": "Tasks", "show": "Show task" } }. Additionally, under documentTitle, at the same level as users, add the following: "tasks": { "list": "Tasks | Poliath Manager", "show": "{{id}} Show task | Poliath Manager", "edit": "{{id}} Edit task | Poliath Manager", "create": "Create new task | Poliath Manager", "clone": "{{id}} Clone task | Poliath Manager" }. The above is used for page titles. Add the same fields for the other languages. When you refresh the page, you should see how the fields now have names.
Creating a new task through our frontend: This is pretty straightforward; just click on the Create button and fill out the fields. Of course you can copy the Inferencer code to your create.tsx file and, for example, remove the "Created at" and "Updated at" fields, since those are handled on our backend. When you click save you will get an error, "Unprocessable entity"; this is because we are not passing the assignee parameter. But how do we get it? We can use useList, although it is a rough option, since it loads all users into memory and we do the parsing on the client side. Instead we should use useMany, which will call our backend method and retrieve only the filtered users that we actually need: const { data, isLoading } = useList<IUser>({ resource: "users" }).
Swizzle: Refine provides some solutions out of the box (e.g. data providers), but this code sometimes does not meet our needs. That is why swizzle exists; it basically generates code that can be customized, based on the existing predefined code: npm run refine swizzle. This is how we can edit our components, auth pages, auth providers, etc. (what is swizzle?).
Sending an email: This boilerplate has an email service implemented, and its usage is shown in src/mail/mail.service.ts, which can be used as a reference for customization if needed.
Additional tools and info: The backend comes with handy tools for development, such as Swagger (full API documentation, http://localhost:…/docs), Adminer (a client for the database, http://localhost:…), and Maildev (an SMTP server, http://localhost:…). Please note that these services should be disabled in production.
Running in production: Once you finish your backend logic, you can run it in production using Docker. You should disable the above-mentioned development tools. Please note that if you were running this boilerplate previously, then in order to "catch" the latest updates of your code, migrations, etc., you first have to rebuild the Docker image: docker compose build --no-cache. Logging is implemented by default, and logs are available in the logs directory.
Conclusion: Please note that there can be a lot of situations where you will have to edit the default code (e.g. change querying methods, update the current DTOs, etc.), but this is a great starting point for any project. Additionally, this documentation will be updated or fixed as needed, with new features, fixes, and improvements.
Issues and errors: Please be aware that this documentation is primarily designed to expedite your onboarding process. While it may offer some shortcut solutions, these are intended purely for illustrative purposes, to help clarify the workings of this boilerplate. Should you encounter any issues with the boilerplate, whether on the frontend or backend, we encourage you to raise an issue on GitHub: Refine boilerplate, NestJS boilerplate. |
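As a consolidated, hypothetical sketch of the Tasks controller and service wiring described in the excerpt (import paths, the CurrentUser decorator location, and the API version value are assumptions, not verified against the boilerplate):

```typescript
// Consolidated sketch of the Tasks controller/service wiring described above.
// Import paths, the CurrentUser decorator, and the version value are assumptions.
import { Body, Controller, Get, Injectable, Post, UseGuards } from '@nestjs/common';
import { AuthGuard } from '@nestjs/passport';
import { ApiBearerAuth, ApiTags } from '@nestjs/swagger';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { CurrentUser } from '../auth/decorators/current-user.decorator'; // boilerplate helper (assumed path)
import { User } from '../users/entities/user.entity';
import { Task } from './entities/task.entity';
import { CreateTaskDto } from './dto/create-task.dto';

@Injectable()
export class TasksService {
  constructor(
    @InjectRepository(Task)
    private readonly tasksRepository: Repository<Task>,
  ) {}

  // The author is always the currently authenticated user.
  create(createTaskDto: CreateTaskDto, user: User): Promise<Task> {
    return this.tasksRepository.save(
      this.tasksRepository.create({
        title: createTaskDto.title,
        assignee: { id: createTaskDto.assignee },
        author: user,
      }),
    );
  }

  findAll(): Promise<Task[]> {
    return this.tasksRepository.find();
  }
}

@ApiTags('Tasks')
@ApiBearerAuth()
@UseGuards(AuthGuard('jwt')) // only authenticated users may call these routes
@Controller({ path: 'tasks', version: '1' })
export class TasksController {
  constructor(private readonly tasksService: TasksService) {}

  @Post()
  create(
    @Body() createTaskDto: CreateTaskDto,
    @CurrentUser() user: User,
  ): Promise<Task> {
    return this.tasksService.create(createTaskDto, user);
  }

  @Get()
  findAll(): Promise<Task[]> {
    return this.tasksService.findAll();
  }
}
```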
2023-08-16 15:00:44 |
Apple |
AppleInsider - Frontpage News |
Moment debuts 8 new iPhone lenses as part of T-Series overhaul |
https://appleinsider.com/articles/23/08/16/moment-debuts-8-new-iphone-lenses-as-part-of-t-series-overhaul?utm_medium=rss
|
Photography powerhouse Moment has debuted a whole new series of mobile lenses for iPhone and Android, releasing eight new lenses with multiple improvements to up your camera game. Moment's new lens system: Moment launched its last-generation M-Series lenses back in 2017. Now, after six years, it has updated all its glass with the launch of eight lenses that include more elements, a new bayonet system, and support for Android. Read more |
2023-08-16 16:00:05 |
Apple |
AppleInsider - Frontpage News |
CrossOver update brings EA and DirectX 12 game support to Mac |
https://appleinsider.com/articles/23/08/16/crossover-update-brings-ea-and-directx-12-game-support-to-mac?utm_medium=rss
|
CrossOver now lets Mac and Linux users play Windows games from EA, or games which rely on DirectX 12, while its new geometry shader support allows games to play without graphics issues. Windows games running on Apple Silicon: following its first successful tests of running DirectX 12 games on Mac in June, CodeWeavers has now announced that a CrossOver update is shipping with this feature and more. Read more |
2023-08-16 15:40:35 |
Apple |
AppleInsider - Frontpage News |
Daily deals Aug. 16: 14" MacBook Pro $1,599, $749 MacBook Air, 25% off Magic Keyboard, more |
https://appleinsider.com/articles/23/08/16/daily-deals-aug-16-14-macbook-pro-1499-749-macbook-air-25-off-magic-keyboard-more?utm_medium=rss
|
Wednesday's top deals include a MacBook Pro blowout bonanza, up to … off Speck iPhone cases, up to … off Kindle e-readers, Kasa Smart light bulbs for …, an iPhone Pro Max from …, and more. Save on an M-series MacBook Air: the AppleInsider team combs the web for amazing bargains at e-commerce retailers to develop a list of stellar discounts on trending products, including deals on Apple tech, TVs, accessories, and other gadgets. We share our top finds daily to help you save money. Read more |
2023-08-16 15:54:57 |
Overseas TECH |
ReadWriteWeb |
LaserPecker LP2 | Full Review |
https://readwrite.com/laserpecker-lp2-full-review/
|
Dive into the world of creativity and precision with the LaserPecker LP2 Laser Engraver & Laser Cutter. This innovative device … The post LaserPecker LP2 | Full Review appeared first on ReadWrite. |
2023-08-16 15:00:41 |
Overseas TECH |
Engadget |
Steam changes could increase game prices in some countries |
https://www.engadget.com/steam-changes-could-increase-game-prices-in-some-countries-154516181.html?src=rss
|
Valve has updated Steam's minimum pricing policy for some non-USD currencies, which could impact those who sell games and expansions for less than the equivalent of … The company warned publishers and developers that the move could lead to games and DLC using lower pricing being unavailable to purchase in some regions if they don't make adjustments, while they may not be able to offer discounts as deep as they used to. According to Valve, the aim of the revised policy is to align minimum pricing with recommended currency conversions the company issued last October. It updated those recommendations "to adjust for some currencies drifting significantly in value over time." As such, the base price for a game or expansion must be at least the equivalent of … cents, and the minimum price for a discounted game or DLC is the equivalent of … cents. Developers and publishers may need to change the pricing of their products in some countries. As Game Developer notes, they'll have to be mindful of how they handle discounts too. Publishers and developers of games that usually cost … or less will need to make sure they avoid going below the threshold during sales; Valve offers price management and discount tools on Steam to help them navigate such issues. The move may also impact players who create Steam accounts in different countries to take advantage of regional price differences. While the new thresholds won't necessarily impact blockbuster games, they could make it somewhat less viable for players to change their virtual location to the likes of Turkey and Argentina to pick up a grab bag of indie games and other deeply discounted titles. This article originally appeared on Engadget at … |
2023-08-16 15:45:16 |
Overseas TECH |
Engadget |
The Rodecaster Duo podcast mixer proves bigger isn't always better |
https://www.engadget.com/rodecaster-duo-review-153032883.html?src=rss
|
A couple of years ago you might have described Rode as a company that makes microphones. Today it's positioning itself more as a one-stop shop for creator tools. The original Rodecaster Pro podcast mixer was the first big step in this evolution. That includes the new gaming-focused "Rode X" sub-brand and products like the Streamer X capture card. The company, of course, still makes a microphone or two, but with the new, smaller, more affordable, and very capable Rodecaster Duo stream mixer, this move toward general creators is basically official. The original Rodecaster Pro was the first mixing desk specifically designed for podcasters to really catch people's attention. The build quality, price, ease of use, and simple workflow struck a chord with pros and amateurs alike. The Rodecaster Pro II went in a slightly different direction, introducing the ability to route different audio sources to different places, an essential tool for game streamers. The pads were upgraded from simple audio triggers to multi-purpose smart pads that can be used for MIDI, vocal effects, and more. The second version also came in with a smaller footprint, removing two physical faders and making them "virtual." The Rodecaster Duo is arguably just the Rodecaster Pro II "mini." The functionality is identical to its bigger sibling, but it comes with four physical faders (down from six), six pads (down from eight), and two XLR ports for microphones or instruments (down from four). You actually have control over seven mixing channels at any one time, but adjusting three of them is done via virtual faders. Importantly, you get to configure which inputs remain on physical faders and which are assigned to virtual controls in the companion software. (Photo by James Trew / Engadget.) Two other small changes include the removal of the "record" button, which is now virtual on the display, and the addition of a headphone port on the front edge. This last change solves one of my main nitpicks with the Rodecaster Pro II, which only had headphone ports around the back. The port on the front is 3.5mm rather than quarter-inch and is compatible with headset TRRS mics, effectively adding another input, one that's particularly handy for game streamers. One of the biggest upgrades from the original Rodecaster Pro is the addition of a second USB-C port around the back, which can connect to a second PC. This is a massive boon for streamers who want to keep their gaming rig separate from their streaming one, and the new routing table allows you to send whatever inputs you like to either USB connection. This same port can also be used for connecting a phone, which is perfect for introducing callers or for streaming via mobile apps. You could always connect a phone via Bluetooth on the original model, which was handy, but now you have multiple options, and via cable the quality is much better. The fact that there are only two XLR combo jacks speaks strongly to who this is for. While the Rodecaster Pro and its sequel were originally built for in-person, multi-guest podcasts, the mixer is also a very capable tool for solo creators, which has helped fuel its popularity. And with an increasing number of tools like Zencastr or Adobe Podcast, hosting fellow flesh sacks in the same room is no longer required for high-quality audio from all speakers. As such, the Rodecaster Duo makes a lot of sense for a broad stroke of creators, from podcasters to streamers and even music producers and video editors (both the Duo and the Pro II are MIDI-enabled). Be under no illusions: the Duo, and its bigger sibling, are just as "pro" friendly as the first Rodecaster, but they both lean into the creator space a bit more than the original. This point is made most clearly by the very existence of the Duo. The smaller footprint is a clear admission that this was made to live on a desk full time alongside your other daily tools. (Photo by James Trew / Engadget.) The Rodecaster Pro II was already a bit more manageable than the first model, but after a few weeks with the Duo the difference is stark. It can remain nested under my monitor and be easily moved into position when I go live. Before the Duo, I had the Pro II on my desk in a similar setup, but I was frequently moving it out of the way to make space for other things; it became a bit of a burden and I ended up unplugging it until show time. With the Duo, it's clear this can be a daily driver with little to no need to organize around it. The number of tools for creators and streamers is expanding exponentially, and with that come more direct rivals to the Rodecaster series. In fact, just days after the Rodecaster Duo was announced, Boss unveiled its own take on the category with the Gigcaster 8 and Gigcaster 5. Both offer very similar features to Rode's products in a generally smaller footprint. The Gigcaster 8 is near the Pro II in terms of functionality, while the Gigcaster 5 sacrifices the physical trigger pads to make way for two more physical faders (six total) over the Duo's four, creating an even smaller footprint, though it has a slight focus on musicians via some sound presets and effects and doesn't quite match the overall build quality and polish of the Rode. Rode's audio chops are also not to be underestimated. The preamps and headphone outputs on the Duo are capital-L loud and squeaky clean, with a very low noise floor. When the products were announced, Rode went out of its way to show how well it could power the notoriously quiet and insanely popular SM7B microphone. When you're giving a shout-out to a rival company's product to demonstrate a feature, you had better be confident that the feature you're touting does the goods. And surely it does: the amount of clean gain available to drive microphones such as the aforementioned Shure classic is impressive and a step up from the already decent Rodecaster Pro before it. (Photo by James Trew / Engadget.) In short, the Rodecaster Duo feels like a product that Rode maybe didn't initially think was the main event. It's the smaller, more affordable version of its flagship mixer, after all. It turns out that this is likely the one most solo creators will actually want. Even pros might want to consider the Duo over the Pro II if they don't absolutely need the capacity to run four microphones in tandem. It's worth mentioning that if you're considering moving over to the Duo from something like the GoXLR or the Razer Audio Mixer, know that Rode's take on a routing table is a little different from what you might be used to. The Duo's companion software is generally pretty good, but it doesn't use the conventional "table" format many streamers will be used to. Instead it's a little bit convoluted, but once you get the hang of it, it's quite powerful. This is particularly handy if you're in the business of recording audio from multiple sources. I often just use the routing options so I can record either one or both sides of a phone call or online meeting depending on my needs, but it's also good for feeding PC audio (including Zoom calls, YouTube videos, etc.) into, well, wherever you want it to go, including your phone. If you do any kind of live audio production or recording, especially podcasts, the Rodecaster Duo is an easy sell. For streamers it's also a very capable device, one that's also easy to recommend, but with a small asterisk: streaming setups and their associated platforms are often a little more tuned to their host's tastes and preferences. As such, the Duo's suitability will depend on what you're used to and the specifics of what you want to do. But for most creators, the Duo is the better option over the Pro II at the very least. This article originally appeared on Engadget at … |
2023-08-16 15:30:32 |
Overseas Science |
NYT > Science |
Superconductor Scientist Faces Investigation as a Paper Is Retracted |
https://www.nytimes.com/2023/08/15/science/retraction-ranga-dias-rochester.html
|
The University of Rochester will examine the work of Ranga Dias, who was an author of a materials science paper, unrelated to his superconductor research, that was retracted on Tuesday. |
2023-08-16 15:59:33 |
Overseas Science |
NYT > Science |
Surgeons Improve Function of Kidneys Transplanted From Genetically Altered Pigs |
https://www.nytimes.com/2023/08/16/health/pig-kidney-organ-transplants.html
|
In two experiments, researchers implanted the organs into brain-dead patients for extended periods, raising hopes for a new supply of donor organs. |
2023-08-16 15:56:43 |
Overseas Science |
NYT > Science |
6 Months After the Ohio Train Derailment, Residents Are Still in Crisis |
https://www.nytimes.com/2023/08/16/health/east-palestine-ohio-train-derailment-crisis.html
|
The Albright family left town after a train carrying toxic chemicals derailed near their Ohio home. Now they are back, facing personal, medical, and financial crises in a newly divided community. |
2023-08-16 15:00:37 |
Overseas TECH |
WIRED |
'Baby Steps': We Can't Wait for This Failure-to-Launch Adventure |
https://www.wired.com/story/baby-steps-failure-to-launch-2024/
|
adventurelife |
2023-08-16 15:12:47 |
News |
BBC News - Home |
Junior doctors in Scotland accept new pay offer |
https://www.bbc.co.uk/news/uk-scotland-66524465?at_medium=RSS&at_campaign=KARANGA
|
action |
2023-08-16 15:50:00 |
News |
BBC News - Home |
Ireland ATM: Queues form as bank glitch allows extra cash withdrawals |
https://www.bbc.co.uk/news/world-europe-66524657?at_medium=RSS&at_campaign=KARANGA
|
accounts |
2023-08-16 15:39:08 |
News |
BBC News - Home |
No plans for bank holiday if England win World Cup |
https://www.bbc.co.uk/news/uk-66524191?at_medium=RSS&at_campaign=KARANGA
|
lionesses |
2023-08-16 15:41:45 |
News |
BBC News - Home |
Rishi Sunak defends government cost-of-living support |
https://www.bbc.co.uk/news/uk-politics-66521633?at_medium=RSS&at_campaign=KARANGA
|
energy |
2023-08-16 15:02:32 |
News |
BBC News - Home |
Graham Linehan: New venue for Father Ted writer's cancelled gig |
https://www.bbc.co.uk/news/uk-scotland-66520643?at_medium=RSS&at_campaign=KARANGA
|
issues |
2023-08-16 15:22:17 |
News |
BBC News - Home |
Ulez expansion: Mayor of London urges councils to 'put their politics aside' |
https://www.bbc.co.uk/news/uk-england-66521469?at_medium=RSS&at_campaign=KARANGA
|
council |
2023-08-16 15:39:09 |
News |
BBC News - Home |
Ghost hunting at 'haunted house' that sparked Hollywood interest |
https://www.bbc.co.uk/news/uk-wales-66443613?at_medium=RSS&at_campaign=KARANGA
|
century |
2023-08-16 15:54:10 |
News |
BBC News - Home |
Sarina Wiegman: England boss says reaching Women's World Cup final is a 'fairytale' |
https://www.bbc.co.uk/sport/football/66524615?at_medium=RSS&at_campaign=KARANGA
|
Sarina Wiegman says it is a "fairytale" to lead England to the final of the World Cup and to reach her fourth major tournament final overall. |
2023-08-16 15:17:23 |
News |
BBC News - Home |
Women's World Cup: Australia 'disappointed' but will reflect on 'inspiring' campaign |
https://www.bbc.co.uk/sport/football/66524491?at_medium=RSS&at_campaign=KARANGA
|
Australia's World Cup semi-final loss to England was disappointing, but the Matildas have won over a nation. |
2023-08-16 15:42:32 |
Azure |
Azure updates |
General Availability: Incremental snapshots for Premium SSD v2 Disk and Ultra Disk Storage |
https://azure.microsoft.com/ja-jp/updates/general-availability-incremental-snapshots-for-premium-ssd-v2-disk-and-ultra-disk-storage-3/
|
Incremental snapshots for Premium SSD v2 Disk and Ultra Disk Storage, with instant restore capability, are now generally available (GA). |
2023-08-16 16:00:08 |
GCP |
Cloud Blog |
How to set up observability for a multi-tenant GKE solution |
https://cloud.google.com/blog/products/devops-sre/setting-up-observability-for-a-multi-tenant-gke-environment/
|
Many of you have embraced the idea of multi-tenancy in your Kubernetes clusters as a way to simplify operations and save money. Multi-tenancy offers a sophisticated solution for hosting applications from multiple teams on a shared cluster, thereby enabling optimal resource utilization, simplified security, and less operational overhead. While this approach presents a lot of opportunities, it comes with risks you need to account for. Specifically, you need to thoughtfully consider how you'll troubleshoot issues, handle a high volume of logs, and give developers the correct permissions to analyze those logs. If you want to learn how to set up a GKE multi-tenant solution for the best observability, this blog post is for you. We will configure multi-tenant logging on GKE using the Log Router and set up a sink to route a tenant's logs to their dedicated GCP project, enabling you to define how their logs get stored and analyzed, and to set up alerts based on the contents of logs and charts from metrics derived from logs for quick troubleshooting.
Architecture: We will set up a GKE cluster shared by multiple tenants and configure a sink to route a tenant's logs to their dedicated GCP project for analysis. We will then set up a log-based metric to count application errors from incoming log entries, and set up dashboards and alerts for quick troubleshooting. To demonstrate how this works, I am using this GCP repo on Github to simulate a common multi-tenant setup where multiple teams share a cluster, separated by namespace. The app consists of a web frontend and a redis backend deployed on a shared GKE cluster. We will route frontend-specific logs to the web frontend team's dedicated GCP project. If you already have a GKE cluster shared by multiple teams, you may skip to the part where we configure a sink to route logs to a tenant's project and set up charts and alerts. Below is the logical architecture.
Routing overview: Incoming log entries on GCP pass through the Log Router behind the Cloud Logging API. Sinks in the Log Router control how and where logs get routed by checking each log entry against a set of inclusion and exclusion filters, if present. The following sink destinations are supported: Cloud Logging log buckets (log buckets are the containers that store and organize log data in GCP Cloud Logging; logs stored in log buckets are indexed and optimized for real-time analysis in Logs Explorer, and optionally for log analysis via Log Analytics); other GCP projects (this is what will be showcased in this blog post; we will be exporting a tenant's logs to their GCP project, where they can control how their logs are routed, stored, and analyzed); Pub/Sub topics (the recommended approach for integrating Cloud Logging logs with third-party software such as Splunk); BigQuery datasets (storage of log entries in BigQuery datasets); and Cloud Storage buckets (to store logs for long-term retention and compliance purposes). Cloud Logging doesn't charge to route logs to a supported destination; however, destination charges apply. See Cloud Logging pricing for more information.
Prerequisites: You may skip this section if you already have a shared GKE cluster and a separate project for the tenant's tenant-specific logs. Set up a shared GKE cluster in the main project: gcloud container clusters create CLUSTER_NAME --release-channel CHANNEL --zone COMPUTE_ZONE --node-locations COMPUTE_ZONE. Once the cluster is successfully created, create a separate namespace for the tenant; we will route all tenant-specific logs from this namespace to the tenant's dedicated GCP project: kubectl create ns TENANT_NAMESPACE. I am using this GCP repo to simulate a multi-tenant setup separated by namespace; I will deploy the frontend in the tenant namespace and the redis cluster in the default namespace. You may use another app if you'd like. Set up a GCP project for the tenant by following this guide.
Sink configuration: We'll first create a sink in our main project, where our shared GKE cluster resides, to send all tenant-specific logs to the tenant's project: gcloud logging sinks create gke_TENANT_NAMESPACE_sink logging.googleapis.com/projects/TENANT_PROJECT --project=MAIN_PROJECT --log-filter='resource.labels.namespace_name="TENANT_NAMESPACE"' --description="Log sink to TENANT_PROJECT for TENANT_NAMESPACE namespace". The above command will create a sink in the main project that forwards logs in the tenant's namespace to their own project. You may use a different or more restrictive value for the log filter to specify which log entries get exported; see the API documentation here for information about these fields. Optionally, you may create an exclusion filter in the main project with the GKE cluster to avoid redundant logs being stored in both projects. Some DevOps teams prefer this setup, as it helps them focus on overall system operations and performance while giving dev teams the autonomy and tooling needed to monitor their applications. To create an exclusion filter, run: gcloud logging sinks update _Default --project=MAIN_PROJECT --add-exclusion=name=gke_TENANT_NAMESPACE_default_exclusion,description="Exclusion filter on the _Default bucket for TENANT_NAMESPACE",filter='resource.labels.namespace_name="TENANT_NAMESPACE"'. The above command will create an exclusion filter for the sink that routes logs to the main project, so that tenant-specific logs only get stored in the tenant project. Grant permissions to the main project to write logs to the tenant project: gcloud projects add-iam-policy-binding TENANT_PROJECT --member="$(gcloud logging sinks describe gke_TENANT_NAMESPACE_sink --project=MAIN_PROJECT --format='value(writerIdentity)')" --role=roles/logging.logWriter --condition='expression=resource.name.endsWith("projects/TENANT_PROJECT"),title=Log writer for tenant namespace'. The tenant-specific logs should now start flowing to the tenant project. To verify, select the tenant project from the GCP console project picker, then go to the Logs Explorer page by selecting Logging from the navigation menu. Tenant-specific logs routed from the main project should show up in the Query results pane in the tenant project. To verify, you may run the log filter value we passed while creating the sink: run resource.labels.namespace_name="TENANT_NAMESPACE" in the query editor field and verify.
Setting up log-based metrics: We can now define log-based metrics to gain meaningful insights from incoming log entries. For example, your dev teams may want to create a log-based metric to count the number of errors of a particular type in their application, and set up Cloud Monitoring charts and alert policies to triage quickly. Cloud Logging provides several system-defined metrics out of the box to collect general usage information; however, you can define your own log-based metrics to capture information specific to your application. To create a custom log-based metric that counts the number of incoming log entries with an error message in your tenant project, run: gcloud logging metrics create METRIC_NAME --description="App Health Failure" --log-filter='resource.labels.namespace_name="TENANT_NAMESPACE" AND severity>=ERROR'. Creating a chart for a log-based metric: go to the Log-based metrics page in the GCP console, find the metric you wish to view, and then select "View in Metrics Explorer" from the menu. The screenshot shows the metric being updated in real time as log entries come in. Optionally, you can save this chart for future reference by clicking SAVE CHART in the toolbar and add it to an existing or new dashboard. This will help your dev teams monitor trends in their logs as they come in and triage issues quickly in case of errors. Next, we will set up an alert for our log-based metric so that the application team can catch and fix errors quickly. Alerting on a log-based metric: go to the Log-based metrics page in the GCP console, find the metric you wish to alert on, and select "Create alert from metric" from the menu. Enter a value in the Monitoring filter field; in our case this will be metric.type="logging.googleapis.com/user/error_count". Click Next and enter a threshold value. Click Next and select the notification channel(s) you wish to use for the alert. Give the alert policy a name and click Create Policy. When an alert triggers, a notification with incident details will be sent to the notification channel selected above. Your dev team tenant will also be able to view it in their GCP console, enabling them to triage quickly.
Conclusion: In this blog post we looked at one of the ways to empower your dev teams to effectively troubleshoot Kubernetes applications on shared GKE infrastructure. The Cloud Operations suite gives you the tools and configuration options necessary to monitor and troubleshoot your systems in real time, enabling early detection of issues and efficient troubleshooting. To learn more, check out the links below: Cloud Operations Suite documentation, Cloud Logging quickstart guide, Cloud Logging and storage architecture, GKE multi-tenancy best practices, Creating metrics from logs, Configuring log-based alerts. |
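The post drives everything through gcloud. As a rough, hypothetical equivalent for teams that prefer code, the tenant sink could also be created with the @google-cloud/logging Node.js client (the project IDs and namespace below are placeholders, and the client is assumed to accept a destination URI string):

```typescript
// Hypothetical sketch: create the per-tenant log sink programmatically with
// the @google-cloud/logging Node.js client instead of gcloud. Project IDs and
// the namespace are placeholders; the sink's writer identity still needs
// roles/logging.logWriter on the tenant project, as described above.
import { Logging } from "@google-cloud/logging";

const MAIN_PROJECT = "main-project-id";      // project hosting the shared GKE cluster
const TENANT_PROJECT = "tenant-project-id";  // tenant's dedicated project
const TENANT_NAMESPACE = "web-frontend";     // tenant's namespace on the cluster

async function createTenantSink(): Promise<void> {
  const logging = new Logging({ projectId: MAIN_PROJECT });

  const [sink] = await logging.createSink(`gke-${TENANT_NAMESPACE}-sink`, {
    destination: `logging.googleapis.com/projects/${TENANT_PROJECT}`,
    filter: `resource.labels.namespace_name="${TENANT_NAMESPACE}"`,
  });

  // Fetch the sink metadata to learn which service identity must be granted
  // permission to write logs into the tenant project.
  const [metadata] = await sink.getMetadata();
  console.log(`Created ${sink.name}; grant logWriter to ${metadata.writerIdentity}`);
}

createTenantSink().catch(console.error);
```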
2023-08-16 16:00:00 |
GCP |
Cloud Blog |
CI/CD for GitLab repositories with Cloud Build repositories Gen2 |
https://cloud.google.com/blog/products/application-development/cloud-build-second-gen-features-for-gitlab-and-terraform/
|
CI/CD for GitLab repositories with Cloud Build repositories Gen2
The trailhead for any path to production starts at source control. The integration of Cloud Build, as the automation tool, with the source repository is therefore crucial to increasing delivery speed and ultimately becoming a high-performing organization. In this article we look at two exciting new capabilities of the recently launched second generation of Cloud Build repositories. First, Cloud Build can now connect to source code repositories in GitLab and GitLab Enterprise. Second, repository connections can now be managed declaratively in Terraform. We will now dive into a simple end-to-end demonstration of both features.

Preparing the GitLab environment
For our demo we use a private repository hosted on gitlab.com. If you already have a GitLab repository with a Cloud Build configuration, you can set the GITLAB_REPO_URI variable to the HTTPS URI of your repository and continue with the next section. To create a minimal GitLab repository to experiment with Cloud Build, perform the following steps: create a private repo on gitlab.com (I am using cloud-build-demo as the name here) and store the repo URI in the variable GITLAB_REPO_URI (in my case this is GITLAB_REPO_URI=https://gitlab.com/<USER>/cloud-build-demo.git; you can use SSH for the local clone on your workstation, but we'll use HTTPS for the Cloud Build repository below). In a terminal (e.g. Cloud Shell or locally), initialize the repository with a cloudbuild.yaml config as follows:

mkdir cloud-build-demo && cd cloud-build-demo
cat <<EOF > cloudbuild.yaml
steps:
- name: ubuntu
  id: 'just a demo'
  args:
    - echo
    - "Probably the world's simplest pipeline"
EOF
git init --initial-branch=main
git remote add gitlab $GITLAB_REPO_URI
git add .
git commit -m 'initial import'
git push -u gitlab main

You should now see your repository content (in our case just the cloudbuild.yaml file) in the GitLab web UI.

Preparations in Google Cloud
To get started we need to ensure that our Google Cloud project has the necessary APIs for Cloud Build and Secret Manager enabled. Our example requires us to add the two services to a Terraform configuration or to run the following commands in a terminal:

export PROJECT_ID=<my-project-id-here>
gcloud services enable cloudbuild.googleapis.com secretmanager.googleapis.com --project $PROJECT_ID

Cloud Build repositories are authenticated at the level of a so-called host connection. The authentication process for a host connection is specific to the source code repository and differs slightly between GitLab and GitHub. For a host connection to access repositories in GitLab, you need to issue personal access tokens for both the api and read_api scopes, as described in the document "Connect to a GitLab host". Once you have issued the tokens, store them in the environment variables GITLAB_API_TOKEN and GITLAB_READ_API_TOKEN and run the commands below to create secrets in Secret Manager. If you look closely, you'll see that we also create the secret required for GitLab webhooks, though we won't use it in this demo. In a last step we also authorize the Cloud Build service agent to use the secrets we just created, so that it can establish the host connection.
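Before storing the tokens, it can be worth a quick sanity check that they authenticate against GitLab at all. This curl call against the public GitLab API is my own addition, not part of the original walkthrough:

# Optional check (not in the original post): both tokens should authenticate against the
# GitLab API and return your username; the read_api token is read-only.
curl --fail --silent --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" \
  "https://gitlab.com/api/v4/user" | jq '.username'
curl --fail --silent --header "PRIVATE-TOKEN: ${GITLAB_READ_API_TOKEN}" \
  "https://gitlab.com/api/v4/user" | jq '.username'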
Note: you could create these secrets in Terraform as well, but the plain-text values would then be visible in your tf state.

GITLAB_API_TOKEN=SET_TOKEN_HERE
GITLAB_READ_API_TOKEN=SET_TOKEN_HERE

gcloud secrets create gitlab-api-token \
  --replication-policy=automatic --project $PROJECT_ID
echo -n $GITLAB_API_TOKEN | \
  gcloud secrets versions add gitlab-api-token --project $PROJECT_ID --data-file=-
GITLAB_API_TOKEN_SECRET_REF=$(gcloud secrets versions list gitlab-api-token --format=json | jq -r '.[0].name')

gcloud secrets create gitlab-read-token \
  --replication-policy=automatic --project $PROJECT_ID
echo -n $GITLAB_READ_API_TOKEN | \
  gcloud secrets versions add gitlab-read-token --project $PROJECT_ID --data-file=-
GITLAB_READ_API_TOKEN_SECRET_REF=$(gcloud secrets versions list gitlab-read-token --format=json | jq -r '.[0].name')

gcloud secrets create gitlab-webhook-token \
  --replication-policy=automatic --project $PROJECT_ID
echo -n "not used here" | \
  gcloud secrets versions add gitlab-webhook-token --project $PROJECT_ID --data-file=-
GITLAB_WEBHOOK_TOKEN_SECRET_REF=$(gcloud secrets versions list gitlab-webhook-token --format=json | jq -r '.[0].name')

PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
CLOUD_BUILD_SA_MEMBER="serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-cloudbuild.iam.gserviceaccount.com"
gcloud secrets add-iam-policy-binding gitlab-api-token --member=$CLOUD_BUILD_SA_MEMBER --role=roles/secretmanager.secretAccessor --project $PROJECT_ID
gcloud secrets add-iam-policy-binding gitlab-read-token --member=$CLOUD_BUILD_SA_MEMBER --role=roles/secretmanager.secretAccessor --project $PROJECT_ID
gcloud secrets add-iam-policy-binding gitlab-webhook-token --member=$CLOUD_BUILD_SA_MEMBER --role=roles/secretmanager.secretAccessor --project $PROJECT_ID
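If you want to confirm that the secrets and the service agent's access are in place before moving on, the following checks are a small addition of mine, not part of the original post:

# Optional verification (not in the original post): each secret should have one version,
# and its IAM policy should list the Cloud Build service agent as secretAccessor.
for s in gitlab-api-token gitlab-read-token gitlab-webhook-token; do
  gcloud secrets versions list "$s" --project "$PROJECT_ID"
  gcloud secrets get-iam-policy "$s" --project "$PROJECT_ID"
done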
Configuring the Cloud Build repository and triggers with Terraform
With the authentication credentials configured in Secret Manager, we can move on to the Terraform configuration. For simplicity we put everything in a single main.tf file that looks as follows:

variable "project_id" {
  type = string
}

variable "gitlab_api_token_secret" {
  type = string
}

variable "gitlab_read_api_token_secret" {
  type = string
}

variable "gitlab_webhook_token_secret" {
  type = string
}

variable "gitlab_repo_uri" {
  type = string
}

variable "build_location" {
  type    = string
  default = "europe-west1"
}

provider "google" {
  project = var.project_id
}

resource "google_cloudbuildv2_connection" "gitlab_connection" {
  location = var.build_location
  name     = "gitlab-connection"

  gitlab_config {
    authorizer_credential {
      user_token_secret_version = var.gitlab_api_token_secret
    }
    read_authorizer_credential {
      user_token_secret_version = var.gitlab_read_api_token_secret
    }
    webhook_secret_secret_version = var.gitlab_webhook_token_secret
  }
}

resource "google_cloudbuildv2_repository" "demo_repo" {
  name              = "gitlab-demo-repo"
  location          = var.build_location
  parent_connection = google_cloudbuildv2_connection.gitlab_connection.id
  remote_uri        = var.gitlab_repo_uri
}

resource "google_cloudbuild_trigger" "demo_trigger" {
  location = var.build_location
  repository_event_config {
    repository = google_cloudbuildv2_repository.demo_repo.id
    push {
      branch = ".*"
    }
  }
  filename = "cloudbuild.yaml"
}

We provide variables for the Google Cloud project ID, references to the externally created secrets in Secret Manager, and an optional location override for the Cloud Build resources. The Cloud Build specific Terraform resources are: google_cloudbuildv2_connection, which specifies the host connection with a name, region, and the credentials (host connections can be used by multiple repositories, so the credentials can be managed centrally); google_cloudbuildv2_repository, which specifies the GitLab repo we want to use and associates it with a host connection (this requires that the host connection's credentials have access to the repository specified as remote_uri); and google_cloudbuild_trigger, which runs the Cloud Build pipeline on push events on any branch of the GitLab repository.

To apply the Terraform configuration, we execute the following two commands from within the folder that contains our main.tf file:

terraform init
terraform apply \
  -var project_id=$PROJECT_ID \
  -var gitlab_api_token_secret=$GITLAB_API_TOKEN_SECRET_REF \
  -var gitlab_read_api_token_secret=$GITLAB_READ_API_TOKEN_SECRET_REF \
  -var gitlab_webhook_token_secret=$GITLAB_WEBHOOK_TOKEN_SECRET_REF \
  -var gitlab_repo_uri=$GITLAB_REPO_URI

Once the resources are created, we can see them in the Google Cloud Console under Cloud Build > Repositories. To test our simple pipeline and the repo trigger, we can push an empty commit to our sample repo:

cd cloud-build-demo
git commit -m 'trigger pipeline' --allow-empty
git push -u gitlab main

In the Cloud Build dashboard in the Google Cloud Console you can see the build that was kicked off; the same result is visible in the GitLab UI under Builds > Pipelines.

Next steps
You can find more information about Cloud Build repositories, including a side-by-side comparison of the features of the Gen1 and Gen2 repositories, in the official Cloud Build repositories documentation. If you are planning to use Cloud Build with a Git repository hosted on GitHub, you should follow these instructions for your Terraform configuration. |
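To confirm from the CLI that the empty-commit push in the test step above actually kicked off a build, without opening the console, the following checks are my own addition; the region value assumes the europe-west1 default from the build_location variable:

# Optional checks (not from the original post); the region matches the assumed
# build_location default (europe-west1) in the Terraform configuration above.
gcloud builds triggers list --region=europe-west1 --project=$PROJECT_ID
gcloud builds list --region=europe-west1 --project=$PROJECT_ID --limit=3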
2023-08-16 16:00:00 |