Posted: 2022-03-29 23:37:44. RSS feed digest as of 2022-03-29 23:00 (41 items)

Category | Site | Article title / trend word | Link URL | Frequent words, summary / search volume | Date registered
IT ITmedia - All articles [ITmedia News] "PS Plus" relaunched: up to 240 titles from older-generation consoles (original PlayStation, PS2, PSP, PS3, and more) become playable https://www.itmedia.co.jp/news/articles/2203/29/news182.html itmedia 2022-03-29 22:52:00
AWS AWS Government, Education, and Nonprofits Blog AWS GovCloud (US) or standard? Selecting the right AWS partition https://aws.amazon.com/blogs/publicsector/aws-govcloud-us-standard-selecting-right-aws-partition/ This blog post explores the options US public sector customers and their business partners should evaluate when selecting an AWS partition. We discuss the differences between AWS GovCloud (US) and the AWS standard partition, and how to decide which partition is the best match for your organization's security, compliance, availability, and cost needs. 2022-03-29 13:04:30
python New posts tagged "Python" - Qiita Copying lists in pandas https://qiita.com/kam_Qiita/items/2ef5fcd9be6b11a83707 Expected result: before/after. A working example: import copy; df_after = df_before.apply(lambda x: copy.deepcopy(x)). As shown, each cell has to be deep-copied one at a time like this. 2022-03-29 22:57:29
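The pattern in the summary above (deep-copying a column of lists cell by cell, since a plain copy would still share the inner list objects) can be sketched as follows; the column name and data are illustrative, not from the original post:

```python
import copy

import pandas as pd

# A DataFrame column holding mutable lists: a plain .copy() still shares
# the inner list objects between the two frames.
df_before = pd.DataFrame({"values": [[1, 2], [3, 4]]})

# Deep-copy each cell so the copied frame owns independent list objects.
df_after = df_before.copy()
df_after["values"] = df_before["values"].apply(lambda x: copy.deepcopy(x))

# Mutating a list in the copy does not affect the original.
df_after["values"].iloc[0].append(99)
print(df_before["values"].iloc[0])  # [1, 2]
```

Without the `copy.deepcopy` step, appending to a list in `df_after` would also change `df_before`, because both frames would reference the same list objects.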
python New posts tagged "Python" - Qiita Running a file in a Python virtual environment https://qiita.com/nvl-xx/items/a7d712077b889f102362 Introduction: this page is a memorandum in which a Python beginner writes down what they have studied. 2022-03-29 22:55:24
AWS New posts tagged "AWS" - Qiita ID federation without showing Amazon Cognito's Hosted UI https://qiita.com/ta__k0/items/7af50df655a1812a2fc3 Introduction: this post records how, when an ID provider is configured in Amazon Cognito and the Hosted UI is in use, to display the ID provider's sign-in screen directly without showing the Hosted UI. 2022-03-29 22:47:52
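The technique that article describes usually comes down to linking users straight to the user pool's /oauth2/authorize endpoint with an identity_provider query parameter, so Cognito redirects to the IdP immediately instead of rendering its own sign-in page. A minimal sketch of building such a URL; the domain, client ID, and provider name below are placeholders, not values from the post:

```python
from urllib.parse import urlencode


def idp_login_url(domain: str, client_id: str, provider: str, redirect_uri: str) -> str:
    """Build an authorize URL that sends the user directly to the given
    identity provider instead of showing Cognito's Hosted UI."""
    params = {
        "identity_provider": provider,  # e.g. "Google" or a configured SAML provider name
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid email",
    }
    return f"https://{domain}/oauth2/authorize?{urlencode(params)}"


url = idp_login_url(
    "example.auth.us-east-1.amazoncognito.com",  # placeholder Cognito domain
    "abc123",                                    # placeholder app client ID
    "Google",
    "https://example.com/callback",
)
print(url)
```

The redirect_uri must match one of the callback URLs configured on the app client, and the provider name must match the identity provider's name in the user pool.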
海外TECH MakeUseOf Where to Watch the 2022 Oscar-Winning Movies Online https://www.makeuseof.com/where-to-watch-2022-oscar-winners/ awards 2022-03-29 13:46:12
海外TECH MakeUseOf Everything You Need to Know About Gmail's Login Security Warning Alerts https://www.makeuseof.com/gmail-login-security-warning-alerts/ Google's login security alerts in Gmail can be pretty annoying, but is it a good idea to try and disable them? Let's dive in and find out. 2022-03-29 13:45:14
海外TECH MakeUseOf How to Mask a Clip Using Filmora (Desktop and Mobile) https://www.makeuseof.com/how-to-mask-clip-filmora/ filmora 2022-03-29 13:30:14
海外TECH MakeUseOf ADO D30 Ebike Review: Pure Pedal-Assisted Power https://www.makeuseof.com/ado-d30-ebike-review/ assisted 2022-03-29 13:05:14
海外TECH DEV Community Maintaining An Open Source Project - Cake Walk Or No? https://dev.to/appwrite/maintaining-an-open-source-project-cake-walk-or-no-4o8f At Appwrite, we aim to build awesome products and spread the word about open source. With continued efforts to spread awareness about open source, we come up with new and exciting Twitter Spaces every month. Last week we hosted a Space on "Maintaining an open source project". On top of that, we were joined by some amazing speakers, such as Ahmad Awais. What makes open source so great? Open source has led to the development of world-class products, and the best part of it is the community. Collaboration is a big weapon here: in the process of building something awesome, we also learn how to work with people who have different perspectives, backgrounds, knowledge levels, cultures, and experiences, which makes it more diverse and inclusive. New features, expanding existing features, bug finding: a lot can happen over discussions in open source projects. Here's a story from Ahmad on how he convinced Facebook to get licenses for React and GraphQL: "We built something with React; when you merge React core and WordPress core, millions of websites would be affected. There was a clause that said you can't build anything that would compete with Facebook. I started a GitHub issue, and one thing led to another; every maintainer pitched their framework, and we ended up convincing Facebook's legal team: if you open source something, open source it right. We had a bunch of discussions, and we ended up getting MIT licenses for both React and GraphQL. That was a big turning point for React." Collaboration around open source can potentially make an impact and do wonders. The dark side of open source: while there's a never-ending list of what we love about open source and how it is a major part of us, there are some issues that a maintainer has to face, the biggest one being criticism. In the words of our Founder and CEO, Eldad: "Code is poetry; not everyone is going to like it." What is important is to accept these criticisms with a light heart and work towards making the project better every day. In spite of some bad comments and controversies, there are businesses being built with open source, and many big companies are shifting to open source. GitHub Sponsors, BuyMeACoffee, or BuyMeABeer help with the monetary support; we are moving towards more open source, and people are realizing its huge value. Bonus: as a perk, Appwrite lets new employees choose an open source project they want to sponsor. Takeaways: Being a maintainer is not easy. When do we know that we are ready to become one? Is it when we get an interesting idea in mind? Or do we wait a few years, gain some experience as a contributor first, and then make a move? Here are some quick tips to tackle that question: Join decent-sized open source projects; learn how things work, how issues are managed, what breaking changes are, and how they release software. Use the language, framework, or project; the way to get into it is by using it. Be a part of a project you like; see how they handle community on Discord, how they handle PR reviews, etc. If your project is relatively new, don't overload yourself with Discord and all the other communication channels; use existing channels and go to the people who work in the same languages as you. Thank you for staying tuned till the end. If you think we have added some value, make sure to join our Discord channel and follow us on Twitter. Appwrite (@appwrite): "The work and life of an open source maintainer can be really strenuous, but also very fruitful. Join us for a Twitter Space to understand what it takes to maintain an open source project." twitter.com/i/spaces/jMJg… 2022-03-29 13:22:57
海外TECH DEV Community How to Migrate From ECS to EKS and the #1 Trick to Make EKS Easier https://dev.to/castai/how-to-migrate-from-ecs-to-eks-and-the-1-trick-to-make-eks-easier-fl0 Migrating from Amazon ECS to EKS is probably the last thing you want to spend your time on. Yet you're here, so it's likely that ECS hasn't been serving you fully. Or perhaps you're curious whether you should start planning the migration and how long it will take. I'll try to answer most of your questions, starting from whether you should look at EKS at all to some practical migration and EKS management tips for when you've made your decision. Why migrate from ECS to EKS? Companies that want to advance in Kubernetes are better off using EKS. Giants like Amazon, HSBC, JP Morgan Chase, and Delivery Hero all use EKS because of the control and flexibility it offers. Portability: while ECS is AWS proprietary technology, EKS is basically a Kubernetes-as-a-platform service developed and maintained by AWS. EKS clusters are actually portable: you can recreate a similar experience in local environments, development environments, etc., using vanilla K8s. So you can probably tell in which scenario you'll face the risk of cloud vendor lock-in. If you're building and running applications in ECS, you might encounter vendor lock-in issues in the long run: if you decide to use another provider, you'll have to redefine the entire architecture to match it. That's why designing your application to run on EKS leaves you more flexibility. The abstraction layer of EKS helps you package your containers and move them to another platform quickly. That way you can run workloads on any other Kubernetes cluster, whether it's on-prem or with the cloud provider offering you the best deal. On top of that, you can find solutions on the market that allow you to switch between different managed Kubernetes services seamlessly. Open source and community are two important points related to this. With EKS, you have lots of tooling built on top of it, and the community itself is growing rapidly. And you know what that means: you get plenty of support, as many problems already have their solutions. Open source also allows you to choose your tooling, while in ECS everything is very opinionated and there's not much flexibility left at the end of the day. Networking limitations: Amazon ECS allows users to assign an elastic network interface (ENI) to a task using only one networking mode, awsvpc. Usually you can attach only a limited number of network interfaces per EC2 instance; ECS supports containers with higher limits as long as you meet specific prerequisites, but in total the number of tasks you can run per EC2 instance is capped. In EKS, you get to enjoy greater flexibility in networking: you can share an ENI between multiple pods and place more pods per instance. Namespaces: namespaces come in handy because they isolate workloads running in the same Kubernetes cluster. For example, you can have dev, staging, and production environments in one cluster, all sharing the resources of the cluster. Trouble is, you can't use namespaces in ECS; the solution just doesn't include them as a concept. In contrast, EKS allows you to use them just as you would in self-managed Kubernetes. No configuration flexibility: many people choose ECS because it's so simple, but there's a price to pay for this, namely limited configuration options. For example, you get no access to cluster nodes, which limits your troubleshooting capabilities. And if you use ECS with Fargate, prepare for even more limitations; for example, you don't get the option to easily decouple environment-specific config from your container images for portability, as you do in EKS. ECS to EKS glossary: before migrating from ECS to EKS, you need to become familiar with a few terms that are common to Kubernetes. ECS vs. K8s building blocks: this is the best place to get started; reviewing the building blocks of ECS and Kubernetes will help you understand the differences between the two. EKS Worker Node: the EC2 instance that runs your workloads (Pods). IaC (Infrastructure as Code): tools that allow you to define infrastructure in code that you usually commit into Git repositories; in addition, you get Git-like diff output whenever there is a mismatch between your code and the infrastructure in the cloud. Examples include Pulumi, Terraform, and AWS CloudFormation. Helm: the most popular packaging solution in the Kubernetes world. ALB: Application Load Balancer. NLB: Network Load Balancer. Internet Gateway: a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. VPC (Amazon Virtual Private Cloud): Amazon VPC enables you to launch AWS resources into a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. ECS comes with its own terminology that you're probably familiar with. Here's how different ECS concepts translate into the world of EKS: ECS Task Definition <-> EKS Kubernetes Deployment YAML; ECS Task <-> EKS Kubernetes Pod; ECS Cluster <-> EKS Cluster. If you need more guidance, some good sources to check out are the ECS Workshop and the EKS Workshop. What you get with EKS: you might not need pods for every workload you're running, but you can't deny that they offer unparalleled control over pod placement and resource sharing. This is really valuable when you're dealing with most service-based architectures. EKS offers far more flexibility for managing the underlying resources: you can run your clusters on EC2 instances, on Fargate, and even on-premises via EKS Anywhere. If you're familiar with Kubernetes and want to get your hands on the flexibility and features it provides, go for EKS. How to migrate from ECS to EKS: AWS has some good tips on migrating to EKS, but we found some other things you need to take care of before your ECS to EKS migration. Rewrite ECS Task Definition files to K8s Deployment YAMLs: first things first, you need to rewrite your ECS Task Definition files as Kubernetes Deployment YAMLs. This part is unavoidable and relates to one of the biggest differences between ECS and EKS (or vanilla Kubernetes). Spin up your environment: you also need to spin up the respective environment versions you have on EKS. People typically choose to use Infrastructure as Code (IaC) for that, using Terraform, CloudFormation, or Pulumi. Good news: the most popular IaC tools support EKS. Assuming that you're already familiar with Docker and have application images packaged and available for use, you could use the features of Terraform or an alternative IaC tool, such as the kubernetes provider, to ship your Deployment YAMLs as part of the IaC flow. You could also make use of the Helm provider if your application packages use Helm. Alternatively, you could use CloudFormation, which allows deploying workloads to EKS clusters as well, assuming your applications are packaged using Helm. You can get the above working in many more ways, each with its pros and cons, but this simple solution is enough for now. Configure your CI/CD pipelines: you need to do this to deploy your applications into the EKS cluster. Networking: both ECS and EKS support similar networking capabilities (ALB and NLB). You can use the basic networking constructs that you're currently familiar with from ECS. This article might come in handy if you're looking for more details about ingress with ALB. Run some tests: run your test suite against your new configuration to make sure that everything works properly. Switch your traffic to the EKS cluster: this might vary depending on your configuration, but to give you an idea of what needs to be done, you could switch the IP your domain points to so that it points to the load balancer used by your EKS cluster. This is how you make sure that your application traffic now reaches the EKS cluster. For stateful applications, you need to think about other things as well, such as ensuring that the K8s-based application transitions to being the main user of the database (a smooth switch from the ECS app writing to the database to the EKS app writing to the database). Make your EKS journey easier with automation: configuring and running EKS doesn't have to be hard; this is what all the managed Kubernetes tools are here for. For example, CAST AI comes with an opinionated Kubernetes implementation that helps you manage all the infrastructure complexities, so you can focus on the high-level work that interests you most. Creating and managing CAST AI components is easy: you can do it through the API and Terraform to automate infrastructure lifecycle management. You get to streamline autoscaling with a headroom policy to accommodate sudden spikes in demand, and automate spot instance use to cut costs even more. If spot availability shrinks, your workloads are automatically moved to on-demand instances and never go down. All in all, you benefit from automated cloud cost optimization: the platform chooses the most cost-effective AWS instance types and sizes and delivers detailed cost reports. If you'd like to see how this works in real life, here's how the e-commerce agency Snow Commerce moved to EKS and now rolls out apps seamlessly with a fully automated environment. See how CAST AI works in real life. 2022-03-29 13:15:09
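The Task-Definition-to-Deployment rewrite described in the entry above can be illustrated with a minimal converter that maps container name, image, and port fields into a Deployment manifest. This is a sketch under simplified assumptions (a single container, no volumes, IAM roles, or logging config); the task definition values are made up, not taken from the article:

```python
def ecs_to_deployment(task_def: dict) -> dict:
    """Translate a minimal ECS task definition into a Kubernetes
    Deployment manifest (dict form, ready to dump as YAML)."""
    container = task_def["containerDefinitions"][0]
    app = task_def["family"]
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": app}},
            "template": {
                "metadata": {"labels": {"app": app}},
                "spec": {
                    "containers": [{
                        "name": container["name"],
                        "image": container["image"],
                        # ECS portMappings become containerPort entries.
                        "ports": [
                            {"containerPort": p["containerPort"]}
                            for p in container.get("portMappings", [])
                        ],
                    }]
                },
            },
        },
    }


# Hypothetical ECS task definition for a single nginx container.
manifest = ecs_to_deployment({
    "family": "web-app",
    "containerDefinitions": [{
        "name": "web",
        "image": "nginx:1.21",
        "portMappings": [{"containerPort": 80}],
    }],
})
print(manifest["kind"])
```

A real migration also has to handle CPU/memory requests, environment variables, secrets, and health checks, which have their own counterparts in the Pod spec.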
海外TECH DEV Community One-click Ubuntu on Heroku with noVNC access (在Heroku一鍵架Ubuntu並用noVNC連進去) https://dev.to/wade3c/zai-heroku-jian-jia-ubuntubing-yong-novnclian-jin-qu-26cn 2022-03-29 13:10:32
海外TECH DEV Community How we scaled ingestion to one million rows per second https://dev.to/crate/how-we-scaled-ingestion-to-one-million-rows-per-second-3dfc This post was written by Niklas Schmidtmer and originally published at crate.io. One of CrateDB's key strengths is scalability. With a truly distributed architecture, CrateDB serves high ingest loads while maintaining sub-second read performance in many scenarios. In this article, we explore the process of scaling ingestion throughput. While scaling, one can meet a number of challenges, which is why we set ourselves the goal of scaling to an ingest throughput of one million rows/s. As CrateDB indexes all columns by default, we understand ingestion as the process of inserting data from a client into CrateDB as well as indexing it. But this should not become yet another artificial benchmark purely optimized at showing off numbers. Instead, we went with a representative use case and will discuss the challenges we met on the way to reaching the one million rows/s throughput. Table of contents: The aim; The strategy; The ingest process; The benchmark process; The data model; The tools; The infrastructure; The results (Single node, Scaling out); The conclusion; The learnings.
The aim: Besides the rows/s target, we set ourselves additional goals to remain as close as possible to real-world applications. Representativeness: the data model must be representative, including numerical as well as textual columns in a non-trivial structure. Reproducibility: the provisioning of all involved infrastructure components should be highly automated and must be easily reproducible. Realism: no shortcuts shall be taken in the sole interest of increasing throughput, such as disabling CrateDB's indexing. Sustainability: the target throughput must be reached as an average value over a timespan of at least five minutes.
The strategy: The overall strategy to reach the throughput of one million rows/s is relatively straightforward: start a single-node cluster and find the ingest parameter values that yield the best performance, then add additional nodes one by one until reaching the target throughput. The single-node cluster sets the baseline throughput for one node. At first glance, it could be assumed that the number of required nodes can be calculated as target throughput / baseline throughput. Let's revisit CrateDB's architecture at this point. In CrateDB, a table is broken down into shards, and shards are distributed equally across the nodes. The nodes form a fully meshed network. (Figure: schematic representation of a multi-node CrateDB cluster.)
The ingest process: With that cluster architecture in mind, let's break down the processing of a single INSERT statement. A client sends a batched INSERT statement to any of the nodes; we call the selected node the query handler node. We will later be utilizing a load balancer with a round-robin algorithm to ensure that the query handling load is distributed equally across the cluster. The query handler node parses the INSERT statement and assigns each row a unique ID (_id). Based on this ID and certain shard metadata, the query handler node assigns each row its target shard. From here, two scenarios can apply: (a) the target shard is located on the query handler node, and rows are added to the shard locally; or (b) the target shard is located on a different node, so rows are serialized on the query handler node, transmitted over the network to the target node, deserialized, and finally written to the target shard. For additional information on custom shard allocation, please see the CREATE TABLE documentation. (Figure: distribution of rows to three shards based on the system-generated _id column.) Scenario (a) is best from a performance perspective, but as each node holds an equal number of shards (total shards / number of nodes), scenario (b) will be the more frequent one. Therefore, we have to expect a certain overhead factor when scaling, and throughput will be lower than baseline throughput × number of nodes.
The benchmark process: Before running any benchmarks, the core question is: how do we identify that a node or cluster has reached its optimal throughput? The first good candidate to look at is CPU usage. CPU cycles are required for query handling (parsing, planning, and executing queries) as well as for the actual indexing of data. Consistently high CPU usage is a good first indicator that the cluster is well utilized and busy. But looking at CPU usage alone can be misleading, as there is a fine line between utilizing and overloading the cluster. In CrateDB, each node has a number of thread pools for different operations, such as reading and writing data. A thread pool has a fixed number of threads that process operations. If no free thread is available, the request for a thread is rejected and the operation gets queued. To reach the best possible throughput, we aim to keep threads fully utilized and have the queue of INSERT queries filled sufficiently; threads should never be idle. However, we also don't want to overload the queue so much that queueing time negatively impacts throughput. The state of each node's thread pools can be inspected via the system table sys.nodes. The query below sums up all rejected operations across all thread pools and nodes; note that this metric isn't historized, so the number represents the total of rejected operations since the node's last restart: SELECT SUM(pools['rejected']) FROM (SELECT UNNEST(thread_pools) AS pools FROM sys.nodes) x; In our benchmarks, we increase concurrent INSERT queries up to a maximum where no significant number of rejections appears. For more permanent monitoring of rejected operations and several more metrics, take a look at CrateDB's JMX monitoring, as well as CrateDB and Prometheus for long-term metrics storage.
The data model: On the CrateDB side, the data model consists of a single table that stores CPU usage statistics from Unix-based operating systems. The data model was adopted from Timescale's Time Series Benchmark Suite. The tags column is a dynamic object, which is provided as a JSON document during ingest; this JSON document describes the host on which the CPU metrics were captured. One row consists of numeric metrics, each modeled as a top-level column: CREATE TABLE IF NOT EXISTS doc.cpu (tags OBJECT(DYNAMIC) AS (arch TEXT, datacenter TEXT, hostname TEXT, os TEXT, rack TEXT, region TEXT, service TEXT, service_environment TEXT, service_version TEXT, team TEXT), ts TIMESTAMP WITH TIME ZONE, usage_user INTEGER, usage_system INTEGER, usage_idle INTEGER, usage_nice INTEGER, usage_iowait INTEGER, usage_irq INTEGER, usage_softirq INTEGER, usage_steal INTEGER, usage_guest INTEGER, usage_guest_nice INTEGER) CLUSTERED INTO <number of shards> SHARDS WITH (number_of_replicas = 0); The number of shards will be determined later as part of the benchmarks. Replication (redundant data storage) is disabled so that we can measure pure ingest performance. All other table settings remain at their default values, which also means that all columns will be indexed.
The tools: To provision the infrastructure that our benchmark runs on, as well as to generate the INSERT statements, we make use of two tools. crate-terraform: Terraform scripts to easily start CrateDB clusters in the cloud. For a given set of variable values, it spins up a CrateDB cluster in a reproducible way, including a load balancer. It also allows configuring certain performance-critical properties, such as disk throughput. Going with Terraform guarantees that the setup will be easy to reproduce; we run all infrastructure on the AWS cloud. nodeIngestBench: the client tool generating batched INSERT statements. Implemented in Node.js, it provides the needed high concurrency with a pool of workers that run as separate child processes.
The infrastructure: For CrateDB nodes, we chose Graviton-based xlarge instances. Based on the AWS Graviton (ARM) architecture, they provide powerful resources at a comparably low cost. With their CPU cores and RAM, we try to get a high base throughput for a single node and therefore keep the number of nodes low. Each node has a separate disk containing CrateDB's data directory, which we provision with sufficient MiB/s throughput and IOPS so that disks will not become a bottleneck. Additionally, we spin up another EC2 instance that runs the Node.js ingest tool. We do not actually require the CPU cores and RAM that the benchmark instance provides, but it is the smallest available instance type with guaranteed network throughput. To keep latency between the CrateDB cluster and the benchmark instance as low as possible, all of them are placed in the same subnet. We also configure the load balancer to be an internal one, so that all traffic remains within the subnet. (Figure: AWS setup used for the benchmarks.) Below is the complete Terraform configuration; please see crate-terraform for details on how to apply it:
  module "cratedb-cluster" {
    source                   = "github.com/crate/crate-terraform.git//aws"
    region                   = "eu-west-…"
    vpc_id                   = "vpc-…"
    subnet_ids               = ["subnet-…"]
    availability_zones       = ["eu-west-…b"]
    ssh_keypair              = "cratedb-terraform"
    ssh_access               = true
    instance_type            = "…xlarge"
    instance_architecture    = "arm64"
    # The size of the disk storing CrateDB's data directory
    disk_size_gb             = …
    disk_iops                = …
    disk_throughput          = …  # MiB/s
    # CrateDB-specific configuration
    crate = {
      heap_size_gb = …  # Java heap size in GB available to CrateDB
      cluster_name = "crate-cluster"
      cluster_size = …  # the number of nodes; increase to scale the cluster
      ssl_enable   = true
    }
    enable_utility_vm        = true
    load_balancer_internal   = true
    cratedb_tar_download_url = "…"
    utility_vm = {
      instance_type         = "…xlarge"
      instance_architecture = "arm64"
      disk_size_gb          = …
    }
  }
  output "cratedb" {
    value     = module.cratedb-cluster
    sensitive = true
  }
The results: Each benchmark run is represented by a corresponding call of our nodeIngestBench client tool: node appCluster.js --batchsize … --shards <number of shards> --processes <processes> --max_rows … --concurrent_requests <concurrent requests>. Let's break down the meaning of each parameter. batchsize: the number of rows passed in a single INSERT statement; a relatively high value keeps the query handling overhead low. shards: the number of shards the table will be split into. Each shard can be written independently, so we aim for a number that allows for enough concurrency. On adding a new node, we increase the number of shards; shards are automatically distributed equally across the cluster. For a real-world table setup, please also consider our Sharding and Partitioning Guide for Time Series Data. processes: the main Node process starts this number of child processes (workers) that generate INSERT statements in parallel. max_rows: the maximum number of rows that each child process will generate; it can be used to control the overall runtime of the tool. We lower it slightly when scaling, to keep the runtime at around five minutes. concurrent_requests: the number of INSERT statements that each child process runs concurrently as asynchronous operations. Single node: We start simple, with a single-node deployment, to determine the throughput baseline. We chose the same number of shards as available CPU cores. A single client process with several concurrent queries establishes the baseline throughput in rows/s. Scaling out: We scale horizontally by adding one additional node at a time. With each node, we also add another ingest client process to increase concurrency. However, as indicated before, with every additional node the node-to-node traffic also increases. Since this has a negative impact on cluster-wide throughput, we cannot scale the ingest load in a strictly linear way. Instead, we observe the rejected thread pool count after each scaling step and decrease the concurrent_requests parameter by one if needed. Below are the full results, which also include all required information to reproduce the benchmark run on your own. We reach the target throughput of one million rows/s with a multi-node cluster; as each row contains multiple metrics, this equals an even higher metrics/s throughput. The max_rows parameter was reduced per child process for the final cluster size to remain within a runtime of five minutes. Overall, this leads to a table cardinality of many millions of rows, consuming a substantial amount of disk space (including indexes) after running OPTIMIZE TABLE. Assuming each node in the cluster contributed equally to the overall throughput, we can derive a per-node throughput in rows/s; it can be seen that the per-node throughput continuously decreases while scaling.
The conclusion: Our ingest scenario replicated a use case of multiple concurrently ingesting clients. Each sent seven queries in parallel, and together they generated millions of rows (and metrics) that were ingested at a calculative rate of slightly above one million rows per second. With every additional node, we simulated the addition of another client process and saw a linear increase in throughput. With increasing cluster size, the throughput per node slightly decreased. This can be explained by the increasing distribution of data and the greater impact of node-to-node network traffic. Once a certain cluster size is reached, the impact of that effect becomes smaller. As a consequence, plotting the measured throughput against the projected throughput excluding any overhead (measured throughput before scaling × cluster size increase) shows that both lines clearly diverge. To take the node-to-node communication overhead into consideration, we reduce the projected throughput by a fixed share on each scaling step (measured throughput before scaling × cluster size increase × overhead factor). Measured and expected throughput now match very closely, indicating that on each scaling step a constant share of the cluster size increase is taken up by node-to-node communication. Despite the overhead, our findings still clearly show that scaling a CrateDB cluster is an adequate measure to increase ingest throughput without having to sacrifice indexing.
The learnings: We want to wrap up these results with a summary of the learnings we made throughout this process; they can serve as a checklist for your own CrateDB ingest tuning. Disk performance: The disks storing CrateDB's data directory must have enough throughput; SSD drives are a must-have for good performance, as peak throughput can easily reach rates of hundreds of MiB/s. Monitor your disk throughput closely and be aware of hardware restrictions. Network performance: The throughput and latency of your network become relevant at high ingest rates. We saw substantial outgoing network throughput from the benchmark instance towards the CrateDB cluster. As node-to-node traffic increases while scaling, ensure the network also provides enough performance; when running in the cloud, certain instance types have restricted network performance. When working with a mixture of public and private IP addresses and hosts, ensure you consistently use the private ones, to prevent traffic from needlessly being routed through the slower public internet. Client-side performance: Especially when generating artificial benchmark data from a single tool, understand its concurrency limitations. In our case with Node.js, we initially generated asynchronous requests in a single process but still didn't get the maximum out of it: in Node.js, each process is still limited by a thread pool for asynchronous events, so only a multi-process approach was able to overcome this barrier. Always batch your INSERT statements; don't use single statements. Modern hardware: Use a modern hardware architecture. More recent CPU architectures have a performance advantage over older ones; a desktop machine will not be able to match the speed of a many-core server architecture of the latest generation. 2022-03-29 13:09:53
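The "always batch your INSERT statements" advice above boils down to sending one multi-row, parameterized statement per batch instead of one statement per row. A minimal, client-agnostic sketch of chunking rows and building such a statement (the table and column names are illustrative, and `?` placeholders assume a DB-API-style client):

```python
from itertools import islice
from typing import Iterable, Iterator, List, Tuple


def batched(rows: Iterable[tuple], size: int) -> Iterator[List[tuple]]:
    """Yield successive chunks of at most `size` rows."""
    it = iter(rows)
    while chunk := list(islice(it, size)):
        yield chunk


def build_insert(table: str, columns: List[str], chunk: List[tuple]) -> Tuple[str, list]:
    """Build a single multi-row INSERT statement and its flat parameter list."""
    row_placeholder = "(" + ", ".join(["?"] * len(columns)) + ")"
    sql = (f"INSERT INTO {table} ({', '.join(columns)}) VALUES "
           + ", ".join([row_placeholder] * len(chunk)))
    params = [value for row in chunk for value in row]
    return sql, params


# Hypothetical rows: (timestamp, usage_user) pairs for the doc.cpu table.
rows = [(i, i * 10) for i in range(5)]
for chunk in batched(rows, 2):
    sql, params = build_insert("doc.cpu", ["ts", "usage_user"], chunk)
    print(sql, params)
```

Each chunk then goes to the server as one round trip, which is what keeps the per-row query-handling overhead low in the benchmark described above.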
海外TECH DEV Community How I built my own blog without much coding https://dev.to/narasimha1997/how-i-built-my-own-blog-without-much-coding-1pjm How I built my own blog without much coding.

Two days back I started planning to build my own blogging site. Since it was a calm weekend, I had enough time to explore the various ways I could build one. Most of the initial solutions that came to mind involved building a full-fledged blogging application on my own, with many fancy features: a database, user registration, comments, likes, view counts, interactive content, etc. However, I soon decided not to go about it, because it would be overkill for what I intended to do. My requirements, to be precise, at a high level were as follows:

Create a blog without much coding, and it must be done in a few hours so I can enjoy my weekend.
It should be easy to add new posts every now and then, as easy as just creating a new file for every post.
Pagination: this was an important requirement, because I wanted viewers to see a few posts at a time in chronological order, without bombarding their UI with all the available posts in a single list; a single list would also increase the overall load time as the blog grows.
It should support Markdown syntax, because Markdown has good expressability while maintaining simplicity.
It should be easy to deploy and publish: in other words, I wanted something like a CI/CD mechanism deeply integrated with platforms like GitHub, because I wanted to use GitHub Pages for serving my blog.

Going further in this post, I will explain how each of these requirements was satisfied. After some exploration and quick googling, I found a tool called Jekyll; to my surprise, it more or less supported all my requirements, with some additions.

Jekyll to the rescue. Jekyll is a Ruby package that allows us to write content as plain text (using Markdown, as per my requirements) and transform it into a static website, without having to worry much about building something from scratch. It also allows for customization: we can add our own styles, header, footer, etc. To my surprise, GitHub provides capabilities to build GitHub Pages sites with Jekyll; it even has a well-established workflow that listens for commits, automatically triggers the build process, and publishes the site with the new changes. There are also many plugins built for Jekyll to extend its core functionality; thankfully, that includes a pagination plugin. I decided to write this post to help others get started easily without writing much code.

Getting started.

Create a GitHub repository and enable gh-pages. This is fairly easy; if you have used GitHub before, this will most probably be a cakewalk for you. Follow GitHub's tutorial to create a new repository, then follow its tutorial to enable the gh-pages feature for the repository you created. In my case, I wanted all the codebase related to my blog to be under the gh-pages branch and not under main or master, so I selected gh-pages as the source branch. GitHub also provides some pre-configured Jekyll themes for you to choose from; I selected the hacker theme, because I am a hacker fanboy who grew up watching The Matrix and Mr. Robot. Once done, clone the repository to make modifications locally and test it out. In my case it was:

    # clone the repository
    git clone git@github.com:<your-username>/<your-repo-name>.git
    # don't forget to check out the gh-pages branch
    git checkout gh-pages

Installing Ruby, gem and Jekyll for local development and testing. To test your blog locally, you might need to install Ruby and other tools; this will be useful during the initial stages, because you will be making a lot of changes to the codebase. Run these commands to install Ruby (I use Ubuntu; if you are on a different Linux distribution, one based on Red Hat, or another operating system, you can refer to the Jekyll installation page). On Ubuntu:

    # start with an update, just to stay updated
    sudo apt update
    # install ruby; gem will be installed along with Ruby.
    # We get tools like gcc, g++ and make via build-essential
    sudo apt install ruby-full build-essential zlib1g-dev

To make sure you are all set, just check the ruby and gem versions:

    ruby -v
    gem -v

The exact output can differ on your machine, based on the architecture and OS you are using. gem, or RubyGems, is a package manager for Ruby, just like how we have npm, pip and cargo for Node, Python and Rust. Jekyll must be downloaded as a gem package, so we use the gem command to do that. But for building the website locally we need a lot of other tools; the github-pages gem provides these tools for us, and jekyll is packaged along with github-pages. Therefore you need to install only the github-pages gem:

    # use sudo if you are getting a permission error
    gem install github-pages

Configure your blog. Once jekyll and the other tools are installed, you can set up your blog. The easiest way is to clone my repository and check out the gh-pages branch. Most of the source code you see in my repository is borrowed from tocttou/hacker-blog. Once cloned, copy the contents of my repository to your repository under the gh-pages branch. Run these commands:

    # clone my repo
    git clone git@github.com:Narasimha1997/blog.git
    # change directory to the repo you cloned just now
    cd blog
    # check out the gh-pages branch
    git checkout gh-pages
    # remove all my existing posts
    rm -r _posts/*.md
    # copy all the contents to your repo directory
    cp -r * /path/to/your/repo

Now go back to your project directory and edit the _config.yml file according to your needs. The current _config.yml looks like this:

    # title and description of the site, used in the <title> tag
    title: Narasimha Prasanna HN
    description: Software Developer - Python, JavaScript, Go, Rust
    # use hacker theme
    theme: jekyll-theme-hacker
    # this is the base URL; use http://localhost:4000/blog to access locally
    baseurl: /blog
    plugins:
      # use paginator plugin
      - jekyll-paginate
    defaults:
      - scope:
          path: ""
          type: posts
        values:
          layout: post
    source: .
    destination: ./_site
    permalink: /:title
    # display posts in a page
    paginate:
    paginate_path: "/page:num"
    # this will be displayed as the banner of the blog's home page
    banner: root@prasanna-desktop
    # your LinkedIn profile
    linkedin:
    # your GitHub profile
    github:
    # your portfolio
    portfolio:

The comments in this file will guide you to understand the meaning of each parameter. Once modified, you should be able to serve your blog locally:

    jekyll serve

Then you should be able to view the site at http://localhost:4000/blog. Jekyll supports live reloading, so you can view your changes reflected on the site without running the jekyll serve command again.

Publish your blog to GitHub. Once you are satisfied with the configuration, stage your changes, make a local commit and push it to the remote branch, i.e. gh-pages. This can be done by executing the following commands:

    git add .
    git commit -m "<some nice message>"
    git push origin gh-pages

Now go to the repository on GitHub; you will see that a workflow has been triggered. This workflow performs almost the same steps you did locally and deploys the website. Once the workflow is complete, you can check your blog live at https://<your-username>.github.io/<your-repo-name> (mine you can view here). Originally published on my blog. 2022-03-29 13:02:37
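The "new post is just a new file" workflow relies on Jekyll's convention for files in the _posts directory. As a hedged illustration (the filename, date and title here are invented examples, not taken from the original post), a new post would be a Markdown file named like 2022-03-29-hello-world.md, beginning with YAML front matter:

```markdown
---
layout: post          # may be omitted if _config.yml defaults already set it
title: "Hello, world"
---

The post body, written in **Markdown**, goes here.
```

On the next build, Jekyll picks the file up automatically, renders it with the post layout, and includes it in the paginated index in date order.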
Apple AppleInsider - Frontpage News How Apple's logo started out as the 'most expensive,' and became the most iconic https://appleinsider.com/articles/22/03/29/how-apples-logo-started-out-as-the-most-expensive-and-became-the-most-iconic?utm_medium=rss From its very first beginnings as an over-elaborate illustration to its familiar and distinctive shape today, Apple's logo has reflected the company's aims from the start. There was no Apple logo when Steve Wozniak built the Apple I, or when Steve Jobs sold it. But there was one when Apple was officially founded on April 1, 1976. It was designed by Apple's third founder, Ron Wayne, whose logo didn't last a great deal longer than he did: Wayne exited the company mere days after it was founded, and his original logo was replaced within a year. Read more 2022-03-29 13:50:14
Apple AppleInsider - Frontpage News New Flex 1U4 brings OWC's Thunderbolt storage to server racks https://appleinsider.com/articles/22/03/29/new-flex-1u4-brings-owcs-thunderbolt-storage-to-server-racks?utm_medium=rss OWC has expanded its Flex product line with the Flex 1U4, a Thunderbolt-based four-bay storage and docking station that can be mounted with servers on a rack. The OWC Flex 1U4 is not your typical docking and storage station, in that it isn't designed specifically for use on or near a desk like the ThunderBay Flex. Instead, the Flex 1U4 is intended to be used by enterprise and business customers alongside servers and other hardware installed in a rack. Containing four bays, the Flex 1U4 can hold up to four drives, including a mix of 3.5-inch and 2.5-inch SATA or SAS drives as well as U.2 and M.2 NVMe drives. The drives are held in hot-swappable bays, allowing for quick changes of storage when needed. Read more 2022-03-29 13:30:48
Apple AppleInsider - Frontpage News DOJ backs antitrust bill targeting Apple, Google, Amazon https://appleinsider.com/articles/22/03/29/doj-backs-antitrust-bill-targeting-apple-google-amazon?utm_medium=rss The US Department of Justice has endorsed the proposed antitrust bill that is intended to curb anticompetitive Big Tech companies, but which Apple says will instead harm consumers. The American Innovation and Choice Online Act is a legislative proposal that would make it illegal for large technology firms to promote their own services over those from rivals. While it remains a proposal, its chance of being passed into law has been increased by public support from the DOJ. According to the Wall Street Journal, the Department of Justice's acting assistant attorney general for legislative affairs, Peter Hyun, has detailed the DOJ's position in a letter sent to the Senate Judiciary Committee. Read more 2022-03-29 13:02:54
Apple AppleInsider - Frontpage News Top 7 Amazon deals for the week of March 28: Apple AirPods, Mac mini, TurboTax, iPad Air & more on sale https://appleinsider.com/articles/22/03/28/top-5-amazon-deals-for-the-week-of-march-28-airpods-mac-mini-turbotax-more-on-sale?utm_medium=rss Weekly deals on Apple products, high-end audio equipment, gaming gear and even tax software are going on now at Amazon, with cash discounts. We've rounded up our favorites for the week of March 28: save on Apple AirPods, TurboTax software for Mac, and audio equipment. Bargain hunters can check out Amazon's daily deals on its dedicated sale page, but we're highlighting our own top picks below. Read more 2022-03-29 13:44:51
海外TECH Engadget NVIDIA's GeForce RTX 3090 Ti is now available for a staggering $1,999 https://www.engadget.com/nvidia-geforce-rtx-3090-ti-price-135534339.html?src=rss NVIDIA's GeForce RTX 3090 Ti is finally here, and it's clear the no-compromise design comes with the steep price tag to match. The new flagship GPU is now available at an official price of $1,999. That's more than the base RTX 3090, and closer to the price of line-blurring GPUs like the old Titan RTX. And don't be surprised if you pay more thanks to ongoing shortages; we're already seeing more expensive cards at retailers. There's some justification for the steep price, at least. The RTX 3090 Ti effectively fulfills Ampere's potential, with a full 84 Streaming Multiprocessors enabled (instead of 82), higher clock speeds (1.56GHz base and 1.86GHz boost) and 24GB of second-generation, higher-clocked GDDR6X memory at 21Gbps. This consumes a massive 450W of thermal design power (the regular 3090 only uses 350W), but you'll know that your game or editing app will run as smoothly as possible with today's technology. The issue, as you might guess, is the word "today's." You're spending two grand on what's very clearly the swan song for NVIDIA's RTX 30-series graphics chips. The company already confirmed at GTC that its upcoming Ampere Next architecture, likely the basis for the RTX 40 series, is due later in 2022. You're spending a lot of money on a GPU that could feel outdated in a matter of months; the RTX 3090 Ti is for well-heeled gamers and creators who can't, or don't want to, wait to upgrade. 2022-03-29 13:55:34
海外TECH Engadget Unicode won't accept any new flag emoji https://www.engadget.com/unicode-no-flag-emoji-please-131509128.html?src=rss Don't expect to see new flags in your phone's emoji any time soon. The Unicode Consortium has warned it will "no longer accept proposals" for flag emoji, regardless of category. They're more trouble than they're worth, the organization said, whether it's the inherent politics or the value they bring. The Consortium noted that flag additions tend to "emphasize the exclusion of others." If the emoji team added regional flags for one country, for instance, it would highlight the lack of regional flags for other countries. Moreover, Unicode can't remove a character once added; while it can update emoji, it's hesitant to add a flag that might not last long. Usage was also a major concern. Flags are "by far" the least-used emoji, Unicode said, and aren't even used that often in social media bios. The Consortium is trying to limit the number of emoji it adds each year, and there isn't much incentive to add flags that won't see widespread adoption. In some cases, such as for additional LGBTQ flags, the outfit also believed its standard was "not an effective mechanism" for recognition, and it was expanding heart colors to help people take pride in their identities without using flags. This doesn't mean you'll never see flags again: flags are automatically recommended for any country with a Unicode region code that is recognized by the United Nations. For now, though, the flags you see will be fixed, unless there are significant political upheavals. 2022-03-29 13:15:09
海外TECH Engadget The Soundboks Go offers loudspeaker sound in a more portable package https://www.engadget.com/soundboks-go-bluetooth-loud-speaker-portable-131030862.html?src=rss When Soundboks released its Gen. 3 portable loudspeaker, I was happy it wasn't any larger than its predecessors; it was on the edge of being truly portable as it was. Now the company aims to deliver the same signature output capability in a new, smaller package. The Soundboks Go is about half the size of the Gen. 3, resembling more of a chubby briefcase than an end table, and it continues to offer the incredible battery life and connectivity that have become synonymous with the brand. As a bonus, the company is also launching Direkt, an in-app platform that will offer live-streamed sets from DJs and artists across the globe to Soundboks users. There's a convenient carry handle on top of the Soundboks Go, but you can also opt for the shoulder strap accessory. The speaker pairs a woofer with a silk dome tweeter, enclosed in an ABS and polycarbonate cabinet and protective grill; inside, the system relies on two Class D amplifiers to drive the audio output across its effective frequency range. Where the Gen. 3 had generous input and output possibilities, the Go has slimmed the options down to a single 3.5mm aux input. Bluetooth wireless connectivity will likely be the primary audio source for most casual users, and just like the previous model, the Soundboks Go will support TeamUp, using SKAA wireless technology to communicate with up to three other nearby Soundboks Gen. 3 or Go speakers. The speaker is built for the outdoors, with a rugged silicone rubber bumper around the edges and an IP rating that makes it dust tight and resistant to powerful jets of water, though it's not meant to be submerged. Battery life appears to be stellar at medium volume and holds up even at full blast. Like previous models, the battery is removable and swappable if you bring a backup; plus, you can continuously charge while playing if you have an outlet handy. The Soundboks app offers a custom EQ as well as preset sound profiles for extra bass, power and indoor listening. Starting in April, iOS users will get to enjoy the new Direkt live-streaming platform as part of the app experience. Every Friday and Saturday night, you'll be able to access live sets from clubs and studios in Copenhagen, Barcelona, London, Los Angeles and more, and the streams will remain available afterwards, so users across the globe can all enjoy them. Pre-orders for the new Soundboks Go start today, with release expected in April. 2022-03-29 13:10:30
海外TECH Engadget Arden brings BBQ indoors thanks to 'smoke elimination' technology https://www.engadget.com/firstbuild-arden-indoor-smoker-ge-appliances-130051688.html?src=rss As the weather warms up, it's time for aspiring pit masters to dust off their aprons and meat probes before heading outside for some low-and-slow cooking. Pellet grills have become a popular choice for backyard cooks, as they offer the flavor of food cooked over wood with a much more convenient fuel source. However, you still have to go outside to use one, and unless you have a screened-in porch or shelter of some kind, cooking in the rain is no fun. And if you live in an apartment, chances are you can't have a grill in the first place. FirstBuild, a product innovation lab backed by GE Appliances, has built Arden, an indoor smoker that burns wood pellets and "eliminates" the smoke, so it's safe to use in your kitchen. Like an outdoor pellet grill, the Arden has a hopper for the fuel and burns it to produce smoke to flavor foods. The difference here is that this countertop unit has a separate heating element that helps to regulate the temperature, so it's not solely relying on burning pellets to cook. FirstBuild says the Arden circulates smoke around the chamber before a "game-changing smoke elimination technology" uses "a catalyst" to get rid of it. The company explains that the small appliance doesn't have a filter you need to clean or replace; it just expels carbon dioxide and water vapor out of the back. Details are scarce on exactly what happens during that process, but it's clear the thing doesn't emit any smoke during a cook. FirstBuild is using smoke elimination tech here that was built for the Monogram Smart Hearth Oven. That appliance is an in-wall electric unit designed to mimic the performance of the wood-fired brick ovens used in restaurants for pizza, baking and roasting. The company says the idea for a smoker was first implemented in an old GE fridge that had been converted to a BBQ cooker; in order to bring the device indoors, away from the weather, the FirstBuild team outfitted it with the smoke-trapping tech from the Hearth Oven. The company says the smoker generated a ton of interest from its community, so it asked if people would buy one and how big it needed to be. The first answer was a resounding "yes," and the second was that it needed to fit on the counter like other kitchen appliances. Unlike the old refrigerator, the Arden is a moveable unit, so you can stash it somewhere else when it's not in use, unless you really want to dedicate counter space to showing it off. Despite the easy moving, it's still quite a large thing to have out all of the time: the Arden is about the size of a mini fridge, with enough capacity for two racks of ribs, a small brisket or "an average sized" pork butt. The device can also accommodate a whole chicken standing up on a rack or beer can. Three removable shelves allow you to fit things as needed, but based on FirstBuild's videos, you'll need to cut racks of ribs in half to make them fit. Cook times remain the same as outdoor smokers, so you're looking at three to five hours for ribs, and longer still for a pork butt, for example. The Arden cooks across a range of temperatures, and it allows you to use a meat probe to monitor internal temp. It can also hold foods at a certain temperature once they're done cooking, in case you can't get to them immediately. Since the Arden has separate heat sources for the pellets and the main heat setting, FirstBuild says the device offers more accurate overall temperature. The company says this smoker also uses far fewer pellets, since they smolder for flavor instead of burning to heat an entire grill. While barbecuers seem to be impressed by the smoke flavor the Arden prototypes impart, Mad Scientist BBQ's Jeremy Yoder noted that it's not as "complex" in the overall profile. According to Yoder, the smoke flavor is more on the surface: while you can certainly taste it, it hasn't penetrated the meat like hardwood coals or a full-size pellet grill can manage, and BBQ nerds will also notice the lack of a well-defined smoke ring. Yoder did confirm that the results on pork ribs are a massive improvement over what you can get faking it in a regular oven, and they were even better than what he'd had in some restaurants. There is a smoke level adjustment on the Arden control panel, so presumably you could dial that up to fit your desired taste profile. Speaking of smoke, it's unclear if the smoker stops smoldering pellets temporarily if you open the door during the cooking process. Like it has in the past, FirstBuild is taking the crowdfunding approach for initial pre-orders. If you snag one via Indiegogo, the earliest devotees can secure it at a discount off the expected MSRP; prices will go up during the course of the campaign, so opting in sooner will save you some cash. The only downside to ordering early is that backers will have to wait until summer to get one. However, the product lab crowdfunded the initial launch of the Opal Nugget Ice Maker, raising over a million dollars, so it has a history of delivering the goods. GE now offers a range of Opal machines, so it will be interesting to see what happens if the Arden hits or surpasses that mark. 2022-03-29 13:00:51
海外科学 NYT > Science Review: ‘Vagina Obscura,’ by Rachel E. Gross https://www.nytimes.com/2022/03/29/books/vagina-obscura-rachel-gross.html bodies 2022-03-29 13:49:32
ニュース BBC News - Home Queen attends Prince Philip memorial service at Westminster Abbey https://www.bbc.co.uk/news/uk-60902088?at_medium=RSS&at_campaign=KARANGA westminster 2022-03-29 13:10:23
ニュース BBC News - Home Russia-Ukraine war: Abramovich spotted in Istanbul peace talks https://www.bbc.co.uk/news/world-europe-60912474?at_medium=RSS&at_campaign=KARANGA billionaire 2022-03-29 13:49:27
ニュース BBC News - Home UK seizes first superyacht in British waters https://www.bbc.co.uk/news/business-60912754?at_medium=RSS&at_campaign=KARANGA businessman 2022-03-29 13:08:55
ニュース BBC News - Home P&O Ferries says sacking U-turn would cause collapse https://www.bbc.co.uk/news/business-60913206?at_medium=RSS&at_campaign=KARANGA secretary 2022-03-29 13:08:13
ニュース BBC News - Home SEND review: Children to receive earlier support in new government plans https://www.bbc.co.uk/news/education-60875163?at_medium=RSS&at_campaign=KARANGA government 2022-03-29 13:42:34
北海道 北海道新聞 英女王、夫の追悼式典に参列 つえ使い、歩いて入場 https://www.hokkaido-np.co.jp/article/662834/ 追悼 2022-03-29 22:19:07
北海道 北海道新聞 メッシが暗号資産の広告塔に 24億円超で3年契約 https://www.hokkaido-np.co.jp/article/662842/ 購入 2022-03-29 22:26:00
北海道 北海道新聞 余市―小樽間並行在来線バス転換合意 「住民への説明不足」 余市町議会委 疑問相次ぐ https://www.hokkaido-np.co.jp/article/662841/ 並行在来線 2022-03-29 22:26:00
北海道 北海道新聞 倶知安町、リゾート規制強化 施行時期には不透明感も 新年度に独自の景観計画、条例改正 https://www.hokkaido-np.co.jp/article/662840/ 不透明感 2022-03-29 22:25:00
北海道 北海道新聞 親の同意なく金融取引が可能に 18歳から、損失懸念も https://www.hokkaido-np.co.jp/article/662753/ 引き下げ 2022-03-29 22:24:11
北海道 北海道新聞 ロシア映画「ICE」が公開延期 ウクライナ侵攻が影響 https://www.hokkaido-np.co.jp/article/662795/ 公開予定 2022-03-29 22:22:04
北海道 北海道新聞 パートの加入「全企業で」 厚生年金巡り、政府社保会議 https://www.hokkaido-np.co.jp/article/662820/ 慶応義塾 2022-03-29 22:20:13
北海道 北海道新聞 チリ人被告、無罪を主張 筑波大生不明、フランスで初公判 https://www.hokkaido-np.co.jp/article/662821/ 被告 2022-03-29 22:20:07
北海道 北海道新聞 広3―2神(29日) 広島が逆転サヨナラで4連勝 https://www.hokkaido-np.co.jp/article/662835/ 開幕 2022-03-29 22:16:00
北海道 北海道新聞 日本代表、ベトナムと1―1 サッカーW杯最終予選 https://www.hokkaido-np.co.jp/article/662833/ 日本代表 2022-03-29 22:14:00
北海道 北海道新聞 NATO外相会合へ日本も招待 4月6~7日に開催 https://www.hokkaido-np.co.jp/article/662832/ 北大西洋条約機構 2022-03-29 22:13:00
北海道 北海道新聞 NY円、123円前半 https://www.hokkaido-np.co.jp/article/662826/ 外国為替市場 2022-03-29 22:06:00
北海道 北海道新聞 自民「5千円給付」白紙に 4月末にも緊急経済対策 https://www.hokkaido-np.co.jp/article/662751/ 岸田文雄 2022-03-29 22:05:21