IT |
気になる、記になる… |
"iPhone 15" rumored to get an upgraded UWB chip?? "iPhone 16" predicted to support Wi-Fi 7 |
https://taisy0.com/2023/06/19/173172.html
|
apple |
2023-06-19 08:59:03 |
IT |
気になる、記になる… |
povo2.0 to offer a limited-time "180GB data add-on (365 days)" topping for 22,400 yen; also an 8,000-yen-equivalent reward campaign for MNP switchers |
https://taisy0.com/2023/06/19/173168.html
|
carrier switching |
2023-06-19 08:42:43 |
IT |
ITmedia comprehensive article list |
[ITmedia News] Grant errors found in a d-Point campaign: both double grants and missed grants; about 1,000 points' worth to be corrected |
https://www.itmedia.co.jp/news/articles/2306/19/news153.html
|
itmedianewsd |
2023-06-19 17:30:00 |
IT |
ITmedia comprehensive article list |
[ITmedia News] Mizuho to improve system development and maintenance with generative AI, in a proof-of-concept with Fujitsu |
https://www.itmedia.co.jp/news/articles/2306/19/news149.html
|
itmedia |
2023-06-19 17:06:00 |
IT |
IT Leaders, the IT information site for information systems leaders |
NTTデータ先端技術 to sell Contrast Security's tool for testing web-system vulnerabilities | IT Leaders |
https://it.impress.co.jp/articles/-/24977
|
NTTデータ先端技術 announced that it will begin selling the vulnerability-testing software of U.S.-based Contrast Security. |
2023-06-19 17:36:00 |
AWS |
New posts tagged AWS - Qiita |
How to connect to an RDS instance in a private subnet |
https://qiita.com/utsunomiya_ff/items/1d441a08bb63b9f7cf06
|
ec2instanceconnect |
2023-06-19 17:16:22 |
Azure |
New posts tagged Azure - Qiita |
A time-zone problem encountered in web development using Azure PaaS |
https://qiita.com/masayakomono/items/316956bb7d7b76ad06c1
|
azure |
2023-06-19 17:47:13 |
Overseas TECH |
DEV Community |
Create your HTML forms without server for free! 💪🤑 |
https://dev.to/clement_grosieux/create-your-html-forms-without-server-for-free-7k5
|
Create your HTML forms without a server, for free! Would you like to integrate an HTML contact form into your website without setting up a server? I have the solution for you: a free service that takes care of everything.

Table of contents: service introduction (free), obtaining the API key (takes seconds), setting up the form, conclusion.

Service introduction (free): Today I present to you web3forms.com. There are many similar services, but according to my research this one offers the best free package. Currently you get free access to unlimited forms and a generous monthly submission quota. I find this to be more than sufficient, especially for a portfolio website. Additionally, if you wish to deploy the solution on multiple sites, you can simply provide a different email address.

Obtaining the API key: Go to the site and scroll down to the signup box. Once you have registered your email address (do not use a disposable email, as this is where you will receive the messages), you will receive your API key via email. And there you have it: you now have your API key.

Setting up the form: Create your form using the POST method, then simply add the webform URL as the action.

<form action="YOUR_WEBFORM_URL" method="POST">
  <input type="hidden" name="access_key" value="YOUR_ACCESS_KEY_HERE">
  <input type="text" name="name" required>
  <input type="email" name="email" required>
  <textarea name="message" required></textarea>
  <button type="submit">Submit Form</button>
</form>

And that's it, you're done! You will receive the form information via email.

Conclusion: It doesn't get any easier than this. This solution allows you to quickly deploy forms. You can further enhance it with captchas and verifications, but it remains just as simple. Feel free to ask any questions or provide any feedback, and don't forget to subscribe and like! |
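For readers who want to smoke-test such a form endpoint outside the browser, here is a minimal sketch in Python. It assumes only that the service accepts a standard form-encoded POST carrying the access_key field shown above; the endpoint URL and key are placeholders, not values from the article.

import requests

# Placeholders: substitute the form endpoint from the service's docs
# and the access key it emailed you.
FORM_ENDPOINT = "https://example.com/your-form-endpoint"
payload = {
    "access_key": "YOUR_ACCESS_KEY_HERE",
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "message": "Hello from a script!",
}

# The service treats this like a normal browser form submission.
response = requests.post(FORM_ENDPOINT, data=payload)
print(response.status_code, response.text) |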
2023-06-19 08:47:52 |
Overseas TECH |
DEV Community |
5 React Libraries to Level Up your Projects in 2023 |
https://dev.to/livecycle/5-react-libraries-to-level-up-your-projects-in-2023-3d5i
|
5 React Libraries to Level Up your Projects in 2023

TL;DR: In this article we'll look at libraries that can positively impact your React development experience by addressing some of the most common pain points in React development, such as data fetching, styling, accessibility and state management.

Introduction: Mastering the fundamentals of React is important, and the truth is you can get quite far without a ton of additional libraries. But there are a few foundational tools that can take your React development experience to the next level. These libraries address the most common pain points in React development, such as data fetching, styling, accessibility and state management, and they do so in a minimal and non-intrusive way. This enables gradual adoption across your code base. We've put together a list of five such libraries that we think you should be aware of.

Why this is important: Sharing devtools and impacting developer experience is important. When developers have access to the right tools and resources, they can spend more time building and creating, and less time with distractions, overhead and frustrations. That's why we're always on the lookout for useful projects to share, and it's also why we've recently launched an open-source tool of our own called "Preevy". Developers use Preevy to easily create shareable preview environments and better collaborate with others on the latest changes. Can you check out Preevy and leave us a star? It really helps us to continue contributing useful tools and content to the open-source community. Thank you! And now, with our introductions out of the way, let's jump in and see what these tools are all about.

TanStack React Query: Put simply, React Query makes fetching data in React a way better experience. But it's not a data-fetching library per se. Instead, it is a state-management library that deals with asynchronous server state. You provide it with an asynchronous function that fetches data. The useQuery hook then provides you with a bunch of useful utilities to handle the async function: a loading flag, result caching, and invalidation and re-fetching of results. This doesn't sound like that much, but the influence it has on big code bases cannot be understated. Typically, code bases have a lot of logic to share fetch results globally, refresh those when the data changes, trigger fetches, and so much more. Most of this is simply no longer needed when using React Query. Caching means you can call the useQuery hook all over your application and the data gets shared between all occurrences.

Zustand: Every React developer knows the pain involved in sharing state across your application. When first encountering the problem, you inevitably end up "prop drilling" the data across the component tree. Needless to say, that doesn't make for clean code and isn't sustainable in the long run. Thankfully, React came up with Context providers to solve this issue. Context is great if all you need to do is pass a few values down your component tree, but it can become cumbersome to use for more complex global stores, both because developers need to be careful about performance implications and because some developers aren't big fans of its API. If you want to step up from Context, Zustand is your best bet. It offers an extremely simple API that lets you create a store with values and functions. Then you can access that store from anywhere in your application to read and write values, reactivity included. If you want to store nested object data in your store, consider using Immer alongside Zustand to easily change nested state.

Framer Motion: Animations are one of the best ways to give your React application a modern and polished feel, but this isn't easy: using CSS animations is tricky and can result in a lot of code. In contrast, Framer Motion offers a powerful but simple API to create custom animations. It's natively integrated into the React ecosystem with a set of hooks and components. For example, code like this is all that's required to smoothly animate the transformation from a circle into a square (the numeric keyframe values here are representative):

import { motion } from "framer-motion"

export const MyComponent = () => (
  <motion.div
    animate={{
      scale: [1, 2, 2, 1, 1],          // representative keyframe values
      rotate: [0, 0, 270, 270, 0],
      borderRadius: ["50%", "50%", "0%", "0%", "50%"],
    }}
  />
)

Each value in the array represents a keyframe for the corresponding property; the animation then loops through them. Of course, you can do much more than simply defining keyframes with Framer Motion: you can also animate changes in layout, handle gestures, or animate based on scrolling.

Class Variance Authority (CVA): TailwindCSS has quickly risen to become the prime way of styling React applications, but building reusable UI elements with it can be a challenge. Say you create your own custom-styled button using Tailwind. Since you want to reuse it throughout your app, you create a component. But now you want multiple variants of that component: a primary style and a secondary style. So you need to piece your Tailwind classes together according to the prop value. Now you also want different colors and various sizes for your button, so you add more props and even more conditional logic to figure out the correct combination of Tailwind classes. This can get frustrating quite quickly. Enter CVA, short for Class Variance Authority. It's a simple library that takes the pain away from building composable React components with Tailwind class names. Take this example from their docs (the numeric suffixes of the Tailwind class names are representative):

import React from "react";
import { cva, type VariantProps } from "class-variance-authority";

const button = cva("button", {
  variants: {
    intent: {
      primary: ["bg-blue-500", "text-white", "border-transparent", "hover:bg-blue-600"],
      secondary: ["bg-white", "text-gray-800", "border-gray-400", "hover:bg-gray-100"],
    },
    size: {
      small: ["text-sm", "py-1", "px-2"],
      medium: ["text-base", "py-2", "px-4"],
    },
  },
  compoundVariants: [{ intent: "primary", size: "medium", class: "uppercase" }],
  defaultVariants: { intent: "primary", size: "medium" },
});

export interface ButtonProps
  extends React.ButtonHTMLAttributes<HTMLButtonElement>,
    VariantProps<typeof button> {}

export const Button: React.FC<ButtonProps> = ({ className, intent, size, ...props }) => (
  <button className={button({ intent, size, className })} {...props} />
);

We declaratively describe the button styles for each parameter value, and CVA then does the work of figuring out the correct combination of styles. We can even specify default variants to make certain properties optional.

Radix UI: If you like to build fully custom-styled interfaces but don't want to deal with the intricacies of developing high-fidelity, accessible UI components from scratch, Radix UI is for you. The library ships with various commonly used UI components, such as dialogs, checkboxes and dropdowns, but with a twist: while the components contain all the logic and interactivity, they have zero styling. This means you have full control over styling the components yourself, which enables you to build a truly custom UI system that doesn't look like every other website. While you have full control over styling, Radix does all the other work for you: all components are fully accessible, say, through keyboard navigation. If you like the flexibility of Radix but don't want to style everything from scratch, shadcn/ui is something you should check out. It's a fully modular component library built on top of Radix and Tailwind. Instead of installing an NPM package, you copy the code directly into your project and modify it to your liking.

Wrapping up: The libraries discussed in this article can help you bring your React applications to the next level. Adopting them will give your app a better experience for both users and developers. You can adopt all of them gradually in your project instead of with one big change, and they're very straightforward to get started with, so there's no need to spend hours on end studying the documentation before you can begin coding. Hope you found this helpful. |
2023-06-19 08:26:07 |
Overseas TECH |
DEV Community |
Building a Vision Transformer from Scratch in PyTorch 🔥 |
https://dev.to/akshayballal/building-a-vision-transformer-from-scratch-in-pytorch-1m1b
|
Building a Vision Transformer from Scratch in PyTorch

Introduction: In recent years, the field of computer vision has been revolutionized by the advent of transformer models. Originally designed for natural language processing tasks, transformers have proven to be incredibly powerful in capturing spatial dependencies in visual data as well. The Vision Transformer (ViT) is a prime example of this, presenting a novel architecture that achieves state-of-the-art performance on various image classification tasks.

In this article we will embark on a journey to build our very own Vision Transformer using PyTorch. By breaking down the implementation step by step, we aim to provide a comprehensive understanding of the ViT architecture and enable you to grasp its inner workings with clarity. Of course, we could always use PyTorch's inbuilt implementation of the Vision Transformer model, but what's the fun in that?

We will start by setting up the necessary dependencies and libraries, ensuring a smooth workflow throughout the project. Next, we will dive into data acquisition, where we obtain a suitable dataset to train our Vision Transformer model. To prepare the data for training, we will define the necessary transformations required for augmenting and normalizing the input images. With the data transformations in place, we will proceed to create a custom dataset and data loaders, setting the stage for training our model.

For understanding the Vision Transformer architecture, it is crucial to build it from scratch. In the subsequent sections we will dissect each component of the ViT model and explain its purpose. We will begin with the Patch Embedding Layer, responsible for dividing the input image into smaller patches and embedding them into a vector format. Following that, we will explore the Multi-Head Self-Attention Block, which allows the model to capture global and local relationships within the patches. Additionally, we will delve into the Multi-Layer Perceptron (MLP) Block, a key component that enables the model to capture hierarchical representations of the input data. By assembling these components we will construct the Transformer Block, which forms the core building block of the Vision Transformer. Finally, we will bring it all together by creating the ViT model, utilizing the components we have meticulously crafted. With the completed model, we can experiment with it, fine-tune it, and unleash its potential on various computer vision tasks.

By the end of this post you will have gained a solid understanding of the Vision Transformer architecture and its implementation in PyTorch. Armed with this knowledge, you will be able to modify and extend the model to suit your specific needs, or even build upon it for advanced computer vision applications. Let's start! You can find the Colab notebook with all the code at the end of this article.

Step 1: Install and Import Dependencies

!pip install -q torchinfo

import torch
from torch import nn
from torchinfo import summary

Step 2: Get the Data. Download the data from the web (it will be a zip file), extract the zip file, then delete it.

import requests
from pathlib import Path
import os
from zipfile import ZipFile

# Define the URL for the zip file
# (placeholder: a zip of the pizza/steak/sushi images is expected here)
url = "<URL of the pizza_steak_sushi zip file>"

# Send a GET request to download the file
response = requests.get(url)

# Define the paths to the data and image directories
data_path = Path("data")
image_path = data_path / "pizza_steak_sushi"

# Check if the image directory already exists
if image_path.is_dir():
    print(f"{image_path} directory exists.")
else:
    print(f"Did not find {image_path} directory, creating one...")
    image_path.mkdir(parents=True, exist_ok=True)

# Write the downloaded content to a zip file
with open(data_path / "pizza_steak_sushi.zip", "wb") as f:
    f.write(response.content)

# Extract the contents of the zip file to the image directory
with ZipFile(data_path / "pizza_steak_sushi.zip", "r") as zipref:
    zipref.extractall(image_path)

# Remove the downloaded zip file
os.remove(data_path / "pizza_steak_sushi.zip")

Step 3: Define Transformations. Resize the images to 224x224 (we choose this image size based on the ViT paper) and convert them to tensors using ToTensor.

from torchvision.transforms import Resize, Compose, ToTensor

# Define the train and test transforms using Compose
train_transform = Compose([Resize((224, 224)), ToTensor()])
test_transform = Compose([Resize((224, 224)), ToTensor()])

Step 4: Create Dataset and DataLoader. We can use PyTorch's ImageFolder dataset to create our datasets. For ImageFolder to work, this is how your data folder needs to be structured:

data
└─ pizza_steak_sushi
   ├─ test
   │  ├─ pizza
   │  ├─ steak
   │  └─ sushi
   └─ train
      ├─ pizza
      ├─ steak
      └─ sushi

All the pizza images will be in the pizza folder of the train and test sub-folders, and so on for all the classes that you have. There are two useful attributes you can access on the created training and test datasets: training_dataset.classes, which gives ['pizza', 'steak', 'sushi'], and training_dataset.class_to_idx, which gives {'pizza': 0, 'steak': 1, 'sushi': 2}.

from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

BATCH_SIZE = 32  # representative value; any batch size that evenly divides the data works

# Define the data directory
data_dir = Path("data/pizza_steak_sushi")

# Create the training and test datasets using ImageFolder
training_dataset = ImageFolder(root=data_dir / "train", transform=train_transform)
test_dataset = ImageFolder(root=data_dir / "test", transform=test_transform)

# Create the dataloaders
training_dataloader = DataLoader(dataset=training_dataset, shuffle=True, batch_size=BATCH_SIZE, num_workers=2)
test_dataloader = DataLoader(dataset=test_dataset, shuffle=False, batch_size=BATCH_SIZE, num_workers=2)

We can visualize a few training dataset images and see their labels:

import matplotlib.pyplot as plt
import random

num_rows = 4
num_cols = num_rows

# Create a figure with subplots
fig, axs = plt.subplots(num_rows, num_cols, figsize=(10, 10))

# Iterate over the subplots and display random images from the training dataset
for i in range(num_rows):
    for j in range(num_cols):
        # Choose a random index from the training dataset
        image_index = random.randrange(len(training_dataset))
        # Display the image in the subplot
        axs[i, j].imshow(training_dataset[image_index][0].permute(1, 2, 0))
        # Set the title of the subplot as the corresponding class name
        axs[i, j].set_title(training_dataset.classes[training_dataset[image_index][1]], color="white")
        # Disable the axis for better visualization
        axs[i, j].axis(False)

# Set the super title and background color of the figure
fig.suptitle(f"Random {num_rows * num_cols} images from the training dataset", fontsize=16, color="white")
fig.set_facecolor("black")
plt.show()

Understanding the Vision Transformer Architecture: Let us take some time now to understand the Vision Transformer architecture, as proposed in the original Vision Transformer paper. The Vision Transformer (ViT) is a type of transformer architecture designed for image-processing tasks. Unlike traditional transformers that operate on sequences of word embeddings, ViT operates on sequences of image embeddings. In other words, it breaks down an input image into patches and treats them as a sequence of learnable embeddings.
At a broad level, what ViT does is: (1) create patch embeddings; (2) pass the embeddings through transformer blocks: the patch embeddings, along with the classification token, are passed through multiple transformer blocks, each consisting of a Multi-Head Self-Attention (MSA) block and a Multi-Layer Perceptron (MLP) block. Skip connections are established between the input to the transformer block and the input to the MSA block, as well as between the input to the MLP block and the output of the MLP block; these skip connections help mitigate the vanishing-gradient problem as more transformer blocks are added; (3) perform classification: the final output from the transformer blocks is passed through an MLP block, and the classification token, which contains information about the input image's class, is used to make predictions. We will dive into each of these steps in detail, starting with the crucial process of creating patch embeddings.

Step 5: Create the Patch Embedding Layer. Per the ViT paper, we need to perform the following operations on the image before passing it to the Multi-Head Self-Attention transformer layer: convert the image into 16x16 patches and embed each patch into 768 dimensions, so each patch becomes a 1x768 vector (these sizes follow the ViT-Base configuration). There will be N = (H x W) / P^2 patches per image; with a 224x224 input this results in a 14x14x768 representation. Flatten that along the patch dimensions into a 196x768 matrix, which is our image-embedding sequence. Then prepend the class-token embedding to this output and add the position embeddings to the class-token and image embeddings.

PATCH_SIZE = 16
IMAGE_WIDTH = 224
IMAGE_HEIGHT = IMAGE_WIDTH
IMAGE_CHANNELS = 3
EMBEDDING_DIMS = IMAGE_CHANNELS * PATCH_SIZE**2                      # 768
NUM_OF_PATCHES = int((IMAGE_WIDTH * IMAGE_HEIGHT) / PATCH_SIZE**2)   # 196

# The image width and image height should be divisible by the patch size
assert IMAGE_WIDTH % PATCH_SIZE == 0 and IMAGE_HEIGHT % PATCH_SIZE == 0, "Image width is not divisible by patch size"

Converting the image into 16x16 patches and creating a 768-dimensional embedding vector for each patch can be accomplished using a Conv2d layer with a kernel size equal to the patch size and a stride equal to the patch size:

conv_layer = nn.Conv2d(in_channels=IMAGE_CHANNELS, out_channels=EMBEDDING_DIMS, kernel_size=PATCH_SIZE, stride=PATCH_SIZE)

We can pass a random image into the convolutional layer and see what happens:

random_images, random_labels = next(iter(training_dataloader))
random_image = random_images[0]

# Create a new figure and display the random image
fig = plt.figure()
plt.imshow(random_image.permute(1, 2, 0))
plt.axis(False)
plt.title(training_dataset.classes[random_labels[0]], color="white")
fig.set_facecolor("black")

We need to permute the shape to [1, 14, 14, 768] and flatten the output to [1, 196, 768]:

# Pass the image through the convolution layer
image_through_conv = conv_layer(random_image.unsqueeze(0))
print(f"Shape of embeddings through the conv layer -> {list(image_through_conv.shape)} <- [batch_size, embedding_dims, num_patch_rows, num_patch_cols]")

# Permute the dimensions of image_through_conv to match the expected shape
image_through_conv = image_through_conv.permute(0, 2, 3, 1)

# Create a flatten layer using nn.Flatten
flatten_layer = nn.Flatten(start_dim=1, end_dim=2)

# Pass image_through_conv through the flatten layer
image_through_conv_and_flatten = flatten_layer(image_through_conv)

# Print the shape of the embedded image
print(f"Shape of embeddings through the flatten layer -> {list(image_through_conv_and_flatten.shape)} <- [batch_size, num_of_patches, embedding_dims]")

# Assign the embedded image to a variable
embedded_image = image_through_conv_and_flatten
Prepending the class-token embedding and adding the position embeddings:

class_token_embeddings = nn.Parameter(torch.rand(1, 1, EMBEDDING_DIMS), requires_grad=True)
print(f"Shape of class token embeddings -> {list(class_token_embeddings.shape)} <- [batch_size, 1, embedding_dims]")

embedded_image_with_class_token_embeddings = torch.cat((class_token_embeddings, embedded_image), dim=1)
print(f"Shape of image embeddings with class token embeddings -> {list(embedded_image_with_class_token_embeddings.shape)} <- [batch_size, num_of_patches + 1, embedding_dims]")

position_embeddings = nn.Parameter(torch.rand(1, NUM_OF_PATCHES + 1, EMBEDDING_DIMS), requires_grad=True)
print(f"Shape of position embeddings -> {list(position_embeddings.shape)} <- [batch_size, num_patches + 1, embedding_dims]")

final_embeddings = embedded_image_with_class_token_embeddings + position_embeddings
print(f"Shape of final embeddings -> {list(final_embeddings.shape)} <- [batch_size, num_patches + 1, embedding_dims]")

Putting the PatchEmbeddingLayer together: we inherit from PyTorch's nn.Module to create our custom layer, which takes in an image and puts out the patch embeddings (the image embeddings with the class-token embedding prepended and the position embeddings added).

class PatchEmbeddingLayer(nn.Module):
    def __init__(self, in_channels, patch_size, embedding_dim):
        super().__init__()
        self.patch_size = patch_size
        self.embedding_dim = embedding_dim
        self.in_channels = in_channels
        self.conv_layer = nn.Conv2d(in_channels=in_channels, out_channels=embedding_dim, kernel_size=patch_size, stride=patch_size)
        self.flatten_layer = nn.Flatten(start_dim=1, end_dim=2)
        # One class token per example in the batch: every batch must therefore
        # contain exactly BATCH_SIZE images.
        self.class_token_embeddings = nn.Parameter(torch.rand(BATCH_SIZE, 1, EMBEDDING_DIMS), requires_grad=True)
        self.position_embeddings = nn.Parameter(torch.rand(1, NUM_OF_PATCHES + 1, EMBEDDING_DIMS), requires_grad=True)

    def forward(self, x):
        output = torch.cat((self.class_token_embeddings, self.flatten_layer(self.conv_layer(x).permute(0, 2, 3, 1))), dim=1) + self.position_embeddings
        return output

Let's pass a batch of random images through our patch embedding layer:

patch_embedding_layer = PatchEmbeddingLayer(in_channels=IMAGE_CHANNELS, patch_size=PATCH_SIZE, embedding_dim=EMBEDDING_DIMS)
patch_embeddings = patch_embedding_layer(random_images)
print(patch_embeddings.shape)  # torch.Size([32, 197, 768])

summary(model=patch_embedding_layer,
        input_size=(BATCH_SIZE, IMAGE_CHANNELS, IMAGE_WIDTH, IMAGE_HEIGHT),
        col_names=["input_size", "output_size", "num_params", "trainable"],
        col_width=20,
        row_settings=["var_names"])

Step 6: Create the Multi-Head Self-Attention (MSA) Block. As a first step of putting together our transformer block for the Vision Transformer model, we will create a Multi-Head Self-Attention block. Let us take a moment to understand it. The MSA block itself consists of a LayerNorm layer and the multi-head attention layer. The LayerNorm layer essentially normalizes our patch-embedding data across the embedding dimension. The multi-head attention layer takes the input data in the form of learnable vectors, namely query, key and value (collectively known as the qkv vectors). These vectors together form the relationship between each patch of the input sequence and every other patch in the same sequence, hence the name self-attention.
So our input shape to the MSA block will be the shape of the patch embeddings that we made using the PatchEmbeddingLayer: [batch_size, sequence_length, embedding_dims]. The output from the MSA layer will be of the same shape as the input.

Now let us write the code for our MSA block. This will be short, as PyTorch has pre-built implementations of LayerNorm and multi-head attention; we just have to pass the right arguments to suit our architecture. The required parameters come from the ViT-Base row of the paper's table: hidden size D = 768, 12 heads, and no dropout inside the MSA block.

class MultiHeadSelfAttentionBlock(nn.Module):
    def __init__(self,
                 embedding_dims=768,  # Hidden Size D in the ViT paper's table (ViT-Base)
                 num_heads=12,        # Heads in the ViT paper's table (ViT-Base)
                 attn_dropout=0.0):   # Defaults to zero, as there is no dropout in the MSA block per the ViT paper
        super().__init__()
        self.embedding_dims = embedding_dims
        self.num_heads = num_heads
        self.attn_dropout = attn_dropout
        self.layernorm = nn.LayerNorm(normalized_shape=embedding_dims)
        self.multiheadattention = nn.MultiheadAttention(num_heads=num_heads, embed_dim=embedding_dims, dropout=attn_dropout, batch_first=True)

    def forward(self, x):
        x = self.layernorm(x)
        output, _ = self.multiheadattention(query=x, key=x, value=x, need_weights=False)
        return output

Let's test our MSA block:

multihead_self_attention_block = MultiHeadSelfAttentionBlock(embedding_dims=EMBEDDING_DIMS, num_heads=12)
print(f"Shape of the input patch embeddings -> {list(patch_embeddings.shape)} <- [batch_size, num_patches + 1, embedding_dims]")
print(f"Shape of the output from the MSA block -> {list(multihead_self_attention_block(patch_embeddings).shape)} <- [batch_size, num_patches + 1, embedding_dims]")

Beautiful, so it seems our MSA block is working. We can get more information about the MSA block using torchinfo:

summary(model=multihead_self_attention_block,
        input_size=(BATCH_SIZE, NUM_OF_PATCHES + 1, EMBEDDING_DIMS),
        col_names=["input_size", "output_size", "num_params", "trainable"],
        col_width=20,
        row_settings=["var_names"])

Step 7: Create the Multi-Layer Perceptron (MLP) Block. The MLP block in the transformer is a combination of fully connected (linear) layers and a non-linear layer; in the case of ViT, the non-linearity is a GELU layer. The transformer also uses dropout to reduce overfitting. So the MLP block looks like this: Input -> Linear -> GELU -> Dropout -> Linear -> Dropout. According to the paper, the first linear layer scales the embedding dimension up to the MLP size of 3072 (for ViT-Base), the dropout is set to 0.1, and the second linear layer scales the dimension back down to the embedding dimension.

Now let us assemble our MLP block. According to the ViT paper, the output of the MSA block added to the input of the MSA block (denoted by the skip/residual connection in the architecture figure) is passed as input to the MLP block. All the layers are provided by PyTorch; we just need to assemble them:

class MachineLearningPerceptronBlock(nn.Module):  # the article's name for the MLP block
    def __init__(self, embedding_dims, mlp_size, mlp_dropout):
        super().__init__()
        self.embedding_dims = embedding_dims
        self.mlp_size = mlp_size
        self.dropout = mlp_dropout
        self.layernorm = nn.LayerNorm(normalized_shape=embedding_dims)
        self.mlp = nn.Sequential(
            nn.Linear(in_features=embedding_dims, out_features=mlp_size),
            nn.GELU(),
            nn.Dropout(p=mlp_dropout),
            nn.Linear(in_features=mlp_size, out_features=embedding_dims),
            nn.Dropout(p=mlp_dropout),
        )

    def forward(self, x):
        return self.mlp(self.layernorm(x))

Let's test our MLP block:

mlp_block = MachineLearningPerceptronBlock(embedding_dims=EMBEDDING_DIMS, mlp_size=3072, mlp_dropout=0.1)

summary(model=mlp_block,
        input_size=(BATCH_SIZE, NUM_OF_PATCHES + 1, EMBEDDING_DIMS),
        col_names=["input_size", "output_size", "num_params", "trainable"],
        col_width=20,
        row_settings=["var_names"])

Amazing, looks like the MLP block is also working as expected.

Step 8: Putting together the Transformer Block.

class TransformerBlock(nn.Module):
    def __init__(self, embedding_dims=768, mlp_dropout=0.1, attn_dropout=0.0, mlp_size=3072, num_heads=12):
        super().__init__()
        self.msa_block = MultiHeadSelfAttentionBlock(embedding_dims=embedding_dims, num_heads=num_heads, attn_dropout=attn_dropout)
        self.mlp_block = MachineLearningPerceptronBlock(embedding_dims=embedding_dims, mlp_size=mlp_size, mlp_dropout=mlp_dropout)

    def forward(self, x):
        x = self.msa_block(x) + x  # skip connection around the MSA block
        x = self.mlp_block(x) + x  # skip connection around the MLP block
        return x

Testing the Transformer block:

transformer_block = TransformerBlock(embedding_dims=EMBEDDING_DIMS, mlp_dropout=0.1, attn_dropout=0.0, mlp_size=3072, num_heads=12)
print(f"Shape of the input patch embeddings -> {list(patch_embeddings.shape)} <- [batch_size, num_patches + 1, embedding_dims]")
print(f"Shape of the output from the Transformer block -> {list(transformer_block(patch_embeddings).shape)} <- [batch_size, num_patches + 1, embedding_dims]")

summary(model=transformer_block,
        input_size=(BATCH_SIZE, NUM_OF_PATCHES + 1, EMBEDDING_DIMS),
        col_names=["input_size", "output_size", "num_params", "trainable"],
        col_width=20,
        row_settings=["var_names"])

Step 9: Creating the ViT Model. Finally, let's put together our ViT model. It's going to be as simple as combining whatever we have done till now. One slight addition is the classifier layer: in ViT, the classifier is a simple linear layer preceded by layer normalization, and classification is performed on the zeroth index (the class token) of the transformer's output.

class ViT(nn.Module):
    def __init__(self,
                 img_size=224,
                 in_channels=3,
                 patch_size=16,
                 embedding_dims=768,
                 num_transformer_layers=12,  # from the ViT-Base row of the paper's table
                 mlp_dropout=0.1,
                 attn_dropout=0.0,
                 mlp_size=3072,
                 num_heads=12,
                 num_classes=3):
        super().__init__()
        self.patch_embedding_layer = PatchEmbeddingLayer(in_channels=in_channels, patch_size=patch_size, embedding_dim=embedding_dims)
        self.transformer_encoder = nn.Sequential(
            *[TransformerBlock(embedding_dims=embedding_dims,
                               mlp_dropout=mlp_dropout,
                               attn_dropout=attn_dropout,
                               mlp_size=mlp_size,
                               num_heads=num_heads) for _ in range(num_transformer_layers)]
        )
        self.classifier = nn.Sequential(
            nn.LayerNorm(normalized_shape=embedding_dims),
            nn.Linear(in_features=embedding_dims, out_features=num_classes),
        )

    def forward(self, x):
        # Classify on the class token (index 0 of the output sequence)
        return self.classifier(self.transformer_encoder(self.patch_embedding_layer(x))[:, 0])

vit = ViT()
summary(model=vit,
        input_size=(BATCH_SIZE, IMAGE_CHANNELS, IMAGE_WIDTH, IMAGE_HEIGHT),
        col_names=["input_size", "output_size", "num_params", "trainable"],
        col_width=20,
        row_settings=["var_names"])

That's it! Now you can train this model just like you would any other model in PyTorch. Let me know how it works out for you.
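As a companion to that closing remark, here is a minimal training-loop sketch under stated assumptions: the optimizer and learning rate are conventional choices, not values from this article, and drop_last=True is needed because the PatchEmbeddingLayer above sizes its class token with BATCH_SIZE.

from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
vit = ViT().to(device)

# drop_last=True because the class-token parameter expects every batch
# to contain exactly BATCH_SIZE images.
train_loader = DataLoader(training_dataset, batch_size=BATCH_SIZE, shuffle=True, drop_last=True)

loss_fn = nn.CrossEntropyLoss()
# Adam with lr=3e-4 is an assumption, not a value from the article.
optimizer = torch.optim.Adam(vit.parameters(), lr=3e-4)

for epoch in range(10):
    vit.train()
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        logits = vit(images)            # [BATCH_SIZE, num_classes]
        loss = loss_fn(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch}: mean loss {running_loss / len(train_loader):.4f}")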
I hope this step-by-step guide has helped you understand the Vision Transformer and inspired you to dive deeper into the world of computer vision and transformer models. With the knowledge gained, you are well equipped to push the boundaries of computer vision and unlock the potential of these groundbreaking architectures. So go ahead, build upon what you have learned, and let your imagination run wild as you leverage the Vision Transformer to tackle exciting visual challenges. Happy coding!

Colab Notebook. Want to connect? My Website, My Twitter, My LinkedIn |
2023-06-19 08:15:57 |
Overseas TECH |
DEV Community |
Global Unique Constraint on a partitioned table in PostgreSQL and YugabyteDB |
https://dev.to/yugabyte/global-unique-constraint-on-a-partitioned-table-in-postgresql-and-yugabytedb-4nh6
|
Global Unique Constraint on a partitioned table in PostgreSQL and YugabyteDB

One limitation of PostgreSQL declarative partitioning, when compared to some other databases like Oracle, is the impossibility of creating global indexes. This includes the unique index that is necessary to enforce a primary key. It means that there is no built-in way to enforce the uniqueness of a key across all partitions, except when the key includes the partition key. In that latter case, local indexes that enforce local uniqueness are sufficient to guarantee the global uniqueness of the compound key, but some applications do not like compound primary keys.

With YugabyteDB you don't need declarative partitioning to scale, because tables are split with automatic sharding into small tablets, and at this level all indexes are global and can enforce uniqueness. However, on top of that you may want to use PostgreSQL declarative partitioning for two reasons: lifecycle management, with the ability to drop old partitions, or geo-partitioning, to assign partitions to specific regions with tablespaces. How to guarantee global uniqueness then?

There are two easy solutions when the application is designed to scale, and one alternative for legacy applications, which is the goal of this blog post. The easy solutions are: (1) a primary key generated from a UUID or a sequence, kept immutable; both are designed to generate unique values, with a high probability in the case of a UUID, or even a guarantee in the case of a sequence, so you may not need an additional index; (2) include the partition key in the primary key, which means adding the date (for lifecycle management) or the region (for geo-partitioning) to the local identifier; applications designed for geo-distribution should do that.

The other alternative is for legacy applications. If, for any reason, you want an additional guarantee of uniqueness for the part of the key that doesn't include the partition key, there is no other choice than having a global source of truth, and it will limit the scalability. A sequence is one of this kind, and sequences are optimized in YugabyteDB by limiting their transactional behavior to the minimum necessary: numbers are cached, and the incrementing operations are never rolled back. However, a global index must be fully ACID, so a transaction inserting a new primary key becomes a global transaction. The other approach, a global query that checks all partitions, must also be a serializable global transaction.

Alternative with a global table: Here is an example of building a global unique index as a table maintained by a trigger. In YugabyteDB, tables and indexes are the same thing, because a table is stored in its primary key. So in YugabyteDB this solution is not only logically equivalent to a global index but also physically equivalent. In PostgreSQL this solution is more limited, because there is no sharding, and a table with a primary key is two structures: a B-tree index and a heap table.

I've set up a cluster as in a previous post, a three-region cluster across the solar system just for fun, with Docker Compose. In short, I started docker compose up and created the tablespaces and the table below. The customers table is geo-partitioned to earth, moon and mars. Its primary key is a compound of a generated UUID (id) and the region identifier (planet).

yugabyte=# \d customers
                    Table "public.customers"
 Column | Type | Collation | Nullable | Default           | Storage
 id     | uuid |           | not null | gen_random_uuid() | plain
 planet | text |           | not null |                   | extended
 info   | text |           |          |                   | extended
Partition key: LIST (planet)
Indexes:
    "customers_pkey" PRIMARY KEY, lsm (id HASH, planet ASC)
Partitions: customers_earth FOR VALUES IN ('earth'),
            customers_mars FOR VALUES IN ('mars'),
            customers_moon FOR VALUES IN ('moon')

This is sufficient and optimal: an insert will be a local transaction, the composite primary key is guaranteed to be unique, and we would be very unlucky if we saw duplicate UUIDs.

Global unique index: If, for any reason, there is a need to guarantee that the UUID is globally unique, I cannot directly create a unique index:

yugabyte=# create unique index customers_unique_id on customers (id);
ERROR:  insufficient columns in UNIQUE constraint definition
DETAIL:  UNIQUE constraint on table "customers" lacks column "planet" which is part of the partition key.

Global table maintained by a trigger: Here is my alternative. I create a table with the same columns as the main table's primary key, but where only the id is in the primary key:

yugabyte=# create table customers_unique_id (id uuid primary key, planet text not null);
CREATE TABLE

This table is not partitioned. That is not a problem in YugabyteDB, because automatic sharding applies. The only thing you have to take care of is that, if you partitioned for data-governance reasons to keep sensitive data in specific regions, the information in this global table must not contain sensitive information. This should not be a problem with a UUID and a region name. To guarantee the uniqueness of id I don't need another column, but I've added the region discriminant planet, as this table can also be used to find the region when only the id is known. This is an alternative to the previous post I'm taking the example from, where duplicate indexes are maintained for this purpose.

This table must be maintained automatically when rows are inserted or deleted, or when the id is updated, which should not happen as it is part of the primary key, but we are talking about legacy applications, so better be safe for all unexpected cases. Here is the trigger function (the UPDATE case runs both the delete and the insert so the global table stays in sync):

create or replace function customers_unique_id() returns trigger as $$
declare
  rows smallint;
begin
  if tg_op in ('DELETE', 'UPDATE') then
    delete from customers_unique_id
     where id = old.id and planet = old.planet;
    get diagnostics rows = row_count;
    if rows != 1 then
      raise '% affected % rows (expected: 1)', tg_op, rows;
    end if;
  end if;
  if tg_op in ('INSERT', 'UPDATE') then
    insert into customers_unique_id (id, planet)
    values (new.id, new.planet);
    get diagnostics rows = row_count;
    if rows != 1 then
      raise '% affected % rows (expected: 1)', tg_op, rows;
    end if;
  end if;
  return new;
end;
$$ language plpgsql;

and the trigger:

create trigger customers_unique_id
after delete or insert or update of id, planet on customers
for each row execute function customers_unique_id();

Finally, I initialize this table:

begin transaction;
delete from customers_unique_id;
insert into customers_unique_id select id, planet from customers;
end;

I did that within a transaction, starting with a delete in case there were some inserts after I created the trigger. In PostgreSQL you can do all of that, including the DDL, in a transaction, but you must be in serializable isolation for this case. However, doing so will lock the table in exclusive mode for the DDL commands. For online changes I prefer to separate the DDL (short, but with an exclusive lock) from the DML (can be long, but non-blocking). YugabyteDB has an optimistic approach for DDL (no exclusive lock, but applications may get a serializable error) and runs DDL out of the main transaction, so this is the right way to do it.

Testing DML: I try to insert the same id with different regions and check the correct behavior (the hex digits of the UUID literals were garbled in the source and are shown abbreviated):

yugabyte=# insert into customers values ('fdcafe-dead-beef-face-cffeecde', 'mars', 'one');
INSERT 0 1
yugabyte=# insert into customers values ('fdcafe-dead-beef-face-cffeecde', 'mars', 'one');
ERROR:  duplicate key value violates unique constraint "customers_mars_pkey"
yugabyte=# insert into customers values ('fdcafe-dead-beef-face-cffeecde', 'moon', 'one');
ERROR:  duplicate key value violates unique constraint "customers_unique_id_pkey"
yugabyte=# delete from customers where id::text like 'fdcafe%';
DELETE 1
yugabyte=# insert into customers values ('fdcafe-dead-beef-face-cffeecde', 'moon', 'one');
INSERT 0 1

DML that doesn't violate my business logic (id being globally unique) succeeded; the rest failed.

Performance and scalability: In my lab, the Docker Compose also starts a metrics container that runs my YBWR script every few seconds to show tablet activity. I ran the following, inserting rows into the moon region:

yugabyte=# insert into customers (planet, info)
           select 'moon', generate_series(1, 1000);  -- the original row count was garbled; 1000 is illustrative
INSERT

Before creating the trigger, the rocksdb seek/next/insert counters showed activity only on the tablets of customers_moon: the inserts have to seek (read) to check for duplicates in the partitioned table and write the rows to the LSM tree across its tablets. Those tablets are all in the moon region, and in this small lab they are even on the same server, so this is a local transaction.

I ran the same after creating the trigger: in addition to the customers_moon tablets, the counters now showed reads and writes on the tablets of customers_unique_id, whose leaders are spread across earth, moon and mars. This run was several times slower. The reads and writes are higher because of the new table to maintain, but the major difference is that multiple servers and regions are now touched by the transaction. YugabyteDB implements many optimizations for single-shard, single-server and single-region transactions; in the global case the transaction table itself can be remote. With this global table, we cannot benefit from those single-region optimizations.

To summarize: When you want to scale a geo-distributed application, you should (1) choose the right database: the closest to PostgreSQL is YugabyteDB, which provides all SQL features on a horizontally scalable infrastructure; and (2) design your application to run the critical services locally in one region, which means avoiding transactions that have to read and write to other regions. The SQL features that help with this design are declarative partitioning, composite primary keys, sequences, UUID generation, extensions and triggers. That's the reason why YugabyteDB implements all those features, by re-using the PostgreSQL code when possible and by pushing down the others to the distributed storage. The other distributed databases, which do not support triggers, require you to change your application and add, in addition to the business logic, the necessary code to validate data integrity (like uniqueness) and the regression tests for all race conditions on it. |
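To exercise the trigger from application code rather than psql, a client can attempt the two conflicting inserts and catch the error. Here is a minimal sketch using Python's psycopg2, which works because YugabyteDB speaks the PostgreSQL wire protocol; the connection settings are placeholders for a local cluster.

import uuid
import psycopg2

# Placeholder connection string: point it at any node of the cluster.
conn = psycopg2.connect("host=localhost port=5433 dbname=yugabyte user=yugabyte")
conn.autocommit = True
dup_id = str(uuid.uuid4())

with conn.cursor() as cur:
    cur.execute(
        "insert into customers (id, planet, info) values (%s, %s, %s)",
        (dup_id, "mars", "one"),
    )
    try:
        # Same id on another partition: the trigger-maintained global
        # table customers_unique_id should reject this.
        cur.execute(
            "insert into customers (id, planet, info) values (%s, %s, %s)",
            (dup_id, "moon", "two"),
        )
    except psycopg2.errors.UniqueViolation as e:
        print("global uniqueness enforced:", e.diag.constraint_name) |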
2023-06-19 08:10:35 |
Overseas TECH |
Engadget |
The FCC is preparing to take a 'fresh look' at internet data caps |
https://www.engadget.com/the-fcc-is-preparing-to-take-a-fresh-look-at-internet-data-caps-084245899.html?src=rss
|
The FCC is preparing to take a 'fresh look' at internet data caps. Federal Communications Commission (FCC) chairperson Jessica Rosenworcel wants to open a formal Notice of Inquiry into the impact of internet data caps on consumers, according to an FCC document spotted by Ars Technica. The regulator will also consider taking action to ensure that data caps don't harm competition or impact access to broadband services, according to the letter. "Internet access is no longer nice-to-have, but need-to-have for everyone, everywhere," Rosenworcel said in a statement. "When we need access to the internet, we aren't thinking about how much data it takes to complete a task, we just know it needs to get done. It's time the FCC take a fresh look at how data caps impact consumers and competition." With the Notice of Inquiry, the FCC would seek comment to better understand why the use of data caps continues to persist "despite increased broadband needs of consumers and providers' demonstrated technical ability to offer unlimited data plans," according to the letter. Rosenworcel would be unable to take any action on data caps at the moment, though. The FCC currently has just four members, two Democrats and two Republicans, as the Senate refused to confirm President Biden's first nominee, Gigi Sohn, and she subsequently withdrew her name from consideration. The White House has since nominated telecom attorney Anna Gomez, who appears to have the support of the telecom industry. A nomination hearing for Gomez is scheduled for this Thursday, June 22nd. During the COVID-19 pandemic, broadband provider Comcast temporarily removed data caps, but it continues to impose a 1.2TB data cap on certain contracts in some US regions. Charter's deal with the FCC not to impose data caps on its Spectrum service, struck when it acquired Time Warner, ended this year, but the company recently said it has no plans to restart data caps now that the condition has sunset. Along with the proposed Notice of Inquiry, the FCC has opened a new portal to allow consumers to share how data caps have affected them on fixed or wireless broadband networks, at fcc.gov/datacapstories. That will help the FCC determine how data caps impact access for everyone, "including those with disabilities, low-income consumers and historically disadvantaged communities," and access to online education, telehealth and remote work, the Commission wrote. |
2023-06-19 08:42:45 |
Java |
Java Code Geeks |
Prevent SQL Injections by Strengthening Your Web App Security |
https://www.javacodegeeks.com/2023/06/prevent-sql-injections-by-strengthen-your-web-app-security.html
|
Prevent SQL Injections by Strengthening Your Web App Security. Strengthening the security of your web application is of paramount importance in today's digital landscape. With increasing cybersecurity threats and potential vulnerabilities, it is crucial to implement robust security measures to protect your application, user data and infrastructure. This introduction provides an overview of key considerations and best practices for strengthening your web app |
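One concrete practice behind that overview: never build SQL by string interpolation; bind user input as parameters instead. A minimal sketch with Python's built-in sqlite3 module follows (the table and data are illustrative; the same idea applies to any driver, including PreparedStatement in Java):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable pattern (do NOT do this): the payload rewrites the query.
#   conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe pattern: the driver binds the value as data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] because no user is literally named "alice' OR '1'='1" |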
2023-06-19 08:08:04 |
Finance |
Bank of Japan: RSS |
[Press conference] Governor Ueda (June 16) |
http://www.boj.or.jp/about/press/kaiken_2023/kk230619a.pdf
|
Press conference |
2023-06-19 17:10:00 |
Overseas News |
Japan Times latest articles |
China’s top diplomat tells U.S. it must choose between ‘cooperation or conflict’ |
https://www.japantimes.co.jp/news/2023/06/19/asia-pacific/politics-diplomacy-asia-pacific/blinken-china-wang-yi-xi-jinping/
|
China's top diplomat tells U.S. it must choose between 'cooperation or conflict'. Wang Yi, China's top foreign policy official, held talks with U.S. Secretary of State Antony Blinken on Monday in Beijing, as the two sides looked |
2023-06-19 17:09:08 |
Overseas News |
Japan Times latest articles |
Seven missing divers safe off Okinawa after coast guard rescue |
https://www.japantimes.co.jp/news/2023/06/19/national/missing-divers-off-okinawa/
|
coral |
2023-06-19 17:29:35 |
Overseas News |
Japan Times latest articles |
COVID-19 was a natural experiment for climate policy |
https://www.japantimes.co.jp/opinion/2023/06/19/commentary/world-commentary/fossil-fuel-demand/
|
COVID-19 was a natural experiment for climate policy. When fossil fuel demand declines only in some countries, supply does not fall, because other parts of the world will absorb the unused fuel at lower |
2023-06-19 17:00:59 |
Overseas News |
Japan Times latest articles |
Iran wants to make a deal the U.S. must refuse |
https://www.japantimes.co.jp/opinion/2023/06/19/commentary/world-commentary/iran-deal/
|
Iran wants to make a deal the U.S. must refuse. Khamenei thinks he has the leverage for an agreement easing punishment while keeping Iran's nuclear infrastructure. The response should be sanctions, sanctions and more sanctions. |
2023-06-19 17:00:47 |
News |
BBC News - Home |
Mortgage rates: Average two-year fix now above 6% |
https://www.bbc.co.uk/news/business-65931132?at_medium=RSS&at_campaign=KARANGA
|
december |
2023-06-19 08:46:26 |
News |
BBC News - Home |
Boris Johnson: Rishi Sunak refuses to say if he will vote on Partygate report |
https://www.bbc.co.uk/news/uk-politics-65945198?at_medium=RSS&at_campaign=KARANGA
|
lockdown |
2023-06-19 08:41:50 |
News |
BBC News - Home |
Brendan Rodgers: Celtic to appoint their former manager as Ange Postecoglou's successor |
https://www.bbc.co.uk/sport/football/65894218?at_medium=RSS&at_campaign=KARANGA
|
Brendan Rodgers: Celtic to appoint their former manager as Ange Postecoglou's successor. Brendan Rodgers is expected to complete a return to Celtic four years and four months after his abrupt exit to Leicester City. |
2023-06-19 08:31:46 |
News |
Newsweek |
"That face is way too scary!" A "furious" creature at the front door: no wonder the dog refused its walk |
https://www.newsweekjapan.jp/stories/world/2023/06/post-101932.php
|
It is not certain that the individual filmed by shelbytarter was trying to protect its chicks, but it is fair to say the dog's judgment was not wrong. |
2023-06-19 17:20:00 |
IT |
週刊アスキー |
Eat the menu items that appear in a manga! NEWoMan Shinjuku collaborates with manga artist 午後 of "眠れぬ夜はケーキを焼いて" on "おいしい夏の話 15stories by 午後" |
https://weekly.ascii.jp/elem/000/004/141/4141528/
|
15storiesby |
2023-06-19 17:45:00 |
IT |
週刊アスキー |
OOFOS opens "OOFOS POP UP SHOP@新宿ルミネ1" in the 2F event space of Shinjuku LUMINE 1 |
https://weekly.ascii.jp/elem/000/004/141/4141544/
|
oofos |
2023-06-19 17:30:00 |
IT |
週刊アスキー |
In "Uma Musume," the 3-star trainable character "Marvelous Sunday" starts running today! |
https://weekly.ascii.jp/elem/000/004/141/4141549/
|
character training |
2023-06-19 17:15:00 |
IT |
週刊アスキー |
Shin-Yokohama Prince Hotel to hold a "Hawaiian Buffet" featuring meat-centric Hawaiian dishes |
https://weekly.ascii.jp/elem/000/004/141/4141511/
|
Shin-Yokohama Prince Hotel |
2023-06-19 17:10:00 |
IT |
週刊アスキー |
Latest version of "SVF Cloud for Salesforce," a cloud forms service that streamlines work in the sales-negotiation process, now available |
https://weekly.ascii.jp/elem/000/004/141/4141553/
|
now available |
2023-06-19 17:30:00 |
Marketing |
AdverTimes |
Cannes Lions 2023: nine jury members from Japan; Yoshihiro Yagi to chair the Industry Craft jury |
https://www.advertimes.com/20230619/article423336/
|
industrycraft |
2023-06-19 08:33:09 |
News |
THE BRIDGE |
Booking with ChatGPT changes the "travel magazine": how will ChatGPT change magazines? [Creators × Publishing event report, part 2] |
https://thebridge.jp/2023/06/creators-x-publishing-how-does-gen-ai-change-magazine-business-the-last-part
|
Booking with ChatGPT changes the "travel magazine": how will ChatGPT change magazines [Creators × Publishing event report, part 2]. Creators × Publishing is a study-group series on the theme of "the future of editing as seen through technology." |
2023-06-19 08:05:23 |