IT |
気になる、記になる… |
Apple launches the "Apple Music Classical" service (not available in Japan); an Android app is also coming soon |
https://taisy0.com/2023/03/28/170097.html
|
android |
2023-03-28 14:52:07 |
js |
New posts tagged "JavaScript" - Qiita |
Loop processing for arrays and objects |
https://qiita.com/manzoku_bukuro/items/4c569770de38a44e5944
|
const arr |
2023-03-28 23:20:33 |
js |
New posts tagged "JavaScript" - Qiita |
Usage and caveats of array manipulation methods |
https://qiita.com/manzoku_bukuro/items/c28c6f484315a0ff6ce2
|
let arr |
2023-03-28 23:20:15 |
AWS |
New posts tagged "AWS" - Qiita |
[AWS Q&A 365][EKS]Daily Five Common Questions #16 |
https://qiita.com/shinonome_taku/items/30105c1273974fc15483
|
amazon |
2023-03-28 23:59:14 |
AWS |
New posts tagged "AWS" - Qiita |
DNS basics you were too embarrassed to ask about |
https://qiita.com/T_unity/items/92a9dce90fcd59434548
|
関連 |
2023-03-28 23:57:24 |
AWS |
New posts tagged "AWS" - Qiita |
[Redshift] Pitfalls I hit when granting access permissions to a user |
https://qiita.com/kato_envsys/items/1c63650eeb32e16af888
|
permission denied for schema |
2023-03-28 23:13:07 |
AWS |
New posts tagged "AWS" - Qiita |
AWS Certified Machine Learning - Specialty pass report: 10th certification earned, one more to reach my goal |
https://qiita.com/meijab/items/3f8dc913e31453bda44e
|
chatgpt |
2023-03-28 23:07:43 |
Overseas TECH |
MakeUseOf |
6 Ways Microsoft Should Improve the Windows 11 Taskbar |
https://www.makeuseof.com/microsoft-improve-windows-11-taskbar/
|
windows |
2023-03-28 14:15:16 |
Overseas TECH |
DEV Community |
When Should we Move to Microservices? |
https://dev.to/codenameone/when-should-we-move-to-microservices-177c
|
When Should we Move to Microservices?

Last month I wrote about modular Monoliths and the value of modern Monolithic architecture. One of the more interesting discussions that came out of that post and video is the inverse discussion: when is it right to still pick Microservices? Like any design choice, the answer is subjective and depends on many things. But there are still general rules of thumb and global metrics we can use. Before we get into these problems, we need to understand what it means to have a Microservice architecture. Then we can gauge the benefit and price of having such an architecture.

Small Monoliths

A common misconception is that microservices are simply broken-down monoliths. This isn't the case. I've talked to quite a few people who still hold that notion; to be fair, they might have a point. This is how AWS defines Microservices:

"Microservices are an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs. These services are owned by small, self-contained teams. Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features."

Smaller monoliths might fit the definition, but they don't if you read between the lines. The words "independent" and "easier to scale" hint at the problem. The problem and advantage of a monolith is a single point of failure. By having one service, we can usually find problems more easily. The architecture is much simpler. If we break this service down into smaller pieces, we essentially create distributed points of failure. If one piece along the chain fails, the entire architecture breaks down. That isn't independent, and it isn't easier to scale. Microservices are NOT small monoliths, and breaking down the Monolith isn't only about working with smaller projects. It's about shifting the way we work.

What Makes a Microservice?

A good Microservice needs to follow these principles for robustness and scale:

- Divided by business function: this is a logical division. A Microservice is a standalone "product" that provides a complete package. This means that the team responsible for the Microservice can make all the changes required for the business without dependencies.
- Automation through CI/CD: without continuous delivery, the cost of updating would eliminate every benefit of Microservices. Independent deployment is implied, since a commit on one Microservice will only trigger the CD of that specific service. We can accomplish this through Kubernetes and Infrastructure as Code (IaC) solutions.
- Encapsulation: a service should hide its underlying implementation details. A service acts as a standalone product that publishes an API for other products. We commonly accomplish this via REST interfaces, but also messaging middleware. This is further enhanced with API Gateways.
- Decentralized, with no single point of failure: otherwise we would distribute failure.
- Failures should be isolated: without this, a single service going down could create a domino effect. Circuit breakers are probably the most important tools for isolating failures. To satisfy this, every microservice handles its own data. This means many databases, which can be challenging at times.
- Observable: this is required to deal with failures at scale. Without proper observability we are effectively blind, as the various teams can deploy automatically.

This is all good and well, but what does that mean in practical terms? Most of what it means is that we need to make several big changes to the way we handle some big ideas. We need to move more of the complexity to the DevOps team, and we need to handle cross-microservice transactional state differently. This is one of the hardest concepts to grasp when dealing with Microservices. In an ideal world, all our operations will be simple and contained in a small microservice. The service mesh framework surrounding our microservices will handle all the global complexities and manage our individual services for us. But that isn't the real world. In reality, our Microservices might have a transactional state that carries between the services. External services might fail, and for that we need to take some unique approaches.

Reliance on the DevOps Team

If your company doesn't have good DevOps and Platform Engineering teams, Microservices aren't an option. Instead of deploying one application, we might deploy hundreds because of the migration. While the individual deployments are simple and automated, you will still throw a lot of work at operations: when something doesn't work or doesn't connect, when a new service needs to integrate, or when the service configuration should be adapted. Operations carry a greater burden when working with Microservices. This requires great communication and collaboration. It also means the team managing a specific service needs to take some of the OPS burdens back. That isn't a simple task.

As developers, we need to know many of the tools used to tie our separate services back into a single unified service:

- Service Mesh lets us combine separate services and effectively acts as a load balancer between them. It also provides security, authorization, traffic control, and much more.
- API Gateways should be used instead of invoking the API directly. This can be awkward at times, but it's often essential to avoid costs, prevent rate limiting, and more.
- Feature Flags & Secrets are useful in a monolith as well, but they're impossible to manage at Microservice scale without dedicated tools.
- Circuit Breaking lets us kill a broken web service connection and recover gracefully. Without this, a single broken service can bring down the entire system.
- Identity management must be separate. You can't get away with an authentication table in the database when dealing with a Microservice environment.

I'll skip orchestration, CI/CD, etc., but they too need to be adapted for every service that comes out. Some of those tools are opaque to developers, but we need the help of DevOps in all the phases.

Saga Pattern

Stateless services would be ideal; carrying a state makes everything far more complex. If we stored the state in the client, we would need to send it back and forth all the time. If it is on the server, we would need to either fetch it constantly, cache it, or save it locally, and then all interaction would be performed against the current system. That eliminates the scalability of the system.

A typical Microservice will store data in its own database and work with local data. A service that needs remote information will typically cache some data to avoid round trips to the other service. This is one of the biggest reasons Microservices can scale. In a Monolith, the database often becomes the bottleneck of the application, which means that however efficient the Monolith is, it is limited by the speed at which we can store and retrieve the data. This has two major drawbacks:

- Size: the more data we have, the larger the database, and performance impacts all users at once. Imagine querying an SQL table of every purchase ever made on Amazon just to find your specific purchase.
- Domain: databases have different use cases. Some databases are optimized for consistency, write speed, read speed, time data, spatial data, and more. A microservice that tracks user information would probably use a time-series database, which is optimized for time-related information, whereas a purchase service will focus on a traditional, conservative ACID database.

Note that a Monolith can use more than one database. That can work perfectly well and can be very useful, but it's the exception, not the rule.

The Saga pattern works by using compensating transactions to undo the effects of a saga if it fails. When a saga fails, the compensating transaction is executed to undo the changes made by the previous transaction. This allows the system to recover from failures and maintain a consistent state. We can accomplish this with tools such as Apache Camel, but this is non-trivial and requires far more involvement than a typical transaction in a modern system. That means that for every major cross-service operation, you would need to implement the equivalent undo operation that restores the state. That is non-trivial. There are several tools for saga orchestration, but this is a big subject that is beyond the scope of this post; still, I will explain it in broad terms.

What's important to understand about Saga is that it avoids the classic ACID database principles and focuses on "eventual consistency". That means operations will bring the database to a consistent state at some point. That is a very difficult process. Imagine debugging a problem that only occurs when the system is in an inconsistent state…

The following image demonstrates the idea in broad terms. Let's say we have a money transfer process. For the money transfer we need to first allocate funds. We then verify that the recipient is valid and exists. Then we need to deduct the funds from our account. And finally, we need to add the money to the recipient's account. That is a successful transaction. With a regular database this would be one transaction, and we can see this in the blue column on the left. But if something goes wrong, we need to run the reverse process. If a failure occurs when allocating funds, we need to remove the allocation; we need to create a separate block of code that does the inverse operation of the allocation. If verifying a recipient fails, we need to remove that recipient, but then we also need to remove the allocation. If deducting the funds fails, we need to restore the funds, remove the recipient, and remove the allocation. Finally, if adding the funds to the recipient fails, we need to run all the undo operations.

Another problem in Saga is illustrated by the CAP theorem. CAP stands for Consistency, Availability, and Partition Tolerance. The problem is we need to pick any two… Don't get me wrong, you might have all three, but in case of a failure you can only guarantee two. Availability means that requests receive responses, but there's no guarantee that they contain the most recent writes. Consistency means that every read receives the most recent write or an error. Partition Tolerance means that everything will keep working even if many messages get dropped along the way. This differs greatly from our historic approach to failure with transactions.

Should We Pick Microservices?

Hopefully you now understand how hard it is to deploy Microservices properly. We need to make some big compromises. This new way isn't necessarily better; in some regards it is worse. But the proponents of Microservices still have a point: we can gain a lot through Microservices, and we should focus on those benefits too.

We mentioned the first requirement upfront: DevOps. Having a good DevOps team is a prerequisite to considering Microservices. I saw teams trying to hack their way through this without an OPS team, and they ended up spending more time on operational complexity than writing code. It wasn't worth the effort.

The biggest benefit of Microservices is to the team. That is why having a stable team and scope is crucial. Splitting teams into vertical teams that work independently is a tremendous benefit. The most modular monolith in the world can't compete with that. When we have hundreds of developers, merely following the git commits and tracking the code changes at scale becomes untenable. The value of Microservices is only realized in a large team.

This sounds reasonable enough, but in a startup environment things shift suddenly. A colleague of mine works for a startup that employed dozens of developers. They decided to follow a Microservice architecture and built a lot of them… Then came the downsizing, and maintaining dozens of services in multiple languages became a problem. Splitting a Monolith is hard but doable. Unifying Microservices into a Monolith is probably harder. I'm unaware of anyone who seriously tried to do that, but I would be curious to hear stories.

Not One Size

In order to move to a Microservice architecture, we need a bit of a mind shift. A good example is in the databases; say, a user tracking Microservice. In a Monolith we would write the data to a table and move on with our work. But this is problematic… As data scales, this user tracking table can end up containing a great deal of data that is hard to analyze in real time without impacting the rest of the operating system. With a Microservice we gain several advantages:

- The interface to the microservice can use messaging, which means the cost to send tracking information will be minimal.
- Tracking data can use a Time Series database, which would be more efficient for this use case.
- We can stream the data and process it asynchronously to derive additional value from that data.

There are complexities: data will no longer be localized. So if we send tracking data asynchronously, we need to send everything necessary, as the tracking service won't be able to go back to the original service to get additional metadata. But it has a locality advantage: if regulation about tracking storage changes, there's a single place where this data is stored.

Dynamic Control and Rollout

Did you ever push a release that broke production? I did, more than once; way too many times. That's a terrible feeling. Microservices can still fail in production, and can still fail catastrophically, but often their failure is more localized. It is also easier to roll them out to a specific subset of the system (canary) and verify. These are all policies that can be controlled in depth by the people who actually have their fingers on the user's pulse: OPS.

Observability for Microservices is essential and expensive, but also more powerful. Since everything occurs at the network layer, it is all exposed to the observability tools. An SRE or a DevOps engineer can understand a failure in greater detail. This comes at the expense of the developer, who might need to face increased complexity and limited tooling.

Applications can become too big to fail. Even with modularity, some of the largest monoliths around have so much code that it takes hours to run through a full CI/CD cycle. Then, if the deployment fails, reverting to the last good version might also take a while.

Segmentation

Back in the day, we used to divide teams based on layers: Client, Server, DB, etc. This made sense, since each of those required a unique set of skills. Today vertical teams make more sense, but we still have specialties. Typically, a mobile developer wouldn't work on the backend. But let's say we have a mobile team that wants to work with GraphQL instead of REST. With a Monolith, we would either tell them to "live with it" or we would have to do the work. With Microservices, we can create a simple service for them with very little code: a simple facade to the core services. We won't need to worry about a mobile team writing server code, since this would be relatively isolated. We can do the same for every client layer; this makes it easier to integrate a team vertically.

Too Big

It is hard to put a finger on the size that makes a monolith impractical, but here's what you should ask yourself:

- How many teams do we have, or want? If you have a couple of teams, then a monolith is probably great. If you have a dozen teams, then you might face a problem there.
- Measure pull request and issue resolution times. As a project grows, your pull requests will spend more time waiting to merge, and issues will take longer to resolve. This is inevitable, as complexity tends to grow in the project. Notice that a new project will have larger features, and that might sway the results; once you account for that in the project stats, the decrease in productivity should be measurable. Notice that this is one metric: in many cases it can indicate other things, such as the need to optimize the test pipeline, the review process, modularity, etc.
- Do we have experts who know the code? At some point, a huge project becomes so big that the experts start losing track of the details. This becomes a problem when bugs become untenable and there's no authoritative figure who can make a decision without consultation.
- Are you comfortable spending money? Microservices will cost more; there's no way around that. There are special cases where we can tune scale, but ultimately observability and management costs would remove any potential cost savings. Since personnel costs usually exceed the costs of cloud hosting, the total might still play in your favor, as those costs might decrease if the scale is big enough.

Trade-Offs

The trade-offs of monolith vs. microservice are illustrated nicely in the following radar chart. Notice that this chart was designed with a large project in mind; the smaller the project, the better the picture is for the Monolith. Notice that Microservices deliver a benefit in larger projects in fault tolerance and team independence, but they pay a price in cost. They can reduce R&D spend, but they mostly shift it to DevOps, so that isn't a major benefit.

Final Word

The complexity of Microservices is immense and sometimes ignored by the implementing teams. Developers use Microservices as a cudgel to throw away parts of the system they don't want to maintain, instead of building a sustainable, scalable architecture worthy of replacing a monolith. I firmly believe that projects should start off with a monolith. Microservices are an optimization for scaling a team, and optimizing prematurely is the root of all evil. The question is: when is the right time to do such an optimization? There are some metrics we can use to make that decision easier. Ultimately, the change isn't just splitting a monolith. It means rethinking transactions and core concepts. By starting with a monolith, we have a blueprint we can use to align our new implementation as it strengthens. |
2023-03-28 14:52:41 |
Overseas TECH |
DEV Community |
#TestCulture 🦅 Episode 30 – X-Teams |
https://dev.to/mathilde_llg/testculture-episode-30-x-teams-23ho
|
#TestCulture Episode 30 – X-Teams

The X-Team model is based on the idea that teams should be flexible and adaptable, with the ability to shift focus and change direction quickly in response to new challenges and opportunities. An X-Team is made up of a core group of individuals who are responsible for the team's core functions, as well as a range of external stakeholders who can provide additional support and expertise as needed. The X-Team model emphasizes the importance of collaboration and cross-functional communication.

When a team manages to organize itself and industrialize its Software Development perfectly, through defined roles and responsibilities, with a real team spirit and clear objectives, it is nevertheless confronted with a glass ceiling. This limit is materialized by the inability to obtain credit from management, or a vision of the product which is not very competitive on the market.

Applying the X-Team model to test maturity can help teams become more flexible and adaptable in response to new challenges and opportunities. This model emphasizes the importance of collaboration and cross-functional communication, which can help teams achieve their objectives more effectively… Learn more about X-Teams in a thread on Twitter. |
2023-03-28 14:51:44 |
Overseas TECH |
DEV Community |
ChatGPT's help and guidance for solving leetcode/hacker-rank questions |
https://dev.to/liopun/chatgpts-help-and-guidance-for-solving-leetcodehacker-rank-questions-5gb5
|
ChatGPT's help and guidance for solving leetcode/hacker-rank questions

Are you tired of getting stuck on Leetcode or HackerRank questions? Meet LeetChatGPT, an open-source browser extension that will transform your coding experience. With LeetChatGPT you'll get instant feedback, guidance, and help powered by ChatGPT, a state-of-the-art language model. Demo and source code are available.

LeetChatGPT supports both Leetcode and HackerRank questions, making it a versatile tool for any coding enthusiast. But that's not all: it also has two unique modes to enhance your coding experience. The first is the Timer Mode, which provides feedback and help for your current solution when a timer runs out. This feature is perfect for those who struggle with time management or want to challenge themselves to complete problems quickly. The second mode is the Manual Mode, which allows you to get feedback for your current solution, towards a brute-force or optimal solution, on demand. This feature is perfect for those who want to learn how to approach a problem optimally, or those who want to get a better understanding of the solution they came up with.

LeetChatGPT also has the ability to continue the chat with ChatGPT, making it feel like you have a coding buddy helping you every step of the way. Additionally, it has markdown rendering and code highlights, making your coding experience even more enjoyable. Lastly, LeetChatGPT supports both ChatGPT and ChatGPT Plus, providing you with even more advanced and personalized assistance.

With LeetChatGPT you'll never get stuck on a Leetcode or HackerRank question again. Try out LeetChatGPT today by visiting … and let ChatGPT guide you to success. And don't forget to check out the demo and source code on GitHub. Happy coding! |
2023-03-28 14:31:04 |
Overseas TECH |
DEV Community |
Code Review on a GitHub Pull Request from Visual Studio Code |
https://dev.to/this-is-learning/code-review-on-a-github-pull-request-from-visual-studio-code-328l
|
Code Review on a GitHub Pull Request from Visual Studio Code

Doing Code Review on a GitHub Pull Request without leaving Visual Studio Code? Easy! Last week we learned how to create a PR from VSCode; today we'll see how to review it. Spoiler: you can do it from the extension you already installed last week. I will showcase in the video all the advantages of reviewing a Pull Request from the editor, starting by checking out the branch with a single click so that you can compile it locally and test it. Aaaand there's even more: as anticipated, this is part of a series of three videos, so there's one more coming out in the next few days. Not a fan of video content? No problem: as usual, I write down the concepts explained in the video in written form as well. I mean, if you want to watch the video and leave a like, that would be awesome.

Install the official GitHub Extension

If you already installed the extension last week, you can skip this step. The first thing you need to do is install the official GitHub Pull Requests and Issues extension for Visual Studio Code. You can find it in the marketplace by searching for "GitHub" or by clicking here. Note: make sure not to get confused; the extension called "GitHub" is an old one and deprecated. The new one is called "GitHub Pull Requests and Issues". As soon as the extension is installed, you'll see a new icon in the Activity Bar on the left side of Visual Studio Code. Opening it the first time will ask you to log in to GitHub; just click on the button and a browser tab will open where you can log in to your GitHub account.

Changed files

The first thing you notice when selecting a Pull Request from the sidebar is the changed files. When clicking on one of them, a Diff Editor will open so that you can see the changes made to the file by comparing them to the base branch.

Pull Request Overview

If you click on Description from the sidebar, you'll see the Pull Request overview. From here you can see all the relevant information you can also find on the web UI on GitHub. It's not readonly though: you can fully interact with it from the editor. You can, for example, edit labels, assignees, and reviewers, and even add comments.

Add comments

Speaking of comments, we just saw you can add some to the Pull Request, but you can also add comments to specific lines of code and files. The UI interaction is the same as on GitHub: you can add a comment by clicking on the line number and then clicking on the "Add comment" button, or by dragging the mouse over the lines you want to comment on, in case of a multiline comment.

Checkout the branch

Probably the most interesting feature of the extension is the ability to check out the branch of the Pull Request directly from the editor. This gives you a lot of advantages: for example, you can run and build the code locally so you can test your application. In case you don't have a CI/CD pipeline in place (you should though, at least CI), you can also run the tests locally to make sure they pass.

Check GitHub Actions

Speaking of CI/CD, you can also check the status of the GitHub Actions workflow directly from the editor.

Edit tabs

Before calling it a day, I want to show you another cool feature of the extension: the ability to edit the tabs and organize the filters in the sidebar. If you hover the mouse on them, you notice a pencil icon. Click on it! It will open your local vscode settings, and you can notice this piece of configuration:

"githubPullRequests.queries": [
  { "label": "Waiting For My Review", "query": "is:open review-requested:${user}" },
  { "label": "Assigned To Me", "query": "is:open assignee:${user}" },
  { "label": "Created By Me", "query": "is:open author:${user}" }
]

Do you recognize the labels? Those are the default ones defining your tabs when you install the extension. You can change them to whatever you want; for example, I added one for PRs where I've been mentioned by adding a new element to the array:

{ "label": "Pull Requests where I've been mentioned", "query": "is:open mentions:${user}" }

The syntax is pretty straightforward: on label you put the label, and on query you define how PRs will be filtered, as you would query them on GitHub.

Conclusion

That's it for today. Last week we learned how to create a Pull Request from Visual Studio Code, and today we saw how to give it a review. Who knows what's in the next part? What will we learn next week? Well, I know, but I'm not telling you. See you next week!

Thanks for reading this article, I hope you found it interesting! I recently launched my Discord server to talk about Open Source and Web Development, feel free to join. Do you like my content? You might consider subscribing to my YouTube channel, it means a lot to me ❤️ You can find it here. Feel free to follow me to get notified when new articles are out.

Leonardo Montini (Follow). I talk about Open Source, GitHub and Web Development. I also run a YouTube channel called DevLeonardo, see you there! |
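For reference, the tab configuration the article describes, plus the extra "mentioned" query, could be written in a user settings.json roughly like this. The "githubPullRequests.queries" key and the "${user}" variable belong to the extension; the exact default labels and queries may differ between extension versions, so treat this as an illustrative sketch:

```json
{
  "githubPullRequests.queries": [
    { "label": "Waiting For My Review", "query": "is:open review-requested:${user}" },
    { "label": "Assigned To Me", "query": "is:open assignee:${user}" },
    { "label": "Created By Me", "query": "is:open author:${user}" },
    { "label": "Mentioned Me", "query": "is:open mentions:${user}" }
  ]
}
```

Each entry becomes one tab in the sidebar, and the query string uses GitHub's normal search syntax.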
2023-03-28 14:05:31 |
Overseas TECH |
DEV Community |
File Uploads for the Web (2): Upload Files with JavaScript |
https://dev.to/austingil/file-uploads-for-the-web-2-upload-files-with-javascript-1j5k
|
File Uploads for the Web Upload Files with JavaScriptWelcome back to this series all about uploading files to the web If you miss the first post I d recommend you check it out because it s all about uploading files via HTML Upload files with HTMLUpload files with JavaScriptReceiving file uploads with Node js Nuxt js Optimizing storage costs with Object StorageOptimizing delivery with a CDNSecuring file uploads with malware scansIn this post we ll do the same thing using JavaScript We left the project off with the form that looks like this lt form action api method post enctype multipart form data gt lt label for file gt File lt label gt lt input id file name file type file gt lt button gt Upload lt button gt lt form gt In the previous post we learned that in order to access a file on the user s device we had to use an lt input gt with the “file type And in order to create the HTTP request to upload the file we had to use a lt form gt element When dealing with JavaScript the first part is still true We still need the file input to access the files on the device However browsers have a Fetch API that we can use to make HTTP requests without forms I still like to include a form because Progressive enhancement If JavaScript fails for whatever reason the HTML form will still work I m lazy The form will actually make my work easier later on as we ll see With that in mind for JavaScript to submit this form I ll set up a “submit event handler const form document querySelector form form addEventListener submit handleSubmit param Event event function handleSubmit event The rest of the logic will go here Throughout the rest of this post we ll only be looking at the logic within the event handler function handleSubmit So the first thing I need to do in this submit handler is call the event s preventDefault method to stop the browser from reloading the page to submit the form I like to put this at the end of the event handler so that if there is an exception thrown within the 
body of this function, preventDefault will not be called, and the browser will fall back to the default behavior.

```js
/** @param {Event} event */
function handleSubmit(event) {
  // Any JS that could fail goes here
  event.preventDefault();
}
```

Next, we'll want to construct the HTTP request using the Fetch API. The Fetch API expects the first argument to be a URL, and a second, optional argument as an Object. We can get the URL from the form's `action` property. It's available on any form DOM node, which we can access using the event's `currentTarget` property. If the action is not defined in the HTML, it will default to the browser's current URL.

```js
/** @param {Event} event */
function handleSubmit(event) {
  const form = event.currentTarget;
  const url = new URL(form.action);
  fetch(url);
  event.preventDefault();
}
```

Relying on the HTML to define the URL makes it more declarative, keeps our event handler reusable, and keeps our JavaScript bundles smaller. It also maintains functionality if the JavaScript fails.

By default, Fetch sends HTTP requests using the GET method, but to upload a file we need to use a POST method. We can change the method using fetch's optional second argument. I'll create a variable for that object and assign the method property, but once again, I'll grab the value from the form's `method` attribute in the HTML.

```js
const url = new URL(form.action);
/** @type {Parameters<fetch>[1]} */
const fetchOptions = {
  method: form.method,
};
fetch(url, fetchOptions);
```

Now the only missing piece is actually including the payload in the body of the request. If you've ever created a Fetch request in the past, you may have included the body as a JSON string or a URLSearchParams object. Unfortunately, neither of those will work to send a file, as they don't have access to the binary file contents. Fortunately, there is the FormData browser API. We can use it to construct the request body from the form DOM node. And conveniently, when we do so, it even sets the request's Content-Type header to multipart/form-data, which is also a necessary step to transmit the binary data.

```js
const url = new URL(form.action);
const formData = new FormData(form);
/** @type {Parameters<fetch>[1]} */
const fetchOptions = {
  method: form.method,
  body: formData,
};
fetch(url, fetchOptions);
```

That's really the bare minimum needed to upload files with JavaScript. Let's do a little recap:

- Access the file system using a file-type input.
- Construct an HTTP request using the Fetch (or XMLHttpRequest) API.
- Set the request method to POST.
- Include the file in the request body.
- Set the HTTP Content-Type header to multipart/form-data.

Today we looked at a convenient way of doing that, using an HTML form element with a submit event handler, and using a FormData object in the body of the request. The current handleSubmit function should look like this:

```js
/** @param {Event} event */
function handleSubmit(event) {
  const form = event.currentTarget;
  const url = new URL(form.action);
  const formData = new FormData(form);
  /** @type {Parameters<fetch>[1]} */
  const fetchOptions = {
    method: form.method,
    body: formData,
  };
  fetch(url, fetchOptions);
  event.preventDefault();
}
```

Unfortunately, the current submit handler is not very reusable. Every request will include a body set to a FormData object and a "Content-Type" header set to multipart/form-data. This is too brittle. Bodies are not allowed in GET requests, and we may want to support different content types in other POST requests. We can make our code more robust to handle GET and POST requests, and send the appropriate Content-Type header. We'll do so by creating a URLSearchParams object in addition to the FormData, and running some logic based on whether the request method should be POST or GET. I'll try to lay out the logic below:

Is the request using a POST method?
- Yes: is the form's enctype attribute multipart/form-data?
  - Yes: set the body of the request to the FormData object. The browser will automatically set the "Content-Type" header to multipart/form-data.
  - No: set the body of the request to the URLSearchParams object. The browser will automatically set the "Content-Type" header to application/x-www-form-urlencoded.
- No: we can assume it's a GET request. Modify the URL to include the data as query string parameters.

The refactored solution looks like:

```js
/** @param {Event} event */
function handleSubmit(event) {
  /** @type {HTMLFormElement} */
  const form = event.currentTarget;
  const url = new URL(form.action);
  const formData = new FormData(form);
  const searchParams = new URLSearchParams(formData);

  /** @type {Parameters<fetch>[1]} */
  const fetchOptions = {
    method: form.method,
  };

  if (form.method.toLowerCase() === 'post') {
    if (form.enctype === 'multipart/form-data') {
      fetchOptions.body = formData;
    } else {
      fetchOptions.body = searchParams;
    }
  } else {
    url.search = searchParams;
  }

  fetch(url, fetchOptions);

  event.preventDefault();
}
```

I really like this solution for a number of reasons:

- It can be used for any form.
- It relies on the underlying HTML as the declarative source of configuration.
- The HTTP request behaves the same as with an HTML form. This follows the principle of progressive enhancement, so file upload works the same when JavaScript is working properly or when it fails.

So that's it. That's uploading files with JavaScript. I hope you found this useful and plan to stick around for the whole series. In the next post, we'll move to the back end to see what we need to do to receive files.

1. Upload files with HTML
2. Upload files with JavaScript
3. Receiving file uploads with Node.js (Nuxt.js)
4. Optimizing storage costs with Object Storage
5. Optimizing delivery with a CDN
6. Securing file uploads with malware scans

Thank you so much for reading. If you liked this article and want to support me, the best ways to do so are to share it, sign up for my newsletter, and follow me on Twitter. Originally published on austingil.com |
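The POST/GET branching in the handler above can be exercised outside the browser as well. Below is a minimal, Node-runnable sketch of the same logic extracted into a pure function; `buildRequest` is a hypothetical helper name of my own, and a plain object stands in for the FormData that would carry the binary file contents in the browser.

```javascript
// Sketch of the handler's branching logic as a pure function, so it can be
// tested without a DOM. In the browser, `fields` would be a FormData built
// from the form node; here it is a plain object used as a placeholder.
function buildRequest(action, method, enctype, fields) {
  const url = new URL(action);
  const searchParams = new URLSearchParams(fields);
  const fetchOptions = { method };

  if (method.toLowerCase() === 'post') {
    if (enctype === 'multipart/form-data') {
      // Browser case: a FormData body makes fetch set the
      // multipart/form-data Content-Type header automatically.
      fetchOptions.body = fields; // placeholder for FormData
    } else {
      // URLSearchParams body implies application/x-www-form-urlencoded.
      fetchOptions.body = searchParams;
    }
  } else {
    // GET requests may not have a body: encode the data in the query string.
    url.search = searchParams.toString();
  }

  return { url, fetchOptions };
}

// GET: the data moves into the query string and no body is set.
const get = buildRequest('https://example.com/upload', 'get', '', { name: 'file.txt' });
console.log(get.url.toString()); // https://example.com/upload?name=file.txt
console.log('body' in get.fetchOptions); // false

// POST without multipart enctype: the body is a URLSearchParams object.
const post = buildRequest('https://example.com/upload', 'post',
  'application/x-www-form-urlencoded', { name: 'file.txt' });
console.log(post.fetchOptions.body instanceof URLSearchParams); // true
```

The design choice mirrors the article's point: the caller passes in what the HTML would declare (`action`, `method`, `enctype`), so the same function serves any form.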
2023-03-28 14:01:24 |
Apple |
AppleInsider - Frontpage News |
India smartphone market contracting, but increasing in value because of iPhone |
https://appleinsider.com/articles/23/03/28/india-smartphone-market-contracting-but-increasing-in-value-because-of-iphone?utm_medium=rss
|
India smartphone market contracting, but increasing in value because of iPhone. Shipments of smartphones made in India declined overall, but the value of the market is increasing thanks to premium smartphones such as the iPhone. iPhone shipments increase in India: the latest note from Counterpoint Research examines "Made in India" smartphone shipments. It found that the year-over-year decline in shipments was primarily due to lower consumer demand driven by macroeconomic factors, especially in the second half of the year. Read more |
2023-03-28 14:35:12 |
海外TECH |
Engadget |
Apple accused of illegally firing pro-union workers |
https://www.engadget.com/apple-accused-of-illegally-firing-pro-union-workers-140058541.html?src=rss
|
Apple accused of illegally firing pro-union workers. Apple is once again facing accusations of cracking down on union organizers. The Communications Workers of America union (CWA) has filed charges with the National Labor Relations Board (NLRB) asserting that Apple illegally intimidated and fired workers at Houston and Kansas City, Missouri stores in retaliation for their labor organization efforts. The ex-employees in Kansas City were ostensibly cut loose for being slightly late, calling out from work, or even making typos in timesheets, but were also made to sign a "release of all claims" to get their severance pay. They couldn't challenge Apple's practices once they left, in other words. In Houston, Apple allegedly questioned workers individually about their union support and offered improved conditions if they dropped their labor support. Those that persisted in pro-union activity were disciplined and threatened with deteriorating conditions, the CWA claims. Only two US stores, in Oklahoma City and Towson, Maryland, have unionized; abroad, a store in Glasgow became the third. Other employees, such as those in St. Louis, Missouri, have filed for union elections. Staff in Atlanta called off a vote last spring after accusing Apple of intimidation tactics. We've asked Apple for comment. The company has historically opposed unionization efforts, reportedly holding mandatory anti-union meetings. Apple is also said to have withheld benefits from unionized workers at the Towson store while claiming that they needed to strike a collective bargaining agreement. The firm has tried to head off labor movements by raising wages, expanding benefits, and relaxing schedules. Fights between tech giants and their rank-and-file workers aren't new. Labor organization in tech reached a fever pitch recently, with workers at companies like Activision Blizzard, Amazon, and Microsoft either unionizing or making their displeasure known. Those brands, meanwhile, have frequently tried to block unionization attempts. The CWA's charges suggest those battles are continuing well into the new year. This article originally appeared on Engadget. |
2023-03-28 14:00:58 |
Cisco |
Cisco Blog |
Recent Innovations in Cisco ACI – AlgoSec Solution for Network Security Policy Management |
https://feedpress.me/link/23532/16045272/recent-innovations-in-cisco-aci-algosec-solution-for-network-security-policy-management
|
Recent Innovations in Cisco ACI: AlgoSec Solution for Network Security Policy Management. AlgoSec's Security Management solutions for Cisco ACI extend ACI's policy-driven automation to security devices in ACI fabrics, provide visibility into the security posture of ACI, and help ensure continuous compliance across multi-cloud environments. |
2023-03-28 14:58:32 |
Cisco |
Cisco Blog |
QDOBA serves up improved customer experiences |
https://feedpress.me/link/23532/16045215/qdoba-serves-up-improved-customer-experiences
|
QDOBA serves up improved customer experiences. With online ordering and QR code-driven menus leading a restaurant evolution, QDOBA implemented the Cisco Meraki full stack to transform customer experiences and deliver simplicity, security, and accelerated ROI. |
2023-03-28 14:04:15 |
ニュース |
BBC News - Home |
Gary Lineker wins appeal over £4.9m tax bill |
https://www.bbc.co.uk/news/entertainment-arts-65103265?at_medium=RSS&at_campaign=KARANGA
|
billthe |
2023-03-28 14:39:00 |
ニュース |
BBC News - Home |
Humza Yousaf: What do young people want from new SNP leader? |
https://www.bbc.co.uk/news/newsbeat-65099588?at_medium=RSS&at_campaign=KARANGA
|
politics |
2023-03-28 14:15:01 |
ニュース |
BBC News - Home |
Newcastle: Premier League chief Richard Masters 'can't' say if Saudi ownership being re-examined |
https://www.bbc.co.uk/sport/football/65102462?at_medium=RSS&at_campaign=KARANGA
|
Newcastle: Premier League chief Richard Masters 'can't' say if Saudi ownership being re-examined. Premier League chief executive Richard Masters tells MPs he cannot comment on whether his organisation is investigating who has control of Newcastle. |
2023-03-28 14:18:08 |
ニュース |
BBC News - Home |
Who is the new SNP leader? |
https://www.bbc.co.uk/news/uk-scotland-scotland-politics-64874821?at_medium=RSS&at_campaign=KARANGA
|
nicola |
2023-03-28 14:22:30 |
海外TECH |
reddit |
The Legend of Zelda: Tears of the Kingdom – Mr. Aonuma Gameplay Demonstration |
https://www.reddit.com/r/tearsofthekingdom/comments/124pp37/the_legend_of_zelda_tears_of_the_kingdom_mr/
|
The Legend of Zelda: Tears of the Kingdom – Mr. Aonuma Gameplay Demonstration. Submitted by /u/TFSIBU to /r/tearsofthekingdom [link] [comments] |
2023-03-28 14:00:43 |