IT |
気になる、記になる… |
Google announces "Google Play Best of 2022" − Best App is "U-NEXT", Best Game is "Heaven Burns Red" |
https://taisy0.com/2022/12/01/165613.html
|
google |
2022-11-30 16:22:03 |
AWS |
AWS News Blog |
Announcing Amazon DocumentDB Elastic Clusters |
https://aws.amazon.com/blogs/aws/announcing-amazon-documentdb-elastic-clusters/
|
Announcing Amazon DocumentDB Elastic Clusters. Amazon DocumentDB (with MongoDB compatibility) is a scalable, highly durable, and fully managed database service for operating mission-critical JSON workloads. It is one of AWS's fast-growing services, with customers including BBC, Dow Jones, and Samsung relying on Amazon DocumentDB to run their JSON workloads at scale. Today I am excited to announce the general availability … |
2022-11-30 16:58:50 |
AWS |
AWS News Blog |
New — Amazon Athena for Apache Spark |
https://aws.amazon.com/blogs/aws/new-amazon-athena-for-apache-spark/
|
New ー Amazon Athena for Apache Spark. When Jeff Barr first announced Amazon Athena, it changed my perspective on interacting with data. With Amazon Athena, I can interact with my data in just a few steps ー starting from creating a table in Athena, loading data using connectors, and querying using the ANSI SQL standard. Over time, various industries such as financial … |
2022-11-30 16:55:19 |
AWS |
AWS Japan Blog |
Announcing AWS KMS External Key Store (XKS) |
https://aws.amazon.com/jp/blogs/news/announcing-aws-kms-external-key-store-xks/
|
awskms |
2022-11-30 16:55:36 |
AWS |
AWS Japan Blog |
New – Accelerate your Lambda functions with Lambda SnapStart |
https://aws.amazon.com/jp/blogs/news/new-accelerate-your-lambda-functions-with-lambda-snapstart/
|
awslambda |
2022-11-30 16:51:18 |
python |
New posts tagged Python - Qiita |
Quantum-Inspired Discrete Fourier Transform |
https://qiita.com/wotto27oct/items/0e67f73963842e07d873
|
2022-12-01 01:32:16 |
golang |
New posts tagged Go - Qiita |
A fix for a common bug in parallel Go tests (forgetting tt := tt) |
https://qiita.com/tenntenn/items/a003fe8774b82325e2df
|
testing, t.Parallel |
2022-12-01 01:26:16 |
Tech Blog |
Developers.IO |
[Update] Provisioning of non-compliant resources can now be blocked! Proactive guardrails are now available in AWS Control Tower #reinvent |
https://dev.classmethod.jp/articles/reinvent-2022-control-tower-proactive-guardrail/
|
ashissan |
2022-11-30 16:58:35 |
Tech Blog |
Developers.IO |
[Breaking] SimSpace Weaver has quietly gained some smaller new features (local, plugins for UE5, Unity) #reinvent |
https://dev.classmethod.jp/articles/breaking-simspace-weaver-details/
|
keynote |
2022-11-30 16:40:11 |
Overseas TECH |
MakeUseOf |
How to Fix the “The Operation Can’t Be Completed Because the Disk Is Full” Error on a Mac |
https://www.makeuseof.com/fix-operation-cant-be-completed-because-disk-is-full-error-mac/
|
How to Fix the "The Operation Can't Be Completed Because the Disk Is Full" Error on a Mac. As the name suggests, this error indicates that your Mac is running out of storage space. Here we'll teach you how to get around it. |
2022-11-30 16:46:16 |
Overseas TECH |
MakeUseOf |
The 8 Best Online Paraphrasing Tools You Need to Know About |
https://www.makeuseof.com/best-online-paraphrasing-tools/
|
The 8 Best Online Paraphrasing Tools You Need to Know About. Whether you're looking to shorten work to meet a word count or you can't quite get the wording right, these eight online paraphrasing tools will help. |
2022-11-30 16:30:16 |
Overseas TECH |
MakeUseOf |
How to Fix “The Application Was Unable to Start” 0xc000003e Error in Windows 10 & 11 |
https://www.makeuseof.com/windows-0xc000003e-error-fix/
|
windows |
2022-11-30 16:15:17 |
Overseas TECH |
DEV Community |
Web resource caching: Client-side |
https://dev.to/apisix/web-resource-caching-client-side-1jak
|
Web resource caching: client-side. The subject of web resource caching is as old as the World Wide Web itself. However, I'd like to offer an as-exhaustive-as-possible catalog of how one can improve performance by caching. Web resource caching can happen in two different places: client-side, on the browser, and server-side. This post is dedicated to the former; the next post will focus on the latter.

Caching. The idea behind caching is simple: if a resource is time- or resource-consuming to compute, do it once and store the result. When somebody requests the resource afterward, return the stored result instead of computing it a second time. It looks simple, and it is, but the devil is in the detail, as they say. The problem is that the computation is not a mathematical one. In mathematics, the result of a computation is constant over time. On the Web, the resource you requested yesterday may be different if you request it today; think about the weather forecast, for example. It all boils down to two related concepts: freshness and staleness. A fresh response is one whose age has not yet exceeded its freshness lifetime; conversely, a stale response is one where it has. A response's freshness lifetime is the length of time between its generation by the origin server and its expiration time. An explicit expiration time is the time at which the origin server intends that a stored response can no longer be used by a cache without further validation, whereas a heuristic expiration time is assigned by a cache when no explicit expiration time is available. A response's age is the time that has passed since it was generated by, or successfully validated with, the origin server. When a response is fresh in the cache, it can be used to satisfy subsequent requests without contacting the origin server, thereby improving efficiency (HTTP Caching RFC, "Freshness").

Early web resource caching. Remember that the WWW was relatively simple at its beginning compared to nowadays. The client would send a request, and the server would return the requested resource; whether the resource was a static page or a server-rendered page was unimportant. Hence, early client-side caching was pretty rustic. The first specification of web caching was defined in the original HTTP Caching RFC; note that it has since been superseded. I won't talk here about the Pragma HTTP header since it's deprecated. The most straightforward cache management is through the Expires response header: when the server returns the resource, it specifies after which timestamp the cache is stale. The browser has two options when a cached resource is requested: either the current time is before the expiry timestamp, in which case the resource is considered fresh and the browser serves it from the local cache; or it's after, in which case the resource is considered stale and the browser requests the resource from the server as if it were not cached. The benefit of Expires is that it's a purely local decision; it doesn't need to send a request to the server. However, it has two main issues. First, the decision to use the locally cached resource or not is based on heuristics: the resource may have changed server-side despite the expiry value being in the future, so the browser serves an out-of-date resource; conversely, the browser may send a request because the time has expired even though the resource hasn't changed. Second, Expires is pretty basic: a resource is either fresh or stale, so either return it from the cache or send the request again. We may want to have more control.

Cache-Control to the rescue. The Cache-Control header aims to address the following requirements: never cache a resource at all; validate whether a resource should be served from the cache before serving it; and control whether intermediate caches (proxies) may cache the resource. Cache-Control is an HTTP header used on both the request and the response. The header can contain different directives, separated by commas; exact directives vary depending on whether they're part of the request or the response. All in all, Cache-Control is quite complex; it might well be the subject of a dedicated post, and I won't paraphrase the specification here. However, the Cache-Control page of the Mozilla Developer Network has some significant use cases of Cache-Control, complete with configuration. Like Expires, Cache-Control is also local: the browser serves the resource from its cache, if possible, without any request to the server.

Last-Modified and ETag. To avoid the risk of serving an out-of-date resource, the browser must send a request to the server. Enter the Last-Modified response header. Last-Modified works in conjunction with the If-Modified-Since request header. The If-Modified-Since request header makes the request conditional: the server sends back the requested resource, with a success status, only if it has been modified after the given date. If the resource has not been modified since, the response is a Not Modified response without any body; the Last-Modified response header of a previous request contains the date of last modification. Unlike If-Unmodified-Since, If-Modified-Since can only be used with a GET or HEAD. Note that If-Unmodified-Since has the opposite function, for POST and other non-idempotent methods: it returns a Precondition Failed HTTP error to avoid overwriting resources that have changed. The problem with timestamps in distributed systems is that it's impossible to guarantee that all clocks in the system have the same time: clocks drift at different paces and need to synchronize at regular intervals. Hence, if the server that generated the Last-Modified header and the one that receives the If-Modified-Since header are different, the results could be unexpected, depending on their drift. Note that this also applies to the Expires header. ETags are an alternative to timestamps that avoids the above issue. The server computes a hash of the served resource and sends the ETag header containing the value along with the resource. When a new request comes in with If-None-Match containing the hash value, the server compares it with the current hash; if they match, it returns Not Modified, as above. It has the slight overhead of computing the hash versus just handing over the timestamp, but it's nowadays considered good practice.

The Cache API. The most recent way to cache on the client side is via the Cache API. It offers a general cache interface; you can think of it as a local key-value store provided by the browser. Here are the provided methods:
Cache.match(request, options): returns a Promise that resolves to the response associated with the first matching request in the Cache object.
Cache.matchAll(request, options): returns a Promise that resolves to an array of all matching responses in the Cache object.
Cache.add(request): takes a URL, retrieves it, and adds the resulting response object to the given cache. This is functionally equivalent to calling fetch(), then using put() to add the result to the cache.
Cache.addAll(requests): takes an array of URLs, retrieves them, and adds the resulting response objects to the given cache.
Cache.put(request, response): takes both a request and its response and adds it to the given cache.
Cache.delete(request, options): finds the Cache entry whose key is the request, returning a Promise that resolves to true if a matching Cache entry is found and deleted; if no Cache entry is found, the Promise resolves to false.
Cache.keys(request, options): returns a Promise that resolves to an array of Cache keys.
The Cache API works in conjunction with service workers. The flow is simple: you register a service worker on a URL; the browser calls the worker before the URL's fetch call; and from the worker, you can return resources from the cache and avoid any request to the server. It allows us to put resources in the cache after the initial load so that the client can work offline, depending on the use case.

Summary. Here's a summary of the above alternatives for caching resources client-side:
- Service worker + Cache API: managed by you; local: yes; pros: flexible; cons: requires JavaScript coding skills, plus coding and maintenance time.
- Expires: managed by the browser; local: yes; pros: easy configuration; cons: guess-based, simplistic.
- Cache-Control: managed by the browser; local: yes; pros: fine-grained control; cons: guess-based, complex configuration.
- Last-Modified: managed by the browser; local: no; pros: just works; cons: sensitive to clock drift.
- ETag: managed by the browser; local: no; pros: just works; cons: slightly more resource-intensive, as the hash must be computed.
Note that these alternatives aren't exclusive: you may have a short Expires header and rely on ETag. You should probably combine a purely local alternative with a validation-based one.

A bit of practice. Let's put the theory we have seen above into practice. I'll set up a two-tiered HTTP cache: the first tier caches resources locally for a short time using Cache-Control, and the second tier uses ETag to optimize the data load over the network. I'll use Apache APISIX. APISIX sits on the shoulders of giants, namely NGINX; NGINX adds ETag response headers by default, so we only need to add the Cache-Control response header. We achieve this with the response-rewrite plugin: an upstream of type roundrobin pointing at the content node, and a route whose response-rewrite plugin sets a Cache-Control: max-age header. Let's do it without a browser first: curl -v localhost. The response is an OK carrying the usual headers, including Content-Type: text/html; charset=utf-8, Last-Modified, ETag, Server: APISIX, and the Cache-Control: max-age header we just configured. To prevent the server from sending the same resource again, we can use the ETag value in an If-None-Match request header: curl -H 'If-None-Match: "…"' -v localhost. The result is a Not Modified, as expected, with the same caching headers and no body. Now we can do the same inside a browser: if we use the resend feature a second time before the max-age has elapsed, the browser returns the resource from the cache without sending the request to the server.

Conclusion. In this post, I described several alternatives for caching web resources: Expires and Cache-Control, Last-Modified and ETag, and the Cache API with service workers. You can easily set the HTTP response headers via a reverse proxy or an API gateway; with Apache APISIX, ETags are enabled by default and other headers are easily set up. In the next post, I will describe caching server-side. You can find the source code for this post on GitHub (ajavageek/web-caching). To go further: the HTTP Caching RFCs (the obsolete one and its replacement), HTTP caching, Cache-Control, Prevent unnecessary network requests with the HTTP Cache, Cache API, Service worker caching and HTTP caching. Originally published at A Java Geek in November. |
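To make the service-worker flow above concrete, here is a minimal, hypothetical sketch in TypeScript of a cache-first worker built on the Cache API. It is not taken from the article; the cache name, pre-cached URLs, and file names are illustrative assumptions, and it presumes a TypeScript setup with the WebWorker library types available.

// sw.ts - a cache-first service worker sketch (illustrative, not from the article).
// CACHE_NAME and PRECACHE_URLS are assumptions for the example.
const CACHE_NAME = "static-v1";
const PRECACHE_URLS = ["/", "/styles.css", "/app.js"];

// In a worker, `self` is the ServiceWorkerGlobalScope (requires the "webworker" lib in tsconfig).
const sw = self as unknown as ServiceWorkerGlobalScope;

sw.addEventListener("install", (event: ExtendableEvent) => {
  // Cache.addAll fetches each URL and stores the responses up front.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});

sw.addEventListener("fetch", (event: FetchEvent) => {
  // Cache-first strategy: serve from the cache when possible; otherwise fetch
  // from the network and store a copy (Cache.put) for next time.
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) {
        return cached;
      }
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});

// In page code, register the worker so the browser calls it before fetches:
// if ("serviceWorker" in navigator) { navigator.serviceWorker.register("/sw.js"); }

Whether a cache-first strategy like this is appropriate depends on how stale the resources may safely become, which is exactly the freshness trade-off the post describes.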
2022-11-30 16:25:00 |
Overseas TECH |
DEV Community |
React Material Tailwind - Beginners’ Guide and Free Sample |
https://dev.to/sm0ke/react-material-tailwind-beginners-guide-and-free-sample-5eh7
|
React Material Tailwind - Beginners' Guide and Free Sample. Hello Coders! This article aims to help developers get comfortable with Material Tailwind, a popular UI library for React actively supported by Creative Tim. Those interested in this topic will learn how to set up the library and how to build a simple landing page plus sign-in and sign-up pages. The sources of this coding experiment can be found on GitHub under the MIT license and can be used or incorporated in commercial projects or eLearning activities. Thank you! (React Material Tailwind - LIVE Demo; React Material Tailwind - the official page.)

What is Material Tailwind? Material Tailwind is a free and open-source UI library inspired by Material Design that provides a unique experience for developers working with React and Tailwind CSS. It is a highly customizable component library that provides a great user experience, thus enabling you to build stunning and responsive web applications. This open-source library tries to combine a popular design concept with the trendiest CSS framework around today. Before starting to write the code, let's highlight the key points of the Material Tailwind tool.
Free and open source: Material Tailwind is a free and open-source UI components library, which means the code is readily available for everyone to modify and improve. Whether you are a developer or a user, you can contribute to the code or the library's documentation.
Easy to use: Material Tailwind is easy to use and integrate into React and HTML projects. You don't have to spend much time learning how to use its web components; if you are familiar with Tailwind CSS, working with Material Tailwind will be a breeze.
Fully customizable: the components provided by Material Tailwind are fully customizable, giving developers total control over the layout of their applications. Material Tailwind is a flexible UI library that enables you to design any UI layout according to your requirements.
Excellent documentation: Material Tailwind has excellent documentation, making it easy for new and existing users to set up and learn about its components quickly. The documentation explains how each UI component works, with detailed code samples.

How to set up. Material Tailwind works perfectly with HTML and React projects. In this section, I will guide you through setting up Material Tailwind using React.
Create your React app by running the commands below:
  npx create-react-app my-project
  cd my-project
Install Tailwind CSS and its peer dependencies via npm, and run the commands below to generate both tailwind.config.js and postcss.config.js:
  npm install -D tailwindcss postcss autoprefixer
  npx tailwindcss init -p
Install Material Tailwind in your React app:
  npm i @material-tailwind/react
Wrap your Tailwind CSS configuration with the withMT function within the tailwind.config.js file:
  const withMT = require("@material-tailwind/react/utils/withMT");
  module.exports = withMT({
    content: ["./src/**/*.{js,jsx,ts,tsx}"],
    theme: { extend: {} },
    plugins: [],
  });
Add the following Tailwind CSS directives to your src/index.css file:
  @tailwind base;
  @tailwind components;
  @tailwind utilities;
Wrap the entire application with ThemeProvider from Material Tailwind; ThemeProvider enables us to use the default styles provided by Material Tailwind:
  import React from "react";
  import ReactDOM from "react-dom/client";
  import App from "./App";
  import "./index.css";
  import { ThemeProvider } from "@material-tailwind/react";

  const root = ReactDOM.createRoot(document.getElementById("root"));
  root.render(
    <React.StrictMode>
      <ThemeProvider>
        <App />
      </ThemeProvider>
    </React.StrictMode>
  );
Congratulations! You've successfully added Material Tailwind to your React application. In the remaining sections, I will guide you through creating a landing page and an authentication page with Material Tailwind.

Code a landing page. In this section, you'll learn how to build a landing page with Material Tailwind. Install React Router, a JavaScript library that enables us to navigate between pages in a React application:
  npm install react-router-dom
Create a pages folder within the src folder containing the components for the Login, Home, and Register routes:
  mkdir pages
  cd pages
  touch Home.js Login.js Register.js
Copy the code below into the App.js file; the snippet uses React Router for navigating between the web pages (some path values and Tailwind utility values in the excerpted listings below did not survive and are marked with …):
  import React from "react";
  import { BrowserRouter, Routes, Route } from "react-router-dom";
  import Home from "./pages/Home";
  import Login from "./pages/Login";
  import Register from "./pages/Register";

  function App() {
    return (
      <BrowserRouter>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/login" element={<Login />} />
          <Route path="/register" element={<Register />} />
        </Routes>
      </BrowserRouter>
    );
  }
  export default App;
Update the Home.js file to contain the code below:
  import React from "react";
  import Nav from "../components/Nav";
  import Hero from "../components/Hero";
  import FirstSection from "../components/FirstSection";
  import SecondSection from "../components/SecondSection";
  import Footer from "../components/Footer";
  import Testimonial from "../components/Testimonial";
  import Faq from "../components/Faq";

  const Home = () => {
    return (
      <div>
        <Nav />
        <Hero />
        <FirstSection />
        <SecondSection />
        <Testimonial />
        <Faq />
        <Footer />
      </div>
    );
  };
  export default Home;
From the code snippet above, Home is divided into seven sub-components. Next, create a components folder containing the sub-components:
  mkdir components
  cd components
  touch Nav.js Hero.js FirstSection.js SecondSection.js Testimonial.js Faq.js Footer.js

Coding the Nav component. Copy the code below into the Nav.js file:
  import { Typography } from "@material-tailwind/react";
  import React from "react";
  import { Link } from "react-router-dom";

  const Nav = () => {
    return (
      <>
        <nav className="flex flex-row …">
          <div className="flex flex-row items-center">
            <Typography className="font-bold text-lg font-poppins text-purple-…">
              Meetup
            </Typography>
          </div>
          <div className="flex flex-row items-center space-x-…">
            <Link to="/login" className="font-poppins text-white …">Login</Link>
            <Link to="/register" className="font-poppins text-gray-… …">Register</Link>
          </div>
        </nav>
      </>
    );
  };
  export default Nav;
From the code snippet above, Typography is the component provided by Material Tailwind for displaying text on web pages; the font size of the text can be increased or reduced using Tailwind CSS.

Coding the Hero component. Copy the code below into the Hero.js file:
  import { Button, Typography } from "@material-tailwind/react";
  import React from "react";
  import hero from "../images/meetup.jpg";

  const Hero = () => {
    return (
      <div className="w-full lg:p-… px-… flex …">
        <div className="lg:w-… w-full lg:px-… lg:pr-…">
          <Typography className="text-…xl …">Create a great circle of friends</Typography>
          <Typography className="font-poppins mb-…">
            Lorem ipsum dolor sit amet, consectetur adipiscing elit.
          </Typography>
          <Button size="lg" color="purple">Get Connected</Button>
        </div>
        <div className="lg:w-… w-full lg:block hidden">
          <img src={hero} alt="Hero" />
        </div>
      </div>
    );
  };
  export default Hero;
Material Tailwind also provides a Button component that accepts various props; you can find them in the documentation.

Coding the Info component. Copy the code below into the FirstSection.js file:
  import { Typography, Button } from "@material-tailwind/react";
  import React from "react";
  import connect from "../images/connect.jpg";

  const FirstSection = () => {
    return (
      <div className="w-full lg:p-… p-… flex items-center justify-between">
        <div className="lg:w-… w-full lg:block hidden">
          <img src={connect} alt="Hero" />
        </div>
        <div className="lg:w-… w-full lg:px-… lg:pl-…">
          <Typography className="text-…xl …">Create a great circle of friends</Typography>
          <Typography className="font-poppins mb-…">
            Lorem ipsum dolor sit amet. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore.
          </Typography>
          <Button size="lg" color="purple">learn more</Button>
        </div>
      </div>
    );
  };
  export default FirstSection;
Here is the result. In the same way, the other components with different topologies are coded: the Features component, the Testimonial component, and the FAQs component.

Coding the Footer component. Copy the code below into the Footer.js file:
  import { Button, Typography } from "@material-tailwind/react";
  import React from "react";
  import Icons from "./icons";

  const Footer = () => {
    return (
      <footer className="w-full footer py-… pt-… px-…">
        <div className="flex flex-col justify-between items-center border-b px-… md:px-… pb-… border-b-purple-… mb-…">
          <Typography className="font-poppins text-xl font-semibold text-white mb-…">
            Do you want to know more or just have any questions? Write to us.
          </Typography>
          <Button size="lg" color="red" className="text-white">Contact Us</Button>
        </div>
        <div className="w-full flex items-center flex-col justify-center">
          <Typography className="font-poppins text-lg font-semibold text-purple-…">Meetup</Typography>
          <Icons />
        </div>
      </footer>
    );
  };
  export default Footer;

Code the authentication pages. The pages are pretty simple, with a classic layout and centered elements.

Coding the sign-in page. The Login component should be updated to accept a username and password from the user:
  import { Button, Input, Typography } from "@material-tailwind/react";
  import React from "react";
  import { Link } from "react-router-dom";

  const Login = () => {
    return (
      <form className="w-full min-h-… flex flex-col items-center justify-center space-y-… md:px-… mx-auto px-…">
        <Typography className="text-…xl font-bold font-poppins mb-…">Login</Typography>
        <div className="md:w-… w-full">
          <label className="font-poppins">Username</label>
          <Input label="Username" className="font-poppins" />
        </div>
        <div className="md:w-… w-full">
          <label className="font-poppins">Password</label>
          <Input label="Password" type="password" className="font-poppins" />
        </div>
        <div className="md:w-… w-full flex items-center justify-center">
          <Button size="lg" color="purple" className="w-…">Sign In</Button>
        </div>
        <Typography className="font-poppins">
          Don't have an account? <Link to="/register" className="hover:text-purple-…">Create one</Link>
        </Typography>
      </form>
    );
  };
  export default Login;
Material Tailwind provides an Input component that allows you to accept user data via forms; the Input component supports different props for various types of data. The registration page is quite similar to the sign-in page, and the code can be found on GitHub. Congratulations! You've completed the project for this article.

Conclusion. So far, you've learned the following: what Material Tailwind is; why you should use Material Tailwind; how to set up Material Tailwind in a React application; how to build a landing page with Material Tailwind; and how to build login and register routes with Material Tailwind. Material Tailwind provides a faster way of building stunning and user-friendly web applications. 
It is free and open source, so you are welcome to engage with the community as a developer or contributor. This tutorial covers a few of the UI components provided by Material Tailwind; you can also explore the documentation and the GitHub repository for the other features it offers. Thanks for reading! For more resources and support, please access the free support provided by AppSeed (email & Discord) and more free apps crafted in Flask, Django, and React. |
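For readers following the tutorial in TypeScript rather than plain JavaScript, here is a small, hypothetical TSX sketch of a Material Tailwind component; the component name, copy, and Tailwind utility classes are illustrative assumptions, and it presumes the ThemeProvider setup shown earlier in the article.

import React from "react";
import { Button, Typography } from "@material-tailwind/react";

// A hypothetical call-to-action section; the Button props (size, color)
// mirror the ones used in the article's Hero component.
export default function CallToAction(): JSX.Element {
  return (
    <div className="flex flex-col items-center gap-4 p-8 text-center">
      <Typography className="text-3xl font-bold">
        Create a great circle of friends
      </Typography>
      <Typography className="mb-4">
        Join a meetup near you and get connected.
      </Typography>
      <Button size="lg" color="purple">
        Get Connected
      </Button>
    </div>
  );
}

Usage is the same as with any React component: render <CallToAction /> inside a page such as Home, under the ThemeProvider wrapper.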
2022-11-30 16:17:10 |
Overseas TECH |
Engadget |
Amazon bundles the Echo Show 8 with an Echo Show 5 Kids for only $70 |
https://www.engadget.com/amazon-echo-show-8-deal-bundle-echo-show-5-kids-70-161408112.html?src=rss
|
Amazon bundles the Echo Show 8 with an Echo Show 5 Kids for only $70. Cyber Monday has come and gone, but if you're still looking to pick up a new smart display or two ahead of the holidays, a newer deal on Amazon's Echo Shows may be of interest. The retailer is currently offering a bundle that pairs its Echo Show 8 with the Kids edition of its Echo Show 5 for $70. We've seen the Echo Show 8 alone go for around this price for much of the last two months, but that still equals the lowest price we've tracked; normally it retails for noticeably more. With this deal, you're effectively getting an Echo Show 5 Kids thrown in at no extra cost; that device still commands a real price on its own, both currently and based on its average street price over the last few months. Most people don't need a smart display, but for those who like using a voice assistant to pull up the weather, control smart lights and doorbells, stream podcasts, and so on, it can provide more context than a screenless smart speaker. Amazon and Google are really your only options in this market, but if you're already partial to Alexa, the Echo Show 8 is your best bet. We reviewed the 8-inch display favorably last year and currently recommend it in our guide to the best smart displays. It can't double as a smart home hub like the bigger Echo Show, but its display is big and sharp enough to comfortably stream video or display photos around the house, its speakers are powerful enough to fill a room, its processor can keep up with most tasks, and its camera is suitable for video calls. And while no Alexa or Google Assistant device is ideal for the privacy-conscious, there's at least a physical camera shutter and mic-mute button built in. The Echo Show 5 isn't as quick, spacious, or loud, and its lower-resolution camera is a noticeable downgrade. Still, it can do just about everything the larger models can do, and its smaller screen makes it a better fit for bathrooms or bedside tables; we reviewed it last year as well. The Kids version has the same hardware as the normal model, but it comes with a two-year warranty, a year of Amazon's Kids content service, and a more child-friendly interface. Whether you're okay putting an Amazon mic and camera in your kid's room is up to you, but there is a camera cover and various parental controls for limiting and monitoring how the device is used. If you're not beholden to Alexa, we'll note that Google's Nest Hub, our top pick among Google displays, is still on sale. We generally find the Google Assistant to be a little smarter than Alexa, particularly for web queries, and it naturally plays nicer with widely used Google services like Gmail, Google Calendar, YouTube, and the like. The Nest Hub also lacks a camera, which may be a positive if you don't care about video calling. That said, if you already own a bunch of Alexa-enabled devices and want a couple of new displays for around the house, this is a good deal regardless of Alexa's broader struggles. Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice. |
2022-11-30 16:14:08 |
Cisco |
Cisco Blog |
How New Features in Automation Exchange Can Help You With Security |
https://blogs.cisco.com/developer/automationexchangenewfeatures01
|
How New Features in Automation Exchange Can Help You With Security. See how new security features in DevNet Automation Exchange, Scorecard, and KubeClarity help developers address security concerns earlier in the development process. |
2022-11-30 16:24:51 |
Cisco |
Cisco Blog |
Customer Journeys to the Cloud with Cisco and Amazon Web Services (AWS) |
https://blogs.cisco.com/partner/customer-journeys-to-the-cloud-with-cisco-and-amazon-web-services-aws
|
Customer Journeys to the Cloud with Cisco and Amazon Web Services (AWS). As a leading provider of hybrid cloud solutions, Cisco can provide customers with effective cloud transformation assistance. This blog explains how Cisco software solutions on AWS can assist customers at every stage of cloud transformation. |
2022-11-30 16:00:57 |
Overseas TECH |
CodeProject Latest Articles |
How to Use a Custom Model in CodeProject.AI Server in Docker |
https://www.codeproject.com/Articles/5348417/How-to-Use-a-Custom-Model-in-CodeProject-AI-Server
|
codeproject |
2022-11-30 16:51:00 |
Overseas Science |
NYT > Science |
Physicists Create ‘the Smallest, Crummiest Wormhole You Can Imagine’ |
https://www.nytimes.com/2022/11/30/science/physics-wormhole-quantum-computer.html
|
black |
2022-11-30 16:08:31 |
Overseas Science |
NYT > Science |
Hurricane Season Ends, Marked by Quiet August and Deadly September |
https://www.nytimes.com/2022/11/29/climate/hurricane-season-noaa.html
|
Hurricane Season Ends, Marked by Quiet August and Deadly September. The six-month total of named storms was about average, but two late-season hurricanes proved catastrophic in Florida and Puerto Rico. |
2022-11-30 16:56:42 |
Overseas Science |
NYT > Science |
Can This Man Stop Lying? |
https://www.nytimes.com/2022/11/29/health/lying-mental-illness.html
|
illness |
2022-11-30 16:18:10 |
Overseas TECH |
WIRED |
Why China Is Still Stuck in a Zero-Covid Nightmare |
https://www.wired.com/story/china-protests-zero-covid/
|
fraught |
2022-11-30 16:35:59 |
Finance |
Financial Services Agency website |
Updated the status of changes to loan conditions, etc. at financial institutions. |
https://www.fsa.go.jp/ordinary/coronavirus202001/kashitsuke/20200430.html
|
financial institutions |
2022-11-30 17:00:00 |
Finance |
Financial Services Agency website |
Updated the special "COVID-19 Related Information" page. |
https://www.fsa.go.jp/ordinary/coronavirus202001/press.html
|
spread of infection |
2022-11-30 17:00:00 |
Finance |
Financial Services Agency website |
Published the results of the public comments on the proposed partial amendments to the notices on capital adequacy ratio regulations (Pillar 1 and Pillar 3) and to the Comprehensive Guidelines for Supervision of Major Banks, etc. |
https://www.fsa.go.jp/news/r4/ginkou/20221130.html
|
capital adequacy ratio |
2022-11-30 17:00:00 |
Finance |
Financial Services Agency website |
Published the Basel Committee on Banking Supervision's newsletter on bank exposures to non-bank financial intermediation. |
https://www.fsa.go.jp/inter/bis/20221130/20221130.html
|
banks |
2022-11-30 17:00:00 |
Finance |
Financial Services Agency website |
Published the report by the BIS Committee on Payments and Market Infrastructures and the International Organization of Securities Commissions (IOSCO) on monitoring the implementation of the Principles for Financial Market Infrastructures (a Level 3 assessment of the cyber resilience of financial market infrastructures). |
https://www.fsa.go.jp/inter/ios/20221130.html
|
IOSCO |
2022-11-30 17:00:00 |
News |
BBC News - Home |
Lady Susan Hussey quits over remarks to charity boss Ngozi Fulani |
https://www.bbc.co.uk/news/uk-63810468?at_medium=RSS&at_campaign=KARANGA
|
consort |
2022-11-30 16:23:13 |
News |
BBC News - Home |
Eurostar security staff to strike in run-up to Christmas |
https://www.bbc.co.uk/news/business-63808685?at_medium=RSS&at_campaign=KARANGA
|
december |
2022-11-30 16:20:07 |
News |
BBC News - Home |
Pele: Brazil legend in hospital but daughter confirms 'no emergency' |
https://www.bbc.co.uk/sport/football/63812974?at_medium=RSS&at_campaign=KARANGA
|
confirms |
2022-11-30 16:12:59 |
News |
BBC News - Home |
Newborn baby's body found at Waterbeach recycling centre |
https://www.bbc.co.uk/news/uk-england-cambridgeshire-63806552?at_medium=RSS&at_campaign=KARANGA
|
medical |
2022-11-30 16:52:45 |
News |
BBC News - Home |
Prisoners could be held in police cells to cut overcrowding |
https://www.bbc.co.uk/news/uk-63809206?at_medium=RSS&at_campaign=KARANGA
|
increase |
2022-11-30 16:34:35 |
News |
BBC News - Home |
World Cup 2022: Iranian man killed celebrating football team's loss - report |
https://www.bbc.co.uk/news/world-middle-east-63805284?at_medium=RSS&at_campaign=KARANGA
|
football |
2022-11-30 16:25:09 |
News |
BBC News - Home |
Who is Lady Susan Hussey? |
https://www.bbc.co.uk/news/uk-63812608?at_medium=RSS&at_campaign=KARANGA
|
palace |
2022-11-30 16:01:18 |
News |
BBC News - Home |
Pakistan v England: Tourists wait to learn if series starts on Thursday |
https://www.bbc.co.uk/sport/cricket/63808526?at_medium=RSS&at_campaign=KARANGA
|
england |
2022-11-30 16:11:32 |
News |
BBC News - Home |
World Cup 2022: Wahbi Khazri gives Tunisia shock lead against France |
https://www.bbc.co.uk/sport/av/football/63812356?at_medium=RSS&at_campaign=KARANGA
|
world |
2022-11-30 16:33:27 |
News |
BBC News - Home |
World Cup 2022: Australia take the lead as Leckie scores 'huge' goal |
https://www.bbc.co.uk/sport/av/football/63811846?at_medium=RSS&at_campaign=KARANGA
|
World Cup 2022: Australia take the lead as Leckie scores 'huge' goal. Australia forward Mathew Leckie strikes his side into the lead against Denmark, moments after France concede a Tunisia goal, in Group D at the World Cup. |
2022-11-30 16:40:56 |
Overseas TECH |
reddit |
Someone is selling my design (without consent) on Etsy, 😂😂😂 How would you guys feel? |
https://www.reddit.com/r/3Dprinting/comments/z8vb21/someone_is_selling_my_design_without_consent_on/
|
Someone is selling my design (without consent) on Etsy. How would you guys feel? Submitted by u/mfactory_osaka to r/3Dprinting [link] [comments] |
2022-11-30 16:06:46 |
Overseas TECH |
reddit |
Poll 2: Which song on Unseen World “Progress” do you like most? |
https://www.reddit.com/r/BandMaid/comments/z8vg1j/poll_2_which_song_on_unseen_world_progress_do_you/
|
Poll 2: Which song on Unseen World "Progress" do you like most? Which song on Unseen World "Progress" do you like most? Share your thoughts. Manners is omitted here because Reddit allows only so many options: Giovanni, H-G-K, BLACK HOLE, Honkai, NO GOD, Warning. Vote also in the earlier poll: Which song on Unseen World "Roots" do you like most? [View Poll] Submitted by u/t-shinji to r/BandMaid [link] [comments] |
2022-11-30 16:12:13 |
GCP |
Cloud Blog |
Cloud CISO Perspectives: November 2022 |
https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-november-2022/
|
Cloud CISO Perspectives: November 2022. Welcome to November's Cloud CISO Perspectives. I'd like to celebrate the first year of the Google Cybersecurity Action Team (GCAT) and look ahead to the team's goals. As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you're reading this on the website and you'd like to receive the email version, you can subscribe here.

GCAT, one year later. We launched the Google Cybersecurity Action Team in October of last year as a premier security advisory team with the singular mission of supporting the security and digital transformation of governments, critical infrastructure, enterprises, and small businesses. The core mission is to help guide customers through the cycle of their security transformation: starting with their first cloud adoption roadmap and implementation, through increasing their cyber-resilience preparedness for potential events, and even helping engineer new solutions in partnership with them as requirements change.

Readers know that cybersecurity has only become more top of mind, yet organizations face continued challenges as they kick off and advance their security transformations. Our desire to help with and accelerate this process is directly tied to Google Cloud's shared fate model, where we take an active stake in the security posture of our customers by offering secure defaults, capabilities to ensure secure deployments and configurations, opinionated guidance on how to configure cloud workloads for security, and assistance with measuring, reducing, accepting, and transferring risk.

We've gotten very positive feedback on our strategy of deploying the right people with the right expertise at the right moment during customers' transformation journeys, and doing it in an integrated way so that the handoff from one specialist team to the next is seamless. This may not seem revolutionary, but focusing on making customers more secure from the beginning of their journey helps reduce toil and ingrain better security practices earlier on. We focus heavily on how we build institutional memory of particular customers on particular teams, so that if a customer comes back we can deploy the same or adjacent people to work with them. Most organizations are not solely on one cloud platform, so it's helpful to make sure we've got people we can redeploy who understand the customer's broader multicloud and hybrid environment. We look at the challenges we see in engagements with customers and use those as a fast feedback loop into which future solutions, blueprints, and products we should be working on. Ultimately, GCAT's role is at the forefront of making these transformations less daunting.

We've also found that our quarterly Threat Horizons report helps progress toward that goal. Threat Horizons offers a unique fusion of security data and strategic threat intelligence pulled together from across research teams at Google, geared for security leaders and their leadership teams. Many CISOs and other leaders have told us that they find Threat Horizons helpful, in part because our research often reflects their own findings and can help make their arguments stronger. As GCAT moves into its second year, we plan on further developing partnerships with our consulting teams, Professional Services and Mandiant, and we'll continue to scale our offerings through specializations and feedback loops. You can also listen to my conversation with Google Cloud security experts Anton Chuvakin and Timothy Peacock on the Cloud Security podcast about the first year of GCAT and how it fits in with industry trends.

Security Talks in December. Our next Google Cloud Security Talks event will focus on two topics that we've emphasized continuously in our Cloud CISO Perspectives: threat detection and Zero Trust. Join us in December to hear from leaders across Google, as well as leading-edge customers, on these two critical initiatives. Click here to reserve your spot, and we'll see you there (virtually).

Google Cybersecurity Action Team highlights. Here are the latest updates, products, services, and resources from our security teams this month.
Securing tomorrow today: we updated our internal encryption-in-transit protocol to protect communications within Google from potential quantum computing threats. Here's why.
Making Cobalt Strike harder for threat actors to abuse: we took steps with Cobalt Strike's vendor to hunt down cracked versions of the popular red-team software, which often are used in cyberattacks. Read more.
How data embassies can strengthen resiliency with sovereignty: data embassies extend the concept of using a digital haven to reduce risk, made possible by the flexible, distributed nature of the cloud. Here's how they work and how they intersect with Google Cloud. Read more.
For a successful cloud transformation, change your culture first: to fully incorporate all the benefits of a cloud transformation, an organization should update its security mindset and culture along with its technology. Read more.
From the FBI to Google Cloud, meet CISO Director MK Palmore: following three decades in the Marines and the FBI, MK Palmore came to Google Cloud's Office of the Chief Information Security Officer to help Google tackle some of the hardest security problems the industry faces right now. Read more.
Does the internet need sunscreen? No, submarine cables are protected from solar storms: a Google team set out to analyze the risks that undersea cables face from solar storms. Here's what they learned. Read more.
CISO Survival Guide: how financial services organizations can more securely move to the cloud: the first day in the cloud can be daunting for financial services organizations. What are the key questions they face, and how can they best respond to them? Read more.
Multicloud Mindset: thinking about open source and security in a multicloud world: security leaders and architects are shifting away from traditional security models, which are increasingly insufficient for protecting multicloud environments. Here's what you need to know about the trend. Read more.

Google Cloud security tips, tricks, and updates.
More reasons to use Chrome's cloud-based management: take a deep dive into recent improvements to the Chrome Browser Cloud Management tool. Read more.
Introducing Cloud Armor features to help improve efficacy: Google Cloud Armor can be used more efficiently with two new features, an auto-deploy option for proposed rules generated by Adaptive Protection and advanced rule tuning. Read more.
IAM Deny creates a simple way to harden your security posture at scale: new Identity and Access Management Deny policies make it easier to create rules that broadly restrict resource access, a powerful coarse-grained control to help implement security policies at scale. Read more.
Chronicle Security Operations offers a new, faster search and investigative experience: a new investigative experience comes to Chronicle Security Operations, with lightning-fast search across any form of structured data and greater flexibility to pivot and drill down when conducting complex, open-ended threat investigations. Read more.
How to analyze security and compliance of your dependencies with the Open Source Insights dataset: the Open Source Insights project scans millions of open-source packages, computes their dependency graphs, and annotates those graphs with security advisories, license information, popularity metrics, and other metadata. Read more.
How to migrate on-premises Active Directory users to Google Cloud Managed Microsoft AD: for organizations operating in Microsoft-centered environments, Google Cloud offers a highly available, hardened Managed Service for Microsoft Active Directory running on Windows virtual machines. Read more.
Announcing Private Marketplace, now in Preview: looking to reduce employee usage of shadow IT and out-of-date software? IT and cloud administrators can now create a private, curated version of Google Cloud Marketplace for their organizations. Read more.
New Mobile SDK can help reCAPTCHA Enterprise protect iOS and Android apps: the reCAPTCHA Enterprise Mobile SDK can help block fake users and bots from accessing mobile apps while allowing legitimate users to proceed, and it's now generally available to developers. Read more.
Practicing the principle of least privilege with Cloud Build and Artifact Registry: how to help reduce the blast radius of misconfigurations and malicious users using Cloud Build and Artifact Registry. Read more.
Automate cleanup of unused Google Cloud projects: part of reducing technological debt means getting rid of abandoned projects, but doing that manually is time-consuming. You can automate the process using Remora, a serverless solution that works with the Unattended Project Recommender. Read more.
Should I use Cloud Armor? Cloud Armor provides DDoS defense and additional security for apps and websites running on Google Cloud, on-prem, or on other platforms. This guide can help you decide when to use this powerful tool. Read more.
How to configure Traffic Director: Traffic Director is a managed Google service that helps solve common networking challenges related to flow, security, and observability. Here's how to use it. Read more.

Compliance & controls.
Google Cloud completes Korea Financial Security Institute audit: earlier this year we worked with South Korean auditors to support a group of leading South Korean FSIs interested in expanding their adoption of Google Cloud. Read more.
Google Public Sector announces continuity-of-operations offering for government entities under cyberattack: every U.S. government agency is now expected to have a Continuity of Operations Plan (COOP) in place. Google Workspace is positioned to help with these business and collaboration continuity needs, ensuring agency teams can continue to work effectively and securely in the event of an incident. Read more.
Announcing Assured Workloads for Israel, in Preview: Assured Workloads helps customers create and maintain controlled environments. The Assured Workloads Preview for Israel provides data residency in our new Israel cloud region, cryptographic control over data, and service usage restrictions that help keep organizations in policy compliance. Read more.

Google Cloud Security Podcasts. We launched a weekly podcast focusing on cloud security in February of last year. Hosts Anton Chuvakin and Timothy Peacock chat with cybersecurity experts about the most important and challenging topics facing the industry today. This month they discussed:
Google Workspace security, from threats to Zero Trust: is compliance changing? Have hardware keys really stopped phishing? Which security assumptions do we need to revisit? We discuss these important hybrid-workplace security questions and more with Nikhil Sinha and Kelly Anderson of Google Workspace. Listen here.
Secrets of cloud security incident response: cloud transformations also change security standards and protocol, including incident response challenges, creating effective partnerships with cloud service providers, and even the definition of a security incident, with Google security specialists Matt Linton and John Stone. Listen here.
A deep dive on the release of detection rules for Cobalt Strike abuse: in this conversation with Greg Sinclair, security engineer at Google Cloud, we discuss his blog post explaining how and why Google Cloud took action to limit the scope of malicious-actor abuse of Cobalt Strike. Listen here.
Who observes Cloud Security Observability? From improving detection and response, to making network communications more secure, to its impact on the shift to TLS: everything you wanted to know about observability data but were afraid to ask, with Jeff Bollinger, director of incident response and detection engineering at LinkedIn. Listen here.
Cloud threats and incidents: RansomOps, misconfigurations, and cryptominers. How are cloud environments attacked and compromised today, and is cloud security a misnomer? With Alicja Cade, director of financial services at Google Cloud's Office of the CISO; Ken Westin, director of security strategy at Cybereason; and Robert Wallace, senior director at Mandiant. Listen here.

To have our Cloud CISO Perspectives post delivered every month to your inbox, sign up for our newsletter. We'll be back next month with more security-related updates. |
2022-11-30 17:00:00 |
GCP |
Cloud Blog |
Built with BigQuery: Zeotap uses Google BigQuery to build highly customized audiences at scale |
https://cloud.google.com/blog/products/data-analytics/built-bigquery-zeotap-uses-google-bigquery-build-highly-customized-audiences-scale/
|
Built with BigQuery Zeotap uses Google BigQuery to build highly customized audiences at scaleZeotap s mission is to help brands monetise customer data in a privacy first Europe Today Zeotap owns three data solutions Zeotap CDP is the next generation Customer Data Platform that empowers brands to collect unify segment and activate customer data Zeotap CDP puts privacy and security first while empowering marketers to unlock and derive business value in their customer data with a powerful and marketer friendly user interface Zeotap Data delivers quality targeting at scale by enabling the activation of tried and tested Champion Segments across programmatic advertising and social platforms ID is a universal marketing ID initiative that paves the way for addressability in the cookieless future Zeotap s CDP is a SaaS application that is hosted on Google Cloud A client can use Zeotap CDP SaaS product suite to onboard its first party data use the provided tools to create audiences and activate them on marketing channels and advertising platforms Zeotap partnered with Google Cloud to provide a customer data platform that is differentiated in the market with a focus on privacy security and compliance Zeotap CDP built with BigQuery is empowered with tools and capabilities to democratize AI ML models to predict customer behavior and personalize the customer experience to enable the next generation digital marketing experts to drive higher conversion rates return on advertising spend and reduce customer acquisition cost The capability to create actionable audiences that are highly customized the first time improve speed to market to capture demand and drive customer loyalty are differentiating factors However as the audiences get more specific it becomes more difficult to estimate and tune the size of the audience segment Being able to identify the right customer attributes is critical for building audiences at scale Consider the following example a fast fashion retailer has a broken size run and is at risk of taking a large markdown because of an excess of XXS and XS sizes What if you are able to instantly build an audience of customers who have a high propensity for this brand or style tend to purchase at full price and match the size profile for the remaining inventory to drive full price sales and avoid costly markdowns Most CDPs provide size information only after a segment is created and its data processed If the segment sizes are not relevant and quantifiable the target audiences list has to be recreated impacting speed to market and capturing customer demand Estimating the segment size and tuning the size of the audience segment is often referred to as the segment size estimation problem The segment size needs to be estimated and segments should be available for exploration and processing with a sub second latency to provide a near real time user experience Traditional approaches to solve this problem relies on pre aggregation database models which involve sophisticated data ingestion and failure management thus wasting a lot of compute hours and requiring extensive pipeline orchestration There are a number of disadvantages with this traditional approach Higher cost and maintenance as multiple Extract Transform and Load ETL processes are involvedHigher failure rate and re processing required from scratch in case of failuresTakes hours days to ingest data at large scaleZeotap CDP relies on the power of Google Cloud Platform to tackle this segment size estimation problem using BigQuery for 
processing and estimation the BI Engine to provide sub second latency required for online predictions and Vertex AI ecosystem with BigQuery ML to provide a no code AI segmentation and lookalike audiences Zeotap CDP s strength is to offer this estimation at the beginning of segment creation before any kind of data processing using pre calculated metrics Any correction in segment parameters can be made near real time saving a lot of user s time The data cloud with BigQuery at its core functions as a data lake at scale and the analytical compute engine that calculates the pre aggregated metrics The BI engine is used as a caching and acceleration layer to make these metrics available with near sub second latency Compared to the traditional approach this setup does not require a heavy data processing framework like Spark Hadoop or sophisticated pipeline management Microservices deployed on the GKE platform are used for orchestration using BigQuery SQL ETL capabilities This does not require a separate data ingestion in the caching layer as the BI engine works seamlessly in tandem with BigQuery and is enabled using a single setting The below diagram depicts how Zeotap manages the first party data and solves for the segment size estimation problem The API layer powered by Apigee provides secure client access to Zeotap s API infrastructure to read and ingest first party data in real time The UI Services Layer backed by GKE and Firebase provides access to Zeotap s platform front ending audience segmentation real time workflow orchestration management analytics amp dashboards The Stream amp Batch processing manages the core data ingestion using PubSub Dataflow and Cloud Run Google BigQuery Cloud SQL BigTable and Cloud Storage make up all of the Storage layer The Destination Platform allows clients to activate its data across various marketing channels data management and ad management platforms like Google DDP TapTap TheTradeDesk etc plus more than such integrations Google BigQuery is at the heart of the Audience Platform to allow clients to slice and dice its first party assets enhance it with Zeotap s universal ID graph or its third party data assets and push to downstream destinations for activation and funnel analysis The Predictive Analytics layer allows clients to create and activate machine learned e g CLV and RFM modeling based segments with just a few clicks Cloud IAM Cloud Operations suite and Collaborations tools deliver the cross sectional needs of security logging and collaboration For segment audience size estimation the core data that is client s first party data resides in its own GCP project First step here is to identify low cardinality columns using BigQuery s “approx count distinct capabilities At this time Zeotap supports a sub second estimation on only low cardinality represents the number of unique values dimensions like Gender with Male Female M N values and Age with limited age buckets A sample query looks like this Once pivoted by columns the results look like thisNow the cardinality numbers are available for all columns they are divided into two groups one below the threshold low cardinality and one above the threshold high cardinality Next step is to run a reverse ETL query to create aggregates on low cardinality dimensions and corresponding HLL sketches for user count measure dimensions A sample query looks like thisThe resultant data is loaded into a separate estimator Google Cloud project for further processing and analysis This project contains a metadata store with 
datasets required for processing client requests and is front ended with BI engine to provide acceleration to estimation queries With this process the segment size is calculated using pre aggregated metrics without processing the entire first party dataset and enables the end user to create and experiment with a number of segments without incurring any delays as in the traditional approach This approach obsoletes ETL steps required to realize this use case which drives a benefit of over time reduction and cost reduction for the segment size estimation Also enabling BI engine on top of BigQuery boosts query speeds by more than optimizes resource utilization and improves query response as compared to native BigQuery queries The ability to experiment with audience segmentation is one of the many capabilities that Zeotap CDP provides their customers The cookieless future will drive experimentation with concepts like topics for IBA Interest based advertising and developing models that support a wide range of possibilities in predicting customer behavior There is an ever increasing demand for shared data where customers are requesting access to the finished data in the form of datasets to share both within and across the organization through external channels These datasets unlock more opportunities where the curated data can be used as is or coalesced with other datasets to create business centric insights or fuel innovation by enabling ecosystem or develop visualizations To meet this need Zeotap is leveraging Google Cloud Analytics Hub to create a rich data ecosystem of analytics ready datasets Analytics Hub is powered by Google BigQuery which provides a self service approach to securely share data by publishing and subscribing to trusted data sets as listings in Private and Public Exchanges It allows Zeotap to share the data in place having full control while end customers have access to fresh data without the need to move data at large scale Click here to learn more about Zeotap s CDP capabilities or to request a demo The Built with BigQuery advantage for ISVs Google is helping tech companies like Zeotap build innovative applications on Google s data cloud with simplified access to technology helpful and dedicated engineering support and joint go to market programs through the Built with BigQuery initiative launched in April as part of the Google Data Cloud Summit Participating companies can Get started fast with a Google funded pre configured sandbox Accelerate product design and architecture through access to designated experts from the ISV Center of Excellence who can provide insight into key use cases architectural patterns and best practices Amplify success with joint marketing programs to drive awareness generate demand and increase adoption BigQuery gives ISVs the advantage of a powerful highly scalable data warehouse that s integrated with Google Cloud s open secure sustainable platform And with a huge partner ecosystem and support for multi cloud open source tools and APIs Google provides technology companies the portability and extensibility they need to avoid data lock in Click here to learn more about Built with BigQuery We thank the Google Cloud and Zeotap team members who co authored the blog Zeotap Shubham Patil Engineering Manager Google Bala Desikan Principal Architect and Sujit Khasnis Cloud Partner EngineeringRelated ArticleBuilt with BigQuery How True Fit s data journey unlocks partner growthTrue Fit a data driven personalization platform built on Google Data Cloud to 
provide fit personalization for retailers by sharing curat Read Article |
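To make the pre-aggregation pattern described in this entry concrete, here is a minimal Python sketch of the two BigQuery steps: probing column cardinalities with APPROX_COUNT_DISTINCT, then materializing HLL sketches per low-cardinality dimension and merging them to estimate a segment size. The project, dataset, table and column names are hypothetical placeholders, not Zeotap's actual schema.

    from google.cloud import bigquery

    client = bigquery.Client()

    # 1. Probe cardinalities of candidate dimensions (hypothetical table and columns).
    cardinality_sql = """
        SELECT
          APPROX_COUNT_DISTINCT(gender)     AS gender_cardinality,
          APPROX_COUNT_DISTINCT(age_bucket) AS age_bucket_cardinality
        FROM `my-project.first_party.users`
    """

    # 2. Reverse-ETL: pre-aggregate an HLL sketch of user IDs per low-cardinality combination.
    sketch_sql = """
        CREATE OR REPLACE TABLE `my-project.estimator.segment_sketches` AS
        SELECT gender, age_bucket, HLL_COUNT.INIT(user_id) AS users_sketch
        FROM `my-project.first_party.users`
        GROUP BY gender, age_bucket
    """

    # 3. Estimate a segment's size by merging only the matching sketches.
    estimate_sql = """
        SELECT HLL_COUNT.MERGE(users_sketch) AS estimated_segment_size
        FROM `my-project.estimator.segment_sketches`
        WHERE gender = 'female' AND age_bucket = '25-34'
    """

    for sql in (cardinality_sql, sketch_sql, estimate_sql):
        for row in client.query(sql).result():
            print(dict(row))

With BI Engine enabled on the estimator dataset, the final query touches only the small sketch table rather than the raw first-party data, which is what makes near-real-time size estimates plausible.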
2022-11-30 17:00:00 |
GCP |
Cloud Blog |
6 common mistakes to avoid in RESTful web API Design |
https://cloud.google.com/blog/products/api-management/restful-web-api-design-best-practices/
|
common mistakes to avoid in RESTful web API DesignImagine ordering a “ready to assemble table online only to find that the delivery package did not include the assembly instructions You know what the end product looks like but have little to no clue how to start assembling the individual pieces to get there A poorly designed API tends to create a similar experience for a consumer developer Well designed APIs make it easy for consumer developers to find explore access and use them In some cases good quality APIs even spark new ideas and open up new use cases for consumer developers There are methods to improve API design ーlike following RESTful practices But time and again we are seeing customers unknowingly program minor inconveniences into their APIs To help you avoid these pitfalls here are six of the most common mistakes we have seen developers make while creating the API ーand guidance on how to get it right Thinking inside out vs outside inBeing everything for everybody often means that nothing you do is the best it could be and that is just as true for APIs When customers turn to APIs they are looking for specific solutions to make their work easier and more productive If there is an API that better works to their needs they will choose that one over yours This is why it s so important to know what your customers need to do their work better and then building to fill those needs In other words start thinking Outside in as opposed to Inside Out Specifically Inside out refers to designing APIs around internal systems or services you would like to expose Outside in refers to designing APIs around customer experiences you want to create Read more about the Outside in perspective in the API product mindset The first step to this is learning from your customers ーbe it internal consumer developers or external customers ーand their use cases Ask them about the apps they are building their pain points and what would help streamline or simplify their development Write down their most significant use cases and create a sample API response that only gives them the exact data they need for each case As you test this look for overlap between payloads and adapt your designs to genericize them across common or similar use cases If you can t connect with your customers ーbecause you don t have direct access they don t have time or they just don t know what they want ーthe best approach is to imagine what you would build with your APIs Think big and think creatively While you don t want to design your APIs for vaporware thinking about the big picture can make it easier to build non breaking changes in the future For example the image below showcases APIs offered by Google Maps Even without diving into the documentation looking at the names like “Autocomplete or “Address Validation clearly outlines the purposes and potential fit for a customer s use case Making your APIs too complex for usersCustomers turn to APIs to bypass complicated programming challenges so they can get to the part they know how to do well If they feel like using your API means learning a whole new system or language then it isn t fitting their needs and they will likely look for something else It s up to your team to make an API that is strong and smart enough to do what your customer wants but also simple enough to hide how complicated the tasks your API solves for really are For example if you know your customers are using your APIs to present information about recently open restaurants and highly rated pizzeria to their consumers 
providing them with a simple API call like the one below would be of great help:

    GET /restaurants?location=Austin&category=Pizzeria&open=true&sort=priority,created_at

To see if your API design is simple enough, pretend you are building the whole system from scratch, or, if you have a trusted customer who is willing to help, ask them to test it and report their results. If you can complete the workflow without having to stop to figure something out, then you're good to go. On the other hand, if you catch rough edges caused by trying to code around system complexity issues, keep refactoring. The API will be ready when you can say that nothing is confusing and that it either meets your customers' needs or can easily be updated as needs change.

Creating "chatty" APIs with too many calls
Multiple network calls slow down the process and create higher connection overhead, which means higher operational costs. This is why it's so important to minimize the number of API calls. The key to this is outside-in design: simplify. Look for ways to reduce the number of API calls a customer must make in their application's workflow. If your customers are building mobile applications, for example, they often need to minimize their network traffic to reduce battery drain, and requiring a couple of calls instead of a dozen can make a big difference. Rather than deciding between building distinct data-driven microservices and streamlining API usage, consider offering both: fine-grained APIs for specific data types and "experience APIs", that is, APIs designed to power user experiences around common or customer-specific user interfaces (there is a further theoretical discussion of experience APIs worth reading). These experience APIs compose multiple smaller domains into a single endpoint, making it much simpler for your customers, especially those building user interfaces, to render their screens easily and quickly. Another option here is to use something like GraphQL to allow for this type of customizability. Generally you should avoid building a unique endpoint for every possible screen, but endpoints for common screens like home pages and user account information can make a world of difference to your API consumers.
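As a rough illustration of the experience-API idea above, the sketch below composes two fine-grained lookups into a single home-screen payload, so a mobile client makes one call instead of several. The endpoint path, helper functions and field names are hypothetical; in a real system the helpers would call your fine-grained services.

    from flask import Flask, jsonify

    app = Flask(__name__)

    def fetch_profile(user_id):
        # Stand-in for a call to a fine-grained /users/{id} service.
        return {"id": user_id, "name": "Ada"}

    def fetch_recent_orders(user_id):
        # Stand-in for a call to a fine-grained /orders?user={id} service.
        return [{"order_id": "A-1", "status": "shipped"}]

    @app.route("/home-screen/<user_id>")
    def home_screen(user_id):
        # One "experience" endpoint returns everything the home screen needs.
        return jsonify({
            "profile": fetch_profile(user_id),
            "recent_orders": fetch_recent_orders(user_id),
        })

    if __name__ == "__main__":
        app.run(port=8080)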
Not allowing for flexibility
Even if you've followed all of the steps above, you may find edge cases that do not fit under your beautifully designed payloads. Maybe your customer needs more data in a single page of results than usual, or the payload has far more data than their app requires. You can't create a one-size-fits-all solution, but you also don't want a reputation for building APIs that are limiting. Here are simple options to make your endpoints more flexible (a minimal handler sketch follows this list):

- Filter out response properties. You can either use query parameters for sorting and pagination or use GraphQL, which provides these types of details natively. Giving customers the option to request only the properties they need guarantees that they won't have to sort through tons of unnecessary data to get what they need. For example, if some of your customers only need the title, author and bestseller ranking, give them the ability to retrieve only that data with a query string parameter:

    GET /books?fields=title,author,ranking

- Ability to sort with pagination. Generally you don't want to guarantee the order of objects in an API response, because minor changes in logic or peculiarities in your data source might change the sort order at some point. In some cases, however, your customers may want to sort by a particular field. Giving them that option, combined with a pagination option, gives them a highly efficient API when they only want the top few results. For example, the Spotify API uses a simple offset and limit parameter pair to allow pagination. A sample endpoint, as shown in the documentation, looks like this:

    curl "https://api.spotify.com/…?offset={offset}&limit={limit}"

- Use mature compositions like GraphQL. Since customer data needs can differ, giving them on-the-fly composites lets them build the combinations of data they need rather than being restricted to a single data type or a pre-set combination of data fields. Using GraphQL can even bypass the need to build experience APIs, but when this isn't an option you can use query string parameter options like "expand" to create these more complex queries. Here is a sample response that demonstrates a collection of company resources with embedded properties included:

    {
      "data": [
        {
          "CompanyUid": "…ecf…fca…",
          "name": "ABCCo",
          "status": "Active",
          "embedded": {
            "organization": {
              "CompanyUid": "…ecf…fca…",
              "name": "ABCCo",
              "type": "Company",
              "taxId": "…",
              "city": "Portland",
              "notes": ""
            }
          }
        }
      ]
    }
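Here is a minimal sketch, in the same spirit as the list above, of a handler that honors fields, sort, offset and limit query parameters over an in-memory collection. The parameter names mirror the examples above; the dataset and the framework choice (Flask) are illustrative assumptions, not a prescribed implementation.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    BOOKS = [
        {"title": "Book A", "author": "Ada", "ranking": 3, "pages": 410},
        {"title": "Book B", "author": "Grace", "ranking": 1, "pages": 250},
        {"title": "Book C", "author": "Edsger", "ranking": 2, "pages": 330},
    ]

    @app.route("/books")
    def list_books():
        # ?fields=title,author -> return only those properties.
        fields = request.args.get("fields")
        # ?sort=ranking -> sort by that property; ?offset=0&limit=2 -> paginate.
        sort_key = request.args.get("sort")
        offset = int(request.args.get("offset", 0))
        limit = int(request.args.get("limit", len(BOOKS)))

        items = sorted(BOOKS, key=lambda b: b[sort_key]) if sort_key else list(BOOKS)
        items = items[offset:offset + limit]

        if fields:
            wanted = set(fields.split(","))
            items = [{k: v for k, v in b.items() if k in wanted} for b in items]

        return jsonify({"data": items, "offset": offset, "limit": limit})

    if __name__ == "__main__":
        app.run(port=8080)

A request like GET /books?fields=title,ranking&sort=ranking&offset=0&limit=2 then returns just the two top-ranked titles with only the requested properties.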
Making design unreadable to humans
Remember to "K"eep "I"t "S"imply "S"tupid when you are designing your API. While APIs are meant for computer-to-computer interaction, the first client of an API is always a human, and the API contract is the first piece of documentation. Developers are more apt to study your payload design before they dig into your docs; observation studies suggest that developers spend far more of their time in the editor and API client than in reference documentation. For example, if you skim through the payload below, it takes some time to understand, because instead of property names it keys the response on an "id". Even the property name "data" does not suggest anything meaningful aside from being an artifact of the JSON design. A few extra bytes in the payload can save a lot of early confusion and accelerate adoption of your API. Notice how user IDs appearing on the left of the colon, in the position where JSON ideally has property names, make the payload confusing to read:

    {
      "…1a…": {
        "data": [
          {
            "AirportCode": "LAX",
            "AirportName": "Los Angeles",
            "From": "LAX",
            "To": "Austin",
            "departure": "…T…",
            "arrival": "…T…"
          },
          "More data…"
        ]
      }
    }

We think that JSON like this is more difficult to learn. Eliminate any ambiguity in the words you choose to describe the data, keep the payload simple, and if any of those labels could be interpreted in more than one way, adjust them to be more clear. Here is a sample response from the Airlines endpoint of the aviationstack API; notice how the property names clearly explain the expected result while maintaining a simple JSON structure:

    {
      "data": [
        {
          "airline_name": "American Airlines",
          "iata_code": "AA",
          "iata_prefix_accounting": "…",
          "icao_code": "AAL",
          "callsign": "AMERICAN",
          "type": "scheduled",
          "status": "active",
          "fleet_size": "…",
          "fleet_average_age": "…",
          "date_founded": "…",
          "hub_code": "DFW",
          "country_name": "United States",
          "country_iso": "US"
        }
      ]
    }

Know when you can break the RESTful rules
Being true to the RESTful basics, such as using the correct HTTP verbs, status codes and stateless resource-based interfaces, can make your customers' lives easier because they don't need to learn an all-new lexicon, but remember that the goal is to help them get their job done. If you put RESTful design above user experience, it doesn't really serve its purpose. Your goal should be helping your customers be successful with your data as quickly and easily as possible. Occasionally that may mean breaking some rules of REST to offer simpler and more elegant interfaces. Just be consistent in your design choices across all of your APIs, and be very clear in your documentation about anything that might be peculiar or nonstandard.

Conclusion
Beyond these common pitfalls, we have also created a comprehensive guide packaging up our rich experience designing and managing APIs at incredible scale with Google Cloud's API management product, Apigee. Apigee, Google Cloud's native API management platform, helps you build, manage and secure APIs for any use case, scale or environment. Get started with Apigee today, or check out our documentation for additional information. |
2022-11-30 17:00:00 |
GCP |
Cloud Blog |
Low-latency fraud detection with Cloud Bigtable |
https://cloud.google.com/blog/products/databases/fraud-detection-with-cloud-bigtable/
|
Low latency fraud detection with Cloud BigtableEach time someone makes a purchase with a credit card, financial companies want to determine whether that was a legitimate transaction or whether it used a stolen credit card, abused a promotion, or hacked into a user's account. Every year billions of dollars are lost to credit card fraud, so there are serious financial consequences. Companies dealing with these transactions need to balance predicting fraud accurately and predicting fraud quickly.

In this blog post you will learn how to build a low-latency, real-time fraud detection system that scales seamlessly by using Bigtable for user attributes, transaction history and machine learning features. We will follow an existing code solution, examine the architecture, define the database schema for this use case and see opportunities for customization. The code for this solution is on GitHub and includes a simplistic sample dataset, a pre-trained fraud detection model and a Terraform configuration. This blog and example aim to showcase the end-to-end solution rather than machine learning specifics, since most real fraud detection models can involve hundreds of variables. If you want to spin up the solution and follow along, clone the repo and follow the instructions in the README to set up resources and run the code:

    git clone …
    cd java-docs-samples/bigtable/use-cases/fraudDetection

Fraud detection pipeline
When someone initiates a credit card purchase, the transaction is sent for processing before the purchase can be completed. The processing includes validating the credit card, checking for fraud, and adding the transaction to the user's transaction history. Once those steps are completed, and if no fraud is identified, the point-of-sale system can be notified that the purchase can finish. Otherwise, the customer might receive a notification indicating there was fraud, and further transactions can be blocked until the user secures their account.

The architecture for this application includes:
- An input stream of customer transactions
- A fraud detection model
- An operational data store with customer profiles and historical data
- A data pipeline for processing transactions
- A data warehouse for training the fraud detection model and querying table-level analytics
- An output stream of fraud query results

The architecture diagram shows how the system is connected and which services are included in the Terraform setup.

Pre-deployment
Before creating a fraud detection pipeline, you will need a fraud detection model trained on an existing dataset. This solution provides a fraud model to try out, but it is tailored to the simplistic sample dataset. When you're ready to deploy this solution based on your own data, you can follow our blog on how to train a fraud model with BigQuery ML.

Transaction input stream
The first step towards detecting fraud is managing the stream of customer transactions. We need an event streaming service that can horizontally scale to meet the workload traffic, so Cloud Pub/Sub is a great choice. As our system grows, additional services can subscribe to the event stream to add new functionality as part of a microservice architecture; perhaps the analytics team will subscribe to this pipeline for real-time dashboards and monitoring. When someone initiates a credit card purchase, a request from the point-of-sale system comes in as a Pub/Sub message. This message has information about the transaction, like location, transaction amount, merchant ID and customer ID. Collecting all the transaction information is critical for making an informed decision, since we will update the fraud detection model based on purchase patterns over time as well as accumulate recent data to use for the model inputs. The more data points we have, the more opportunities we have to find anomalies and make an accurate decision.
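As a minimal illustration of feeding this input stream, the sketch below publishes one comma-separated transaction record to a Pub/Sub topic with the Python client. The project ID, topic name and field layout are assumptions for the example; the sample repo's own test data and pipeline define the real schema.

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    # Hypothetical project and topic names.
    topic_path = publisher.topic_path("my-project", "fraud-detection-input")

    # Assumed field order: customer_id, timestamp, merchant_id, amount, lat, long.
    transaction = ",".join([
        "customer-42",
        "2022-11-30T17:00:00Z",
        "merchant-7",
        "129.99",
        "30.27",
        "-97.74",
    ])

    future = publisher.publish(topic_path, transaction.encode("utf-8"))
    print("Published message ID:", future.result())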
Transaction pipeline
Pub/Sub has built-in integration with Cloud Dataflow, Google Cloud's data pipeline tool, which we will use for processing the stream of transactions with horizontal scalability. It's common to design Dataflow jobs with multiple sources and sinks, so there is a lot of flexibility in pipeline design. Our pipeline here only fetches data from Bigtable, but you could also add additional data sources or even third-party financial APIs as part of the processing. Dataflow is also great for outputting results to multiple sinks, so we can write to databases, publish an event stream with the results, and even call APIs to send emails or texts to users about the fraud activity.

Once the pipeline receives a message, our Dataflow job does the following:
1. Fetch user attributes and transaction history from Bigtable
2. Request a prediction from Vertex AI
3. Write the new transaction to Bigtable
4. Send the prediction to a Pub/Sub output stream

    Pipeline pipeline = Pipeline.create(options);

    PCollection<RowDetails> modelOutput =
        pipeline
            .apply(
                "Read PubSub Messages",
                PubsubIO.readStrings().fromTopic(options.getInputTopic()))
            .apply("Preprocess Input", ParDo.of(PREPROCESS_INPUT))
            .apply("Read from Cloud Bigtable", ParDo.of(new ReadFromTableFn(config)))
            .apply("Query ML Model", ParDo.of(new QueryMlModelFn(options.getMLRegion())));

    modelOutput
        .apply("TransformParsingsToBigtable", ParDo.of(WriteCBTHelper.MUTATION_TRANSFORM))
        .apply("WriteToBigtable", CloudBigtableIO.writeToTable(config));

    modelOutput
        .apply(
            "Preprocess Pub/Sub Output",
            ParDo.of(
                new DoFn<RowDetails, String>() {
                  @ProcessElement
                  public void processElement(
                      @Element final RowDetails modelOutput,
                      final OutputReceiver<String> out)
                      throws IllegalAccessException {
                    out.output(modelOutput.toCommaSeparatedString());
                  }
                }))
        .apply("Write to PubSub", PubsubIO.writeStrings().to(options.getOutputTopic()));

    pipeline.run();
Operational data store
To detect fraud, in most scenarios you cannot look at just one transaction in a silo; you need additional context in real time in order to detect an anomaly. Information about the customer's transaction history and user profile are the features we will use for the prediction. We'll have lots of customers making purchases, and since we want to validate each transaction quickly, we need a scalable, low-latency database that can act as part of our serving layer. Cloud Bigtable is a horizontally scalable database service with consistent single-digit-millisecond latency, so it aligns well with our requirements.

Schema design
Our database will store customer profiles and transaction history. The historical data provides context that lets us know whether a transaction follows its customer's typical purchase patterns; these patterns can be found by looking at hundreds of attributes. A NoSQL database like Bigtable allows us to add columns for new features seamlessly, unlike less flexible relational databases, which would require schema changes to augment. Data scientists and engineers can evolve the model over time by mixing and matching features to see what creates the most accurate model. They can also use the data in other parts of the application, for example generating credit card statements for customers or creating reports for analysts. Bigtable as an operational data store here provides a clean, current version of the truth shared by multiple access points within our system.

For the table design, we can use one column family for customer profiles and another for transaction history, since they won't always be queried together. Most users are only going to make a few purchases a day, so we can use the user ID for the row key. All transactions can go in the same row, since Bigtable's cell versioning lets us store multiple values at different timestamps in each row/column intersection. Our example table includes more columns, but the structure is a customer-ID row key with a customer-profile column family and a transaction-history column family.

Since we are recording every transaction each customer makes, the data could grow very quickly, but garbage collection policies can simplify data management. For example, we might want to keep a minimum number of recent transactions and then delete any transactions older than six months (a sketch of such a policy follows below). Garbage collection policies apply per column family, which gives us flexibility: we want to retain all the information in the customer profile family, so we can use a default policy there that won't delete any data. These policies can be managed easily via the Cloud Console and ensure there's enough data for decision-making while trimming the database of extraneous data. Bigtable stores timestamps for each cell by default, so if a transaction is incorrectly categorized as fraud/not fraud, we can look back at all of the information to debug what went wrong. There is also the opportunity to use cell versioning to support temporary features; for example, if a customer notifies us that they will be traveling during a certain time, we can update the location with a future timestamp so they can go on their trip with ease.
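Here is a minimal sketch of the per-column-family garbage collection idea described above, using the Python Bigtable client: the profile family keeps everything, while the history family only garbage-collects cells that are both older than six months and beyond a retained number of recent versions. The instance, table and family names, and the retained-version count, are illustrative assumptions rather than values from the sample repo.

    import datetime

    from google.cloud import bigtable
    from google.cloud.bigtable import column_family

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance("fraud-detection-instance")
    table = instance.table("transaction-history")

    table.create()

    # Customer profile family: no GC rule, so nothing is ever deleted automatically.
    table.column_family("cust_profile").create()

    # Transaction history family: collect cells only when they are BOTH older than
    # six months AND not among the most recent versions (an intersection rule).
    history_rule = column_family.GCRuleIntersection([
        column_family.MaxAgeGCRule(datetime.timedelta(days=180)),
        column_family.MaxVersionsGCRule(100),  # illustrative retention count
    ])
    table.column_family("history", gc_rule=history_rule).create()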
Query
With our pending transaction we can extract the customer ID and fetch that customer's information from the operational data store. Our schema allows a single-row lookup to get an entire user's information:

    Table table = getConnection().getTable(TableName.valueOf(options.getCBTTableId()));
    Result row = table.get(new Get(Bytes.toBytes(transactionDetails.getCustomerID())));

    CustomerProfile customerProfile = new CustomerProfile(row);

Request a prediction
Now we have our pending transaction and the additional features, so we can make a prediction. We took the fraud detection model that we trained previously and deployed it to Vertex AI Endpoints, a managed service with built-in tooling to track our model's performance.

    PredictRequest predictRequest =
        PredictRequest.newBuilder()
            .setEndpoint(endpointName.toString())
            .addAllInstances(instanceList)
            .build();

    PredictResponse predictResponse = predictionServiceClient.predict(predictRequest);
    double fraudProbability =
        predictResponse
            .getPredictionsList()
            .get(0)
            .getListValue()
            .getValues(0)
            .getNumberValue();

    LOGGER.info("fraudProbability: " + fraudProbability);

Working with the result
We will receive the fraud probability back from the prediction service and can then use it in a variety of ways.

Stream the prediction
We need to pass the result along, so we can send the result and the transaction as a Pub/Sub message in a result stream, letting the point-of-sale service and other services complete processing. Multiple services can react to the event stream, so there is a lot of customization you can add here. One example would be to use the event stream as a Cloud Functions trigger for a custom function that notifies users of fraud via email or text. Another customization would be to include a mainframe or a relational database like Cloud Spanner or AlloyDB in the pipeline to commit the transaction and update the account balance; the payment only goes through if the balance fits within the remaining credit limit, otherwise the customer's card has to be declined.

    modelOutput
        .apply(
            "Preprocess Pub/Sub Output",
            ParDo.of(
                new DoFn<RowDetails, String>() {
                  @ProcessElement
                  public void processElement(
                      @Element final RowDetails modelOutput,
                      final OutputReceiver<String> out)
                      throws IllegalAccessException {
                    out.output(modelOutput.toCommaSeparatedString());
                  }
                }))
        .apply("Write to PubSub", PubsubIO.writeStrings().to(options.getOutputTopic()));

Update operational data store
We can also write the new transaction and its fraud status to our operational data store in Bigtable. As our system processes more transactions, we can improve the accuracy of our model by updating the transaction history, so we will have more data points for future transactions. Bigtable scales horizontally for reading and writing data, so keeping our operational data store up to date requires minimal additional infrastructure setup.

Making test predictions
Now that you understand the entire pipeline and it's up and running, we can send a few transactions to the Pub/Sub stream from our dataset. If you've deployed the codebase, you can generate transactions with gcloud and look through each tool in the Cloud Console to monitor the fraud detection ecosystem in real time. Run this bash script from the terraform directory to publish transactions from the testing data:

    NUMBER_OF_LINES=…
    PUBSUB_TOPIC=$(terraform -chdir=… output pubsub_input_topic | tr -d '"')
    FRAUD_TRANSACTIONS_FILE=datasets/testing_data/fraud_transactions.csv
    LEGIT_TRANSACTIONS_FILE=datasets/testing_data/legit_transactions.csv

    for i in $(eval echo "{1..$NUMBER_OF_LINES}")
    do
      # Send a fraudulent transaction.
      MESSAGE=$(sed "${i}q;d" $FRAUD_TRANSACTIONS_FILE)
      echo "$MESSAGE"
      gcloud pubsub topics publish $PUBSUB_TOPIC --message="$MESSAGE"
      sleep 1

      # Send a legit transaction.
      MESSAGE=$(sed "${i}q;d" $LEGIT_TRANSACTIONS_FILE)
      echo "$MESSAGE"
      gcloud pubsub topics publish $PUBSUB_TOPIC --message="$MESSAGE"
      sleep 1
    done

Summary
In this piece we've looked at each part of a fraud detection pipeline and how to give each part scale and low latency using the power of Google Cloud. This example is available on GitHub, so explore the code, launch it yourself, and try making modifications to match your needs and data. The included Terraform setup uses dynamically scalable resources like Dataflow, Pub/Sub and Vertex AI, with an initial one-node Cloud Bigtable instance that you can scale up to match your traffic and system load. Related Article: How Cloud Bigtable helps Ravelin detect retail fraud with low
latencyDetecting fraud with low latency and accepting payments at scale is made easier thanks to Bigtable Read Article |
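To show one way downstream services might consume the pipeline's output stream described above, here is a small Python subscriber sketch that parses the comma-separated result records and flags high fraud probabilities. The subscription name, the positions of the fields and the alert threshold are assumptions for illustration, not part of the published solution.

    from concurrent.futures import TimeoutError

    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    # Hypothetical project and subscription names.
    subscription_path = subscriber.subscription_path("my-project", "fraud-results-sub")

    def callback(message):
        fields = message.data.decode("utf-8").split(",")
        customer_id = fields[0]                 # assumed: customer ID is the first field
        fraud_probability = float(fields[-1])   # assumed: probability is the last field
        if fraud_probability > 0.8:             # illustrative alert threshold
            print(f"Possible fraud for {customer_id}: {fraud_probability:.2f}")
        message.ack()

    streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
    try:
        streaming_pull.result(timeout=60)  # listen for one minute, then stop
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()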
2022-11-30 17:00:00 |
GCP |
Cloud Blog |
BigQuery Geospatial Functions - ST_IsClosed and ST_IsRing |
https://cloud.google.com/blog/products/data-analytics/learn-how-to-use-the-latest-geospatial-functions-in-bigquery/
|
BigQuery Geospatial Functions ST_IsClosed and ST_IsRingGeospatial data analytics lets you use location data (latitude and longitude) to get business insights. It's used for a wide variety of applications in industry, such as package delivery, logistics services, ride-sharing services, autonomous control of vehicles, real estate analytics and weather mapping. BigQuery, Google Cloud's large-scale data warehouse, provides support for analyzing large amounts of geospatial data. This blog post discusses two geography functions we've recently added to expand the capabilities of geospatial analysis in BigQuery: ST_IsClosed and ST_IsRing.

BigQuery geospatial functions
In BigQuery you can use the GEOGRAPHY data type to represent geospatial objects like points, lines and polygons on the Earth's surface. BigQuery geographies are based on the Google S2 library, which uses Hilbert space-filling curves to perform spatial indexing so queries run efficiently. BigQuery comes with a set of geography functions that let you process spatial data using standard ANSI-compliant SQL. If you're new to BigQuery geospatial analytics, start with "Get started with geospatial analytics", a tutorial that uses BigQuery to analyze and visualize the popular NYC bike trips dataset.

The new ST_IsClosed and ST_IsRing functions are boolean accessor functions that help determine whether a geographical object (a point, a line, a polygon, or a collection of these objects) is closed or is a ring. Both functions accept a GEOGRAPHY column as input and return a boolean value. For more information about these geometric objects, see "Well-known text representation of geometry" in Wikipedia.

Is the object closed? (ST_IsClosed)
The ST_IsClosed function examines a GEOGRAPHY object and determines whether each element of the object has an empty boundary; the boundary for each element is defined formally in the ST_Boundary function. The following rules determine whether a GEOGRAPHY object is closed:
- A point is always closed.
- A linestring is closed if the start point and end point of the linestring are the same.
- A polygon is closed only if it's a full polygon.
- A collection is closed if every element in the collection is closed.
- An empty GEOGRAPHY object is not closed.

Is the object a ring? (ST_IsRing)
The other new BigQuery geography function is ST_IsRing. This function determines whether a GEOGRAPHY object is a linestring and whether that linestring is both closed and simple. A linestring is considered closed as defined by the ST_IsClosed function. The linestring is considered simple if it doesn't pass through the same point twice, with one exception: if the start point and end point are the same, the linestring forms a ring, and in that case the linestring is considered simple.

Seeing the new functions in action
The following query shows what ST_IsClosed and ST_IsRing return for a variety of geometric objects. The query creates a series of ad hoc geography objects and uses UNION ALL to create a set of inputs, then calls ST_IsClosed and ST_IsRing to determine whether each input is closed or is a ring. You can run this query in the BigQuery SQL workspace page in the Google Cloud console:

    WITH example AS (
      SELECT ST_GeogFromText('POINT(…)') AS geography
      UNION ALL
      SELECT ST_GeogFromText('LINESTRING(…)') AS geography
      UNION ALL
      SELECT ST_GeogFromText('LINESTRING(…)') AS geography
      UNION ALL
      SELECT ST_GeogFromText('POLYGON((…))') AS geography
      UNION ALL
      SELECT ST_GeogFromText('MULTIPOINT(…)') AS geography
      UNION ALL
      SELECT ST_GeogFromText('MULTILINESTRING((…), (…))') AS geography
      UNION ALL
      SELECT ST_GeogFromText('GEOMETRYCOLLECTION EMPTY') AS geography
      UNION ALL
      SELECT ST_GeogFromText('GEOMETRYCOLLECTION(POINT(…), LINESTRING(…))') AS geography)
    SELECT
      geography,
      ST_IsClosed(geography) AS is_closed,
      ST_IsRing(geography) AS is_ring
    FROM example;

The console then shows, in the is_closed and is_ring columns, what each function returns for the various input geography objects.

The new functions with real-world geography objects
In this section we show queries using linestring objects that represent line segments connecting some cities in Europe. We show the geography objects on maps, then discuss the results you get when you call ST_IsClosed and ST_IsRing on them. You can run the queries with the BigQuery Geo Viz tool, which also produces the maps; in the tool, click the Show results button to see the values the functions return.

Start point and end point are the same, no intersection
In the first example, the query creates a linestring object that has three segments. The segments are defined by four sets of coordinates: the longitude and latitude for London, Paris, Amsterdam, and then London again. The query looks like the following:

    WITH example AS (
      SELECT ST_GeogFromText('LINESTRING(…)') AS geography)
    SELECT
      geography,
      ST_IsClosed(geography) AS is_closed,
      ST_IsRing(geography) AS is_ring
    FROM example;

In the example table created by the query, the function columns show the following:
- ST_IsClosed returns true. The start point and end point of the linestring are the same.
- ST_IsRing returns true. The geography is closed, and it's also simple because there are no self-intersections.

Start point and end point are different, no intersection
Another scenario is when the start and end points are different; for example, imagine two segments that connect London to Paris and then Paris to Amsterdam. The following query represents this set of coordinates:

    WITH example AS (
      SELECT ST_GeogFromText('LINESTRING(…)') AS geography)
    SELECT
      geography,
      ST_IsClosed(geography) AS is_closed,
      ST_IsRing(geography) AS is_ring
    FROM example;

This time the functions return the following values:
- ST_IsClosed returns false. The start point and end point of the linestring are different.
- ST_IsRing returns false. The linestring is not closed; it is simple because there are no self-intersections, but ST_IsRing returns true only when the geometry is both closed and simple.

Start point and end point are the same, with intersection
The third example creates a more complex geography. In this linestring the start point and end point are the same, but unlike the earlier example, the line segments intersect. A map of the segments shows connections that go from London to Zürich, then to Paris, then to Amsterdam, and finally back to London. In the following query
the linestring object has five sets of coordinates that define the four segments:

    WITH example AS (
      SELECT ST_GeogFromText('LINESTRING(…)') AS geography)
    SELECT
      geography,
      ST_IsClosed(geography) AS is_closed,
      ST_IsRing(geography) AS is_ring
    FROM example;

In this query, ST_IsClosed and ST_IsRing return the following values:
- ST_IsClosed returns true. The start point and end point are the same, and the linestring is closed despite the self-intersection.
- ST_IsRing returns false. The linestring is closed, but it's not simple because of the intersection.

Start point and end point are different, with intersection
In the last example, the query creates a linestring that has three segments connecting four points: London, Zürich, Paris and Amsterdam. On a map the segments connect those four cities in that order. The query is as follows:

    WITH example AS (
      SELECT ST_GeogFromText('LINESTRING(…)') AS geography)
    SELECT
      geography,
      ST_IsClosed(geography) AS is_closed,
      ST_IsRing(geography) AS is_ring
    FROM example;

The new functions return the following values:
- ST_IsClosed returns false. The start point and end point are not the same.
- ST_IsRing returns false. The linestring is not closed, and it's not simple.

Try it yourself
Now that you've got an idea of what you can do with the new ST_IsClosed and ST_IsRing functions, you can explore more on your own. For details about the individual functions, read the ST_IsClosed and ST_IsRing entries in the BigQuery documentation. To learn more about the rest of the geography functions available in BigQuery geospatial analytics, take a look at the BigQuery geography functions page. Thanks to Chad Jennings, Eric Engle and Jing Jing Long for their valuable support to add more functions to BigQuery Geospatial, and thank you to Mike Pope for helping review this article. |
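Because the coordinate lists above were lost in this excerpt, here is a small, self-contained Python sketch that runs the same kind of check with illustrative coordinates (approximate London, Paris and Amsterdam locations chosen for this example, not the values from the original post).

    from google.cloud import bigquery

    client = bigquery.Client()

    # A closed loop: London -> Paris -> Amsterdam -> London (approximate lon/lat values).
    sql = """
        WITH example AS (
          SELECT ST_GeogFromText(
            'LINESTRING(-0.13 51.51, 2.35 48.86, 4.90 52.37, -0.13 51.51)') AS geography)
        SELECT
          ST_IsClosed(geography) AS is_closed,
          ST_IsRing(geography) AS is_ring
        FROM example
    """

    for row in client.query(sql).result():
        # Expected: is_closed=True and is_ring=True, since the loop has no self-intersection.
        print(f"is_closed={row.is_closed}, is_ring={row.is_ring}")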
2022-11-30 17:00:00 |