Posted: 2022-04-27 05:31:34 RSS Feed 2022-04-27 05:00 Digest (37 items)

Category / Site / Article title or trending keyword / Link URL / Frequent words or summary / search volume / Date registered
AWS AWS AWS Data Exchange for Marketing & Advertising Organizations | Amazon Web Services https://www.youtube.com/watch?v=DOK4HiMcPxg AWS Data Exchange for Marketing & Advertising Organizations | Amazon Web Services. Optimize marketing campaigns, improve digital strategies, and personalize audiences that grow your business with AWS Data Exchange. Gain valuable insights and construct predictive models to anticipate customer needs and preferences using AWS Data Exchange's catalog of marketing database products. Use a wide variety of AWS analytics and machine learning tools so you can gain a 360-degree view into your customers' needs and preferences. Learn more. Subscribe. More AWS videos. More AWS events videos. ABOUT AWS: Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, are using AWS to lower costs, become more agile, and innovate faster. #AWSDataExchange #ThirdPartyData #MarketingData #DataSets #AWS #AmazonWebServices #CloudComputing 2022-04-26 19:31:22
python Pythonタグが付けられた新着投稿 - Qiita [Kaggle] H&M Recommendation Competition: a solution using rule-based methods https://qiita.com/Ashme/items/9294168993752bce15d9 ashme 2022-04-27 04:23:16
海外TECH MakeUseOf 8 Remote Work Security Mistakes You Should Avoid https://www.makeuseof.com/remote-work-security-mistakes-to-avoid/ missteps 2022-04-26 19:30:14
海外TECH MakeUseOf How to Fix Windows 10 When It Automatically Compresses Files https://www.makeuseof.com/windows-10-automatically-compresses-files-fix/ automatically 2022-04-26 19:15:13
海外TECH MakeUseOf Variable Refresh Rate Support Is Coming to PS5: What You Need to Know https://www.makeuseof.com/ps5-variable-refresh-rate-support-what-you-need-to-know/ Variable Refresh Rate Support Is Coming to PS5: What You Need to Know. As the PS5's gaming library grows, so do its features. Gamers can now look forward to variable refresh rates on their PS5 games. Let's dive in. 2022-04-26 19:08:45
海外TECH DEV Community Let's learn, build and sell an API https://dev.to/itsrakesh/lets-learn-build-and-sell-an-api-32h0 Let x s learn build and sell an APIIf you are in tech then you may have heard this popular term called API Some people use APIs for fun some for money and some for their applications There are N ways you can use APIs In this blog let s learn what exactly is an API how you can build your own API and how you can monetize your API Let s get started What is an API I am taking a popular example to explain this Imagine you go to a restaurant to eat some food Now you don t directly go to the kitchen and cook yourself and then eat it right Of course they don t allow you to do so You call a waiter and order your food Then the waiter goes to the kitchen and brings your food Here you can compare API with the waiter So API is an intermediary between two applications and makes it possible for those two applications to communicate with each other If we put this in our example one application is you the customer another application is the restaurant kitchen where the food is prepared and the waiter is an API who acts as an intermediary between you and the kitchen Why do we need APIs Imagine you have data and you want to share that data to allow developers to build software with your data Now you need some sort of way you can make this possible That s where APIs can help you You can build an API to share your data and other resources so that developers can use your API to build services or software Let s understand this with an example Let s say you are building an app that suggests the vehicle take the route with less traffic For this you need traffic data of different routes so that you can train a machine learning model and build your app It s not an easy task to count the number of vehicles travelling on different routes and prepare data So what you can do is use a rd party service that provides their data with APIs How to build an API Another thing you need to know about API is not just about data it can be a set of functions objects and commands For example browser API provides various functions objects etc to use in your applications to interact with the browser Before building our own API let s use an API We will be using a JokeAPI Before that let s learn some terms of API Endpoint An endpoint is an API server URL where you can access all the different resources that API provides Endpoints are actions like GET POST DELETE etc which you can perform on different routes For example GET is an API endpointPOST is another endpointand so on Paths Paths are different URLs of an API For example is a path routeParameter All the paths are pre defined in the API server If you have a path that can t be pre defined in the server then you can use parameters Parameters are key value pairs and start after from the end of a path For example here userId is a parameter If you have more than one parameter then you can append them by adding amp after each parameter For example api key yuffuft Let s use an APIOpen a new browser tab paste this URL and see You will receive something like this This is called a response you got from JokeAPI for your request And the format of the response is JSON JSON is a popular output format for APIs If you visit JokeAPI documentation you can try out different categories and filters In the above options each category is a different route path like and all the options below category can be appended as parameters like type twopart amp amount Let s try to tweak the options 
After tweaking the options copy the URL and paste it into the browser Now you will get a response with all the filters applied Let s build our own APIYou can build two types of APIs Software As mentioned somewhere above a software API is just a set of functions objects and commands it doesn t require a database For example jQuery API Browser API etc API service An API service gives people access to their data through API For example JokeAPi The Movie Database Open Weather API etc Let s build an API service to add delete edit and get your daily tasks We need a database and a server to create an API service Let s use MongoDB as our database and NodeJs and ExpressJs for creating a server Open your IDE or code editor Create a folder and name it something like todo api Before we start make sure you have these dev tools installed NodeJsMongoDBInitialize npm with npm initInstall express mongoose and axios packages as we use them for the project npm i express mongoose axiosInstall nodemon as a dev dependency Nodemon restarts the server every time we make changes to the code so that we don t need to manually restart npm i nodemon save devAdd a script to start the server with nodemon todo api package json scripts dev nodemon server js Next create a file called server js in the root and paste this boilerplate code todo api server js Snippet const express require express const mongoose require mongoose const app express const PORT process env PORT const MONGODB URI process env MONGODB URI mongodb localhost todoapiDB app use express json mongoose connect MONGODB URI useNewUrlParser true then gt app listen PORT console log Server stated on port catch err gt console log err Now start the server with this command npm run devVisit http localhost in your browser and see the response You should see this in your browser What it is telling you is there is no endpoint like GET http localhost defined in the server So let s add an endpoint As we are using expressjs we can register a route like this todo api server js Snippet app get req res gt res send Hello World Now visit the URL again and you will see the response So this is a simple GET request we created on our server Next create a simple model in our database to store our tasks todo api models tasks model js Snippet const mongoose require mongoose const Schema mongoose Schema const taskSchema new Schema name type String required true module exports mongoose model Task taskSchema and require the model in server jstodo api server js Snippet const Task require models tasks model Before we move further it s not possible to do everything from the browser so let s use an API tool called Postman Download it from here Free After downloading test it by entering the URL http localhost and clicking Send Now define a route that gets all the tasks todo api server js Snippet GET http localhost getTasksapp get getTasks async req res gt try const response await Task find res json response catch err res json message err If you test this now you will get an empty response as we have not added any tasks to our database So let s create a route to add tasks to our database To send data in our request we need to make a POST request todo api server js Snippet POST http localhost postTaskapp post postTask async req res gt try const response await Task create req body res json response catch err res json message err Now in postman change the GET request to POST Then go to the Body tab and select raw gt JSON from dropdown Write a JSON object in the body field and make a POST request 
to http localhost postTask You will receive a response back containing name the name of the task id the unique id of the task generated by MongoDB Add a few more tasks and make a GET request to http localhost you will see all your tasks Now let s add a route to delete a task todo api server js Snippet DELETE http localhost deleteTask idapp delete deleteTask id async req res gt try const response await Task remove id req params id res json response catch err res json message err In the above route http localhost deleteTask id id is called a Path Variable It is used when we can t pre define a route You can also use Query Parameter for our case So change the request method to DELETE in postman and copy one of your tasks id and paste in the path variable value and click Send If you now make a GET request to getTasks you won t see the deleted task That means you successfully deleted the Task Now let s edit a task We all make mistakes so we need an edit button I Hope Elon Musk adds an edit button to Twitter To edit data we have to make a PATCH request Let s create a route for that You can make use PUT request to edit a document But PATCH request is better if we want to edit partial data todo api server js Snippet PATCH http localhost editTask idapp patch editTask id async req res gt try const response await Task updateOne id req params id set req body res json response catch err res json message err Same as the POST request add body to your PATCH request Copy the id of the task that you wish to edit and paste it into the path variable value field and click Send Now make a GET request to getTasks you will see Task updated So that s it We learned important RESTAPI methods while building our small todo application Here is the postman collection containing the four requests LinkHere is the GitHub repository for this tutorial Link How to sell monetize an API Data is the new oil a popular quote of the st century and it is true If you have data then you can make loads of API is one great way to sell monetize your data Let s see how we can monetize our API To monetize our API we are going to use RapidAPIRapid API is the world s largest API hub where you can explore different APIs and create and manage your own APIs Before continuing host your API server somewhere like Heroku because you know localhost doesn t work outside of your computer And replace all http localhost with in your postman collection Let s start by creating an account if you don t have one already After creating an account click on My APIs on the top right Click on Add New API on the left panel Fill in the details for API Name Short Description and Category For Specify using select Postman Collection And then upload the collection file You can download your postman collection by exporting the collection as a JSON file To do so open your postman collection and click three dots gt Export Export Or you can download the JSON file from this tutorial GitHub repository Make sure to change the domain name After uploading the file and clicking Add API Fill in the details for Describe and click on Save Next add a base URL Finally make your API public so every on the internet can see your API If they like it they can subscribe to your API Let s actually monetize our API by adding Plans amp Pricing So go to the Plans amp Pricing tab Here you can choose different plans and set the number of requests for different plans Let s add a PRO plan Choose Montly Subscription or Pay Per Use Set a price Choose rate limit number of requests per second minute 
hour. Explore more in the Rapid API docs. That's it! I hope you learned something new from this article. Feel free to ask any questions, doubts or anything else in the comments. Follow me for more stuff like this. Thank you. 2022-04-26 19:35:08
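The dev.to tutorial above builds its task API with Node, Express and MongoDB, but its code samples arrive in this digest with their formatting stripped. As a readable stand-in, here is a minimal sketch of the same four routes (GET, POST, DELETE, PATCH) in Python with FastAPI and an in-memory dictionary instead of MongoDB; the framework, file name and `Task` model are this sketch's assumptions, not the original author's code.

```python
# Sketch of the tutorial's task API using FastAPI + an in-memory store
# (the original article uses Node/Express/MongoDB; this is an illustrative stand-in).
from uuid import uuid4
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
tasks: dict[str, dict] = {}  # stand-in for the MongoDB "tasks" collection

class Task(BaseModel):
    name: str

@app.get("/getTasks")
def get_tasks():
    # Return every stored task
    return list(tasks.values())

@app.post("/postTask")
def post_task(task: Task):
    # Create a task and return it with its generated id
    task_id = str(uuid4())
    tasks[task_id] = {"_id": task_id, "name": task.name}
    return tasks[task_id]

@app.delete("/deleteTask/{task_id}")
def delete_task(task_id: str):
    # Remove a task by the id passed as a path variable
    if task_id not in tasks:
        raise HTTPException(status_code=404, detail="task not found")
    return tasks.pop(task_id)

@app.patch("/editTask/{task_id}")
def edit_task(task_id: str, task: Task):
    # Partially update an existing task
    if task_id not in tasks:
        raise HTTPException(status_code=404, detail="task not found")
    tasks[task_id]["name"] = task.name
    return tasks[task_id]
```

Saved as main.py, this runs with `uvicorn main:app --reload`, and the routes can be exercised from Postman in the same way the article describes.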
海外TECH DEV Community Leverage Grafana Cloud with Kubecost https://dev.to/kubecost_26/leverage-grafana-cloud-with-kubecost-1092 Leverage Grafana Cloud with KubecostWe ve rolled out a Kubecost integration for teams using Grafana Cloud Kubecost users can take advantage of the integration to get the best of all worldsーmanaged observability services working in concert with cloud cost optimization What is Grafana Cloud Grafana Cloud is a composable observability platform integrating metrics traces and logs with Grafana Users get the benefit of the best open source observability software without the overhead of installing maintaining and scaling their observability stack As a part of its monitoring services Grafana Cloud offers a managed backend that can store Prometheus metrics This hosted service allows you to aggregate and store metrics from multiple Prometheus instances into a single dedicated space This centralized location makes it easier to query your data while also providing long term storage for historical analysis and capacity planning Kubecost Grafana Cloud If you wish to reduce the overhead of managing and maintaining your observability stack and keep better track of your cloud spend ーGrafana Cloud and Kubecost are here to help Kubecost automatically provides cost visibility savings and optimization recommendations and ongoing governance for deployments in any Kubernetes environment Teams can successfully reduce the operational burden of managing multiple cost views and manually tracking spend for each of their services With the help of Kubecost teams are even able to track and allocate granular spend on their own Customers can now use Kubecost to track cloud spend leveraging their Grafana Cloud stack by using a Custom Prometheus integration To compliment the integration we ve published a Kubecost Grafana Cloud Community Dashboard so that customers can visualize their cost data directly in their hosted Grafana Get StartedIn this section we ll walk through a quick example of how you can use Kubecost and Grafana Cloud in sync You ll need a running Kubernetes cluster and a Grafana Cloud account Step Install the Grafana Agent in your clusterUsing your Grafana Cloud account credentials install the Grafana Agent for Kubernetes on your cluster as a prerequisite for the following steps Step Configure Kubecost scraping configuration for the Grafana AgentOnce you ve set up the Grafana Agent we ll need to add some extra configuration to the way Grafana Cloud scrapes metrics so that Kubecost can offer more accurate cost estimates Create a file called extra scrape configs yaml with the following contents replacing the grafana prometheus remoteWrite url username and password placeholders to match your Grafana Cloud details which you ll find by visiting your organization s Grafana Cloud Portal gt Prometheus gt Password API key kind ConfigMapmetadata name grafana agentapiVersion vdata agent yaml server http listen port metrics wal directory tmp grafana agent walglobal scrape interval s external labels cluster cloudconfigs name integrations remote write url lt grafana prometheus remoteWrite url gt basic auth username Grafana Cloud username password Grafana Cloud API key password scrape configs bearer token file var run secrets kubernetes io serviceaccount token job name integrations kubernetes cadvisor kubernetes sd configs role node metric relabel configs source labels name Next apply the changes in the same namespace as your Grafana Agent deployment kubectl apply extra scrape configs yaml n lt 
namespace>. Restart the Grafana Agent: kubectl rollout restart deployment/grafana-agent -n <namespace>. Step 3: Configure Kubecost to query metrics from Grafana Cloud Prometheus. If you haven't yet installed Kubecost, use the commands below or check out our installation guide to get set up; our standard helm install takes only minutes: helm repo add kubecost <kubecost-helm-repo-url> and helm upgrade -i --create-namespace kubecost kubecost/cost-analyzer --namespace kubecost. If you already have a Kubecost deployment on your cluster, hooray! Now we'll set up some basic-auth credentials so that Kubecost can query data from Grafana Cloud. Grab your Grafana Cloud username and API key from the earlier step and create two files in your working directory called USERNAME and PASSWORD, respectively. Then generate a Kubernetes secret called dbsecret in the same namespace as Kubecost is installed (typically kubecost): kubectl create secret generic dbsecret --namespace kubecost --from-file=USERNAME --from-file=PASSWORD. Reload Kubecost using the secret you've just created and the Prometheus query URL that you can get from your organization's Grafana Cloud Console > Prometheus > Query Endpoint: helm upgrade kubecost kubecost/cost-analyzer --namespace kubecost --set global.prometheus.fqdn=<grafana-prometheus-query-url> --set global.prometheus.enabled=false --set global.prometheus.queryServiceBasicAuthSecretName=dbsecret. That's it! By now you should have successfully completed the Kubecost integration with Grafana Cloud. You can view the Kubecost UI in your browser by port-forwarding to http://localhost:<port> on your machine: kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer <port>. From there you can start by exploring cost allocation trends by clicking the Allocation tab, or by discovering quick savings and cost-optimization insights via the Savings tab. We've compiled a list of Getting Started guides to help you take advantage of all of the Kubecost features. Optionally, you can also add our Kubecost Community Dashboard to your Grafana Cloud organization to visualize your cloud costs in Grafana. Step 4 (optional): Configure Kubecost recording rules in Grafana Cloud. For even richer Kubecost data, consider adding Prometheus recording rules to Grafana Cloud; while they are optional, they may improve cost accuracy. We're here to help! For more information and troubleshooting, check out the Kubecost documentation and Grafana Cloud documentation, and join us on Slack for any other help and general Kubernetes and cloud cost optimization banter. 2022-04-26 19:33:11
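A quick way to sanity-check the integration described in the entry above is to hit the Grafana Cloud Prometheus query endpoint directly with the same basic-auth credentials Kubecost uses. The sketch below relies on the standard Prometheus HTTP API (`/api/v1/query`); the endpoint URL, username, API key and metric name are placeholders to substitute with your own values, not values taken from the article.

```python
# Rough reachability check for Grafana Cloud Prometheus using the Kubecost credentials.
# All identifiers below are placeholders (assumptions), not values from the article.
import requests

PROM_QUERY_URL = "https://<your-grafana-cloud-prometheus-host>/api/prom"  # query endpoint
USERNAME = "<grafana-cloud-username>"   # numeric Prometheus instance id
API_KEY = "<grafana-cloud-api-key>"

resp = requests.get(
    f"{PROM_QUERY_URL}/api/v1/query",
    params={"query": "up"},  # swap in a Kubecost metric once scraping is confirmed
    auth=(USERNAME, API_KEY),
    timeout=10,
)
resp.raise_for_status()
payload = resp.json()
print(payload["status"], "-", len(payload["data"]["result"]), "series returned")
```

If the request authenticates and returns series, the same endpoint and secret should work for Kubecost's `global.prometheus.fqdn` and `dbsecret` settings.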
海外TECH DEV Community How to security scan your web API for vulnerabilities https://dev.to/intesar/how-to-security-scan-your-web-api-for-vulnerabilities-2jof How to security scan your web API for vulnerabilities. About me: I write, review and build API security tools and best practices. The purpose of this article is to show AppSec developers how to get started with API security scanning, using an open-source API. In the process you will learn what vulnerabilities look like, and at the end of the write-up I'll share a couple of tool recommendations for you to play with. API is the new internet protocol, kind of: it's the gateway to all kinds of applications you're building or integrating with, for example mobile, web, AI, serverless, microservices, blockchain, etc. APIs now dominate internet traffic; this is evident from the recent Akamai report that a very large share of internet web traffic is API calls. Without realizing it, you and your organization are using APIs predominantly. APIs are also the most attacked surface; they have overtaken traditional attack surfaces like networks, computers, etc., which means your chances of getting a security incident or breach this quarter are more likely at the API layer. Since APIs are a new paradigm, most organizations are under-prepared when it comes to API security. API security validation is hard to achieve: it's still in its early stage, mostly human-powered, understaffed, and done less frequently than new code is deployed. Traditional security and penetration-testing staff focus on mobile and web front ends, making matters even worse for APIs. Here are a few tools you can use to get started with API security. Use this open-source API for scanning and review the vulnerability report: v api docs. Tool 1: EthicalCheck. Pros: free, point-and-scan solution. Cons: only covers OWASP. Tool 2: Burp. Pros: free community edition, write your own tests. Cons: learning curve. I avoided adding commercial tools since most of them are closed and offer custom pricing. If you have any questions, feel free to reach out to me by email or Twitter: intesar.mohammed@gmail.com 2022-04-26 19:26:07
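The entry above recommends ready-made scanners (EthicalCheck, Burp); as a complement, one of the OWASP API Security categories those tools cover, broken authentication, can be approximated by hand: request endpoints that should be protected and flag any that answer without credentials. The base URL and path list below are hypothetical examples, not part of the article.

```python
# Toy check for one common API weakness (broken authentication): protected paths
# that return 200 to anonymous requests. Base URL and paths are hypothetical.
import requests

BASE_URL = "https://api.example.com"
PROTECTED_PATHS = ["/v1/users", "/v1/orders", "/v1/admin/config"]

for path in PROTECTED_PATHS:
    resp = requests.get(BASE_URL + path, timeout=10)  # deliberately sends no credentials
    if resp.status_code == 200:
        print(f"[!] {path} returned 200 without credentials - review its auth")
    else:
        print(f"[ok] {path} -> {resp.status_code}")
```

A real scanner covers far more (authorization, injection, rate limiting), which is why the article points to dedicated tools.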
海外TECH DEV Community Advent of Code 2020: Day 1 https://dev.to/koltonmusgrove/advent-of-code-2020-day-1-3cfa Advent of Code Day Day Report RepairProblemCodeWalkthrough Walkthrough for Problem For this problem we are given a list of numbersーone per lineーand asked to find two entries that sum to the number Once we have the two numbers we multiply them together to get our answer Naive ApproachThe naive approach to solving the classic sum two problem is using a nested search to test every combination of numbers that add to the desired amount let mut v Vec lt i gt Vec new load datalet input File open input txt unwrap let reader BufReader new input for line in reader lines let integer line unwrap parse lt i gt unwrap v push integer for every number in the vector check if any other value in the vector sums with that number to equal If a pair of numbers is found print them and the answer for num in amp v for num in amp v if num num println Values and num num println Answer num num break This approach is very time inefficient For every additional unique number in the input this solution adds an additional computation step for every item in the vector This solution has a time complexity O n Optimal ApproachOne more efficient approach is through the use of a binary search algorithm First we need a binary search algorithm to use fn binary search vector amp Vec lt i gt len usize target amp i gt Option lt bool gt set the low and high indices let mut low i let mut high i len as i while low lt high find the mid point by floor dividing the sum of the high and low let mid high low low let mid index mid as usize let val vector mid index return the index of the number if it is found or set the high and low to reduce the search space if val target return Some true if val lt target low mid if val gt target high mid Some false Now that we have our binary search implementation we can use it to search for an addend for each item in the list which sums with it to let mut v Vec lt i gt Vec new load datalet input File open input txt unwrap let reader BufReader new input for line in reader lines let integer line unwrap parse lt i gt unwrap v push integer for num in amp v calculate the value we would need to sum with num to get to let target num check if that item is in the vector let answer binary search amp v v len amp target if answer unwrap true println Values and num target println Answer num target This function will give us an answer to the first problem in a reasonable amount of time regardless of the size of the input to the problem It has a time complexity of N log N However a simpler and equally efficient option is to use sets println n Puzzle n let mut set HashSet lt i gt HashSet new let input File open input txt unwrap let reader BufReader new input I decided to use a set because it eliminates duplicate values and reduces the iteration and accessing times to roughly for line in reader lines let integer line unwrap parse lt i gt unwrap set insert integer iterate over all of the items in the set checking if the second value for the solution is in the set If so print and exit for number in amp set let target i number if set contains amp target println Values and number target println Answer number target break While this looks like the naive approach the use of sets allows for instantaneous item accessing and a time complexity of O n Problem The second problem of day one is very similar to the first The only difference is that we now need to find a solution that returns three addends that sum to Like the last 
problem our answer is the product of the three numbers Naive ApproachThis naive solution for problem two is almost the same as the naive solution to problem The only difference is that we now add an additional loop to the search portion of the solution let mut v Vec lt i gt Vec new load datalet input File open input txt unwrap let reader BufReader new input for line in reader lines let integer line unwrap parse lt i gt unwrap v push integer for every number in the vector check if any other value in the vector sums with that number to equal If a pair of numbers is found print them and the answerfor num in amp v for num in amp v for num in amp v if num num num println Values and num num num println Answer num num num process exit x This is an even slower algorithm than the naive solution in problem Because of the extra loop this solution now has the time complexity O n Ouch Optimal ApproachThe optimal solution to this problem is also similar to its problem one counterpart For this solution though we add a loop that iterates over every item in the list and then runs the problem one solution to find the other two numbers This significantly increases the time complexity of the algorithm but there are no other solutions that have a relatively similar level of coding complexity let mut set HashSet lt i gt HashSet new let input File open input txt unwrap let reader BufReader new input for line in reader lines let integer line unwrap parse lt i gt unwrap set insert integer While this implementation uses nested for loops it is only O n in the worst case and is still the best solution to this problem In terms of space complexity it could be more efficient if I didn t copy all of the data into a set first but I valued speed more than space in this instance for number in amp set for number in amp set let target i number number if set contains amp target println Values and number number target println Answer number number target process exit x 2022-04-26 19:22:15
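The Rust solutions in the entry above lean on a HashSet for constant-time lookups; the same idea expressed in Python (an editor's sketch, not the article's code) finds the pair and the triple of entries that sum to 2020 and prints their products.

```python
# Set-based solutions for Advent of Code 2020 day 1, mirroring the HashSet approach
# described above. This Python version is an illustrative sketch, not the original Rust.
def part_one(entries: set[int], target: int = 2020) -> int | None:
    # For each entry, check whether its complement is also in the set.
    for n in entries:
        if target - n in entries:
            return n * (target - n)
    return None

def part_two(entries: set[int], target: int = 2020) -> int | None:
    # Fix two entries and look up the third; O(n^2) loops with O(1) membership tests.
    for a in entries:
        for b in entries:
            c = target - a - b
            if c in entries:
                return a * b * c
    return None

if __name__ == "__main__":
    with open("input.txt") as f:
        entries = {int(line) for line in f if line.strip()}
    print("Part 1:", part_one(entries))
    print("Part 2:", part_two(entries))
```

As in the article, the set both deduplicates the input and keeps membership tests effectively constant time, so the three-sum stays at O(n^2) rather than O(n^3).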
海外TECH DEV Community Install WordPress from CLI https://dev.to/thedevdrawer/install-wordpress-from-cli-3cio Install WordPress from CLIAt my agency we do a lot of work with WordPress If you are also a WordPress developer you may be in the same situation When you are creating a new local development site with WordPress it could take time more time than you want to spend If this is the case try to install it all from CLI It is easy to do if you are able to SSH into your local or remote server View This On YouTube SSH Into Your ServerThis entire tutorial relies on you using SSH and having an already existing server So stop and set that up if you have not already done so Also keep in mind you can use things like Docker to do this but if you normally do this manually it may save you some time Open your CLI and type the following command change out the username and server ip address to match your setup ssh username Enter your password and create your new virtual host folder I use Roverwire Virtualhost Manage Script It makes it as to create virtual hosts on a Linux server using only SSH You can view more about it here If you use Roverwire you can simply type sudo virtualhost create sample localOnce that is done we can get WordPress set up Download and Extract WordPressFirst things first create a new folder for your website In this tutorial I will be using sample local The following commands will install a folder called wordpress in your newly created server folder Switch to the folder you created on the servercd var www samplelocal Download latest Versionsudo wget Extract WordPresssudo tar xfz latest tar gz Move FoldersOnce you have the WordPress folder you will need to move it and then remove the extractable file you downloaded previously Move WordPress folder to the parentsudo mv wordpress Remove WordPress foldersudo rmdir wordpress Remove downloaded filesudo rm f latest tar gzFinally after setting this up you can move on to setting up the database using the MySQL instructions below Once the database is set up you can go to the domain you created above and run the typical WordPress installation Setup DatabaseThe username and password combination for your MySQL server should already be set up You can log in by using the following command mysql p u mysqlusernameYou will then be prompted for the password Once you have logged in successfully you should now see a MySQL prompt instead of the normal SSH prompt It will look like this mysql gt Create Your Databasecreate database databasename Exit MySQLexitLastly you may need to set up shared or global permissions to your new website You can do this a few different ways but I found that a shared user works best sudo chown R shareuser var www samplelocalor for a global usersudo chown R www data www data var www samplelocal Modify Hosts FileNow your WordPress website is ready to go To view your new website locally you will need to modify your hosts file You can do this using the following instructions Windows is typically located at SystemRoot System drivers etc hostsOSX and Unix Linux is typically located at etc hostsAdd the following to your host file domainname fdg should be changed to the virtual host that was created above sample local Open Your WebsiteNow that you have successfully created your virtual host installed WordPress and set up your hosts You can browse the website URL to finish the installation in the WordPress installer Here is a quick video on how to set up WordPress once you get to this step Good luck 2022-04-26 19:01:45
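The walkthrough above chains wget, tar and mv over SSH; for readers who prefer to script those file-handling steps, here is a rough Python equivalent, assuming it runs on the web server itself and that /var/www/sample.local is the virtual-host folder created earlier (the path is a placeholder). It uses the standard wordpress.org download URL.

```python
# Rough Python equivalent of the manual wget / tar / mv steps in the entry above.
# DOC_ROOT is a placeholder; run with privileges that can write to it.
import shutil
import tarfile
import urllib.request
from pathlib import Path

DOC_ROOT = Path("/var/www/sample.local")      # placeholder virtual-host folder
ARCHIVE = DOC_ROOT / "latest.tar.gz"

# 1. Download the latest WordPress release
urllib.request.urlretrieve("https://wordpress.org/latest.tar.gz", ARCHIVE)

# 2. Extract it (creates DOC_ROOT/wordpress)
with tarfile.open(ARCHIVE, "r:gz") as tar:
    tar.extractall(DOC_ROOT)

# 3. Move the contents of wordpress/ up into the document root
wp_dir = DOC_ROOT / "wordpress"
for item in wp_dir.iterdir():
    shutil.move(str(item), str(DOC_ROOT / item.name))

# 4. Clean up the now-empty folder and the downloaded archive
wp_dir.rmdir()
ARCHIVE.unlink()
```

Database creation, permissions and the hosts-file entry still follow the article's MySQL and chown steps.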
Apple AppleInsider - Frontpage News Apple slows hiring of new Geniuses for some Apple Store locations https://appleinsider.com/articles/22/04/26/apple-slows-hiring-of-new-geniuses-for-some-apple-store-locations?utm_medium=rss Apple slows hiring of new Geniuses for some Apple Store locations. Apple is reportedly slowing down the hiring of Geniuses for some Apple Store locations, in what could be considered a cost-saving move. The Genius is a well-known role in Apple's retail efforts, providing tech support at the store itself. Despite being a big feature of the Apple Store, it seems Apple isn't hiring Geniuses as quickly as it did previously. Some stores were informed by Apple corporate that Genius positions left vacant after employees leave weren't being filled, an unnamed person familiar with the matter told Bloomberg. In some cases, Apple also withdrew a number of verbal job offers for the position. Read more 2022-04-26 19:53:28
Apple AppleInsider - Frontpage News Compared: Apple Studio Display vs Samsung Smart Monitor M8 https://appleinsider.com/inside/studio-display/vs/compared-apple-studio-display-vs-samsung-smart-monitor-m8?utm_medium=rss Compared: Apple Studio Display vs Samsung Smart Monitor M8. Samsung's Smart Monitor M8 adds cloud and smart TV features to what could be taken for an Apple-like display, but is it a good alternative to Apple's consumer-aimed screen, the Studio Display? Apple's Studio Display (left), Samsung's Smart Monitor M8 (right). Apple and Samsung haven't had the best relationship, with the rivals often butting heads regarding product designs. When Samsung introduced the Smart Monitor M8 in January, it seemed that it was another display Samsung made to muscle into territory occupied by the 24-inch iMac. Read more 2022-04-26 19:47:17
海外TECH Engadget 'Sifu' is getting difficulty options to help more people actually finish the game https://www.engadget.com/sifu-difficulty-options-content-roadmap-sloclap-194237152.html?src=rss 'Sifu' is getting difficulty options to help more people actually finish the game. Sifu has been a critical and commercial success for Sloclap, but the developer isn't resting on its laurels. The studio has revealed a roadmap of updates for the notoriously tough beat 'em up, which includes the imminent addition of difficulty modes. "Check out our free content update roadmap for Sifu! At this stage four major updates are planned; the first one will be available next Tuesday, May 3rd, along with our physical edition. #SifuGame" (pic.twitter.com/UBEWwJKS, SifuGame @SifuGame, April). Starting on May 3rd, you'll be able to select from Student, Disciple and Master difficulty options, which could help more folks finish the game and give returning players an even more challenging experience. Sloclap will also add an advanced training option and outfit selection features next week. Over the summer, Sifu will receive an advanced scoring system as well as some more outfits and intriguing gameplay modifiers. Those include a one-health-point option (good luck with that), stronger enemies, a way to unlock all skills, and a bullet-time mode. More outfits and modifiers will be added over the rest of the year, as well as a replay editor in the fall and an all-new arenas mode in winter. All of these will be free updates. Sifu arrived in February on PlayStation 4, PS5 and PC. Though it debuted just a few days before the all-conquering Elden Ring, it still sold a respectable one million copies in just three weeks. 2022-04-26 19:42:37
海外TECH Engadget Ubisoft shuts down online services for 91 games https://www.engadget.com/ubisoft-shuts-down-online-services-91-games-191902264.html?src=rss Ubisoft shuts down online services for 91 games. You might be disappointed if you were planning an Ubisoft-themed nostalgic gaming session: Kotaku reports Ubisoft has shut down online services for 91 games. Many of them are ancient, or versions for old and sometimes defunct platforms; you aren't about to play Assassin's Creed Brotherhood using the long-dead OnLive service, for instance. However, there are some games you could still play on current hardware or might have good reason to revisit. The first two Far Cry games have lost online support for PC, for instance, and Blood Dragon won't connect on PC, PS3 and Xbox 360. Just Dance fans may need to stick to newer games: while it isn't surprising that Ubisoft dropped support for the PS3, Wii, Wii U and Xbox 360 versions of earlier Just Dance entries, PS4 and Xbox One players might not enjoy losing access to the songs from some of those titles. Other classics you might miss include Beyond Good & Evil, the original Ghost Recon, multiple Rainbow Six games, older Settlers titles and certain Splinter Cell releases, including Chaos Theory and Conviction. Games that used Ubisoft Connect won't let you earn Units, and you can't unlock content on any platform or access it on PCs. Ubisoft isn't exactly rushing to leave some players in the dark; it's just now shutting off Rainbow Six Lockdown support for PS2, GameCube and original Xbox owners. All the same, you probably won't be thrilled if you've kept an old console around to play the games of your youth. 2022-04-26 19:19:02
海外科学 NYT > Science New Rules on Light Bulbs: LED vs. Incandescent https://www.nytimes.com/2022/04/26/climate/biden-incandescent-led-light-bulb.html New Rules on Light Bulbs: LED vs. Incandescent. The administration set efficiency standards that will phase out sales of incandescent bulbs in favor of LEDs, reducing Americans' electrical bills over time. 2022-04-26 19:34:19
海外科学 NYT > Science Do Vaccines Protect Against Long Covid? https://www.nytimes.com/article/long-covid-vaccines.html covid 2022-04-26 19:26:07
海外TECH WIRED Elon Musk's Twitter Buy Exposes a Privacy Minefield https://www.wired.com/story/elon-musk-twitter-privacy-anonymity exposes 2022-04-26 19:54:55
ニュース BBC News - Home Russia to suspend gas supplies to Poland https://www.bbc.co.uk/news/business-61237519?at_medium=RSS&at_campaign=KARANGA gazprom 2022-04-26 19:19:10
ニュース BBC News - Home Ros Atkins on… Finland, Nato and Russia https://www.bbc.co.uk/news/world-61237116?at_medium=RSS&at_campaign=KARANGA sweden 2022-04-26 19:03:47
ニュース BBC News - Home P&O ferry European Causeway docks after losing power in Irish Sea https://www.bbc.co.uk/news/uk-northern-ireland-61229753?at_medium=RSS&at_campaign=KARANGA larne 2022-04-26 19:12:20
ビジネス ダイヤモンド・オンライン - 新着記事 McDonald's stays rock-solid with 21 straight months of revenue growth, but are the 'COVID winners' KFC and Mos seeing a shift? - Winners and losers under COVID! [Monthly] Industry weather map https://diamond.jp/articles/-/302511 2022-04-27 04:55:00
ビジネス ダイヤモンド・オンライン - 新着記事 Torikizoku and Tengu grow revenue while only Colowide appears to decline: the truth behind the performance gap - Winners and losers under COVID! [Monthly] Industry weather map https://diamond.jp/articles/-/302496 year-earlier period 2022-04-27 04:50:00
ビジネス ダイヤモンド・オンライン - 新着記事 Tsuruha and Cosmos Pharmaceutical earn their money in completely different ways! Which drugstore is the real 'rule breaker'? - 'Accounting thinking' that works in business https://diamond.jp/articles/-/302429 industry consolidation 2022-04-27 04:45:00
ビジネス ダイヤモンド・オンライン - 新着記事 The 'non-receipt' problem, more serious than benefit fraud: the limits of social security that fails to reach people in need - Policy & Market Lab https://diamond.jp/articles/-/302416 fraudulent receipt 2022-04-27 04:42:00
ビジネス ダイヤモンド・オンライン - 新着記事 Kusuri no Aoki: what is its little-known real face? Business partners reveal a 'surprising corporate culture' - Diamond Retail Media https://diamond.jp/articles/-/301943 Kusuri no Aoki draws strong attention not only among drugstores (DgS) but across Japan's retail industry, yet much about the company remains behind a veil. 2022-04-27 04:40:00
ビジネス ダイヤモンド・オンライン - 新着記事 The dilemma of shifting from economic deterrence to economic security, and the hard choices after Ukraine - A philosopher of economic analysis cuts in! The depths of market topics https://diamond.jp/articles/-/302428 vested interests 2022-04-27 04:35:00
ビジネス ダイヤモンド・オンライン - 新着記事 Banks' capital adequacy rules under 'Basel III': explaining the review of credit risk for securities holdings - Kinzai Online https://diamond.jp/articles/-/302427 online 2022-04-27 04:30:00
ビジネス ダイヤモンド・オンライン - 新着記事 Why the possibility of Putin using nuclear weapons is rising ahead of May 9 - 'Justice' as seen from Russia: the challenge of Putin the 'rebel' https://diamond.jp/articles/-/302425 long-term 2022-04-27 04:25:00
ビジネス ダイヤモンド・オンライン - 新着記事 'Orders from the Chinese government, or the voices of residents?' Officials agonize at Shanghai's lockdown sites, with tragic suicides - DOL Special Report https://diamond.jp/articles/-/302357 Chinese government 2022-04-27 04:20:00
ビジネス ダイヤモンド・オンライン - 新着記事 Why drones, Putin's nemesis, are backing Japan's Ministry of Defense and Self-Defense Forces into a corner with their dramatic success - DOL Special Report https://diamond.jp/articles/-/302413 time to pay the piper 2022-04-27 04:17:00
ビジネス ダイヤモンド・オンライン - 新着記事 Should you start investing to counter the weak yen and inflation? A helpful answer from Hajime Yamazaki - Hajime Yamazaki's Multiscope https://diamond.jp/articles/-/302424 foreign-currency-denominated 2022-04-27 04:15:00
ビジネス ダイヤモンド・オンライン - 新着記事 Why Sony founder Akio Morita bought a movie studio for 480 billion yen, as analyzed by former CEO Nobuyuki Idei - Former Sony CEO Nobuyuki Idei on 'Managing Your Life' https://diamond.jp/articles/-/302354 2022-04-27 04:10:00
ビジネス ダイヤモンド・オンライン - 新着記事 The obvious reason why 'comparing your salary with others' is wrong in the first place - News Sanmenkyo https://diamond.jp/articles/-/302098 lifetime employment 2022-04-27 04:05:00
ビジネス ダイヤモンド・オンライン - 新着記事 How does the ultra-busy Musk find time for Twitter? - From the WSJ https://diamond.jp/articles/-/302555 time 2022-04-27 04:04:00
ビジネス 不景気.com The unemployment rate improved to 2.6% in March 2022, and the job-openings-to-applicants ratio also improved to 1.22 - Fukeiki.com https://www.fukeiki.com/2022/04/unemployment-rate-22-03.html Labour Force Survey 2022-04-26 19:12:19
ビジネス 東洋経済オンライン Will Vietnam's first urban railway, the Hanoi Metro, ease congestion? The long-awaited opening: will citizens used to motorbikes actually ride it? | Overseas | 東洋経済オンライン https://toyokeizai.net/articles/-/584542?utm_source=rss&utm_medium=http&utm_campaign=link_back 東洋経済オンライン 2022-04-27 04:30:00
GCP Cloud Blog Monitor & analyze BigQuery performance using Information Schema https://cloud.google.com/blog/topics/developers-practitioners/monitor-analyze-bigquery-performance-using-information-schema/ Monitor amp analyze BigQuery performance using Information SchemaIn the exponentially growing data warehousing space it is very important to capture process and analyze the metadata and metrics of the jobs queries for the purposes of auditing tracking performance tuning capacity planning etc  Historically on premise on prem legacy data warehouse solutions have mature methods of collecting and reporting performance insights via query log reports workload repositories etc  However all of this comes with an overhead of cost storage amp cpu To give customers easy access and visibility to BigQuery metadata and metrics Google Cloud launched Information Schema in The Information Schema gives customers a lens to consume the metadata and performance indicators for every BigQuery job query API  The storage associated with the Information Schema Views is free  Users only pay for the cost of the Compute associated with analyzing this information  There are multiple factors that contribute to BigQuery spend The two most common are storage and processing querying which also tend to be the largest items on your bill at the end of the month  In this blog we will equip you with an easy way to analyze and decipher the key BigQuery metrics using the Information Schema Before getting started it is important to understand the concept of a “slot in BigQuery For the purpose of this blog we will be looking at the “Jobs Metadata by TimeSlice view in the Information Schema More on the Information Schema views here  In this blog we ll look at a couple of use cases Analyze BigQuery Slot Consumption and Concurrency for a Point in TimeAnalyze Query Throughput and Busy for a Time PeriodOne important highlight is to note the difference between “Concurrent Query Count and “Query Throughput Count   “Concurrent Query Count represents the actual number of queries running at a specific point in time   “Query Count which is often used to describe the number of queries running over some interval of time   The ultimate goal of this exercise is to produce a result set that we can export to Google Sheets and drop into a Pivot Table We can then create visualizations for slot consumption and concurrency This is critically important as it pertains to “Right sizing reservation allocationsUnderstanding the impact of newly introduced workloads into your environmentDetermining if workloads are relying on idle slot capacity in order to meet SLAsProactively identifying trends which might ultimately result in concurrency limit errorsAlternatively you can run this from Google Sheets using the BigQuery connector The queries and charting will be similar in either case In short we will use this information to optimize spend while ensuring consistently for the workloads that need it most Also a couple of key points to remember and act on for both the below scripts Change the Region qualifier of the Information Schema if you aren t part of the region US For example if your datasets are in us east change the qualifier to us east i e region us east INFORMATION SCHEMA JOBS TIMELINE BY PROJECTIn the BigQuery Console UI under More→Query Settings→Additional Settings change the Data Location to the location where your dataset is residing For example if your datasets are in us east select that region Analyze visualize point in time slot consumption and 
concurrencyThe purpose of this query is to collect information for all the jobs that are running during a range of time broken into smaller intervals in which we will capture a single second Point In Time   The final charts used to visualize this data will have Time on the X Axis For the Y Axis we will either have Slot Seconds or Concurrent Query Count or we can have both with a Primary and Secondary Y Axis defined   Main SQLHere is the SQL script from the github repo copy paste into the BigQuery UI and run it   Let s break down the SQL into smaller chunks for easy understanding Declaring and setting variablesWe will be declaring  variables   code block StructValue u code u DECLARE RANGE START TS LOCAL timestamp r nDECLARE RANGE END TS LOCAL timestamp r nDECLARE RANGE INTERVAL SECONDS int r nDECLARE UTC OFFSET INT r nDECLARE RANGE START TS UTC timestamp r nDECLARE RANGE END TS UTC timestamp r nDECLARE TIMEZONE STRING u language u As you can see these variables are all related to time After declaring it is time to set these variables  code block StructValue u code u SET TIMEZONE US Eastern r nSET RANGE START TS LOCAL r nSET RANGE END TS LOCAL r nSET RANGE INTERVAL SECONDS r nSET UTC OFFSET r nSELECT DATETIME DIFF DATETIME RANGE START TS LOCAL r n TIMEZONE DATETIME RANGE START TS LOCAL HOUR r n r nSET RANGE START TS UTC TimeStamp SUB Range Start TS LOCAL Interval UTC OFFSET Hour r nSET RANGE END TS UTC TimeStamp SUB Range End TS LOCAL Interval UTC OFFSET Hour u language u Let s set the values for the declared variables  The first variables will need to be set manually by the individual running the query   The first variable TIMEZONE should represent your local TimeZone   The second RANGE START TS LOCAL and third RANGE END TS LOCAL variables will represent the range of time using your local TimeZone which you want to analyze   The fourth variable RANGE INTERVAL SECONDS represents the size of the time intervals you want in which a single Point in Time a second per Interval will be collected   Note It is very important to limit the X Axis Time data points to a reasonable number something between and otherwise you will have troubles with the size of the query result export and or you will have issues with having too many X Axis data points in the final graph For analyzing what happened for a minute range of time it would be appropriate to set the RANGE INTERVAL SECONDS as this would produce data points on my X Axis one per second for every second in my defined minute second range of time For analyzing what happened for a hour range of time it would be appropriate to set the RANGE INTERVAL SECONDS as this would produce data points on my X Axis one per every seconds in my defined hr second range of time On the same note for analyzing what happened for a hour range of time it would be appropriate to set the RANGE INTERVAL SECONDS as this would produce data points on my X Axis one per every seconds in my defined hr second range of time In summary we are encouraging the user to sacrifice accuracy for larger time ranges in order to produce a more readable chart  While this chart is accurate as of a second in time if we only choose second to visualize for every seconds then we are producing a chart that is a sample representation of actual slot consumption and concurrent query counts for the range of time being analyzed The fifth variable UTC OFFSET represents the offset between your locally defined TimeZone and UTC  Let s make it an expression as opposed to a manually defined literal value because of 
issues with Daylight Savings Time DST otherwise the user would have to remember to change the literal offset throughout the year as DST changes The sixth RANGE START TS UTC and seventh RANGE END TS UTC variables represent the range of time you want to analyze converted into UTC time using the derived UTC OFFSET value You might be asking yourself “Why spend so much time declaring and setting variables  In short this has been done for readability supportability and to minimize the amount of manual changes needed every time you run this code for a new range of time Now that all of our variables have been declared and set we can finally start to analyze the query  The query is built from two derived sets of data aliased as key and  query info The key derived tableThe key derived table is creating a one column result set with a single row for every interval RANGE INTERVAL SECONDS that exists within the range of time you are wanting to analyze  We are able to do this with a couple of really neat array functions  First we leverage the GENERATE TIMESTAMP ARRAY function which will produce an array aliased as POINT IN TIME of timestamps between the RANGE START TS UTC and RANGE END TS UTC variables for each interval of time defined in RANGE INTERVAL SECONDS For example RANGE START TS UTC RANGE END TS UTC RANGE INTERVAL SECONDS Using the above inputs the GENERATE TIMESTAMP ARRAY will produce the following array with elements In order to convert this array of elements into rows we simply use the UNNEST Function Note The key derived table could be considered optional if you are certain that queries were actively running every second of the time range being analyzed however if any point in time exists in which nothing was actively running then your final chart wouldn t have a datapoint on the X Axis to represent that point s in time which makes for a misleading chart  So to be safe it is strongly encouraged to use the key derived table The query info derived tableThe query info derived table is relatively straightforward   In our example I want to pull Slot Seconds period slot ms and Query count information from the INFORAMTION SCHEMA JOBS TIMELINE BY PROJECT object for every job for each second that matches the TimeStamps generated in the key derived table   In this particular query the GROUP BY statement isn t needed because every job should have a single row per second therefore nothing needs to be aggregated and I simply could have hard coded a for Query Count  I left the Group By in this example in case you aren t interested in analysis at the Job ID level  If you aren t you can simply comment out the Job ID field in query info tweak the Group By statement accordingly and comment out Job ID in the outermost query  In doing so you would still be able to perform user email level analysis with the final result set with accurate Slot Sec and Concurrency Query Count data  Filters used in the queryWe have six filters for this query   First in order to minimize the IO scanned to satisfy the query we are filtering on job creation time the underlying value used to partition this data where the min value is hours earlier than the defined start time to account for long running jobs and the max job creation time is less than the defined end time  Second we want to only look at rows with a period start timestamp within our defined range of time to be analyzed   Third we only want to look at job type query   Fourth in order to avoid double counting we are excluding scripts as a script parent job id contains 
summary information about its children jobs   Fifth and this is a personal preference I don t want to analyze any rows for a job if it isn t actively using Slots for the respective Point in Time   The sixth filter doesn t actually change the of rows returned by the final query however it provides an increasingly large performance improvement for queries as the value of RANGE INTERVAL SECONDS grows  We first calculate the difference in seconds between the RANGE START TS UTC and the TimeLine object s Period Start timestamp  Next we MOD that value by the RANGE INTERVAL SECONDS value  If the result of the MOD operation does not equal we discard the row as we know that this respective Timestamp will not exist in the key timeline built   Note Yes these rows would have been discarded when we JOIN the key and query info table however this requires shuffling a lot of potentially unnecessary rows  For instance if the  RANGE INTERVAL SECONDS is set to and a query ran for seconds then we d be joining rows of query info data for that job only to filter out rows in the subsequent JOIN to the key table  With this filter we are pre filtering the unnecessary rows before joining to the key table Outermost query  In the outermost query we will LEFT OUTER JOIN the key timeline table to our pre filtered query info table based on the cleaned up TimeStamp values from each table  This needs to be a LEFT OUTER JOIN versus an INNER JOIN to ensure our timeline is continuous even if we have no matching data in the query info table In terms of the select statement we are using our previously defined UTC OFFSET value to convert the UTC Timestamps back to our defined TimeZone  We also select the job id user email  proejct ID reservation ID Total Slot Second and Query Count from query info  Note for our two metric columns we are filling in Null values with a so our final graph doesn t have null data points Plotting the chartNow that we have a query result set we need to copy amp paste the data to Google Sheets or any equivalent Spreadsheet application You could follow the below stepsAdd a Pivot Table For Google Sheets on the menu bar Data→Pivot Table  In the table editor Period Ts goes in as Rows Total Slot Sec and Concuurent Queries goes as Value Once the Pivot Table is created it is time to add a chart visual For Google Sheets on the menu bar Insert→Chart Once the chart is inserted you will see that the concurrent queries and Total Slot Sec are on the same axis Let s put them on a different axis i e add another Y axis Double click on the chart and select customize Click Series  Select “Sum of Total Slot Sec and Select Left Axis on the Axis selection Select “Sum of Concurrent Queries and Select Right Axis on the Axis selection Lastly change the chart type to a line chart That s it your chart is ready With a little slicing and dicing you could also produced a Stacked Area Chart By User with Slot SecondsAnalyze visualize slot consumption and query throughput for an interval of timeThe second part of this blog is to monitor the average slot utilization and query throughput for an interval of time This query will look very similar to the previous query  The key difference is that we ll be measuring query count throughput for an interval of time as opposed to a Point In Time  In addition we ll measure Total Slot Seconds consumed for that Interval and we ll calculate a Pct Slot Usage metric applicable if you are using fixed slots and an Avg Interval Slot Seconds metric Main SQLHere is the SQL script copy paste into the BQ UI 
and run it  Remember Change the Region qualifier if you aren t part of the region US Just like the previous one let s break down the SQL into smaller chunks for easy understanding  Declaring and setting variablesThis variable declaration segment is exactly the same as the previous query but with three additions  We will be using a variable named RANGE INTERVAL MINUTES instead of RANGE INTERVAL SECONDS  We have added two new variables named SLOTS ALLOCATED and SLOTS SECONDS ALLOCATED PER INTERVAL After declaring it is time to set these variables We ll discuss the new variables in this section RANGE INTERVAL MINUTES As with the last query the interval size will determine the number of data points on the X Axis time therefore we want to set the RANGE INTERVAL MINUTES value to something appropriate relative to the Range of time you are interested in analyzing  If you are only interested in an hour minutes then a RANGE INTERVAL MINUTES value of is fine as it will provide you with X Axis Data Points one per minute  However if you are interested in looking at a hour day Minutes then you ll probably want to set the RANGE INTERVAL MINUTES to something like as it will provide you with X Axis Data Points SLOTS ALLOCATED Regarding SLOTS ALLOCATED you will need to determine how many slots are allocated to a specific project or reservation  For an on demand project this value should be set to  For projects leveraging Flat Rate Slots you will need to determine how many slots are allocated to the respective project s reservation  If your reservation id only has one project mapped to it then you will enter a SLOTS ALLOCATED value equal to the of slots allocated to the respective reservation id  If multiple projects are linked to a single reservation id I d recommend that you run this query at the ORG level filter on the appropriate reservation id and set the variable with the of slots allocated to the respective reservation id SLOTS SECONDS ALLOCATED PER INTERVAL This is simply your interval length converted to seconds multiplied by the number of slots allocated  This value represents the total number of slot seconds that can be consumed in an interval of time assuming utilization Now that all of our variables have been declared and set we can finally start to analyze the query    There are key differences between this query and the “Point In Time query we reviewed earlier   First we will not be pulling information at a job id level  Given the intended usage of this query including the job id granularity would produce query results which would be difficult to dump to a spreadsheet   Second we will be pulling all timeline values seconds within our defined RANGE INTERVAL MINUTES instead of a single second  This will result in a much more computationally intensive query as we are aggregating much more data   Third we are counting all queries that ran during our defined RANGE INTERVAL MINUTES instead of just counting the queries actively consuming CPU for a given second in an interval  This means that the same query may be counted across more than one interval and the ultimate Query Count metric represents the number of queries active during the interval being analyzed   Fourth we will be calculating a custom metric called Pct Slot Usage which will sum all slots consumed for an interval and divide that by the number of slots allocated SLOTS ALLOCATED PER INTERVAL for an interval  For example for a minute interval given an allocation of slots SLOTS SECONDS ALLOCATED PER INTERVAL would equate to K Slot Seconds 
minutes seconds slots  During this interval if we used K Slots Seconds then K K equals a Pct Slot Usage of   Fifth we will be calculating another custom metric called Avg Interval Slot Seconds which will sum all slots consumed for an interval and divide it by RANGE INTERVAL MINUTES in order to calculate the average slot consumption in Slot Secs for the interval of time  For example if a user were to consume Slot Seconds in an interval of minutes the Avg Interval Slot Seconds would equal slots consumed min interval seconds per minute Note  It is possible even likely that the Pct Slot Usage metric could have a value greater than  In the case of an on demand project this can occur due to the inherent short query bias built into the scheduler   For the first seconds of a query s execution the scheduler will allow a query to get more than its fair share of Slot Seconds relative to project concurrency and the slot limit imposed on on demand projects  This behavior goes away for an individual query after seconds  If a workload consists of lots of small and computationally intensive queries you may see prolonged periods of Slot Consumption above the slot limit   In the case of a project tied to a reservation with Fixed Slots you may see the Pct Slot Usage metric exceed if Idle Slot Sharing is enabled for the respective reservation and if idle slots are available in the respective Org As with the previous script the outermost query is built from two derived sets of data aliased as key and  query info The key derived tableSame as the first query this derived query is identical to the one used in the previous example The query info derived tableThis derived query is very similar to the one in the previous example  There are key differences   First we are not selecting JOB ID   Second we are looking at all the TimeLine seconds for an interval of time not a single Point In Time Second per Interval as we did in the previous query   Third in order to get an accurate count of all jobs that ran within an interval of time we will run a Count Distinct operation on Job ID  The Distinct piece ensures that we do not count a JOB ID more than once within an interval of Time   Fourth in order to match our Period TS value to the key table we need to aggregate all minutes for an interval and associate that data to the first minute of the interval so that we don t lose any data when we join to the key table  This is being done by some creative conversion of timestamps to UNIX seconds division and offsets based on which minute an interval starts Outermost query  In the outermost query we will again LEFT OUTER JOIN the key timeline table to our pre filtered query info table based on the cleaned up TimeStamp values from each table  This needs to be a LEFT OUTER JOIN versus an INNER JOIN to ensure our timeline is continuous even if we have no matching data in the query info table In terms of the select statement I m using our previously defined UTC OFFSET value to convert the UTC Timestamps back to our defined TimeZone  I also select the user email  proejct ID reservation ID Total Slot Second Query Count and calculate Pct Slot Usage and Avg Interval Slot Seconds Similarly like the first query here are some sample Charts you can create from the final query result I hope you found these queries and their explanations useful albeit maybe a bit wordy  There was a lot to unpack  The origin of these queries go back to something similar we use to run on a totally different DBMS  With BigQuery s scripting support and Nested 
Array capabilities, the newly ported queries are much cleaner: they are easier to read and require far fewer manual changes to the parameters. Look out for our future blogs in this series. Related Article: Learn how to stream JSON data into BigQuery using the new BigQuery Storage Write API. Walks through a code example that streams GitHub commit data to BigQuery for real-time analysis. Read Article 2022-04-26 19:30:00
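The BigQuery walkthrough above runs its SQL from the console and charts the results in Sheets. If you would rather pull the same Information Schema data programmatically, a minimal sketch with the official google-cloud-bigquery client is shown below; the region qualifier, time range and the trimmed column list are assumptions to adjust exactly as the article describes.

```python
# Minimal sketch: per-second slot usage and active job counts from
# INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT via the official BigQuery client.
# Region, time range and column choices are assumptions; tune them per the article.
from google.cloud import bigquery

client = bigquery.Client()  # uses your default project and credentials

sql = """
SELECT
  period_start,
  SUM(IFNULL(period_slot_ms, 0)) / 1000 AS total_slot_seconds,
  COUNT(DISTINCT job_id)                AS active_jobs
FROM `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
WHERE job_creation_time BETWEEN TIMESTAMP('2022-04-26 00:00:00')
                            AND TIMESTAMP('2022-04-27 00:00:00')
  AND period_start BETWEEN TIMESTAMP('2022-04-26 12:00:00')
                       AND TIMESTAMP('2022-04-26 13:00:00')
  AND job_type = 'QUERY'
GROUP BY period_start
ORDER BY period_start
"""

for row in client.query(sql):  # iterating the query job waits for and returns rows
    print(row.period_start, round(row.total_slot_seconds, 1), row.active_jobs)
```

The result can be exported to Sheets or plotted directly, mirroring the Pivot Table and chart steps the article walks through.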
