AWS |
AWS Compute Blog |
How to choose between CoIP and Direct VPC routing modes on AWS Outposts rack |
https://aws.amazon.com/blogs/compute/how-to-choose-between-coip-and-direct-vpc-routing-modes-on-aws-outposts-rack/
|
How to choose between CoIP and Direct VPC routing modes on AWS Outposts rack. This blog post is written by Sumit Menaria, Senior Hybrid Solutions Architect, AWS WWSO Core Services. AWS Outposts rack is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS-managed infrastructure and services, Outposts rack enables customers to build and run applications on premises … |
2023-02-10 13:43:45 |
python |
New posts tagged Python - Qiita |
[Python] Bézier curves in as little code as possible |
https://qiita.com/Cartelet/items/f204d4409fcbb3919d16
|
2023-02-10 22:08:54 |
python |
New posts tagged Python - Qiita |
A Python beginner built a shopping-list app (data processing edition) |
https://qiita.com/yuuauuy1/items/3448c41c0b684db76edb
|
Shopping |
2023-02-10 22:06:49 |
js |
New posts tagged JavaScript - Qiita |
lodash's file size varied wildly depending on the method |
https://qiita.com/matsu012/items/4c57b3a91b67a0f9491c
|
import … from 'lodash' |
2023-02-10 22:27:40 |
Ruby |
New posts tagged Ruby - Qiita |
[Slimming down] Dockerizing a Rails environment with a multi-stage build |
https://qiita.com/NaokiKotani/items/1ebe71cc594b1c3586c9
|
rails |
2023-02-10 22:04:01 |
Docker |
New posts tagged docker - Qiita |
Pagination didn't work in Laravel |
https://qiita.com/Qubieeee/items/3c317c66e60eff0285b1
|
return view('user.index') |
2023-02-10 22:47:32 |
Docker |
New posts tagged docker - Qiita |
[Slimming down] Dockerizing a Rails environment with a multi-stage build |
https://qiita.com/NaokiKotani/items/1ebe71cc594b1c3586c9
|
rails |
2023-02-10 22:04:01 |
golang |
New posts tagged Go - Qiita |
I built an email two-factor authentication page in Go using SMTP |
https://qiita.com/HoppingGanon/items/80c800ab111f611bb424
|
Tried building it |
2023-02-10 22:18:38 |
Ruby |
New posts tagged Rails - Qiita |
[Slimming down] Dockerizing a Rails environment with a multi-stage build |
https://qiita.com/NaokiKotani/items/1ebe71cc594b1c3586c9
|
rails |
2023-02-10 22:04:01 |
Tech blog |
Developers.IO |
[New feature] The dataset display in the BigQuery console can now be refreshed (quick tip) |
https://dev.classmethod.jp/articles/bq-new-explorer-pane-refresh/
|
bigquery |
2023-02-10 13:44:13 |
Overseas TECH |
MakeUseOf |
How to Display Images in Your Game With PyGame |
https://www.makeuseof.com/display-images-with-pygame/
|
pygame |
2023-02-10 13:30:15 |
Overseas TECH |
MakeUseOf |
8 Tips to Land a High-Paying Remote Job |
https://www.makeuseof.com/tips-for-high-paying-remote-job/
|
remote |
2023-02-10 13:15:15 |
Overseas TECH |
DEV Community |
How to build a Next.js app with ApyHub |
https://dev.to/apyhub/how-to-build-a-nextjs-app-with-apyhub-1nif
|
How to build a Next.js app with ApyHub. This tutorial will guide you through the steps of setting up a Next.js app and integrating it with ApyHub. We will use the timezone API and the iCal API from ApyHub.

Prerequisites: basic knowledge of Next.js and TypeScript, Node.js, and an ApyHub account.

Setting up the Next.js project. Open your terminal and navigate to the directory where you want to set up your project. Clone the Next.js starter project from the repository (git clone -b starter …), navigate into the newly created directory (cd with-nextjs), install the dependencies (npm install), and start the app (npm run dev). Your app should now be running and look similar to this screenshot.

Set up your ApyHub account. Go to the ApyHub site and log in to your ApyHub account. Once logged in, access the timezone API and the iCal generator API. To be able to communicate with the APIs from your Next.js app, you will need an apy-token environment variable. To get this token, follow these steps: on the API documentation page, click the yellow "Create App" button in the top-right corner and create a new token with the name with-nextjs. After creating the token, you can view it on the My Apps page. Copy the token and store it securely; we will use it in the following steps of our Next.js project.

Integrate ApyHub with the Next.js project. Install the apyhub Node.js library in your with-nextjs project (npm install apyhub). Create a new env file in the root of your project and name it .env.local. Inside this file, add the app token created in the previous step, like so: APY_TOKEN=YOUR_APP_TOKEN_GOES_HERE. Create a new folder called lib to initialize the ApyHub library, and inside that folder create a new file called apyhub.ts. In this file we initialize the ApyHub client using the environment variable token created previously:

    import { initApyhub } from "apyhub";

    const apy = initApyhub(process.env.APY_TOKEN as string);

    export { apy };

How to use ApyHub API utilities. Fetching timezones: currently our user interface (UI) has an empty dropdown menu for time zones. To avoid having to create the array manually, it would be beneficial to display all time zones in this menu, and we can use the data lists timezone API for this purpose. ApyHub does not support client-side requests, due to the security risk of exposing credentials to the client, which could lead to malicious actors making requests on your behalf; if you attempt to make a client-side API request, you will receive a CORS error. To ensure all API requests are handled from the server, Next.js serves as the bridge between the frontend and backend. Inside the pages/index.tsx file of your Next.js project there is a getServerSideProps function; this is where server-side code is executed in Next.js projects. In here we currently return an empty array for timezones. Let's change that by fetching an array of timezones from ApyHub. First, import and export the data object inside the lib/apyhub.ts file:

    import { initApyhub, data } from "apyhub";

    const apy = initApyhub(process.env.APY_TOKEN as string);

    export { apy, data };

Use getServerSideProps in the pages/index.tsx file to fetch the data:

    import { data } from "../lib/apyhub";

    export const getServerSideProps = async () => {
      const { data: timezones } = await data.timezones();
      return { props: { timezones } };
    };

Reload the page to see all timezones in the dropdown menu. Great!

Fetching the iCal file. Now that we can select a time zone from our dropdown, we have all the necessary components to generate a new iCal event. Next we will create the API request to make this happen, using Next.js API routes. Create a new API route: inside the pages folder create a new folder called api, and inside the pages/api folder create a new file called ical.ts. We will use the generate object from the apyhub library to create a new iCal file. Go to the lib/apyhub.ts file, add the generate object, and export it:

    import { initApyhub, data, generate } from "apyhub";

    const apy = initApyhub(process.env.APY_TOKEN as string);

    export { apy, data, generate };

Inside the pages/api/ical.ts file, create a new handler function and pass the incoming request body data to the generate.ical function:

    import { generate } from "../../lib/apyhub";
    import { NextApiRequest, NextApiResponse } from "next";

    const handler = async (req: NextApiRequest, res: NextApiResponse) => {
      const {
        summary, description, organizer_email, attendees_emails, location,
        timezone, start_time, end_time, meeting_date, recurring, recurrence,
      } = req.body;

      const url = await generate.ical({
        summary,
        description,
        organizerEmail: organizer_email,
        attendeesEmails: attendees_emails,
        location,
        timeZone: timezone,
        startTime: start_time,
        endTime: end_time,
        meetingDate: meeting_date,
        recurring,
        recurrence,
        responseFormat: "url",
      });

      return res.status(200).json(url);
    };

    export default handler;

We've finished! We can now generate an iCal file with the given data and download it from our app. No more hassle. To test it out, fill out the form in the app and click "Create Event". The browser then makes a request to the api/ical.ts API endpoint, which sends a request to ApyHub's servers to create a new iCal file. We receive a URL pointing to this file, which we return to the browser; the file is then downloaded with window.open(data, "_blank").

Deployment. Now that our app is complete, we can easily deploy it on Vercel. Follow these steps and make sure to add the APY_TOKEN as an environment variable when deploying the application.

Bonus. Currently there is no way to validate whether an organizer or attendee email provided in our app has a valid domain. To ensure accuracy, you can add the validator dns email API to the api/ical.ts API route before creating the iCal file; this will validate whether the submitted email is valid. |
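The excerpt above stops short of the browser side of that flow. Below is a minimal sketch of the client call, assuming the /api/ical route and the window.open download described above; the createEvent helper name and the { data } response shape are assumptions for illustration, not code from the article.

    // Hypothetical client-side helper: post the event form to the app's
    // /api/ical route and open the returned iCal URL to download it.
    async function createEvent(form: Record<string, unknown>): Promise<void> {
      const res = await fetch("/api/ical", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(form),
      });
      if (!res.ok) throw new Error(`iCal generation failed: ${res.status}`);

      // Assumes the route returns the ApyHub response, whose URL lives under "data".
      const { data } = await res.json();
      window.open(data, "_blank"); // triggers the download, as in the excerpt
    }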
2023-02-10 13:50:22 |
Overseas TECH |
DEV Community |
tsParticles 2.9.0 Released |
https://dev.to/tsparticles/tsparticles-290-released-421i
|
tsParticles 2.9.0 released. Changelog:

New features: created a confetti bundle for easier confetti animation usage (the confetti function has been removed from the preset; this bundle replaces that feature; the readme with instructions can be found here). Created a fireworks bundle for easier fireworks animation usage (the readme with instructions can be found here).

Minor changes: added the version to the Engine object; added color and colorOffset properties to the split options; changed the default particles number value (the previous default value was meaningless). You must specify a number now, and it's easier to implement the emitters plugin since you can declare just the emitters property without specifying particles; if you need any number, you declare it, ignoring the default value.

This will probably be the last v2.x release, apart from any bug fixes needed before v3 is released. You can read more about the upcoming v3 in the linked post "Preparing tsParticles v3" by Matteo Bruni.

tsParticles: easily create highly customizable JavaScript particle effects, confetti explosions, and fireworks animations, and use them as animated backgrounds for your website. A lightweight, dependency-free, browser-ready TypeScript library, with ready-to-use components for React.js, Vue.js (2.x and 3.x), Angular, Svelte, jQuery, Preact, Inferno, Solid, Riot, and Web Components. View on GitHub. |
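The changelog mentions the new confetti bundle but the excerpt drops its usage instructions. A minimal sketch, assuming the v2 tsparticles-confetti package and its canvas-confetti-style confetti() options:

    import { confetti } from "tsparticles-confetti";

    // Fire a single burst of confetti from near the bottom of the viewport.
    confetti({
      particleCount: 100,
      spread: 70,
      origin: { y: 0.6 },
    });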
2023-02-10 13:19:26 |
Overseas TECH |
DEV Community |
Caching at DEV |
https://dev.to/devteam/caching-at-dev-11el
|
Caching at DEV. We've always put a lot of effort into performance at DEV: we want our users to be able to see their content almost instantaneously when interacting with it. In order to do so, we've placed a big emphasis on caching. We've had to ask ourselves questions like: what are the right things to cache? Which layer in the stack would be best to cache them in? And how will this affect overall performance?

As someone reading this post, you're most likely a user of DEV, which means you know that DEV is a read-heavy site. Tens of thousands of users all around the world are reading content on our site at any point in the day. This content, for example an article, will rarely change once created, so we can refer to it as "mostly static content". There are also more interactive bits of DEV, like being able to react to and comment on posts, which provide us with interesting caching opportunities. There are many types of caching that we apply in the application; in this post, however, we'll be discussing response caching specifically. At the core of our response caching strategy, we identify the static content and try to cache it so that users are not making a trip to our servers on each request.

Types of response caching on DEV. There are many types of response caching that occur in different layers of the application stack, and one of the important decisions you'll make when developing a feature is which part of the stack should do the caching. Do we implement caching to avoid hitting the origin server completely, in the form of edge caching? Do we implement caching at the view layer to reduce the number of database queries and the complex rendering of the UI, in the form of fragment caching? Or do we implement browser caching to keep the request from ever leaving the browser? Each of these strategies has different use cases, and sometimes we end up using a combination of multiple caching techniques on one feature to achieve the most optimised result.

Have you ever wondered why the article page on DEV loads up so quickly? Even when the page is really long and there are tons of images, it's still pretty snappy. That's mostly due to edge caching, and when the edge cache is being refreshed, you can thank fragment caching for stepping in. The assets load pretty quickly on that page as well; we can thank the browser cache for caching our JavaScript and CSS.

Edge caching. Edge caching lives between the browser and the origin server, reducing the need to make a trip to the origin server for every request. We add intermediary storage, ideally closer to the user, between the user and the server to store the data for a period of time.

Why do we edge cache at DEV? There are two parts of edge caching at DEV that make it beneficial for the application. First, edge caching moves storage closer to the end user by adding a machine ahead of the origin server. This means that a user from South Africa will get content served from a point of presence in Cape Town that contains the edge cache, instead of going all the way to the origin server in the United States, so the trip ends up being much faster. Second, the edge cache contains a cached version of the page the user is requesting, without having to do any re-computation, which makes the response time really, really fast. The benefits of the edge cache include reducing load and stress on the origin server, improving content delivery and the response times of requests (thus reducing waiting time on web pages), and lightening the network load by reducing the amount of duplicate data.

At DEV we currently use Nginx or Fastly for our edge cache; in the future we hope to make our configuration scalable enough to run through any caching intermediary. Currently, Fastly caches the content stored on our origin server at points of presence (POPs) around the world, which improves the user experience of our site. Within our Fastly configuration we have shielding enabled: when one of Fastly's POPs is used as an origin shield, it reduces the load on the origin server. Requests to the origin then come from a single POP, thereby increasing the chances of an end-user request resulting in a cache HIT.

How does edge caching actually work? When a user navigates to our site, they first hit our edge cache. Within this layer, the edge server will have either a warm cache or a cold cache. Usually the first visit to a site after a cache is set up, or after it expires, reaches a "cold" cache: an empty one that does not have any data stored. When a cache is cold, the request makes its way to the origin server to retrieve the data, and it is labeled a cache "MISS". However, the edge server also retains the data that it got from the origin server; this is referred to as warming the cache. A warm cache contains data that is already stored and prepared to serve users. If the cache is warm, the data is returned from the cache to the browser and the request is labeled a "HIT". Every subsequent user hits a warm cache until we expire or purge the cache, and the same process repeats itself.

Expiring a cache. When caching objects, it's important to think about how long you want the cache to live before it gets stale. One approach is to set a reasonably long cache lifetime and then purge the cache on certain conditions. When we want our content to be edge cached, we set the appropriate cache-control headers. Here's a snippet of the code we use in the DEV codebase:

    before_action :set_cache_control_headers, only: %i[index show]

set_cache_control_headers is defined with the following configuration:

    def set_cache_control_headers(max_age = 1.day.to_i,
                                  surrogate_control: nil,
                                  stale_while_revalidate: nil,
                                  stale_if_error: …)
      # Only public Forems should be edge cached based on current functionality.
      return unless Settings::UserExperience.public

      request.session_options[:skip] = true # no cookies
      RequestStore.store[:edge_caching_in_place] = true # to be observed downstream

      response.headers["Cache-Control"] = "public, no-cache" # used only by Fastly
      response.headers["X-Accel-Expires"] = max_age.to_s # used only by Nginx
      response.headers["Surrogate-Control"] = surrogate_control.presence ||
        build_surrogate_control(max_age,
                                stale_while_revalidate: stale_while_revalidate,
                                stale_if_error: stale_if_error)
    end

The max-age header helps the edge cache server calculate a Time To Live (TTL) for the cache. TTL is the maximum amount of time that the content will be used to respond to requests; thereafter the cache needs to be revalidated by consulting the origin server. TTL is defined in seconds. For DEV we set the default max-age to one day, although in some cases we override this value: for caching of the feed, for example, we set the max-age to two hours. I encourage you to grep for set_cache_control_headers in the codebase to explore the cache lengths for the various controller actions.
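For readers who are not on Rails, here is a minimal sketch of the same header strategy in a Node/Express handler. The header names, the "public, no-cache" value, and the one-day max-age mirror the excerpt; the route, port, and the stale-while-revalidate / stale-if-error numbers are illustrative assumptions, not DEV's actual configuration.

    import express from "express";

    const app = express();
    const MAX_AGE = 60 * 60 * 24; // one day in seconds, mirroring DEV's default

    app.get("/articles/:slug", (_req, res) => {
      res.set("Cache-Control", "public, no-cache"); // read by the CDN (Fastly in the excerpt)
      res.set("X-Accel-Expires", String(MAX_AGE)); // read only by Nginx
      res.set(
        "Surrogate-Control",
        `max-age=${MAX_AGE}, stale-while-revalidate=30, stale-if-error=86400`
      );
      res.send("<!-- rendered article HTML goes here -->");
    });

    app.listen(3000);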
In case you're curious about those other values in the snippet of code above: stale-while-revalidate tells caches that they may continue to serve a response after it becomes stale, for up to the specified number of seconds, provided that they work asynchronously in the background to fetch a fresh one. stale-if-error tells caches that they may continue to serve a response after it becomes stale, for up to the specified number of seconds, when the check for a fresh one fails (in most cases because there is an issue at the origin server). A best practice worth outlining is to specify a short stale-while-revalidate and a long stale-if-error value. The Fastly docs' rationale for this recommendation is that if your origin is working, you don't want to subject users to content that is significantly out of date; but if your origin is down, you're probably much more willing to serve something old if the alternative is an error page.

Purging a cache. Expiring a cache allows it to be repopulated with fresh data periodically; however, there are times when you want to refresh the cache in response to actions. This is where purging becomes useful. Purging is the act of explicitly removing content from the edge cache rather than allowing it to expire or be evicted. You've read above that we expire the cache on an article page after one day, but what if, after publishing the article, the author realizes they made some typos and updates it? In that case we wouldn't want readers to keep seeing the outdated version with the typos; we'd want to purge that cache so that we get the latest version from the origin server. Hence, when creating a cache, you want to evaluate the conditions under which you'd need to purge it. In the case of the article page, we purge the cache on actions such as: changes in discussion locks; creating, updating, or deleting an article; creating, updating, or deleting comments on the article; and updating a user's name or profile image. These are some of the main cases where we purge the article, but it is not an exhaustive list. You can read more about purging in the Fastly developer documentation.

Observing edge caching on DEV. We can observe edge caching on an article page on DEV. When you load up a page, you'll notice that the content of the article loads really quickly, but the reactions take some time to come in. This is because the first time we render the page, we hit an edge cache and the article gets rendered from that cache; we then make a follow-up asynchronous request to fetch the reactions, which are not needed immediately. From a user-experience point of view, you're most likely keeping your eye on the content to read the article before reacting to it. It's also useful to note that the comments are rendered along with the article on the first load, mostly for SEO purposes. To see whether a request is cached, click on the request in the network tab and look at the response headers. They have an x-cache header, written by Fastly, which indicates whether the request was a HIT or a MISS, and an x-cache-hits header, which indicates the number of cache hits in each node. These are useful headers to look out for when determining whether requests are being cached.
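To script the same check outside the browser's network tab, here is a small sketch; it assumes Node 18+ for the global fetch, and the URL is just this article's own page used as an example.

    // Inspect Fastly's cache headers for a page without downloading the body.
    (async () => {
      const res = await fetch("https://dev.to/devteam/caching-at-dev-11el", {
        method: "HEAD",
      });
      console.log("x-cache:", res.headers.get("x-cache")); // "HIT" or "MISS"
      console.log("x-cache-hits:", res.headers.get("x-cache-hits")); // hits per cache node
    })();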
Fragment caching. Fragment caching is used to cache parts of views, reducing the need to re-compute complex views.

Why do we fragment cache at DEV? In a typical Rails application, when a user visits a page, a request to load the page is sent to the Rails application. The application calls the relevant controller, which in turn asks the model for the data; the model fetches the data from the database and returns it to the controller. The controller, armed with the data, renders the view, which is the user interface that you see on the web page. However, that rendering can be slow for numerous reasons: there can be expensive database queries in the view; it may be a complex view with nested loops (and we have tons of complex views at DEV); and there may be many partials, some of them nested, which increases rendering time. To avoid the slowness that comes with these problems, we sometimes cache the view so the request completes more quickly.

How does fragment caching work? Fragment caching removes the call to our Postgres database and reduces the time taken to compute a view, in favor of storing the "fragment" in a memory cache like Redis as key-value objects. The key is provided within the Rails application and the fragment is stored as its value. During view rendering, if a cache key is encountered, Rails checks the Redis store for that key; if it finds it, it reads the value (the fragment) and renders it within the block. If it does not find the key, it computes the view and writes the key to the Redis store for next time.

Expiring a cache. A unique key is provided for every fragment that will be cached in our Rails views. Below is an example of one of our more complex cache keys, which relies on numerous identifiers:

    <% cache ["whole_comment_area", article.id, article.last_comment_at,
              article.show_comments, discussion_lock&.updated_at,
              comments_order, user_signed_in?],
             expires_in: ….hours do %>

This view touches a few different resources, hence the different dynamic parts that make up the cache key, each of which allows the cache to be purged when it changes: article.id means we maintain a cache of the comments section for every article page; article.last_comment_at changes when a new comment is added, so we want to refresh the cache; if a user chooses not to show the comments (article.show_comments) or to lock a discussion thread (discussion_lock&.updated_at), we want to refresh the fragment cache as well; comments_order changes when a parameter changes the sorting order of the comments; and finally, we show a different view fragment for logged-in versus logged-out users. When any of these parts of the cache key changes, we write to the Redis store. Another way we get fresh data is by expiring the cache: just as with edge caching, we can set an expiry time after which the cache is refreshed.

Observing fragment caching on DEV. We use fragment caching in numerous places in the DEV application; you can grep for "<% cache" in our codebase to view these instances. Some of them are the comment area on the article page, the left sidebar, display ads, the right-hand navbar on an article page, and the home feed. Each of these views is cached for one of the reasons outlined above. To observe fragment caching, run rails dev:cache. Start by clearing the cache with rake tmp:cache:clear to ensure that the first partial is rendered by the server, then spin up the DEV server locally and navigate to an article page. On the first load, when the cache is clear, you will see Cache write entries in the logs for the whole_comment_area fragment (views/articles/… whole_comment_area, with options such as expires_in and race_condition_ttl); on subsequent loads you will instead see Cache read / Read fragment entries for the same key, served from the store.

We realize that we may sometimes overuse fragment caching in our application, and there is probably room for cleanup and improvement. One aspect to keep an eye on is the complexity of the cache keys: if a cache key is more complex than the cached content, it can take longer to check whether a fragment is stored for that key in the memory store than it does to render the view, and then it's time to re-evaluate. Whilst Redis handles this very well and scales magnificently, sometimes the better path is optimizing the database queries.

Russian doll caching. If you look through the application, you will see that we sometimes nest cached fragments inside other cached fragments; this is referred to as Russian doll caching. It ensures that our cached fragments are broken up into smaller pieces, which allows the outer cache to be rendered faster when only one of the nested fragments changes. An example of Russian doll caching can be seen in the way we render our navigation links:

    <%= render partial: "layouts/sidebar_nav_link",
               collection: NavigationLink.where(id: navigation_links_other_nav_ids).ordered,
               as: :link,
               cached: true %>

NavigationLink.where(id: navigation_links_other_nav_ids).ordered renders a navigation link collection, so by using the collection attribute we are, in effect, wrapping each layouts/sidebar_nav_link partial in a cache block. Each object in the collection contains the details of a navigation link, so if any of that data is updated, the cache of that particular element is invalidated whilst the outer cache and the other nested caches remain unchanged.

Browser caching. The browser cache allows resources to be cached in the user's browser, reducing page load time and eliminating the need to go to the server; it stores resources in the web browser itself.

Why do we browser cache? At DEV we cache our static assets like images, CSS, and JavaScript. We had been early adopters of service workers as a form of browser caching, but we later removed them when we ran into many caching bugs. We mainly browser cache to speed up page loading and to minimize the load on the server.

How does the browser cache work? When a user makes a request for the first time on a site, the browser requests those resources from the server and stores some of them in the browser cache. On subsequent requests, these resources are returned from the browser instead of having to travel over the internet to the user. To browser cache, we set some headers in our production.rb:

    config.public_file_server.headers = {
      "Cache-Control" => "public, s-maxage=#{….days.to_i}, max-age=#{….days.to_i}"
    }

Here we are setting the Cache-Control header to use public intermediate caching with a max-age; s-maxage stands for surrogate cache, which is used for caching internally by Fastly. When the browser sees the above cache-control header, it will cache the asset, so those network calls no longer hit the server. When the browser parses the index.html file, it looks for the different referenced script files. You'll notice that these files are fingerprinted, and the requests for those files use the fingerprinted names. This versioning technique binds the name of a file to its content, usually by adding the file's hash to the name. If a file has been updated on the server, the fingerprint changes as well, so the browser no longer has a reference to that filename in its cache and refreshes the cache for that resource.

Observing browser caching. From the screenshot below, you can see that our .js file returns a "from memory cache" response, which shows that the response was indeed served from the cache. If you look at the request URL, you will notice that the file name is fingerprinted, to let the browser know whether the resource has changed on the server or not.

Conclusion. To conclude, I'd like to discuss some of the questions I ponder when trying to determine whether or not to implement caching for a feature. One of them is: "How often do we see the data for that feature changing?" If the answer is "very seldom" and the data is relatively static, then caching is most likely the way to go. Another question I ask is at what layer it makes sense to implement the caching. Here I think about what needs to be cached and how often it will need to be refreshed. If I think that caching is the way to go, I explore whether it would be beneficial to implement only fragment caching, to cache any complex views on that page; requests still hit the server, but the load on the database is reduced by serving from the application cache. Or do I need something more? Perhaps we anticipate a large load on the page, with successive hits from all around the world; in that case edge caching will be more efficient at serving the page. Maybe it makes sense to do both, as we do in many parts of the DEV application. There's no right or wrong answer here, but as with all performance problems, we encourage our team members and contributors to analyze the problem carefully and run some tests in order to make the most informed decision. And that's all, folks! I hope you found this post useful. Please drop any feedback and comments below. |
2023-02-10 13:15:06 |
Apple |
AppleInsider - Frontpage News |
iPhone 15 Ultra rumors, our in-depth HomePod review, and favorite Apple devices |
https://appleinsider.com/articles/23/02/10/iphone-15-ultra-rumors-our-in-depth-homepod-review-and-favorite-apple-devices?utm_medium=rss
|
iPhone 15 Ultra rumors, our in-depth HomePod review, and favorite Apple devices. On this week's episode of the AppleInsider Podcast, forget the iPhone 15 Ultra and forget the HomePod: it's time for host Stephen Robles to explain that tidy desk of his. Alongside unnatural tidiness, Stephen has been testing out the new HomePod and specifically comparing its sound quality to other high-end speaker systems. He's as impressed as everyone else with how it sounds, but he has some unexpected conclusions about the HomePod's price, and just who it's aimed at. Then there are all those rumors that Apple will top the Pro Max line with an even more advanced, and presumably costly, iPhone Pro Ultra. This may not seem a great time to release an even more expensive iPhone, but there are reasons to think it could happen. Read more |
2023-02-10 13:39:25 |
Apple |
AppleInsider - Frontpage News |
USB-C on iPhone 15 might still require MFi certified cables |
https://appleinsider.com/articles/23/02/10/usb-c-on-iphone-15-might-still-require-mfi-certified-cables?utm_medium=rss
|
USB-C on iPhone 15 might still require MFi certified cables. The EU's new law about USB-C is intended to make all charging cables interchangeable, but an iffy rumor about the iPhone 15 says Apple will put its own spin on what that means. New regulations requiring a common charging standard, specifically USB-C, were finalized by the European Union in October. The date the law comes into effect, plus which devices it applies to, means that the iPhone 15 may have USB-C, but a later iPhone will definitely have to have it. Now an unverifiable rumor posted on the Chinese social media site Weibo says that Apple may stick to the letter of the law, but not the spirit of it. Read more |
2023-02-10 13:01:06 |
Overseas TECH |
Engadget |
Engadget Podcast: Microsoft and Google’s budding AI rivalry |
https://www.engadget.com/engadget-podcast-microsoft-new-bing-chatgpt-open-ai-google-bard-oneplus-11-review-133038848.html?src=rss
|
Engadget Podcast: Microsoft and Google's budding AI rivalry. What a wild week, chock full of news all over tech. Microsoft and Google both unveiled their AI products for the masses, with Microsoft holding a whole event this week to show off the new Edge and Bing. Google also had an event in Paris and unveiled the first Android 14 developer preview, while OnePlus launched its first-ever tablet alongside a new phone. Cherlynn is joined this week by guest co-host Sam Rutherford to tear into the week's onslaught of news and check in on how we feel about Samsung's Galaxy S23 Ultra while reviewing it. Listen below, or subscribe on your podcast app of choice. If you've got suggestions or topics you'd like covered on the show, be sure to email us or drop a note in the comments. And be sure to check out our other podcasts, the Morning After and Engadget News.

Topics: Microsoft's AI event unveils new Bing and Edge with OpenAI collaboration; Google unveils Bard, its ChatGPT competitor; Mat Smith's OnePlus 11 review; also coming from OnePlus, a tablet, earbuds, and a keyboard; Sam Rutherford's Galaxy S23 Ultra review; AI-generated Seinfeld show "Nothing, Forever" banned from Twitch; the Android 14 developer preview is available now; what is even happening with Twitter's API access; what we're working on; pop culture picks; livestream.

Credits: Hosts: Cherlynn Low and Sam Rutherford. Producer: Ben Ellman. Music: Dale North and Terrence O'Brien. Livestream producer: Julio Barrientos. Graphic artists: Luke Brooks and Brian Oh. |
2023-02-10 13:30:38 |
Finance |
RSS FILE - Japan Securities Dealers Association |
Public and corporate bond issuance and redemption amounts, etc. |
https://www.jsda.or.jp/shiryoshitsu/toukei/hakkou/index.html
|
Issuance |
2023-02-10 15:00:00 |
Finance |
RSS FILE - Japan Securities Dealers Association |
List of public and corporate bond issues |
https://www.jsda.or.jp/shiryoshitsu/toukei/saiken_hakkou/youkou/ichiran.html
|
Issues |
2023-02-10 15:00:00 |
Finance |
Financial Services Agency website |
Updated the list of institutional investors that have announced acceptance of the Stewardship Code. |
https://www.fsa.go.jp/singi/stewardship/list/20171225.html
|
Institutional investors |
2023-02-10 15:00:00 |
Finance |
Financial Services Agency website |
Published the minutes of the 63rd Financial Trouble Liaison and Coordination Council. |
https://www.fsa.go.jp/singi/singi_trouble/gijiyoroku/20230106.html
|
Detail Nothing |
2023-02-10 15:00:00 |
Finance |
Financial Services Agency website |
Published an administrative action against a notifier of specially permitted business for qualified institutional investors, etc. |
https://www.fsa.go.jp/news/r4/shouken/20230210.html
|
Administrative action |
2023-02-10 15:00:00 |
Finance |
Financial Services Agency website |
Published the minutes of the 5th meeting of the Financial System Council's working group on the design of systems supporting business-focused lending practices. |
https://www.fsa.go.jp/singi/singi_kinyu/jigyoyushi_wg/gijiroku/20221227.html
|
Financial System Council |
2023-02-10 14:59:00 |
Finance |
Financial Services Agency website |
Published the minutes of the 4th meeting of the Financial System Council's working group on the design of systems supporting business-focused lending practices. |
https://www.fsa.go.jp/singi/singi_kinyu/jigyoyushi_wg/gijiroku/20221223.html
|
Financial System Council |
2023-02-10 14:58:00 |
Finance |
Financial Services Agency website |
Announced the conclusion of a framework for cooperation on insurance supervision with the European Insurance and Occupational Pensions Authority. |
https://www.fsa.go.jp/inter/etc/20230210/20230210.html
|
Occupational pensions |
2023-02-10 14:00:00 |
News |
BBC News - Home |
Ukraine war: Russia again fires missiles over Moldova in latest strikes |
https://www.bbc.co.uk/news/world-europe-64593488?at_medium=RSS&at_campaign=KARANGA
|
romania |
2023-02-10 13:12:45 |
News |
BBC News - Home |
Ban on Edinburgh lap dancing clubs overturned after judicial review |
https://www.bbc.co.uk/news/uk-scotland-edinburgh-east-fife-64596835?at_medium=RSS&at_campaign=KARANGA
|
effective |
2023-02-10 13:29:30 |
Overseas TECH |
reddit |
Came to Kobe on a whim. Ate akashiyaki. |
https://www.reddit.com/r/newsokunomoral/comments/10yrnql/無計画に神戸に遊びに来た明石焼き食べた/
|
2023-02-10 13:12:49 |