Posted: 2024-05-15 03:22:31. RSS feed digest through 2024-05-15 03:00 (23 items)

Category Site Article Title Link URL Keywords/Summary Date Added
AWS AWS Game Tech Blog Omeda Studios swaps out 'Predecessor' backend in less than five months using Amazon GameLift and Pragma https://aws.amazon.com/blogs/gametech/omeda-studios-swaps-out-predecessor-backend-in-less-than-five-months-using-amazon-gamelift-and-pragma/ After running the live game for months, Omeda Studios migrated its backend to Pragma using Amazon GameLift, the dedicated game server management service from Amazon Web Services (AWS). The change was implemented ahead of its latest release, which made the game free to play across platforms. 2024-05-14 17:25:38
AWS AWS Machine Learning Blog Incorporate offline and online human-machine workflows into your generative AI applications on AWS https://aws.amazon.com/blogs/machine-learning/incorporate-offline-and-online-human-machine-workflows-into-your-generative-ai-applications-on-aws/ Recent advances in artificial intelligence have led to the emergence of generative AI that can produce human-like novel content such as images, text, and audio. These models are pre-trained on massive datasets and sometimes fine-tuned with smaller sets of more task-specific data. An important aspect of developing effective generative AI applications is Reinforcement… 2024-05-14 17:52:46
AWS AWS Machine Learning Blog Build generative AI applications with Amazon Titan Text Premier, Amazon Bedrock, and AWS CDK https://aws.amazon.com/blogs/machine-learning/build-generative-ai-applications-with-amazon-titan-text-premier-amazon-bedrock-and-aws-cdk/ Amazon Titan Text Premier, the latest addition to the Amazon Titan family of large language models (LLMs), is now generally available in Amazon Bedrock. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and… 2024-05-14 17:06:27
Program New posts tagged JavaScript - Qiita [js/ts] Getting YYYY/MM/DD from a Date's toString https://qiita.com/recxu/items/373e01e5431d1694a556 Keywords: js, ts, date, const d = new Date() 2024-05-15 02:04:07
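The garbled keyword field above hints at the snippet's core (`const d = new Date()`). A minimal sketch of the technique the post's title describes, with function names of our own choosing: a plain `Date` has no built-in `YYYY/MM/DD` formatter, so the parts are zero-padded and joined by hand.

```typescript
// Format a Date as "YYYY/MM/DD". Date#toString alone does not produce this
// layout, so the year/month/day parts are zero-padded and joined manually.
function formatYmd(d: Date): string {
  const yyyy = d.getFullYear().toString();
  const mm = (d.getMonth() + 1).toString().padStart(2, "0"); // getMonth() is 0-based
  const dd = d.getDate().toString().padStart(2, "0");
  return `${yyyy}/${mm}/${dd}`;
}

// Alternatively, toLocaleDateString with the "ja-JP" locale and 2-digit
// options yields the same shape:
// new Date().toLocaleDateString("ja-JP", { year: "numeric", month: "2-digit", day: "2-digit" })
```

The manual version avoids locale-dependent behavior, which is why it is the usual choice for log or file-name timestamps.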
海外TECH Ars Technica 5,471-piece Lego Barad-Dûr set will turn its watchful Eye to us in June https://arstechnica.com/?p=2024228 Fiery eye actually lights up and multiple towers can be stacked 2024-05-14 17:33:53
海外TECH Ars Technica Feds probe Waymo driverless cars hitting parked cars, drifting into traffic https://arstechnica.com/?p=2024231 Auto safety regulator is investigating reports of Waymo cars malfunctioning 2024-05-14 17:13:36
Overseas TECH AppleInsider - Frontpage News Apple blocked $7 billion in fraud attempts on the App Store https://appleinsider.com/articles/24/05/14/apple-blocked-7-billion-in-fraud-attempts-on-the-app-store?utm_medium=rss Over a span of four years, Apple says that it has prevented over $7 billion in fraudulent transactions, blocked apps from the App Store over privacy violations, and terminated millions of accounts for fraud attempts. In its fourth annual fraud prevention analysis, Apple has detailed the ways that it has prevented fraud attempts and blocked what it calls problematic apps from appearing on the App Store. Alongside the $7 billion in fraudulent transactions the company says it blocked, it also blacklisted millions of stolen credit cards and, in the process, stopped millions of accounts from future transactions. 2024-05-14 17:29:31
Overseas TECH Engadget Google's Gemini will search your videos to help you solve problems https://www.engadget.com/googles-gemini-will-search-your-videos-to-help-you-solve-problems-175235105.html?src=rss As part of its push toward adding generative AI to search, Google has introduced a new twist: video. Gemini will let you upload video that demonstrates an issue you're trying to resolve, then scour user forums and other areas of the internet to find a solution. As an example, Google's Rose Yao talked onstage at I/O about a used turntable she bought and how she couldn't get the needle to sit on the record. Yao uploaded a video showing the issue, and Gemini quickly found an explainer describing how to balance the arm on that particular make and model. "Search is so much more than just words in a text box. Often the questions you have are about the things you see around you, including objects in motion," Google wrote. "Searching with video saves you the time and trouble of finding the right words to describe this issue, and you'll get an AI Overview with steps and resources to troubleshoot." If the video alone doesn't make it clear what you're trying to figure out, you can add text or draw arrows that point to the issue in question. OpenAI just introduced GPT-4o with the ability to interpret live video in real time, then describe a scene or even sing a song about it. Google, however, is taking a different tack with video by focusing on its Search product for now. Searching with video is coming to Search Labs US users in English to start with, but will expand to more regions over time, the company said. Catch up on all the news from Google I/O right here. This article originally appeared on Engadget. 2024-05-14 17:52:35
Overseas TECH Engadget Google expands digital watermarks to AI-made video https://www.engadget.com/google-expands-digital-watermarks-to-ai-made-video-175232320.html?src=rss As Google starts to make its latest video generation tools available, the company says it has a plan to ensure transparency around the origins of its increasingly realistic AI-generated clips. All video made by the company's new Veo model in the VideoFX app will have digital watermarks, thanks to Google's SynthID system. SynthID is Google's digital watermarking system that started rolling out to AI-generated images last year. The tech embeds imperceptible watermarks into AI-made content so that AI detection tools can recognize that the content was generated by AI. Considering that Veo, the company's latest video generation model previewed onstage at I/O, can create longer and higher-res clips than what was previously possible, tracking the source of such content will be increasingly important. During a briefing with reporters, DeepMind CEO Demis Hassabis said that SynthID watermarks would also expand to AI-generated text. As generative AI models advance, more companies have turned to watermarking amid fears that AI could fuel a new wave of misinformation. Watermarking systems would give platforms like Google a framework for detecting AI-generated content that may otherwise be impossible to distinguish. TikTok and Meta have also recently announced plans to support similar detection tools on their platforms and label more AI content in their apps. Of course, there are still significant questions about whether digital watermarks on their own offer sufficient protection against deceptive AI content. Researchers have shown that watermarks can be easy to evade. But making AI-made content detectable in some way is an important first step toward transparency. 2024-05-14 17:52:32
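SynthID's embedding scheme is proprietary, but the general idea of an imperceptible, machine-detectable watermark can be illustrated with a deliberately naive sketch: marking the least significant bit of pixel values. All names here are ours, and real systems are far more robust to compression and editing than this toy.

```typescript
// Toy illustration only: embed a short bit pattern into the least significant
// bits of pixel values, then detect it. Changing the LSB alters each pixel by
// at most 1, which is visually imperceptible.
function embedWatermark(pixels: number[], bits: number[]): number[] {
  return pixels.map((p, i) => (p & ~1) | bits[i % bits.length]);
}

function detectWatermark(pixels: number[], bits: number[]): boolean {
  // Declare a match only when every sampled LSB agrees with the pattern.
  return pixels.every((p, i) => (p & 1) === bits[i % bits.length]);
}
```

A real detector works statistically over many samples rather than demanding an exact match, precisely so that mild editing does not erase the mark.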
Overseas TECH Engadget Google Search will now show AI-generated answers to millions by default https://www.engadget.com/google-search-will-now-show-ai-generated-answers-to-millions-by-default-174512845.html?src=rss Google is shaking up Search. On Tuesday, the company announced big new AI-powered changes to the world's dominant search engine at I/O, Google's annual conference for developers. With the new features, Google is positioning Search as more than a way to simply find websites; instead, the company wants people to use its search engine to directly get answers and to help them with planning events and brainstorming ideas. "With generative AI, Search can do more than you ever imagined," wrote Liz Reid, vice president and head of Google Search, in a blog post. "So you can ask whatever's on your mind or whatever you need to get done, from researching to planning to brainstorming, and Google will take care of the legwork." Google's changes to Search, the primary way that the company makes money, are a response to the explosion of generative AI ever since OpenAI's ChatGPT released at the end of 2022. Since then, a handful of AI-powered apps and services, including ChatGPT, Anthropic, Perplexity, and Microsoft's Bing, which is powered by OpenAI's GPT-4, have challenged Google's flagship service by directly providing answers to questions instead of simply presenting people a list of links. This is the gap that Google is racing to bridge with its new features in Search. Starting today, Google will show complete AI-generated answers in response to most search queries at the top of the results page in the US. Google first unveiled the feature a year ago at Google I/O in 2023, but so far anyone who wanted to use it had to sign up as part of the company's Search Labs platform, which lets people try out upcoming features ahead of their general release. Google is now making AI Overviews available to hundreds of millions of Americans, and says that it expects the feature to be available in more countries, to over a billion people, by the end of the year. Reid wrote that people who opted to try the feature through Search Labs have used it billions of times so far, and said that any links included as part of the AI-generated answers get more clicks than if the page had appeared as a traditional web listing, something that publishers have been concerned about. "As we expand this experience, we'll continue to focus on sending valuable traffic to publishers and creators," Reid wrote. In addition to AI Overviews, searching for certain queries around dining and recipes, and later movies, music, books, hotels, shopping, and more, in English in the US, will show a new search page where results are organized using AI. "When you're looking for ideas, Search will use generative AI to brainstorm with you and create an AI-organized results page that makes it easy to explore," Reid said in the blog post. If you opt in to Search Labs, you'll be able to access even more features powered by generative AI in Google Search. You'll be able to get AI Overviews to simplify the language or break down a complex topic in more detail; one example is a query asking Google to explain the connection between lightning and thunder. Search Labs testers will also be able to ask Google really complex questions in a single query to get answers on a single page, instead of having to do multiple searches. The example that Google's blog post gives: "Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill." In response, Google shows the highest-rated yoga and pilates studios near Boston's Beacon Hill neighborhood, and even puts them on a map for easy navigation. Google also wants to become a meal and vacation planner, letting people who sign up for Search Labs ask queries like "create a 3-day meal plan for a group that's easy to prepare," and letting you swap out individual results in its AI-generated plan with something else, swapping a meat-based dish in a meal plan for a vegetarian one, for instance. Finally, Google will eventually let anyone who signs up for Search Labs use a video as a search query instead of text or images. "Maybe you bought a record player at a thrift shop, but it's not working when you turn it on and the metal piece with the needle is drifting unexpectedly," wrote Reid in Google's blog post. "Searching with video saves you the time and trouble of finding the right words to describe this issue, and you'll get an AI Overview with steps and resources to troubleshoot." Google said that all these new capabilities are powered by a brand-new Gemini model customized for Search that combines Gemini's advanced multi-step reasoning and multimodal abilities with Google's traditional search systems. 2024-05-14 17:45:12
Overseas TECH Engadget Google unveils Veo and Imagen 3, its latest AI media creation models https://www.engadget.com/google-unveils-veo-and-imagen-3-its-latest-ai-media-creation-models-173617373.html?src=rss It's all AI, all the time at Google I/O. Today Google announced its new AI media creation engines: Veo, which can produce "high-quality" 1080p videos, and Imagen 3, its latest text-to-image framework. Neither sound particularly revolutionary, but they're a way for Google to keep up the fight against OpenAI's Sora video model and DALL-E, a tool that has practically become synonymous with AI-generated images. Google claims Veo has "an advanced understanding of natural language and visual semantics" to create whatever video you have in mind. The AI-generated videos can last "beyond a minute." Veo is also capable of understanding cinematic and visual techniques, like the concept of a timelapse. But really, that should be table stakes for an AI video generation model, right? To prove that Veo isn't out to steal artists' jobs, Google has also partnered with Donald Glover and Gilga, his creative studio, to show off the model's capabilities. In a very brief promotional video, we see Glover and crew using text to create video of a convertible arriving at a European home and a sailboat gliding through the ocean. According to Google, Veo can simulate real-world physics better than its previous models, and it's also improved how it renders high-definition footage. "Everybody's going to become a director, and everybody should be a director," Glover says in the video, absolutely earning his Google paycheck. "At the heart of all of this is just storytelling. The closer we are to be able to tell each other our stories, the more we'll understand each other." It remains to be seen if anyone will actually want to watch AI-generated video, outside of the morbid curiosity of seeing a machine attempt to algorithmically recreate the work of human artists. But that's not stopping Google or OpenAI from promoting these tools and hoping they'll be useful, or at least make a bunch of money. Veo will be available inside of Google's VideoFX tool today for some creators, and the company says it'll also be coming to YouTube Shorts and other products. If Veo does end up becoming a built-in part of YouTube Shorts, that's at least one feature Google can lord over TikTok. As for Imagen 3, Google is making the usual promises: it's said to be the company's "highest quality" text-to-image model, with an "incredible level of detail" for "photorealistic, lifelike images" and fewer artifacts. The real test, of course, will be to see how it handles prompts compared to DALL-E. Imagen 3 handles text better than before, Google says, and it's also smarter about handling details from long prompts. Google is also working with recording artists like Wyclef Jean and Björn to test out its Music AI Sandbox, a set of tools that can help with song and beat creation. We only saw a brief glimpse of this, but it's led to a few intriguing demos. The sun rises and sets. We're all slowly dying. And AI is getting smarter by the day. That seems to be the big takeaway from Google's latest media creation tools. Of course they're getting better: Google is pouring billions into making the dream of AI a reality, all in a bid to own the next great leap for computing. Will any of this actually make our lives better? Will they ever be able to generate art with genuine soul? Check back at Google I/O every year until AGI actually appears or our civilization collapses. Developing… 2024-05-14 17:36:17
Overseas TECH Engadget Google just snuck a pair of AR glasses into a Project Astra demo at I/O https://www.engadget.com/google-just-snuck-a-pair-of-ar-glasses-into-a-project-astra-demo-at-io-172824539.html?src=rss In a video demonstrating the prowess of its new Project Astra app, the person demonstrating asked Gemini, "do you remember where you saw my glasses?" The AI impressively responded, "Yes, I do. Your glasses were on a desk near a red apple," despite said object not actually being in view when the question was asked. But these weren't your bog-standard visual aid: these glasses had a camera onboard and some sort of visual interface. The tester picked up their glasses and put them on, and proceeded to ask the AI more questions about things they were looking at. Clearly there is a camera on the device that's helping it take in the surroundings, and we were shown some sort of interface where a waveform moved to indicate it was listening. Onscreen captions appeared to reflect the answer that was being read aloud to the wearer as well. So if we're keeping track, that's at least a microphone and speaker onboard too, along with some kind of processor and battery to power the whole thing. We only caught a brief glimpse of the wearable, but from the sneaky seconds it was in view, a few things were evident. The glasses had a simple black frame and didn't look at all like Google Glass; they didn't appear very bulky either. In all likelihood, Google is not ready to actually launch a pair of glasses at I/O. It breezed right past the wearable's appearance and barely mentioned them, only to say that Project Astra and the company's vision of "universal agents" could come to devices like our phones or glasses. We don't know much else at the moment, but if you've been mourning Google Glass or the company's other failed wearable products, this might instill some hope yet. 2024-05-14 17:28:24
Overseas TECH Engadget Google's Project Astra uses your phone's camera and AI to find noise makers, misplaced items and more. https://www.engadget.com/googles-project-astra-uses-your-phones-camera-and-ai-to-find-noise-makers-misplaced-items-and-more-172642329.html?src=rss When Google first showcased its Duplex voice assistant technology at its developer conference in 2018, it was both impressive and concerning. Today at I/O, the company may be bringing up those same reactions again, this time by showing off another application of its AI smarts with something called Project Astra. The company couldn't even wait till its keynote today to tease Project Astra, posting a video to its social media of a camera-based AI app yesterday. At its keynote today, though, Google DeepMind CEO Demis Hassabis shared that his team has "always wanted to develop universal AI agents that can be helpful in everyday life." Project Astra is the result of progress on that front. What is Project Astra? According to a video that Google showed during a media briefing yesterday, Project Astra appeared to be an app which has a viewfinder as its main interface. A person holding up a phone pointed its camera at various parts of an office and verbally said, "Tell me when you see something that makes sound." When a speaker next to a monitor came into view, Gemini responded, "I see a speaker, which makes sound." The person behind the phone stopped and drew an onscreen arrow to the top circle on the speaker and said, "What is that part of the speaker called?" Gemini promptly responded, "That is the tweeter. It produces high-frequency sounds." Then, in the video that Google said was recorded in a single take, the tester moved over to a cup of crayons further down the table and asked, "Give me a creative alliteration about these," to which Gemini said, "Creative crayons color cheerfully. They certainly craft colorful creations." Wait, were those Project Astra glasses? Is Google Glass back? The rest of the video goes on to show Gemini in Project Astra identifying and explaining parts of code on a monitor and telling the user what neighborhood they were in based on the view out the window. Most impressively, Astra was able to answer "Do you remember where you saw my glasses?" even though said glasses were completely out of frame and were not previously pointed out. "Yes, I do," Gemini said, adding, "Your glasses were on a desk near a red apple." After Astra located those glasses, the tester put them on, and the video shifted to the perspective of what you'd see on the wearable. Using a camera onboard, the glasses scanned the wearer's surroundings to see things like a diagram on a whiteboard. The person in the video then asked, "What can I add here to make this system faster?" As they spoke, an onscreen waveform moved to indicate it was listening, and as it responded, text captions appeared in tandem. Astra said, "Adding a cache between the server and database could improve speed." The tester then looked over to a pair of cats doodled on the board and asked, "What does this remind you of?" Astra said, "Schrödinger's cat." Finally, they picked up a plush tiger toy, put it next to a cute golden retriever, and asked for "a band name for this duo." Astra dutifully replied, "Golden stripes." How does Project Astra work? This means that not only was Astra processing visual data in real time, it was also remembering what it saw and working with an impressive backlog of stored information. This was achieved, according to Hassabis, because these "agents" were "designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall." It was also worth noting that, at least in the video, Astra was responding quickly. Hassabis noted in a blog post that "While we've made incredible progress developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge." Google has also been working on giving its AI more range of vocal expression, using its speech models to "enhance how they sound, giving the agents a wider range of intonations." This sort of mimicry of human expressiveness in responses is reminiscent of Duplex's pauses and utterances that led people to think Google's AI might be a candidate for the Turing test. When will Project Astra be available? While Astra remains an early feature with no discernible plans for launch, Hassabis wrote that in the future these assistants could be available "through your phone or glasses." No word yet on whether those glasses are actually a product or the successor to Google Glass, but Hassabis did write that "some of these capabilities are coming to Google products like the Gemini app later this year." 2024-05-14 17:28:00
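Hassabis's point about caching encoded events for efficient recall, and Astra's own on-camera suggestion of "adding a cache between the server and database," both reduce to the same pattern: memoize slow lookups so repeated queries skip the slow path. A minimal time-to-live cache sketch (class name, key format, and TTL value are all illustrative, not Google's implementation):

```typescript
// Memoize expensive lookups with a time-to-live. A repeated get() for the
// same key within the TTL returns the cached value without calling load().
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private ttlMs: number) {}

  get(key: string, load: () => V): V {
    const hit = this.store.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // fast path
    const value = load(); // slow path, e.g. a database query
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
    return value;
  }
}
```

The trade-off is staleness: until the entry expires, changes behind the loader are invisible, which is why TTLs for this pattern are usually kept short.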
Overseas TECH Engadget Google's new Gemini 1.5 Flash AI model is lighter than Gemini Pro and more accessible https://www.engadget.com/googles-new-gemini-15-flash-ai-model-is-lighter-than-gemini-pro-and-more-accessible-172353657.html?src=rss Google announced updates to its Gemini family of AI models at I/O, the company's annual conference for developers, on Tuesday. It's rolling out a new model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. "1.5 Flash excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more," wrote Demis Hassabis, CEO of Google DeepMind, in a blog post. Hassabis added that Google created Gemini 1.5 Flash because developers needed a model that was lighter and less expensive than the Pro version, which Google announced in February. Gemini 1.5 Pro is more efficient and powerful than the company's original Gemini model announced late last year. Gemini 1.5 Flash sits between Gemini 1.5 Pro and Gemini Nano, Google's smallest model that runs locally on devices. Despite being lighter weight than Gemini 1.5 Pro, however, it is just as powerful. Google said that this was achieved through a process called distillation, where the most essential knowledge and skills from Gemini 1.5 Pro were transferred to the smaller model. This means that Gemini 1.5 Flash will get the same multimodal capabilities of Pro, as well as its long context window (the amount of data that an AI model can ingest at once) of one million tokens. This, according to Google, means that Gemini 1.5 Flash will be capable of analyzing a 1,500-page document or a codebase with more than 30,000 lines at once. Gemini 1.5 Flash, like the rest of these models, isn't really meant for consumers; instead, it's a faster and less expensive option for developers building their own AI products and services using tech designed by Google. In addition to launching Gemini 1.5 Flash, Google is also upgrading Gemini 1.5 Pro. The company said that it had enhanced the model's abilities to write code, reason, and parse audio and images. But the biggest update is yet to come: Google announced it will double the model's existing context window to two million tokens later this year. That would make it capable of processing two hours of video, 22 hours of audio, more than 60,000 lines of code, or more than 1.4 million words at the same time. Both Gemini 1.5 Flash and Pro are now available in public preview in Google's AI Studio and Vertex AI. The company also announced today a new version of its Gemma open model, called Gemma 2. But unless you're a developer or someone who likes to tinker around with building AI apps and services, these updates aren't really meant for the average consumer. 2024-05-14 17:23:53
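Distillation, as described above, typically trains the small "student" model to match the softened output distribution of the large "teacher." A toy sketch of the two mathematical ingredients, a temperature-scaled softmax and the teacher-student cross-entropy loss (illustrative only, not Google's actual training method):

```typescript
// Temperature-scaled softmax: higher temperature flattens the distribution,
// exposing the teacher's "dark knowledge" about near-miss classes.
function softmax(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Cross-entropy H(teacher, student): the student is trained to minimize this,
// which is smallest when its distribution matches the teacher's.
function distillLoss(teacherLogits: number[], studentLogits: number[], t: number): number {
  const p = softmax(teacherLogits, t);
  const q = softmax(studentLogits, t);
  return -p.reduce((acc, pi, i) => acc + pi * Math.log(q[i]), 0);
}
```

In practice this soft-target loss is combined with the ordinary hard-label loss, but the matching objective above is the core of the technique.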
Overseas TECH Engadget Watch the Google I/O 2024 keynote live https://www.engadget.com/how-to-watch-googles-io-2024-keynote-160010787.html?src=rss Editor's note: The Google I/O keynote is live. See below for the stream, and toggle over to the Engadget Google I/O liveblog for real-time coverage. It's that time of year again: Google's annual I/O keynote is upon us, and this event is likely to be packed with updates and announcements. We'll be covering all of the news as it happens, and you can stream the full event below. The keynote starts at 1PM ET on May 14, and streams are available via YouTube and the company's hub page. In terms of what to expect, the rumor mill has been working overtime. There are multiple reports that the event will largely focus on the Android mobile operating system, which seems like a given since I/O is primarily an event for developers and the Android 15 beta is already out in the wild. So let's talk about the Android 15 beta and what to expect from the full release. The beta includes an updated Privacy Sandbox feature, partial screen sharing to record a certain app or window instead of the whole screen, and system-level app archiving to free up space. There's also improved satellite connectivity, additional in-app camera controls, and a new power-efficiency mode. Despite the beta already existing, it's highly probable that Google will drop some surprise Android announcements. The company has confirmed that satellite messaging is coming to Android, so maybe that'll be part of this event. Rumors also suggest that Android 15 will boast a redesigned status bar and an easier way to monitor battery health. Android won't be the only thing Google discusses during the event. There's a little acronym called AI you may have heard about, and the company has gone all in. It's a good bet that Google will spend a fair amount of time announcing updates for its Gemini AI, which could eventually replace Assistant entirely. Back in December, it was reported that Google was working on an AI assistant called Pixie as an exclusive feature for Pixel devices; the branding is certainly on point. We could hear more about that, as it may debut in the Pixel 9 later this year. Google's most popular products could also get AI-focused redesigns, including Search, Chrome, G Suite, and Maps. We might get an update as to what the company plans on doing about third-party cookies, and maybe it'll throw some AI at that problem too. What not to expect: don't get your hopes up for a Pixel 9 or refreshed Pixel Fold at this event, as I/O is more for software than hardware; we'll likely get details on those releases in the fall. However, rules were made to be broken: last year we got a Pixel Fold announcement at I/O, so maybe the line between hardware and software is blurring. We'll find out soon. 2024-05-14 17:13:19
Overseas TECH Engadget Ask Google Photos to help make sense of your gallery https://www.engadget.com/ask-google-photos-to-get-help-making-sense-of-your-gallery-170734062.html?src=rss Google is inserting more of its Gemini AI into many of its products, and the next target in its sights is Photos. At its I/O developer conference today, the company's CEO Sundar Pichai announced a feature called Ask Photos, which is designed to help you find specific images in your gallery by talking to Gemini. Ask Photos will show up as a new tab at the bottom of your Google Photos app. It'll start rolling out to Google One subscribers first, starting in US English, over the upcoming months. When you tap over to that panel, you'll see the Gemini star icon and a welcome message above a bar that prompts you to "search or ask about Photos." According to Google, you can ask things like "show me the best photo from each national park I've visited," which not only draws upon GPS information but also requires the AI to exercise some judgment in determining what is "best." The company's VP for Photos, Shimrit Ben Yair, told Engadget that you'll be able to provide feedback to the AI and let it know which pictures you preferred instead. "Learning is key," Ben Yair said. You can also ask Photos to find your top photos from a recent vacation and generate a caption to describe them, so you can more quickly share them to social media. Again, if you didn't like what Gemini suggested, you can make tweaks later on. For now, you'll have to type your query to Ask Photos; voice input isn't yet supported. And as the feature rolls out, those who opt in to use it will see their existing search feature get "upgraded" to Ask. However, Google said that "key search functionality, like quick access to your face groups or the map view, won't be lost." The company explained that there are three parts to the Ask Photos process: "understanding your question," "crafting a response," and "ensuring safety and remembering corrections." Though safety is only mentioned in the final stage, it should be baked in the entire time. The company acknowledged that "the information in your photos can be deeply personal, and we take the responsibility of protecting it very seriously." To that end, queries are not stored anywhere, though they are processed in the cloud, not on device. People will not review conversations or personal data in Ask Photos, except "in rare cases to address abuse or harm." Google also said it doesn't train "any generative AI product outside of Google Photos on this personal data, including other Gemini models and products." Your media continues to be protected by the same security and privacy measures that cover your use of Google Photos. That's a good thing, since one of the potentially more helpful ways to use Ask Photos might be to get information like passport or license expiry dates from pictures you might have snapped years ago; it uses Gemini's multimodal capabilities to read text in images to find answers too. Of course, AI isn't new in Google Photos. You've always been able to search the app for things like "credit card" or a specific friend using the company's facial and object recognition algorithms. But Gemini AI brings generative processing, so Photos can do a lot more than just deliver pictures with certain people or items in them. Other applications include getting Photos to tell you what themes you might have used for the last few birthday parties you threw for your partner or child; Gemini AI is at work here to study your pictures and figure out what themes you already adopted. There are a lot of promising use cases for Ask Photos, which is an experimental feature at the moment and is "starting to roll out soon." Like other Photos tools, it might begin as a premium feature for Google One subscribers and Pixel owners before trickling down to all who use the free app. There's no official word yet on when or whether that might happen, though. 2024-05-14 17:08:10
YouTube Channels google Empowering Every Teacher to Reach Every Student with LearnLM https://www.youtube.com/watch?v=NTECA6ct55w Google has been piloting new features in Google Classroom, powered by LearnLM, to help lighten the workload for teachers. Applying generative AI, we're exploring how to help simplify the lesson planning process, empowering teachers to tailor lessons and content to the individual needs of their students so they can amplify their learning impact and meet students where they are. 2024-05-14 17:58:29
YouTube Channels google Developing for Indic languages | Gemma and Navarasa https://www.youtube.com/watch?v=b4Gs-taU0Tk While many early large language models were predominantly trained on English-language data, the field is rapidly evolving. Newer models are increasingly being trained on multilingual datasets, and there's a growing focus on developing models specifically for the world's languages. However, challenges remain in ensuring equitable representation and performance across diverse languages, particularly those with less available data and computational resources. Gemma, Google's family of open models, is designed to address these challenges by enabling the development of projects in non-Germanic languages; its tokenizer and large token vocabulary make it particularly well suited for handling diverse languages. Watch how developers in India used Gemma to create Navarasa, a fine-tuned Gemma model for Indic languages. 2024-05-14 17:57:21
YouTube Channels google Search in the Gemini era | Google I/O 2024 https://www.youtube.com/watch?v=s4InWsd-J6g Take a look at some of the new ways AI in Search can do the hard work so you don't have to. Just ask. 2024-05-14 17:54:08
YouTube Channels google Filmmaking with Donald Glover and his creative studio, Gilga | Veo https://www.youtube.com/watch?v=dKAVFLB75xs Ever wondered how artificial intelligence might change how we approach storytelling in the world of filmmaking? We invited filmmaker Donald Glover and his creative studio Gilga to experiment with Veo, our latest video generation model. Get a sneak peek inside the upcoming collaboration, where he uses AI to create a short film, and learn more about his experience and how these collaborations could shape the future of technology in storytelling. 2024-05-14 17:36:25
YouTube Channels google Project Astra: Our vision for the future of AI assistants https://www.youtube.com/watch?v=nXVvvRhiGjI Introducing Project Astra. We created a demo in which a tester interacts with a prototype of AI agents supported by our multimodal foundation model, Gemini. There are two continuous takes: one with the prototype running on a Google Pixel phone and another on a prototype glasses device. The agent takes in a constant stream of audio and video input, can reason about its environment in real time, and can interact with the tester in a conversation about what it is seeing. 2024-05-14 17:26:43
YouTube Channels google How developers are using Gemini 1.5 Pro’s 1 million token context window https://www.youtube.com/watch?v=cogrixfRvWw When Gemini 1.5 Pro was released, it immediately caught the attention of developers all over the world. There were so many incredible tests being done and stories being told that we reached out to a few people and asked them to share theirs with us. Watch to see how developers are using Gemini 1.5 Pro's 1 million token context window. 2024-05-14 17:09:22
YouTube Channels google Google I/O 2024: Opening Film https://www.youtube.com/watch?v=mQ8k4T5wVgE What a year it's been! To kick off I/O, we took a look at the innovations and breakthroughs at Google from the past year, all in pursuit of our goal to make AI helpful for everyone. 2024-05-14 17:00:57
