Posted: 2024-02-24 22:03:31 — RSS Feed Digest through 2024-02-24 22:00 (4 items)
Category | Site | Article Title | Link URL | Frequent Words / Summary | Date Added |
---|---|---|---|---|---|
Program | New posts tagged JavaScript - Qiita | Introducing "Bye-Bye One-Sided Follows," a bookmarklet for cleaning up one-way follows on X (formerly Twitter) | https://qiita.com/OtakuFriendlyGalEncoder/items/58db08cb5a7571f40dab | twitter,forxtwitterx,check | 2024-02-24 21:05:53 |
Program | New posts tagged AWS - Qiita | [CodePipeline / CodeDeploy] Auto-deploy to EC2 just by pushing to a remote branch | https://qiita.com/shogo30/items/4b9a06c05adee0a24014 | codepipeline,codedeploy,github | 2024-02-24 21:25:22 |
Program | New posts tagged AWS - Qiita | On S3 general configuration | https://qiita.com/nicnic69/items/4f1748f588a0fc2b2236 | bcpdr,patterns,regions | 2024-02-24 21:18:08 |
Overseas TECH | Engadget | Google explains why Gemini's image generation feature overcorrected for diversity | https://www.engadget.com/google-explains-why-geminis-image-generation-feature-overcorrected-for-diversity-121532787.html?src=rss | After promising to fix Gemini's image generation feature and then pausing it altogether, Google has published a blog post explaining why its technology overcorrected for diversity. Prabhakar Raghavan, the company's Senior Vice President for Knowledge & Information, explained that Google's efforts to ensure the chatbot would generate images showing a wide range of people "failed to account for cases that should clearly not show a range." Further, its AI model grew "way more cautious" over time and refused to answer prompts that weren't inherently offensive. "These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong," Raghavan wrote. Google made sure that Gemini's image generation couldn't create violent or sexually explicit images of real persons, and that the photos it whips up would feature people of various ethnicities and with different characteristics. But if a user asks it to create images of people of a certain ethnicity or sex, it should be able to do so. As users recently found out, Gemini would refuse to produce results for prompts that specifically requested white people. The prompt "Generate a glamour shot of a [ethnicity or nationality] couple," for instance, worked for "Chinese," "Jewish," and "South African" requests but not for ones requesting an image of white people. Gemini also has issues producing historically accurate images: when users requested images of German soldiers during the Second World War, Gemini generated images of Black men and Asian women wearing Nazi uniforms. When we tested it out, we asked the chatbot to generate images of "America's founding fathers" and "Popes throughout the ages," and it showed us photos depicting people of color in those roles. Upon being asked to make its images of the Pope historically accurate, it refused to generate any result. Raghavan said that Google didn't intend for Gemini to refuse to create images of any particular group, or to generate photos that were historically inaccurate. He also reiterated Google's promise that it will work on improving Gemini's image generation. That entails "extensive testing," though, so it may take some time before the company switches the feature back on. At the moment, if a user tries to get Gemini to create an image, the chatbot responds: "We are working to improve Gemini's ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does." This article originally appeared on Engadget. | 2024-02-24 12:15:32 |