Posted: 2023-06-05 02:06:57 RSS feed 2023-06-05 02:00 roundup (8 items)

Category Site Article title / trending word Link URL Frequent words / summary / search volume Date added
python New posts tagged "Python" - Qiita Me, Values, and References https://qiita.com/Amnis333/items/fe48049d9974026ed382 differences 2023-06-05 01:22:36
python New posts tagged "Python" - Qiita A roundup of articles for assigning a geographic coordinate system to image data in Python https://qiita.com/wakama1994/items/4905f892cec3806bf15f geographic coordinate system 2023-06-05 01:03:47
Docker New posts tagged "docker" - Qiita [Docker beginners] How to write a Dockerfile https://qiita.com/gon0821/items/f9e3bcbb6cb01d4ef7fa docker 2023-06-05 01:41:10
Overseas TECH DEV Community Beyond OpenAI: Harnessing Open Source Models to Create Your Personalized AI Companion https://dev.to/akshayballal/beyond-openai-harnessing-open-source-models-to-create-your-personalized-ai-companion-1npb

Introduction
Imagine having a personal assistant that can hold interactive conversations, provide insightful information, and help you navigate your vast knowledge base: a companion that understands your queries and responds with relevant, tailored answers. This is precisely what I set out to achieve with my latest project. As someone with a large collection of deep learning notes meticulously organized in my Obsidian Vault, I wanted a personal assistant that could serve as an intelligent interface to that knowledge base, a seamless experience where I could converse in natural language and effortlessly retrieve valuable insights from my repository of information.

In my previous blog I explored building your own YouTube GPT using LangChain and OpenAI, a system that let us engage with video content in an entirely new way. OpenAI, while undeniably brilliant, is not as open as its name suggests: it comes at a cost, a paywall that separates the curious from the possibilities that lie within. This time we are taking a different path, one that leads to the world of open source models, where freedom and accessibility reign. In this blog we will dive into open source AI models that are freely available for use, breaking away from the limitations of proprietary systems, and harness the capabilities of the MPT-7B-Instruct model developed by MosaicML and served through GPT4All, alongside LangChain.

MPT-7B is an open source language model created by the talented folks at MosaicML. It understands and generates text that feels like human conversation, and its instruction fine-tuning makes it well suited to building a personalized AI companion. LangChain is a framework for building AI applications tailored to a specific domain or dataset: it lets you connect language models to your own data, which makes responses more accurate and context aware and the overall conversational experience more personal and effective.

Now let us get coding. As always, a link to the Git repo is available at the bottom of this post. Start a new Python project, initiate a new virtual environment, and create a Python file named private_gpt.py.

Install Dependencies
Install the required packages by running:

pip install -qU langchain tiktoken gpt4all streamlit-chat einops transformers accelerate chromadb

Import Dependencies
To kick-start development of our personalized AI assistant, we import the libraries and modules that provide the foundation for the assistant's functionality and let us interact with the various components seamlessly:

```python
from langchain import ConversationChain, PromptTemplate
import torch
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.document_loaders import TextLoader, DirectoryLoader
from langchain.llms import GPT4All
import os
from langchain.memory import VectorStoreRetrieverMemory
import streamlit as st
from streamlit_chat import message
```

Create Data Loader
With the dependencies in place, we create a data loader that loads and processes the text documents in our knowledge base, which will serve as the assistant's source of information:

```python
loader = DirectoryLoader(
    "D:/OneDrive/Documents/Obsidian/Projects/myVault",  # replace with the path to your own documents
    glob="**/*.md",
    recursive=True,
    show_progress=True,
    use_multithreading=True,
    loader_cls=TextLoader,
)
docs = loader.load()
len(docs)
```

We initialize the loader with the DirectoryLoader class from langchain.document_loaders, passing the directory where the documents live (here, my Obsidian vault). The glob parameter selects which files to load; the Markdown pattern picks up all .md files in the directory and its subdirectories, and you can adjust it to your own file types or naming conventions. recursive=True makes the loader explore nested subdirectories, show_progress=True displays a progress bar while loading (useful for larger knowledge bases), and use_multithreading=True speeds things up by loading several documents concurrently. Finally, loader_cls=TextLoader tells the loader to treat each document as a text file. Calling loader.load() returns a list of document objects, and printing len(docs) confirms how many documents were loaded and that the loader is working as expected.
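To sanity-check what the loader produced before indexing anything, it can help to peek at one document. This snippet is my own illustrative addition, not part of the original post; it assumes the standard LangChain Document interface with page_content and metadata attributes.

```python
# Illustrative check (not in the original post): inspect a loaded Document.
first = docs[0]
print(first.metadata)             # e.g. {'source': '.../some-note.md'}
print(first.page_content[:200])   # first 200 characters of the note
```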
Instantiate Embeddings and LLM
To give our assistant language understanding and generation capabilities, we instantiate an embedding model and a language model (LLM). These components let the assistant grasp the context of a conversation and generate coherent, relevant responses.

```python
embeddings = HuggingFaceEmbeddings(model_name="all-mpnet-base-v2")

llm = GPT4All(
    model="ggml-mpt-7b-instruct.bin",
    top_p=0.5,          # example values; tune them for your own setup
    top_k=0,
    temp=0.7,
    repeat_penalty=1.1,
    n_threads=8,
    n_batch=8,
    n_ctx=2048,
)
```

We create an instance of the HuggingFaceEmbeddings class from langchain.embeddings, which lets us use pre-trained embeddings from the Hugging Face ecosystem, powerful tools for capturing the meaning and context of text in our language model. The model_name parameter selects the all-mpnet-base-v2 model; you can explore other available models and choose one that best suits your requirements.

Next we instantiate the language model with the GPT4All class from langchain.llms. The model parameter is the path to the pre-trained weights, here the MPT-7B-Instruct file ggml-mpt-7b-instruct.bin; make sure you point it at your own copy of the model. Several parameters fine-tune the model's behaviour:

- top_p: the cumulative probability threshold used when sampling during text generation. A lower value gives more focused, deterministic responses.
- top_k: the number of highest-probability tokens considered during generation. Setting it to 0 considers all tokens, allowing more diverse responses.
- temp: the temperature of the model's softmax distribution during sampling. Higher values (for example 1.0) produce more randomness and creativity in the generated text.
- repeat_penalty: discourages the model from repeating the same phrases or patterns excessively; increasing it further reduces repetitive responses.
- n_threads and n_batch: the number of threads and the batch size used for parallel processing during generation; adjusting them can optimize performance on your hardware.
- n_ctx: the size of the context window the model works with.

Feel free to play around with these values to get results you like, or simply stick with sensible defaults.
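Before wiring everything together, it is worth confirming that both models load and respond. The short smoke test below is my own addition rather than part of the original walkthrough; it assumes the objects created above and the LangChain conventions of the time (an LLM object is callable with a prompt string, and the embeddings object exposes embed_query).

```python
# Illustrative smoke test (not in the original post).
vector = embeddings.embed_query("What is backpropagation?")
print(len(vector))  # all-mpnet-base-v2 returns 768-dimensional vectors

with torch.inference_mode():
    print(llm("Explain gradient descent in one sentence."))
```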
Create Vector Database and Memory

```python
index = VectorstoreIndexCreator(embedding=embeddings).from_loaders([loader])
retriever = index.vectorstore.as_retriever(search_kwargs=dict(k=4))  # k = number of documents returned per query
memory = VectorStoreRetrieverMemory(retriever=retriever)
```

To retrieve information efficiently, we create a vector database with VectorstoreIndexCreator from langchain.indexes, which uses our embeddings to index the loaded documents effectively. The from_loaders method builds the vector store directly from the documents produced by our loader. Next we create a retriever with the vector store's as_retriever method, which lets us search for documents relevant to a query and supply the information needed for context-aware responses. Finally we wrap the retriever in VectorStoreRetrieverMemory, a memory system that bridges the vector database and the language model and improves the assistant's ability to recall relevant information during a conversation, keeping answers accurate and contextually appropriate.
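If you want to see what this memory will actually surface to the model, you can query the retriever and the memory directly. The snippet below is an illustrative aside of mine, assuming the retriever and memory interfaces LangChain exposed at the time (get_relevant_documents, save_context, load_memory_variables); the queries and answers are made up.

```python
# Illustrative only: inspect what the retriever and memory return.
hits = retriever.get_relevant_documents("optimizers for deep learning")
for doc in hits:
    print(doc.metadata.get("source"), "->", doc.page_content[:80])

# The memory writes past exchanges into the same vector store and surfaces
# the most relevant ones as the {history} variable of the prompt.
memory.save_context(
    {"input": "What is Adam?"},
    {"output": "Adam is an adaptive-learning-rate optimizer."},
)
print(memory.load_memory_variables({"input": "Tell me about optimizers"})["history"])
```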
Create Prompt Template
The MPT-7B-Instruct model was trained on data formatted in the dolly-15k style, so we structure our prompts the same way:

```python
DEFAULT_TEMPLATE = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Do not make up answers and provide only information that you have.

Relevant pieces of previous conversation (you do not need to use these pieces of information if not relevant):
{history}

{input}

### Response:
"""
```

The template includes an instruction section that encourages the AI to provide detailed, context-specific information and emphasizes that it should admit when it does not know an answer. The relevant pieces of previous conversation and the new input are incorporated into the prompt, giving the AI the context of the conversation, while the response section is left empty so the AI can generate its answer from the given context and instructions.

Inference

```python
PROMPT = PromptTemplate(input_variables=["history", "input"], template=DEFAULT_TEMPLATE)

conversation_with_summary = ConversationChain(
    llm=llm,
    prompt=PROMPT,
    # We set a very low max token limit for the purposes of testing memory
    memory=memory,
    verbose=True,
)

with torch.inference_mode():
    conversation_with_summary.predict(input="Make me a study plan to study deep learning")
```

With the prompt template and the other components in place, we can move on to inference, where the conversation chain generates responses from the given input and context. The PROMPT object is built with the PromptTemplate class from the input variables history and input and the DEFAULT_TEMPLATE we defined above; it is the structure that guides the AI's responses. We then create a ConversationChain named conversation_with_summary that combines our language model (llm), the prompt template (PROMPT), and the memory system (memory), with verbose=True enabled for detailed output during the conversation. Inside the torch.inference_mode() block we call the chain's predict method with an initial input to start the conversation. The AI uses the prompt template, the conversation history, and the memory system to produce a relevant, informative response, and the chain keeps the generated responses coherent and appropriate to the flow of the conversation.

Create UI

```python
st.set_page_config(page_title="PrivateGPT", page_icon=":robot:")
st.header("PrivateGPT")

if "generated" not in st.session_state:
    st.session_state["generated"] = []
if "past" not in st.session_state:
    st.session_state["past"] = []

def get_text():
    input_text = st.text_input("You: ", "Hello, how are you?", key="input")
    return input_text

user_input = get_text()

def writeText(output):
    st.session_state["generated"].append(output)

if user_input:
    with torch.inference_mode():
        st.session_state["past"].append(user_input)
        st.session_state["generated"].append(
            conversation_with_summary.predict(input=user_input)
        )

if st.session_state["generated"]:
    for i in range(len(st.session_state["generated"])):
        message(st.session_state["generated"][i], key=str(i))
        message(st.session_state["past"][i], is_user=True, key=str(i) + "_user")
```

Run the App

streamlit run private_gpt.py

Git Repository | Want to connect? My Website | My Twitter | My LinkedIn 2023-06-04 16:28:29
Overseas TECH DEV Community Headless CMS: The Headache-less Solution for Content Management System https://dev.to/aradwan20/headless-cms-the-headache-less-solution-for-content-management-system-2gmc

In the rapidly evolving digital landscape, managing content effectively and efficiently has become a paramount concern. Traditional Content Management Systems (CMS) have served us well, but as our needs diversify and grow, we require more flexible solutions. Enter the world of headless CMS, an approach to content management that promises to alleviate the headaches associated with traditional CMS. In this guide we will dive into the intricacies of headless CMS, contrasting it with traditional CMS and exploring its advantages, including flexibility, speed and performance, streamlined content management, and improved security.

Contents:
- Introduction
- Traditional CMS vs Headless CMS
- The Flexibility of a Headless CMS: Multi-platform Integration; Separation of Concerns (Content vs Presentation)
- Speed and Performance Advantages: Optimized Loading Times; Better Scalability
- Streamlining Content Management: Centralized Content Repository; Simplified Content Updates and Maintenance; Headache-less Collaboration
- Improved Security and Reliability: Reduced Vulnerability to Attacks; Enhanced Stability and Uptime
- Headless CMS and Future-proofing Your Content: Adapting to New Technologies and Trends; API-driven Content Distribution
- Explore CMS systems and take Contentful as an example
- Conclusion
- Common FAQs

Introduction
Welcome to the world of content management systems. In this article we'll explore the differences between traditional CMS and headless CMS. Whether you're new to the concept or just looking to learn more, you'll leave with a better understanding of what each system offers. So let's dive right in.

Traditional CMS vs Headless CMS
You've probably heard about traditional CMS platforms like Joomla and Drupal. These systems have been around for a while and are great for managing content on websites, but as technology has evolved we find ourselves needing more flexibility to adapt to the ever-changing digital landscape. That's where headless CMS comes in. The key differences:

- Traditional CMS tightly couples content and presentation. Your content and the way it looks on the website are closely connected, which can limit your ability to reuse the same content across multiple platforms or customize the presentation for different devices.
- Headless CMS separates content and presentation. Your content is stored independently of how it is displayed, so you can use the same content on multiple platforms (websites, mobile apps, even voice assistants) and customize the presentation as needed.

Now let's explore the advantages headless CMS offers over traditional systems. Are you excited? I certainly am.

The Flexibility of a Headless CMS
One of the best things about a headless CMS is flexibility. By separating content from presentation, it opens up a world of possibilities for multi-platform integration and collaboration.

Multi-platform Integration
Imagine using the same content across your website, mobile app, and even your smart speaker, without copying and pasting or juggling multiple systems. With a headless CMS this becomes a reality: your content lives in a single repository and is accessed and displayed on various platforms through APIs (Application Programming Interfaces). You create content once and use it everywhere, saving time and ensuring consistency across your digital presence.
Separation of Concerns: Content vs Presentation
A headless CMS also makes life easier for both content creators and developers. By separating content from presentation, it allows the two groups to work independently and streamlines the content creation process. Content creators can focus on what they do best, crafting engaging and informative content, without worrying about how it will look on different platforms; they simply put their content into the headless CMS and it's good to go. Developers, meanwhile, can concentrate on building the best user experience for each platform, designing and implementing the presentation layer without being tied down by the content management system. This separation of concerns results in a more efficient workflow and a better overall product.

Speed and Performance Advantages
We all prefer fast, smooth experiences when browsing the web or using apps, and a headless CMS can help there too. By separating content from presentation, it offers significant speed and performance benefits.

Optimized Loading Times
Every second of loading time counts; users get frustrated and leave if a site or app takes too long. With a headless CMS you can optimize content delivery for each platform: when content is requested through the API, it can be delivered in the format best suited to the requester, for example HTML for a website and JSON for a mobile app. Each platform gets the content it needs in the most efficient way, which means faster load times and a better user experience.

Better Scalability
As your content needs grow, you want a system that grows with you. Because content is separate from presentation, you can add new platforms or handle increasing amounts of content and traffic without compromising performance. Headless CMS platforms are often built on modern cloud-based infrastructure designed to handle growth, so as your needs evolve the system can adapt and support that growth without a hitch.

Streamlining Content Management
Beyond speed and performance, a headless CMS also streamlines content management itself: a centralized content repository, simplified updates and maintenance, and headache-less collaboration.

Centralized Content Repository
Picture all your content organized neatly in one place, ready to be used on any platform you want. That is exactly what a headless CMS gives you. With a centralized content repository there is no more duplicating content across different systems or struggling to find that one piece of content you need; everything is stored in a single location, making it easier to keep everything organized and up to date.
Simplified Content Updates and Maintenance
Updating content can be a pain, especially across multiple platforms. With a headless CMS you make changes in one place and they automatically propagate to all your platforms, ensuring consistency and saving time. Need to fix a typo or update some information? Edit the content in your headless CMS and you're done; there is no need to update each platform individually or deal with inconsistent content.

Headache-less Collaboration
Collaboration is crucial when creating and managing content. A headless CMS makes it easier for teams to work together by letting content creators and developers work simultaneously without stepping on each other's toes. Because content and presentation are separated, content creators can focus on crafting engaging content while developers work on the presentation layer for each platform, a streamlined workflow that leads to a more efficient content creation process and better outcomes.

Improved Security and Reliability
Nobody wants to deal with security breaches or downtime. The good news is that a headless CMS can improve both the security and the reliability of your digital presence.

Reduced Vulnerability to Attacks
Because a headless CMS does not handle the presentation layer, there are fewer points of entry for potential attackers, so your content and data are more secure than with a traditional CMS. In addition, headless CMS platforms often ship with built-in security features such as encryption, authentication, and access controls, ensuring that only authorized users can access and modify your content.

Enhanced Stability and Uptime
Nobody likes a website or app that keeps crashing or going offline. Headless CMS platforms are typically built on robust, resilient cloud infrastructure, so even if one server goes down your content can still be served from other servers in the network. And because content is separate from presentation, developers can update the presentation layer without touching the content itself, minimizing the risk of unintended downtime or issues caused by updates.

Headless CMS and Future-proofing Your Content
One of the most significant benefits of a headless CMS is that it helps future-proof your content. With technology constantly evolving, it's essential to stay adaptable and ready for whatever comes next.

Adapting to New Technologies and Trends
A headless CMS is designed with flexibility and adaptability in mind. As new platforms and technologies emerge, it's easy to update your content strategy and integrate them into your digital presence. If a new social media platform takes the world by storm, you can quickly adapt your content to it without rebuilding your entire content management system, and the same goes for other emerging technologies such as virtual reality, voice assistants, or smart appliances. By decoupling content from presentation, a headless CMS ensures your content can be repurposed and delivered to any platform or device, making it much simpler to keep up with the ever-changing digital landscape.

API-driven Content Distribution
A key feature of headless CMS is its reliance on APIs for content distribution. APIs act as a bridge between your content and the various platforms it needs to be displayed on, letting you tap into new channels and technologies quickly and easily. This API-driven approach means you can plug your content into any platform that supports APIs without worrying about complex integrations or compatibility issues, and you can experiment with new distribution channels as they emerge, keeping your content strategy current and future-proof.
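To make the API-driven idea concrete, here is a small hypothetical sketch (my addition, not from the original article): the same JSON entry fetched from a CMS delivery endpoint is rendered one way for a web page and another way for a voice assistant. The endpoint URL and field names are made up.

```python
# Hypothetical illustration: one piece of content, two presentations.
import requests

# Made-up delivery endpoint returning a JSON entry such as:
# {"title": "Summer Sale", "body": "Everything is 20% off this week."}
entry = requests.get("https://cms.example.com/api/entries/summer-sale").json()

def render_for_web(item: dict) -> str:
    # A web front end wraps the content in HTML.
    return f"<article><h1>{item['title']}</h1><p>{item['body']}</p></article>"

def render_for_voice(item: dict) -> str:
    # A voice assistant only needs plain text to read aloud.
    return f"{item['title']}. {item['body']}"

print(render_for_web(entry))
print(render_for_voice(entry))
```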
Explore CMS systems and take Contentful as an example
Contentful is a cloud-based headless CMS that provides a powerful API for delivering your content to various platforms. It is known for its flexibility, scalability, and ease of use. With Contentful you manage your content in a centralized location and deliver it to websites, mobile apps, voice assistants, and more, and it offers SDKs for popular programming languages so it is easy to integrate with your preferred technology stack. Some key features of Contentful include:

- Rich text editing
- Media management
- Content versioning
- Webhooks for event-driven workflows

We'll walk through connecting to the Contentful API, retrieving content, and creating content, with sample code. First, sign up for a Contentful account and create a new space. Once you have your space, navigate to the API keys section and create a new API key. Take note of your Space ID, Content Delivery API access token, and Content Management API access token, as you'll need these to interact with the API.

To work with the Contentful API you can use the official SDK for your preferred programming language; in this example we'll use JavaScript and the Contentful JavaScript SDK. Install it with npm:

npm install contentful

To retrieve content, initialize the Contentful client with your Space ID and Content Delivery API access token:

```javascript
const contentful = require('contentful');

const client = contentful.createClient({
  space: '<your_space_id>',
  accessToken: '<your_content_delivery_api_access_token>',
});
```

To fetch a specific entry, use the getEntry method with the entry ID:

```javascript
client.getEntry('<entry_id>')
  .then((entry) => console.log(entry))
  .catch((error) => console.error(error));
```

To fetch all entries of a specific content type, use the getEntries method with the content type ID:

```javascript
client.getEntries({ content_type: '<content_type_id>' })
  .then((response) => console.log(response.items))
  .catch((error) => console.error(error));
```
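As an aside that is not in the original article: the same reads can be done from Python with Contentful's official Python delivery SDK (the contentful package on PyPI). The sketch below assumes that SDK's client interface; the space ID, token, entry ID, and content type ID are placeholders, so check the Contentful Python docs for the exact details.

```python
# Rough Python equivalent of the JavaScript delivery examples (illustrative).
import contentful  # pip install contentful

client = contentful.Client("<your_space_id>", "<your_content_delivery_api_access_token>")

# Fetch a single entry by ID and list its fields.
entry = client.entry("<entry_id>")
print(entry.fields())

# Fetch all entries of a given content type.
entries = client.entries({"content_type": "<content_type_id>"})
for item in entries:
    print(item)  # each item is an Entry object
```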
To create content you'll need the Content Management API. First install the Contentful Management SDK with npm:

npm install contentful-management

Initialize the management client with your Content Management API access token:

```javascript
const contentfulManagement = require('contentful-management');

const managementClient = contentfulManagement.createClient({
  accessToken: '<your_content_management_api_access_token>',
});
```

To create a new entry, use the createEntry method with the content type ID and the fields for the new entry:

```javascript
managementClient.getSpace('<your_space_id>')
  .then((space) =>
    space.createEntry('<content_type_id>', {
      fields: {
        title: { 'en-US': 'Sample Title' },
        body: { 'en-US': 'Sample Body Text' },
      },
    })
  )
  .then((entry) => console.log(entry))
  .catch((error) => console.error(error));
```

With these code examples you can interact with the Contentful API to create and retrieve content in your headless CMS. This is just the tip of the iceberg, as Contentful offers a wide range of features and capabilities; for more in-depth information and other API methods, check out the official Contentful documentation.

Conclusion
Headless CMS represents a headache-less future for content management. By embracing this approach you unlock flexibility, performance improvements, and streamlined content management processes.

Traditional CMS:
- Combines content management and presentation (how your content looks)
- Offers built-in templates and themes
- Focuses on website creation and management
- Less flexible in terms of multi-platform integration

Headless CMS:
- Separates content management from presentation
- No built-in templates or themes, but you can create your own
- Works well for websites, apps, voice assistants, and more
- Offers excellent flexibility for multi-platform integration

Throughout this article we've explored the benefits of using a headless CMS, such as multi-platform integration, separation of concerns, and improved security, and discussed how it can help you adapt to new technologies and trends so your content stays future-proof. Popular headless CMS options like Contentful, Strapi, and Sanity each offer unique advantages, so consider your project's specific needs before choosing one; with a little research and experimentation you'll find the right fit to elevate your content management experience. The future of content management is more powerful, efficient, and accessible than ever, so dive in and discover the possibilities that await in the headache-less realm of headless CMS.

Common FAQs
Q: Can I switch from a traditional CMS to a headless CMS later?
A: Yes, it's possible to migrate your content from a traditional CMS to a headless CMS, but the process might require some technical work and adjustments to your content structure.

Q: Is a headless CMS more difficult to use than a traditional CMS?
A: A headless CMS can be more technically challenging, as it requires working with APIs and creating custom templates. However, if you're comfortable with these concepts or willing to learn, a headless CMS can offer greater flexibility and benefits. 2023-06-04 16:27:10
Apple AppleInsider - Frontpage News AI more important to investors than a headset, claims Ming-Chi Kuo https://appleinsider.com/articles/23/06/04/ai-more-important-to-investors-than-a-headset-claims-ming-chi-kuo?utm_medium=rss Investors are less interested in Apple's headset than they are in AI, analyst Ming-Chi Kuo claims ahead of Monday's packed WWDC keynote. (Siri is Apple's main public-facing machine learning feature.) The keynote is expected to focus chiefly on the Apple VR and AR headset, but that is only one part of the sprawling empire investors care about; according to TF Securities analyst Ming-Chi Kuo, they are more keen to hear about Apple's AI-related efforts. Read more 2023-06-04 16:22:24
News BBC News - Home Bournemouth victim was fabulous young man - family https://www.bbc.co.uk/news/uk-england-65804301?at_medium=RSS&at_campaign=KARANGA sunnah 2023-06-04 16:40:50
News BBC News - Home The Ashes: England spinner Jack Leach ruled out of Australia series https://www.bbc.co.uk/sport/cricket/65805317?at_medium=RSS&at_campaign=KARANGA fracture 2023-06-04 16:41:21
