Posted: 2023-08-28 21:12:53  RSS feed roundup for 2023-08-28 21:00 (15 items)

Category / Site / Article title or trending keyword / Link URL / Frequent words, summary, or search volume / Date registered
IT ITmedia all-article feed [ITmedia News] "Shin Godzilla," "Always: Sunset on Third Street" and more - Shirogumi, the studio that has driven VFX in Japanese film, marks its 50th anniversary https://www.itmedia.co.jp/news/articles/2308/28/news146.html itmedia 2023-08-28 20:05:00
js New posts tagged JavaScript - Qiita Vue on Plunker, part 61 https://qiita.com/ohisama@github/items/47b1579044862c216b2f 2023-08-28 20:30:23
Tech blog Developers.IO Creating a Private Network with Cloudflare Zero Trust - CLI edition https://dev.classmethod.jp/articles/cloudflare-zero-trust-private-network-cli/ cloudflared 2023-08-28 11:21:00
Overseas TECH DEV Community C# Multithreading Interview Questions and Answers https://dev.to/bytehide/c-multithreading-interview-questions-and-answers-4opj When preparing for a software development interview, it's crucial to brush up on the essential concepts of the field, and C# multithreading is a fundamental part of building efficient, responsive applications. This article collects C# threading interview questions ranging from basic concepts to advanced topics, covering synchronization techniques, thread-management strategies, and performance optimizations.
In the context of C# multithreading, what are the main differences between the ThreadPool and creating your own dedicated Thread instances? Answer: Resource management - the ThreadPool manages a pool of worker threads that are reused across tasks, reducing the overhead of creating and destroying threads, whereas a dedicated Thread is created per task, which is resource-intensive for large numbers of tasks. Thread lifetime - ThreadPool threads are background threads whose lifetime is managed by the system; dedicated threads are foreground by default and their lifetime is managed by the developer. Scalability - the ThreadPool automatically adjusts the number of worker threads to system load and available resources; with dedicated threads you manage the thread count yourself, which is more complex and error-prone. Priority and customization - ThreadPool threads have default priority and limited customization; dedicated threads can be given a priority, name, stack size, and other properties. Synchronization - the ThreadPool queues work items onto available threads, reducing the need for manual coordination; with dedicated threads the developer is responsible for synchronization. Example using the ThreadPool: ThreadPool.QueueUserWorkItem(_ => { /* your task logic here */ }); Example using a dedicated Thread: var thread = new Thread(() => { /* your task logic here */ }); thread.Start();
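The answer above gives only one-line fragments; below is a minimal, self-contained sketch (not from the original post) contrasting the two approaches, with a hypothetical work item and thread name:
    using System;
    using System.Threading;

    class ThreadPoolVsThreadDemo
    {
        static void Main()
        {
            // ThreadPool: the runtime picks a pooled background thread for the work item.
            using var done = new ManualResetEventSlim(false);
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine($"Pool thread {Environment.CurrentManagedThreadId}, IsBackground = {Thread.CurrentThread.IsBackground}");
                done.Set();
            });
            done.Wait();

            // Dedicated thread: created, customized, started and joined explicitly by the caller.
            var worker = new Thread(() =>
                Console.WriteLine($"Dedicated thread {Environment.CurrentManagedThreadId}, IsBackground = {Thread.CurrentThread.IsBackground}"))
            { Name = "dedicated-worker", Priority = ThreadPriority.BelowNormal };
            worker.Start();
            worker.Join();
        }
    }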
How can you ensure mutual exclusion while accessing shared resources in C# multithreading without using the lock statement or Monitor methods? Answer: Use other synchronization primitives. Mutex - a named or unnamed system-wide primitive that can be used across processes; it ensures only one thread accesses a shared resource at a time: var mutex = new Mutex(); mutex.WaitOne(); try { /* access shared resource */ } finally { mutex.ReleaseMutex(); } Semaphore - limits the number of threads that may access a resource concurrently, useful when a limited number of instances may be used at once: var sem = new Semaphore(3, 3); /* initial and maximum counts are illustrative */ sem.WaitOne(); try { /* access shared resource */ } finally { sem.Release(); } ReaderWriterLockSlim - provides efficient read/write access: multiple concurrent readers are allowed when no writer holds the lock, and writers get exclusive access: var rwLock = new ReaderWriterLockSlim(); for reads, rwLock.EnterReadLock(); try { /* read */ } finally { rwLock.ExitReadLock(); } and for writes, rwLock.EnterWriteLock(); try { /* write */ } finally { rwLock.ExitWriteLock(); } SpinLock - a low-level primitive that keeps trying to acquire the lock until it succeeds; use it in low-contention scenarios where the lock is held for a very short time: var spinLock = new SpinLock(); bool lockTaken = false; spinLock.Enter(ref lockTaken); try { /* access shared resource */ } finally { if (lockTaken) spinLock.Exit(); } These primitives can ensure mutual exclusion where lock or Monitor is not preferred; be aware of the overhead and contention characteristics of each and choose based on your application's requirements.
Explain the difference between the Barrier and CountdownEvent synchronization primitives in multithreading, and give a real-world scenario where each would be useful. Answer: Barrier lets multiple threads work concurrently and blocks them until they all reach a synchronization point; once every participant arrives, they proceed together. Barriers suit problems divided into parallel stages where each stage must finish before the next begins. Example scenario: an image-processing application where each thread applies a filter and each filter depends on the previous stage's output; the Barrier makes all threads finish their stage, synchronize, and move on together: var barrier = new Barrier(participants); /* participants = number of filter threads */ Parallel.ForEach(filters, filter => { /* apply filter to the image */ barrier.SignalAndWait(); /* wait for the other filters to complete */ }); CountdownEvent blocks threads until an internal count reaches zero; each participating thread signals the event when it finishes, decrementing the count, and when it reaches zero the waiting threads are released. It is useful when one or more threads must wait for others to finish before continuing. Example scenario: a worker thread that processes data from files downloaded by several downloader threads; it waits until every downloader has signaled the CountdownEvent: var countdown = new CountdownEvent(fileCount); for (int i = 0; i < fileCount; i++) new Thread(() => { /* download file */ countdown.Signal(); }).Start(); countdown.Wait(); /* process downloaded data */ In summary, Barrier synchronizes multiple threads at specific points in their execution, while CountdownEvent blocks threads until all participating threads have signaled completion; both have their use cases depending on the design of the parallel algorithm.
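A runnable variant of the downloader scenario, assuming three hypothetical download jobs simulated with Thread.Sleep (a sketch, not the article's code):
    using System;
    using System.Threading;

    class CountdownDemo
    {
        static void Main()
        {
            string[] files = { "a.bin", "b.bin", "c.bin" };   // hypothetical downloads
            using var countdown = new CountdownEvent(files.Length);

            foreach (var file in files)
            {
                new Thread(() =>
                {
                    Thread.Sleep(100);                        // simulate the download
                    Console.WriteLine($"Downloaded {file}");
                    countdown.Signal();                       // decrement the count
                }).Start();
            }

            countdown.Wait();                                 // worker blocks until the count reaches zero
            Console.WriteLine("All files downloaded - start processing.");
        }
    }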
What would be the potential issues with using Thread.Abort to terminate a running thread? Explain the implications and suggest alternative methods for gracefully stopping a thread. Answer: Thread.Abort raises a ThreadAbortException that interrupts the thread immediately, which can lead to: unpredictable state (shared resources, data structures, or critical sections left inconsistent); resource leaks (handles, file streams, or database connections never released); deadlocks (locks or other synchronization primitives held by the aborted thread never get released); unreliable behavior if the thread catches and ignores the ThreadAbortException, in which case the abort request fails and the thread keeps running; and it is legacy - Thread.Abort is not supported on .NET Core and later .NET versions, so it should not be used in modern applications. To stop a thread gracefully, either use a shared flag that the running thread checks periodically, marked volatile (or updated via Interlocked) to ensure proper synchronization: volatile bool stopRequested = false; var thread = new Thread(() => { while (!stopRequested) { /* perform task */ } }); and to stop the thread, stopRequested = true; or, if you use Tasks, use the TPL cancellation model with CancellationTokenSource and CancellationToken: var cts = new CancellationTokenSource(); var task = Task.Run(() => { while (!cts.IsCancellationRequested) { /* perform task */ } }); and to stop the task, cts.Cancel(); Stopping threads this way ensures resources are released, locks are managed properly, shared data stays consistent, and the code remains compatible with modern .NET and the TPL.
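A short sketch of the cooperative-cancellation alternative described above (not from the original post; the 250 ms timeout and the sleep are illustrative):
    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class GracefulStopDemo
    {
        static async Task Main()
        {
            using var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(250));
            var task = Task.Run(() =>
            {
                while (true)
                {
                    cts.Token.ThrowIfCancellationRequested(); // cooperative check, exits via exception
                    Thread.Sleep(50);                         // simulate a unit of work
                }
            }, cts.Token);

            try { await task; }
            catch (OperationCanceledException) { Console.WriteLine("Task stopped cleanly."); }
        }
    }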
How do you achieve thread synchronization using ReaderWriterLockSlim in C# multithreading, and what are its advantages over the traditional ReaderWriterLock? Answer: ReaderWriterLockSlim allows multiple concurrent readers when no writer holds the lock and gives writers exclusive access. Create an instance, call EnterReadLock/ExitReadLock around reads and EnterWriteLock/ExitWriteLock around writes: var rwLock = new ReaderWriterLockSlim(); rwLock.EnterReadLock(); try { /* read shared resource */ } finally { rwLock.ExitReadLock(); } rwLock.EnterWriteLock(); try { /* write shared resource */ } finally { rwLock.ExitWriteLock(); } Advantages over ReaderWriterLock: better performance (it uses spin-waiting and other optimizations for locks that are uncontended or held briefly); flexible support for lock recursion, letting the same thread enter and exit the lock multiple times, where ReaderWriterLock has limitations; and reduced writer starvation, since write-lock requests can be preferred over read-lock requests, whereas ReaderWriterLock can starve writers under a continuous stream of readers. Note, however, that ReaderWriterLockSlim does not support cross-process synchronization; if the lock must be shared across processes, use a Mutex instead.
As we move on, remember that threading requires a solid grasp of concurrency and parallelism; the next questions cover more advanced synchronization primitives, techniques for preventing races, and efficient approaches to extending multithreaded applications.
How do Tasks in C# differ from traditional Threads? Explain the benefits and the scenarios where Tasks would be preferred over directly spawning Threads. Answer: Abstraction level - Tasks are a higher-level abstraction built on top of threads that focuses on the work being done rather than low-level thread management, while Threads give fine-grained control over execution details. Resource management - Tasks run on the .NET ThreadPool, which manages worker threads efficiently; individually created Threads carry more overhead and scale less well for large workloads. Thread lifetime - Task threads are background threads managed by the system; Threads can be foreground or background and are managed by the developer. Asynchronous programming - Tasks integrate with async/await; Threads need manual coordination with asynchronous operations. Continuations - Tasks can chain work with ContinueWith, scheduling work to run once the preceding task is done; Threads require manual synchronization for this. Cancellation - Tasks have a built-in, standardized cancellation mechanism via CancellationToken; Threads must implement custom cancellation with shared flags or other mechanisms. Exception handling - Tasks aggregate exceptions from multiple tasks and propagate them to the calling context; Threads need more complex mechanisms for exceptions thrown in child threads. Prefer Tasks for asynchronous or parallel workloads that benefit from ThreadPool resource management and scalability, when using async/await, when you need straightforward composition and coordination of work through continuations, and when built-in cancellation and standardized exception handling are needed. In summary, Tasks provide a higher-level, more flexible abstraction for parallel and asynchronous programming that simplifies code and improves performance in many scenarios, though there are cases where the fine-grained control offered by Threads is still necessary or beneficial.
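The Tasks-versus-Threads answer has no example in the original; a minimal sketch of Task composition with WhenAll (hypothetical computations):
    using System;
    using System.Threading.Tasks;

    class TaskVsThreadDemo
    {
        static async Task Main()
        {
            // Tasks: pooled threads, easy composition, results and exceptions flow through await.
            Task<int> a = Task.Run(() => 21);        // placeholder CPU-bound work
            Task<int> b = Task.Run(() => 21);
            int[] results = await Task.WhenAll(a, b);
            Console.WriteLine($"Sum = {results[0] + results[1]}");
        }
    }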
Discuss the differences between volatile, Interlocked, and MemoryBarrier for using shared variables in multithreading. When should each be used? Answer: All three maintain proper synchronization and ordering of reads and writes to shared variables across threads. volatile is a keyword that tells the compiler and runtime not to cache the variable's value in a register and to always read from and write to main memory, so every thread sees the most recent value; use it for variables accessed by multiple threads without locking when you need correct memory ordering: private volatile bool stopRequested; one thread sets stopRequested = true; another checks if (!stopRequested) { /* perform work */ } The Interlocked class provides atomic operations such as Add, Increment, Decrement, Exchange, and CompareExchange on shared variables; use it for simple thread-safe arithmetic or comparison operations without locks: private int counter; Interlocked.Increment(ref counter); Thread.MemoryBarrier (a "fence") prevents the runtime and hardware from reordering memory-access instructions across the barrier, ensuring proper ordering between reads and writes; it is needed only in low-level algorithms that require precise control over memory ordering and is rarely necessary at the application level, where volatile and Interlocked are usually sufficient. In short: use volatile to ensure correct memory ordering for simple shared variables, Interlocked for thread-safe atomic operations, and MemoryBarrier for precise low-level control over memory-access ordering.
In C# multithreading, explain the concepts of thread-local storage and data partitioning and how they can improve the overall performance of a multithreaded application. Answer: Thread-local storage gives each thread its own private instance of a variable, initialized once per thread and retained for the thread's lifetime; because each thread works on its own copy, no synchronization is needed and contention is minimized. In C# use the ThreadLocal<T> class: var localSum = new ThreadLocal<int>(() => 0); localSum.Value++; each thread can safely use and modify its localSum without synchronization. Data partitioning divides a large data set into smaller, independent partitions, each processed by a separate thread in parallel; it improves utilization of system resources, reduces contention, and speeds up parallel algorithms. Partitioning can be done statically or dynamically depending on the problem. Parallel.ForEach and PLINQ (Parallel LINQ) are built-in .NET mechanisms that partition data internally to execute parallel operations efficiently, e.g. Parallel.ForEach(data, item => { /* process item */ }); In summary, thread-local storage and data partitioning minimize contention, reduce lock overhead, and make better use of available system resources; choose the technique that fits the nature of the problem and the algorithms involved.
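A sketch combining the two ideas above: Parallel.ForEach with a per-thread (thread-local) subtotal, so the shared total is touched only once per partition (the data set and sizes are illustrative):
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    class PartitionDemo
    {
        static void Main()
        {
            List<int> data = Enumerable.Range(1, 1_000_000).ToList();
            long total = 0;

            Parallel.ForEach(
                data,
                () => 0L,                                  // per-thread initial subtotal (thread-local state)
                (item, _, subtotal) => subtotal + item,    // runs without any locking
                subtotal => Interlocked.Add(ref total, subtotal)); // one atomic merge per worker

            Console.WriteLine(total);                      // 500000500000
        }
    }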
How does the cancellation model in the Task Parallel Library work? Explain how you can use CancellationToken to handle cancellations in TPL. Answer: The TPL cancellation model is built around the CancellationTokenSource and CancellationToken classes and lets tasks cancel their execution cooperatively and gracefully. Step by step: create a CancellationTokenSource, which generates and manages CancellationToken instances; obtain a CancellationToken from it (the token carries the cancellation request to the executing tasks); pass the token to the tasks that should support cancellation; inside the task, periodically check IsCancellationRequested or call ThrowIfCancellationRequested, or pass the token to methods such as Task.Delay, Task.Wait, or Task.Run, which throw a TaskCanceledException/OperationCanceledException when cancellation is requested; when a task detects the request it should clean up its resources and exit gracefully; to request cancellation, call CancellationTokenSource.Cancel(). Example: var cts = new CancellationTokenSource(); var token = cts.Token; var task = Task.Run(() => { for (int i = 0; i < iterations; i++) { token.ThrowIfCancellationRequested(); /* perform task work */ } }, token); and after some time, cts.Cancel(); Note that cooperative cancellation relies on the task implementation checking the token regularly - a task that never checks it cannot be canceled gracefully.
How do you combine asynchronous programming with multithreading using C#'s async/await pattern, and how can the TaskScheduler class be used in this context? Answer: The async/await pattern lets you write asynchronous code that reads like synchronous code, built on Task and Task<TResult> from the TPL. To combine it with multithreading: use async/await with Task.Run (or other Task-returning methods) to schedule work on a separate thread, keeping the user interface responsive while computationally intensive work runs in parallel; use Task.WhenAll or Task.WhenAny to coordinate multiple asynchronous tasks, waiting for all of them or for the first one to complete; and optionally use a TaskScheduler to control how tasks are scheduled and executed, which is useful for applications with custom scheduling requirements. Example: public async Task PerformWorkAsync() { var task1 = Task.Run(() => PerformIntensiveWork()); var task2 = Task.Run(() => PerformAdditionalIntensiveWork()); await Task.WhenAll(task1, task2); /* continue processing results */ } By default tasks use the default TaskScheduler, which targets the .NET ThreadPool; you can create a custom TaskScheduler for scenarios that need a particular order, priority, or thread affinity and pass it to TaskFactory.StartNew or Task.ContinueWith.
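One way to plug a non-default TaskScheduler in, using the built-in ConcurrentExclusiveSchedulerPair; this is a sketch of the "custom scheduling" idea above, not code from the original article:
    using System;
    using System.Threading.Tasks;

    class SchedulerDemo
    {
        static async Task Main()
        {
            // The ExclusiveScheduler runs at most one task at a time, like an async lock.
            var pair = new ConcurrentExclusiveSchedulerPair();
            var factory = new TaskFactory(pair.ExclusiveScheduler);

            Task t1 = factory.StartNew(() => Console.WriteLine("first, exclusive"));
            Task t2 = factory.StartNew(() => Console.WriteLine("second, exclusive"));
            await Task.WhenAll(t1, t2);
        }
    }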
Now that we've covered a wide range of C# threading interview questions, the next ones dig deeper into parallelism, task parallelism, and techniques for ensuring thread safety in multithreaded applications.
What is parallelism, and how do you control the degree of parallelism for a parallel loop in C# using the Parallel class? Answer: Parallelism is a programming technique in which multiple tasks or operations execute concurrently across multiple cores, processors, or threads. The Parallel class, part of the TPL, supports executing parallel loops and code blocks simply and efficiently. To control the degree of parallelism, create a ParallelOptions instance and set its MaxDegreeOfParallelism property, which limits the maximum number of concurrent operations in the loop: var parallelOptions = new ParallelOptions { MaxDegreeOfParallelism = 4 }; /* the limit shown is illustrative */ Parallel.ForEach(data, parallelOptions, item => ProcessItem(item)); Keep in mind that setting MaxDegreeOfParallelism lower than the number of available cores, or reducing it unnecessarily, can lead to suboptimal performance; it is generally best to let the TPL manage the degree of parallelism based on available system resources, and to limit it only to enforce resource constraints or preserve a certain level of responsiveness.
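A small sketch showing the cap in action (the cap of 2 and the sleep are chosen for illustration, not taken from the article):
    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    class DegreeDemo
    {
        static void Main()
        {
            int running = 0;
            var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };

            Parallel.ForEach(Enumerable.Range(1, 8), options, i =>
            {
                int now = Interlocked.Increment(ref running);
                Console.WriteLine($"item {i}: {now} worker(s) active"); // never reports more than 2
                Thread.Sleep(50);                                       // simulate work
                Interlocked.Decrement(ref running);
            });
        }
    }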
Describe the concept of lock contention in multithreading, explain its impact on the performance of your application, and how you can address and mitigate it. Answer: Lock contention occurs when two or more threads try to acquire the same lock or synchronization primitive at the same time, delaying execution while they wait for it to be released. High contention degrades performance: waiting threads experience increased latency, which reduces overall throughput; parallelism is reduced because waiting threads cannot use hardware and system resources efficiently; and the risk of deadlocks grows when multiple threads wait on locks held by each other in a circular pattern. Mitigation strategies: reduce lock granularity (lock smaller parts of a data structure rather than the whole thing so more threads can work on different sections simultaneously); reduce lock duration (perform only essential operations inside the locked region and move non-critical work outside it); use lock-free data structures and algorithms such as ConcurrentQueue, ConcurrentDictionary, or ConcurrentBag; replace a global lock with multiple finer-grained locks; use reader-writer locks (ReaderWriterLockSlim) when reads outnumber writes, allowing multiple readers while keeping writes exclusive; minimize contention with partitioning, dividing data into partitions processed by separate threads so less synchronization is needed; and avoid nested locks and lock hierarchies to reduce the risk of deadlocks. Applying these strategies improves both the performance and the reliability of a multithreaded application.
What is the difference between a BlockingCollection and a ConcurrentQueue or ConcurrentStack, and in which scenarios would you choose BlockingCollection? Answer: All three are thread-safe collections in the System.Collections.Concurrent namespace, designed for multithreaded and parallel scenarios. Differences: bounded capacity - BlockingCollection<T> can be created with a bounded capacity and will block producers once the collection is full, whereas ConcurrentQueue<T> and ConcurrentStack<T> are unbounded and never block producers; blocking on take - BlockingCollection<T> provides both blocking and non-blocking methods for adding and taking items, so a consumer taking from an empty collection blocks until an item becomes available, while ConcurrentQueue<T> and ConcurrentStack<T> only provide non-blocking methods; underlying collection - BlockingCollection<T> can wrap a ConcurrentQueue<T>, ConcurrentStack<T>, or ConcurrentBag<T> to get FIFO, LIFO, or unordered behavior. Choose BlockingCollection<T> for producer-consumer patterns where producers may add items faster than consumers can process them and you want a capacity limit to prevent unbounded memory growth, or when the built-in blocking and bounding behaviors simplify coordination (for example, consumers blocking while no items are available). Use ConcurrentQueue<T> or ConcurrentStack<T> when you need a basic thread-safe collection without blocking or bounding.
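A minimal bounded producer-consumer sketch with BlockingCollection (not from the original post; the capacity of 5 and the item count are illustrative):
    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class ProducerConsumerDemo
    {
        static void Main()
        {
            // Bounded to 5 items: Add blocks the producer once the buffer is full.
            using var queue = new BlockingCollection<int>(boundedCapacity: 5);

            var producer = Task.Run(() =>
            {
                for (int i = 0; i < 20; i++) queue.Add(i);
                queue.CompleteAdding();                           // signal "no more items"
            });

            var consumer = Task.Run(() =>
            {
                foreach (int item in queue.GetConsumingEnumerable()) // blocks while empty
                    Console.WriteLine($"consumed {item}");
            });

            Task.WaitAll(producer, consumer);
        }
    }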
How do Tasks in C# differ from traditional Threads? Explain the benefits and scenarios where Tasks would be preferred over directly spawning Threads. Answer: Higher-level abstraction - Task and Task<TResult> wrap thread execution, work-item scheduling, and result retrieval into a single unit, so asynchronous and parallel code needs less boilerplate. ThreadPool utilization - Tasks usually run on the ThreadPool, which manages, schedules, and recycles threads efficiently, with less overhead than creating and disposing new Thread instances, especially under high load. Cancellation support - built in via CancellationToken and CancellationTokenSource, making long-running operations cancelable in a cooperative and consistent way. Continuations - Tasks can chain multiple operations to run asynchronously and without blocking after a previous operation completes. Exception handling - Tasks capture exceptions and propagate them to the point where the results are retrieved or awaited, which centralizes handling. Integration with modern C# - Tasks are tightly integrated with language features such as async/await. Prefer Tasks when running short-lived operations in parallel without the overhead of creating and disposing threads, when coordinating multiple asynchronous operations (waiting for all of them or proceeding when any completes), when writing async/await code, and when you need to cancel long-running operations consistently while avoiding resource leaks or inconsistent state. In summary, Tasks are preferred over Threads in most scenarios because of their higher-level abstraction, efficient use of the ThreadPool, built-in cancellation, seamless async/await integration, and simplified exception handling.
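A brief sketch of the coordination point above ("proceeding when any task completes"), with two hypothetical sources raced via Task.WhenAny:
    using System;
    using System.Threading.Tasks;

    class WhenAnyDemo
    {
        static async Task Main()
        {
            Task<string> fast = Task.Delay(100).ContinueWith(_ => "fast source");
            Task<string> slow = Task.Delay(500).ContinueWith(_ => "slow source");

            Task<string> first = await Task.WhenAny(fast, slow); // proceed as soon as one finishes
            Console.WriteLine($"Winner: {await first}");
        }
    }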
How can you ensure mutual exclusion while accessing shared resources in C# multithreading without using the lock statement or Monitor methods? Answer: .NET provides other synchronization primitives. Mutex (short for "mutual exclusion") provides inter-process synchronization, allowing only one thread at a time to access a shared resource; it can be used across multiple processes and given a unique name: private static readonly Mutex mutex = new Mutex(); mutex.WaitOne(); try { /* access the shared resource */ } finally { mutex.ReleaseMutex(); } Semaphore limits the number of concurrent accesses to a shared resource; with an initial count of 1 it mimics a Mutex for single access: private static readonly Semaphore semaphore = new Semaphore(1, 1); semaphore.WaitOne(); try { /* access the shared resource */ } finally { semaphore.Release(); } ReaderWriterLockSlim synchronizes access by distinguishing reads from writes: multiple threads can read simultaneously while write access is exclusive: rwLock.EnterReadLock(); try { /* read */ } finally { rwLock.ExitReadLock(); } and rwLock.EnterWriteLock(); try { /* write */ } finally { rwLock.ExitWriteLock(); } SpinLock is a lightweight primitive that avoids context-switch overhead by repeatedly checking the lock condition, suitable when the lock is held for a very short duration: bool lockTaken = false; try { spinLock.Enter(ref lockTaken); /* access the shared resource */ } finally { if (lockTaken) spinLock.Exit(); } Each of these primitives has its own advantages and use cases, but all of them can ensure mutual exclusion without lock or Monitor.
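A runnable version of the "Semaphore with an initial count of 1 mimics a Mutex" idea from the answer above (the counter workload is a placeholder):
    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class SemaphoreAsMutexDemo
    {
        // Initial and maximum count of 1: at most one thread inside the critical section.
        static readonly Semaphore Gate = new Semaphore(1, 1);
        static int counter;

        static void Main()
        {
            Parallel.For(0, 1000, _ =>
            {
                Gate.WaitOne();
                try { counter++; }            // safe: mutual exclusion is guaranteed
                finally { Gate.Release(); }
            });
            Console.WriteLine(counter);       // always 1000
        }
    }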
As we approach the final set of C# multithreading interview questions, keep in mind that fluency in these concepts and best practices is instrumental in creating robust, efficient applications; the last few questions focus on advanced techniques such as lock-free data structures, thread-local storage, and strategies for handling resource contention.
Describe the concept of a race condition in C# multithreading and explain different strategies for preventing race conditions in your application. Answer: A race condition is a situation in which the behavior of an application depends on the relative timing of events, such as the order in which threads are scheduled. It usually occurs when multiple threads access shared mutable data simultaneously without proper synchronization, leading to unpredictable results, data corruption, deadlocks, or crashes. Prevention strategies: Locking - the most common approach, using synchronization primitives such as lock, Monitor, Mutex, Semaphore, or ReaderWriterLockSlim so only one thread accesses the shared resource at a time: private readonly object syncLock = new object(); private int sharedCounter; public void Increment() { lock (syncLock) { sharedCounter++; } } Atomic operations - use the Interlocked class to increment, decrement, or exchange shared variables without locking: public void Increment() { Interlocked.Increment(ref sharedCounter); } Immutable data structures - state that cannot change after initialization can be shared across threads without synchronization: private ImmutableDictionary<int, string> sharedData = ImmutableDictionary<int, string>.Empty; public void AddData(int key, string value) { sharedData = sharedData.Add(key, value); } Thread-local storage - keep data private to each thread with ThreadLocal<T> or the [ThreadStatic] attribute on static fields so no shared access occurs: private ThreadLocal<int> privateCounter = new ThreadLocal<int>(() => 0); public void Increment() { privateCounter.Value++; } Concurrent collections - use the thread-safe collections in System.Collections.Concurrent (ConcurrentQueue<T>, ConcurrentBag<T>, ConcurrentDictionary<TKey, TValue>), which handle concurrent access without explicit locking: private ConcurrentDictionary<int, string> sharedData = new ConcurrentDictionary<int, string>(); public void AddData(int key, string value) { sharedData.TryAdd(key, value); } Employing these strategies in the appropriate scenarios prevents race conditions and keeps a multithreaded application operating correctly.
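A small sketch that makes the race visible: the unsynchronized counter loses updates, the Interlocked one does not (iteration count is illustrative):
    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class RaceConditionDemo
    {
        static void Main()
        {
            int unsafeCounter = 0, safeCounter = 0;

            Parallel.For(0, 100_000, _ =>
            {
                unsafeCounter++;                          // read-modify-write race: updates get lost
                Interlocked.Increment(ref safeCounter);   // atomic: no updates lost
            });

            Console.WriteLine($"unsafe: {unsafeCounter}, safe: {safeCounter}"); // unsafe is usually < 100000
        }
    }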
What is lazy initialization in C# multithreading, and how does it affect application start-up performance? Discuss the Lazy<T> class and give an example where lazy initialization would be beneficial. Answer: Lazy initialization defers creating an object or resource until it is actually needed, which can improve start-up performance by postponing time-consuming resources, heavy objects, or expensive computations. The Lazy<T> class facilitates lazy initialization and is thread-safe by default, ensuring the initialization runs only once even when multiple threads need the object simultaneously. A scenario where it helps: an application performs complex calculations that only some users ever need, so the calculation is created lazily and performed only when required, improving responsiveness. Example: public class ComplexCalculation { public ComplexCalculation() { /* expensive, time-consuming calculations */ } /* other methods and properties */ } public class HomeController { private Lazy<ComplexCalculation> calculation = new Lazy<ComplexCalculation>(); public ActionResult Calculate() { var result = calculation.Value.PerformCalculation(); return View(result); } } The ComplexCalculation instance is created only when the Calculate action first accesses calculation.Value; users who never need the calculation never pay the cost, saving resources and improving performance.
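A compact sketch of the thread-safety guarantee described above: the factory runs exactly once even under concurrent first access (the table and delay are placeholders):
    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class LazyDemo
    {
        // Thread-safe by default (LazyThreadSafetyMode.ExecutionAndPublication).
        static readonly Lazy<int[]> ExpensiveTable = new Lazy<int[]>(() =>
        {
            Console.WriteLine($"Initialized on thread {Environment.CurrentManagedThreadId}");
            Thread.Sleep(200);                 // simulate an expensive computation
            return new[] { 1, 2, 3 };
        });

        static void Main()
        {
            Parallel.For(0, 4, _ => Console.WriteLine(ExpensiveTable.Value.Length)); // "Initialized" prints once
        }
    }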
How do you achieve thread synchronization using ReaderWriterLockSlim in C# multithreading, and what are its advantages over the traditional ReaderWriterLock? Answer: ReaderWriterLockSlim allows multiple threads to read a shared resource concurrently while write access is exclusive, which is ideal when reads are more frequent than writes; it is an improved version of ReaderWriterLock with better performance and additional features. Declare an instance to use as the synchronization object, then use EnterReadLock/ExitReadLock and EnterWriteLock/ExitWriteLock to acquire and release read or write locks: private static ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim(); public void ReadSharedResource() { rwLock.EnterReadLock(); try { /* read */ } finally { rwLock.ExitReadLock(); } } public void WriteSharedResource() { rwLock.EnterWriteLock(); try { /* write */ } finally { rwLock.ExitWriteLock(); } } Advantages over ReaderWriterLock: better performance, especially under high contention, due to an optimized implementation with less reliance on operating-system kernel objects; a configurable recursion policy (a LockRecursionPolicy value, NoRecursion or SupportsRecursion, passed to the constructor); TryEnterReadLock, TryEnterUpgradeableReadLock, and TryEnterWriteLock methods that attempt to acquire the lock without blocking when it is not immediately available; and upgradeable read locks, which let a thread read and then temporarily escalate to a write without releasing the initial read lock, minimizing the chances of deadlock.
What are the differences between the ManualResetEvent and AutoResetEvent synchronization primitives in C# multithreading? Answer: Both are used for signaling between threads: waiting threads continue once the event is set (signaled). They differ in how the event resets. ManualResetEvent stays signaled after Set() until Reset() is called explicitly; all waiting threads are released at once, and any thread that waits while the event is signaled proceeds immediately without blocking: private static ManualResetEvent manualResetEvent = new ManualResetEvent(false); manualResetEvent.WaitOne(); /* blocks until another thread sets the event */ manualResetEvent.Set(); /* releases all waiting threads */ manualResetEvent.Reset(); /* back to the non-signaled state */ AutoResetEvent automatically resets to non-signaled after releasing a single waiting thread, so only one waiter is released per Set() and the event must be signaled again for each additional thread: private static AutoResetEvent autoResetEvent = new AutoResetEvent(false); autoResetEvent.WaitOne(); autoResetEvent.Set(); /* releases one waiting thread and resets */ Choosing between them depends on the desired signaling behavior and how many threads should be released when the event is signaled.
Explain the use of SpinLock in C# multithreading, how it differs from a standard lock or Monitor, and describe its potential advantages and limitations. Answer: SpinLock provides mutual exclusion by repeatedly checking the lock's state in a loop, without preempting the waiting thread or making it yield, whereas a standard lock or Monitor relies on operating-system kernel objects and may block or context-switch the waiting thread when the lock is not immediately available. Advantages: better performance than lock or Monitor when contention is low and the lock is held for very short periods, since its lightweight implementation avoids context-switch overhead; and a TryEnter method that can attempt to acquire the lock without blocking and accepts a timeout, useful when taking an alternative action is preferable to waiting. Limitations: the spin-wait loop keeps using CPU time while waiting, so under high contention or long hold times it performs worse than lock or Monitor; it is non-reentrant, so a thread that tries to re-acquire a SpinLock it already holds will deadlock; and it does not track thread ownership, making it impossible to detect deadlocks, livelocks, or re-entrancy attempts. Example: private static SpinLock spinLock = new SpinLock(); bool lockTaken = false; try { spinLock.Enter(ref lockTaken); /* access the shared resource */ } finally { if (lockTaken) spinLock.Exit(); } In summary, SpinLock is a lightweight primitive with advantages in specific low-contention, short-hold scenarios but real limitations compared with a standard lock or Monitor; choose the synchronization mechanism that matches the requirements and characteristics of your multithreaded application.
We hope this extensive list of C# threading interview questions and answers has given you valuable insight into the many facets of multithreading in C#. By refreshing your knowledge of thread management, synchronization, and performance optimization, you will be better prepared to tackle challenging interview questions, stand out to potential employers, and excel in your career as a C# developer. Multithreading is a fundamental aspect of modern software development, and mastering it will improve your coding proficiency and enable you to build more efficient, responsive applications in a fast-paced, constantly evolving industry. Good luck with your upcoming interview! 2023-08-28 11:50:55
Overseas TECH DEV Community Log Viewer v3 is out https://dev.to/arukomp/log-viewer-v3-is-out-64f Log Viewer v3 brings several quality-of-life features: support for more log formats and email previews. Support for multiple log formats: Log Viewer can now view not just Laravel logs but also Apache, Nginx, Redis, Postgres, Supervisor, and more. Showing these logs is as easy as adding the paths to your logs to the include-file patterns in the config/log-viewer.php configuration, including paths to other log types such as /var/log/httpd and /var/log/nginx; on macOS (Apple Silicon) machines, for example, logs can be found at /opt/homebrew/var/log/nginx, /opt/homebrew/var/log/httpd, /opt/homebrew/var/log/php-fpm.log, /opt/homebrew/var/log/postgres.log, /opt/homebrew/var/log/redis.log, and /opt/homebrew/var/log/supervisor.log (absolute paths are supported). If you don't see a particular log format, you can now define custom log formats and browse those logs within the Log Viewer UI; defining custom formats also lets you override the built-in ones, so if your Laravel or HTTP logs differ from the factory defaults you can extend the parser classes with your modifications (see the documentation on extending log formats). Log type switcher: supporting multiple types of logs means a lot more logs, so a log type selector has been added to help you focus your search on particular log types; it shows up as soon as Log Viewer finds more than one type of log. Email previews: the log mail driver is the simplest and safest approach to sending emails locally, but raw logged emails with headers and MIME parts are hard to read, which is why many people reach for other tools such as Mailtrap, Helo, or sending real emails - all of which require extra setup, separate apps, or can be plain dangerous in test environments (ever sent a test email to a real person by mistake?). Log Viewer v3 brings email previews directly within the Log Viewer: set MAIL_MAILER=log in your .env, and the emails you send are logged to your Laravel log and become viewable in the Log Viewer. Nifty little feature, don't you think? Upgrading: Log Viewer v3 is an easy upgrade without breaking changes for most users; the documentation explains how to upgrade and has been updated for v3. Questions and feedback: please send issues and bug reports to the project's GitHub page (issues or discussions). Support: if you enjoy this free open-source project, you can give back by submitting a PR (bug fixes, new features, refactoring, and support for additional log formats are all welcome - the more formats the Log Viewer supports, the better) or by buying the author a coffee, which helps fund the weekend work that keeps the Log Viewer free. 2023-08-28 11:46:43
Overseas TECH DEV Community Performance and elegance! Writing a CRUD CLI using ScyllaDB and Ruby https://dev.to/he4rt/performance-e-elegancia-escrevendo-uma-cli-crud-utilizando-scylladb-e-ruby-1452 (translated from Portuguese) Good developers need to know how to build a CRUD, right? So have you ever thought about building one on top of a NoSQL database designed for high scalability, using a simple and elegant language? In this article you will learn how to build a CLI with the dry-cli gem that talks to ScyllaDB through the cassandra-driver gem. To learn more about what ScyllaDB is and where it is useful, see the official documentation. Disclaimer: this article assumes basic database knowledge, since the focus is on using ScyllaDB together with Ruby; for more ScyllaDB-specific material, see the Dev.to articles by ScyllaDB Developer Advocate DanielHe4rt and the free courses at ScyllaDB University. Table of contents: starting the project; defining the dependency-injection layer; defining the boilerplate for our CLI; implementing our commands; conclusion.
Starting the project - resolving system libraries to install the driver: the gem we will use to talk to ScyllaDB is cassandra-driver; unfortunately it requires Cassandra-related system libraries, and the simplest way to get them is to install Cassandra itself on the machine. (Disclaimer: we will not use Cassandra for anything in practice; we only need the package installed so the gem can find the libraries it needs. Since ScyllaDB is originally based on Cassandra, this works fine.) To install Cassandra on macOS: brew install cassandra. On Linux (Debian): sudo apt-get install -y cassandra. If your distro or OS isn't listed, check the official documentation.
Starting the project: with the system libraries installed, initialize the project with bundler: mkdir project_scylla && cd project_scylla && bundle init. Installing project dependencies: bundle add cassandra-driver && bundle add dry-auto_inject && bundle add dry-system && bundle add zeitwerk && bundle add dry-cli && bundle add dotenv.
Defining a REPL: as good Rubyists, setting up our IRB should be the first action in the project, right? Create an executable script at bin/console with touch bin/console && chmod +x bin/console, containing a basic setup with IRB and Dotenv:
    #!/usr/bin/env ruby
    require "dotenv/load"
    require "irb"
    IRB.start
Excellent! The basics of our setup are done. Next we define the dependency-injection layer and a provider that wires in the ScyllaDB functionality.
Defining the dependency-injection layer: this article follows the same model shown in my article on dependency injection with Sinatra; see that post for a more detailed explanation of how and why this setup is done. Creating the main container: for the dependency-injection layer to work we need a main container that serves as the reference point for looking up dependencies and registering new providers. Create config/application.rb with the following content (the exact root-path argument was lost in extraction; the one shown is a plausible reconstruction):
    # frozen_string_literal: true
    require "dry/system"

    class Application < Dry::System::Container
      configure do |config|
        config.root = Pathname(__dir__).join("..")   # root path argument elided in the source
        config.component_dirs.loader = Dry::System::Loader::Autoloading
        config.component_dirs.add "lib"
        config.component_dirs.add "config"
      end
    end

    loader = Zeitwerk::Loader.new
    loader.push_dir(Application.config.root.join("lib").realpath)
    loader.push_dir(Application.config.root.join("config").realpath)
    loader.setup
Here we define where the components live across the application; this sets up autoloading of those files so we don't need to require them every time we use one of their classes. With the container defined, update the REPL script to call the application's finalize method:
    #!/usr/bin/env ruby
    require "dotenv/load"
    require_relative "../config/application"
    Application.finalize!
    require "irb"
    IRB.start
Also create a main.rb at the project root that just finalizes the container, to serve as the CLI entry point:
    require "dotenv/load"
    require_relative "config/application"
    Application.finalize!
Creating the database provider: now that the application has a main container, we can define the only provider of this project, the ScyllaDB connection. Create config/provider/database.rb with:
    # frozen_string_literal: true
    Application.register_provider(:database) do
      prepare do
        require "cassandra"
        require_relative "../../lib/migration_utils"
        require_relative "../constants"
      end

      start do
        cluster = Cassandra.cluster(
          username: ENV.fetch("DB_USER", nil),
          password: ENV.fetch("DB_PASSWORD", nil),
          hosts: ENV.fetch("DB_HOSTS", nil).split(",")
        )
        connection = cluster.connect

        MigrationUtils.create_keyspace(session: connection) if MigrationUtils.keyspace_exist?(session: connection)
        MigrationUtils.create_table(session: connection) if MigrationUtils.table_exist?(session: connection)

        connection = cluster.connect(KEYSPACE_NAME)
        register("database.connection", connection)
      end
    end
For the credentials I recommend the ScyllaDB Cloud service, where you can spin up a cluster very quickly and get all the credentials easily. An example .env (host separators are assumed to be commas; the hostnames below are truncated in the source):
    DB_USER=scylla
    DB_PASSWORD=password
    DB_HOSTS=node.amazonaws,node.amazonaws,node.amazonaws
If you have read the dependency-injection article mentioned above, this provider looks fairly simple, but it uses some new classes we haven't created yet, so let's go through them step by step. Defining the project constants: in this application we keep the keyspace and table names in constants (you could take them from parameters or environment variables, but constants keep it simple). Create config/constants.rb with:
    # frozen_string_literal: true
    KEYSPACE_NAME = "media_player"
    TABLE_NAME = "playlist"
Creating a utility class to create our database: as the provider above shows, we use the MigrationUtils class for the common tasks of initializing our keyspace and table. Let's walk through the methods needed to create them, starting with checking whether a keyspace or table exists.
Defining the constants for our project

In this application we'll keep the keyspace and table names defined as constants. You could change this to receive them as parameters or environment variables, but for simplicity we'll stick to plain constants. Create a file at config/constants.rb with the following content:

    # frozen_string_literal: true

    KEYSPACE_NAME = "media_player"
    TABLE_NAME = "playlist"

Creating a utility class to create our database

As we can see in the provider example above, we are using the MigrationUtils class to perform the common tasks needed to initialize our keyspace and table. Let's now go step by step through the methods required to create them.

Checking whether a keyspace or table exists

Before we move on to creating our keyspaces and tables, it is crucial to check whether they already exist, so that we avoid running the creation functions unnecessarily. For that we'll implement boolean methods in the following way. First, we create a file called migration_utils.rb, located at lib/migration_utils.rb, and fill it with the code described below:

    class MigrationUtils
      # @param session [Cassandra::Cluster]
      # @return [Boolean]
      def self.keyspace_exist?(session:)
        query = <<~SQL
          select keyspace_name from system_schema.keyspaces WHERE keyspace_name = ?
        SQL

        session.execute_async(query, arguments: [KEYSPACE_NAME]).join.rows.size.zero?
      end

      # @param session [Cassandra::Cluster]
      # @return [Boolean]
      def self.table_exist?(session:)
        query = <<~SQL
          select keyspace_name, table_name from system_schema.tables where keyspace_name = ? AND table_name = ?
        SQL

        session.execute_async(query, arguments: [KEYSPACE_NAME, TABLE_NAME]).join.rows.size.zero?
      end
    end

Here we are implementing static methods that will run before we configure the dependency injection layer, which is why we accept the database session as a parameter to these functions. With that session we can use the execute_async method to send a CQL query. This method also lets us use placeholders for the parameters and pass the values in an arguments object. Since the method works asynchronously, we need to call join to wait for the Future to finish and hand us back a result object. With the object in hand we can access the rows property, a list containing every row returned by the query shown above. To finish the implementation and return a boolean, we check whether that list is empty by testing whether its size is zero with size.zero?.

Disclaimer: we need size.zero? because the return value is an Enumerator, which does not have an empty? method.
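To make the synchronous versus asynchronous point above a bit more concrete, here is a short illustrative sketch (not part of the article's project) that runs the same keyspace query both ways with the cassandra-driver gem; the connection settings and KEYSPACE_NAME mirror the ones defined earlier:

    require "cassandra"

    KEYSPACE_NAME = "media_player" # as in config/constants.rb

    cluster = Cassandra.cluster(
      username: ENV.fetch("DB_USER", nil),
      password: ENV.fetch("DB_PASSWORD", nil),
      hosts: ENV.fetch("DB_HOSTS", "").split(",")
    )
    session = cluster.connect

    query = "select keyspace_name from system_schema.keyspaces WHERE keyspace_name = ?"

    # Asynchronous (what MigrationUtils does): execute_async returns a future
    # immediately; join blocks until the result is ready, then we read rows.
    rows = session.execute_async(query, arguments: [KEYSPACE_NAME]).join.rows
    puts rows.size.zero? ? "keyspace not created yet" : "keyspace already exists"

    # Synchronous alternative: execute blocks until the result is available
    # and the result can be iterated directly.
    session.execute(query, arguments: [KEYSPACE_NAME]).each do |row|
      puts row["keyspace_name"]
    end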
Creating the keyspaces and tables

Now that we have methods responsible for checking whether a keyspace and a table exist, we need the methods that will create them when they don't exist yet, right? For that, let's keep working on the class located at lib/migration_utils.rb, with the following content:

    class MigrationUtils
      # @param session [Cassandra::Cluster]
      # @return [void]
      def self.create_keyspace(session:)
        query = <<~SQL
          CREATE KEYSPACE #{KEYSPACE_NAME}
            WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor': 3}
            AND durable_writes = true
        SQL

        session.execute_async(query).join
      end

      # @param session [Cassandra::Cluster]
      # @return [void]
      def self.create_table(session:)
        query = <<~SQL
          CREATE TABLE #{KEYSPACE_NAME}.#{TABLE_NAME} (
            id uuid,
            title text,
            album text,
            artist text,
            created_at timestamp,
            PRIMARY KEY (id, created_at)
          )
        SQL

        session.execute_async(query).join
      end
    end

Here we follow the same pattern, receiving the session as a parameter and using it to run an asynchronous query with execute_async; since we don't need to handle the result of these queries, we can simply use join to wait for them to finish.

Note: for any questions about the queries themselves, I strongly recommend the ScyllaDB University links mentioned earlier.

Loading the new provider into our application

Now that we have defined and understood our database provider, we need to load it so that it is injected at the two main entry points of the application. In main.rb we add the require:

    require_relative "config/provider/database"

And the same thing in bin/console:

    require_relative "../config/provider/database"

Defining the boilerplate for our CLI

Now that we have a database layer and auto-requiring ready to be used, let's bring in the other big gem of this project to define the commands of our CLI. Here comes dry-cli, ladies and gentlemen! At this first stage we'll only worry about defining the boilerplate for the CLI, without worrying about the real implementation yet, okay? To do that, let's define the module that will register all the commands, located at lib/cli.rb, with the following content:

    # frozen_string_literal: true

    require "dry/cli"

    module Cli
      extend Dry::CLI::Registry

      register "add", Commands::Add
    end

In this initial module we can use a DSL to register new commands with register; this DSL is provided by extending the Dry::CLI::Registry module. Now that we have registered an Add command, let's create the class for it, located at lib/cli/commands/add.rb, with the following content:

    # frozen_string_literal: true

    require "dry/cli"

    module Cli
      module Commands
        class Add < Dry::CLI::Command
          desc "This command add a new song to the playlist"

          argument :title, type: :string, required: true, desc: "The title of the song"
          argument :album, type: :string, required: true, desc: "The name of the album of that song"
          argument :artist, type: :string, required: true, desc: "The name of the artist of band"

          def call(title:, album:, artist:, **)
            puts "Add command > Title: #{title}, Album: #{album}, Artist: #{artist}"
          end
        end
      end
    end

In this class we can see another DSL, provided by inheriting from the Dry::CLI::Command class. With it we can give the command a description using desc, declare which arguments the command receives, along with their type, validation and description, using argument, and much more. Right after defining the command's metadata, we define a call method that receives the declared arguments as keyword parameters. In our main.rb file we can boot the CLI with:

    require "dry/cli"

    Dry::CLI.new(Cli).call

Finally, running our application with ruby main.rb, we should get the following output:

    $ ruby main.rb
    Commands:
      main.rb add TITLE ALBUM ARTIST  # This command add a new song to the playlist
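A small aside that the article does not cover: Dry::CLI::Registry can also register aliases for a command, which is convenient for a CLI like this one. The sketch below is self-contained and uses a throwaway Version command purely for illustration; it is not part of the project:

    # frozen_string_literal: true
    require "dry/cli"

    module DemoCli
      extend Dry::CLI::Registry

      class Version < Dry::CLI::Command
        desc "Print the version"

        def call(*)
          puts "0.1.0"
        end
      end

      # The same command becomes reachable as `version`, `-v` or `--version`.
      register "version", Version, aliases: ["v", "-v", "--version"]
    end

    Dry::CLI.new(DemoCli).call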
Implementing our commands

Now that we have the boilerplate and a basic understanding of how the dry-cli gem works, we can focus on implementing a few simple commands for our CRUD.

Implementing the first command: Add

Since we already have the class for this command, let's just start working on the implementation of the call method, as follows:

    # frozen_string_literal: true

    require "dry/cli"

    module Cli
      module Commands
        class Add < Dry::CLI::Command
          desc "This command add a new song to the playlist"

          argument :title, type: :string, required: true, desc: "The title of the song"
          argument :album, type: :string, required: true, desc: "The name of the album of that song"
          argument :artist, type: :string, required: true, desc: "The name of the artist of band"

          def initialize
            super
            @repo = Application["database.connection"]
          end

          def call(title:, album:, artist:, **)
            query = <<~SQL
              INSERT INTO #{KEYSPACE_NAME}.#{TABLE_NAME} (id, title, artist, album, created_at)
              VALUES (now(), ?, ?, ?, ?)
            SQL

            @repo.execute_async(query, arguments: [title, artist, album, Time.now]).join

            puts "Song #{title} from artist #{artist} Added!"
          end
        end
      end
    end

In this implementation we first inject our database connection, through the dependency injection layer, in the class constructor, and then use the already familiar execute_async method to insert a new record into the right table. One important point to highlight is the use of the now() function in the query: this function is native to the database and inserts a new UUID in the format expected by ScyllaDB, so we don't need to deal with UUID generation on the language side. Pretty simple, right? Let's continue with the remaining commands of our CRUD, following the same architecture already proposed.

Implementing the List command

Having built a command to add new songs, we'll move on to creating a command meant to list all the songs already created. For that, let's create a file at lib/cli/commands/list.rb with the following content:

    # frozen_string_literal: true

    require "dry/cli"

    module Cli
      module Commands
        class List < Dry::CLI::Command
          desc "This command shows all the created songs"

          def initialize
            super
            @repo = Application["database.connection"]
          end

          def call(*)
            query = <<~SQL
              SELECT * FROM #{KEYSPACE_NAME}.#{TABLE_NAME}
            SQL

            @repo.execute_async(query).join.rows.each do |song|
              puts <<~MSG
                ID: #{song['id']}
                Song Name: #{song['title']}
                Album: #{song['album']}
                Created At: #{song['created_at']}
              MSG
            end
          end
        end
      end
    end

Again we are running a simple SELECT query and iterating over the results in the rows array to show them to the end user. It's important to point out that the fields accessed with row['title'] and so on correspond to the fields we created in the CREATE TABLE statement in the provider. We can't forget to register this command, so let's modify the lib/cli.rb file:

    # frozen_string_literal: true

    require "dry/cli"

    module Cli
      extend Dry::CLI::Registry

      register "add", Commands::Add
      register "list", Commands::List # <- new command
    end

Perfect! Now we can both add and list songs :D (a short demo of add and list in action is embedded in the original post).
Implementing the Delete command

Let's now explore a rather interesting command. In this case we'll display a list of songs along with their indexes, letting the user select a specific song by its corresponding position. To do that, we'll build the command and implement a method responsible for listing the songs with their numeric positions; besides that, we'll also wait for the user's input. Next, let's create a new file located at lib/cli/commands/delete.rb with the following content:

    # frozen_string_literal: true

    require "dry/cli"

    module Cli
      module Commands
        class Delete < Dry::CLI::Command
          desc "This command will prompt for a song to be deleted and then delete it"

          def initialize
            super
            @repo = Application["database.connection"]
          end

          def call(*)
            songs = @repo.execute_async("SELECT * FROM #{KEYSPACE_NAME}.#{TABLE_NAME}").join.rows

            song_to_delete_index = select_song_to_delete(songs)

            query = <<~SQL
              DELETE FROM #{KEYSPACE_NAME}.#{TABLE_NAME} WHERE id = ?
            SQL

            song_to_delete = songs.to_a[song_to_delete_index]

            @repo.execute_async(query, arguments: [song_to_delete['id']]).join
          end

          private

          # @param songs [Array<Hash>]
          def select_song_to_delete(songs)
            songs.each_with_index do |song, index|
              puts <<~DESC
                #{index} - Song: #{song['title']}, Album: #{song['album']}, Artist: #{song['artist']}, Created At: #{song['created_at']}
              DESC
            end

            print "Select a song to be deleted: "

            $stdin.gets.chomp.to_i
          end
        end
      end
    end

In the select_song_to_delete method we receive a list of songs and iterate over it with an index using the each_with_index method; that way we can print a message in the pattern "index - Song, Album, Artist, Created At". Still in this method, we wait for the user's input with $stdin.gets.chomp and return it as an integer by converting it with to_i. In the call method, we start by running a query to fetch all the songs registered in the table. We then pass that data set to the method that lets the user pick a specific one and returns its index. We use that index to choose a specific song in the array and then run a DELETE query to remove it.

Disclaimer: before picking a specific item we need to turn the rows into an array with to_a, since they come back as an Enumerator.

(A demo showing the command in practice is embedded in the original post.)

We can't forget to register this command, so let's modify the lib/cli.rb file:

    # frozen_string_literal: true

    require "dry/cli"

    module Cli
      extend Dry::CLI::Registry

      register "add", Commands::Add
      register "list", Commands::List
      register "delete", Commands::Delete # <- new command
    end

Implementing the Update command

Now let's create a command that brings together all the concepts shown so far. This will be an update, and it will work as follows: we accept parameters such as title, album and artist to use as part of the update, similar to what we did in the add command, and we show the user a list of the registered songs and wait for input with an index, similar to what we did in the delete command. Perfect! Let's create a command located at lib/cli/commands/update.rb with the following content:

    # frozen_string_literal: true

    require "dry/cli"

    module Cli
      module Commands
        class Update < Dry::CLI::Command
          desc "This command will prompt for a song to be updated and use the argument information to update it"

          argument :title, type: :string, required: true, desc: "The title of the song"
          argument :album, type: :string, required: true, desc: "The name of the album of that song"
          argument :artist, type: :string, required: true, desc: "The name of the artist of band"

          def initialize
            super
            @repo = Application["database.connection"]
          end

          def call(title:, album:, artist:, **)
            songs = @repo.execute_async("SELECT * FROM #{KEYSPACE_NAME}.#{TABLE_NAME}").join.rows

            song_to_update_index = select_song_to_update(songs)

            query = <<~SQL
              UPDATE #{KEYSPACE_NAME}.#{TABLE_NAME}
              SET title = ?, artist = ?, album = ?
              WHERE id = ? AND created_at = ?
            SQL

            song_to_update = songs.to_a[song_to_update_index]

            @repo.execute_async(
              query,
              arguments: [title, artist, album, song_to_update['id'], song_to_update['created_at']]
            ).join
          end

          private

          # @param songs [Array<Hash>]
          def select_song_to_update(songs)
            songs.each_with_index do |song, index|
              puts <<~DESC
                #{index} - Song: #{song['title']}, Album: #{song['album']}, Artist: #{song['artist']}, Created At: #{song['created_at']}
              DESC
            end

            print "Select a song to be updated: "

            $stdin.gets.chomp.to_i
          end
        end
      end
    end

As you can see, this command is really a combination of the concepts from the add command (arguments) and from the delete command (selection by index). About the UPDATE query, it's important to note that in ScyllaDB we have two primary key columns, in this case id and created_at, so we need to use both pieces of information for the database to correctly find our row.

(A demo showing the command at work is embedded in the original post.)

Conclusion

I hope this article has been useful. I tried as much as possible to focus on the integration between Ruby and ScyllaDB, since I couldn't find anything on the topic written in simple, beginner-friendly language. For questions aimed specifically at ScyllaDB, I strongly recommend DanielHe4rt's articles and ScyllaDB University. And don't forget to keep working on the project you have just created. I leave it as a challenge to look for improvements in the architecture we built; below I list some obvious improvement points, but I encourage you to find others that may have slipped past me. Practice is the secret to becoming a skilled developer.

- The select_song_to_delete and select_song_to_update functions are identical; maybe move them to some shared place.
- In the add command we print a success message for the user, but we don't do the same in the other commands; we could improve that user experience.

May the force be with you! 2023-08-28 11:24:53
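Following up on the first improvement point above, here is a minimal sketch of what extracting the duplicated selection prompt into a shared helper could look like; the SongPrompt name and file location are assumptions made for illustration, not part of the original project:

    # lib/song_prompt.rb (hypothetical): shared prompt used by Delete and Update.
    module SongPrompt
      # Prints every song with its index and returns the index typed by the user.
      # @param songs [Array<Hash>]
      # @param action [String] e.g. "deleted" or "updated"
      # @return [Integer]
      def select_song(songs, action)
        songs.each_with_index do |song, index|
          puts "#{index} - Song: #{song['title']}, Album: #{song['album']}, " \
               "Artist: #{song['artist']}, Created At: #{song['created_at']}"
        end

        print "Select a song to be #{action}: "
        $stdin.gets.chomp.to_i
      end
    end

    # The commands would then `include SongPrompt` and call
    # select_song(songs, "deleted") / select_song(songs, "updated")
    # instead of keeping their own private copies.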
海外TECH DEV Community Project IDX by Google 〉Web, Flutter, AI... It's not there yet 💩 https://dev.to/maximsaplin/project-idx-by-google-web-flutter-ai-its-not-there-yet-91m Project IDX by Google 〉Web, Flutter, AI... It's not there yet. Google has announced its take on developer tooling and AI coding: Project IDX is available as a public preview. What they did was wrap VSCode in their own cloud-hosted environment, set up templates for Web and Flutter projects, add an AI coding assistant, and promise a frictionless, more productive dev environment. The moment I imported my Flutter GitHub repo, I faced a myriad of problems. The "Enable Nix for this workspace" option in the import dialog was breaking everything; the IDE failed to start. With Nix disabled I got my project imported and opened in Google-hosted VSCode (or Code OSS, to be correct). It was very slow: loading files, getting packages, building. Linux, Android and Web are available as devices, though Linux doesn't work (it requires CMake to be installed and you can't do apt install), Android starts building only after you do flutter upgrade --force (an older version of Flutter is installed by default, with an older Gradle version), and even when Android gets built there's no UI of the emulator to be seen anywhere. Web: after starting the app you get a pop-up window with a preview, though it is empty. IDX AI: in the intro video there was a demo of the AI knowing the codebase; there's an extension setting, I enabled it, tried asking about my solution, got nothing. Tried it with Chrome. Project IDX is very early in development; I see no point in using it now. 2023-08-28 11:19:29
海外TECH DEV Community Best library for Form Handling | React | Part 1 https://dev.to/shubhamtiwari909/form-handling-in-react-1-3deg Best library for Form Handling | React | Part 1. Hello everyone, today I will be starting another series on form handling in React. We are going to use libraries like Formik, Yup and Tailwind CSS to achieve form handling.

What is Formik? Formik is an open source library for managing forms in React applications. It provides a set of tools and utilities that make it easier to handle complex forms and form validation by encapsulating the state management, input tracking and submission logic. Formik simplifies the process of creating, handling and validating forms by offering a higher-level abstraction and a more intuitive API compared to manually managing form state with React's built-in state management.

What is Yup? Yup is a JavaScript library that focuses on simplifying and enhancing form validation, particularly when working with forms in React or other JavaScript frameworks. It provides a declarative, schema-based approach to defining validation rules for various data types, making it easier to validate user input, ensure data integrity and provide helpful error messages.

What is Tailwind CSS? Tailwind CSS is a popular utility-first CSS framework that helps developers quickly build responsive and visually appealing user interfaces. Unlike traditional CSS frameworks that come with pre-designed components, Tailwind focuses on providing a set of utility classes that can be applied directly to HTML elements to style them.

Getting started. To start our series on form handling, we will create a simple form and style it with Tailwind classes:

    import React from "react";

    function DemoForm() {
      return (
        <div className="grid-container">
          <form className="form-container">
            <div className="form-control">
              <label htmlFor="name">Name</label>
              <div className="relative">
                <input type="text" id="name" name="name" className="form-input" />
              </div>
            </div>
            <div className="form-control">
              <label htmlFor="email">E-mail</label>
              <div className="relative">
                <input type="email" id="email" name="email" className="form-input" />
              </div>
            </div>
            <div className="form-control">
              <label htmlFor="company">Company</label>
              <div className="relative">
                <input type="text" id="company" name="company" className="form-input" />
              </div>
            </div>
            <button type="submit" className="form-submit">Submit</button>
          </form>
        </div>
      );
    }

    export default DemoForm;

This is how our form structure is going to look, with the fields name, email and company. It also has a submit button to submit the form and some Tailwind classes to style it.

Formik initialization. We have to import the useFormik hook and initialise it with some initial values:

    import React from "react";
    import { useFormik } from "formik";

    const initialValues = {
      name: "Shubham",
      email: "",
      company: "",
    };

    const onSubmit = (values) => {
      console.log("Form data", values);
    };

    function DemoForm() {
      const formik = useFormik({
        initialValues,
        onSubmit,
      });

      return (
        <div className="grid-container">
          <form className="form-container" onSubmit={formik.handleSubmit}>
            {/* rest of the form is the same */}
          </form>
        </div>
      );
    }

We have assigned the useFormik hook to a variable and passed it the initial values and an onSubmit handler. Then we have attached the submit handler to the form's onSubmit event; remember, the handler name is always formik.handleSubmit, don't replace it with one you have created manually.

Binding input fields with state. To each input field, add the value and onChange attributes:

    <input type="text" id="name" name="name" className="form-input"
           value={formik.values.name} onChange={formik.handleChange} />

    <input type="email" id="email" name="email" className="form-input"
           value={formik.values.email} onChange={formik.handleChange} />

    <input type="text" id="company" name="company" className="form-input"
           value={formik.values.company} onChange={formik.handleChange} />

    {/* rest of the form is the same */}

We are handling the onChange event with formik.handleChange (this should also be the same in every input), and to control the state we use dot notation, formik.values.fieldname, like formik.values.name and formik.values.email. You can add a console.log statement to check whether the state changes as you type in the fields:

    const formik = useFormik({
      initialValues,
      onSubmit,
    });

    console.log(formik.values); // here

Final code:

    import React from "react";
    import { useFormik } from "formik";

    const initialValues = {
      name: "Shubham",
      email: "",
      company: "",
    };

    const onSubmit = (values) => {
      console.log("Form data", values);
    };

    function DemoForm() {
      const formik = useFormik({
        initialValues,
        onSubmit,
      });

      console.log(formik.values);

      return (
        <div className="grid-container">
          <form className="form-container" onSubmit={formik.handleSubmit}>
            <div className="form-control">
              <label htmlFor="name">Name</label>
              <div className="relative">
                <input type="text" id="name" name="name" className="form-input"
                       value={formik.values.name} onChange={formik.handleChange} />
              </div>
            </div>
            <div className="form-control">
              <label htmlFor="email">E-mail</label>
              <div className="relative">
                <input type="email" id="email" name="email" className="form-input"
                       value={formik.values.email} onChange={formik.handleChange} />
              </div>
            </div>
            <div className="form-control">
              <label htmlFor="company">Company</label>
              <div className="relative">
                <input type="text" id="company" name="company" className="form-input"
                       value={formik.values.company} onChange={formik.handleChange} />
              </div>
            </div>
            <button type="submit" className="form-submit">Submit</button>
          </form>
        </div>
      );
    }

    export default DemoForm;

That's it for this part; in the next part we will validate errors and show them in the UI. THANK YOU FOR CHECKING THIS POST. You can contact me on Instagram, LinkedIn or by email (shubhmtiwri gmail com). You can help me with a donation at the link in the original post. Thank you! Also check out my other posts. 2023-08-28 11:05:14
Apple AppleInsider - Frontpage News Foxconn founder says Apple business means China can't risk threatening him https://appleinsider.com/articles/23/08/28/foxconn-founder-says-apple-business-means-china-cant-risk-threatening-him?utm_medium=rss Foxconn founder says Apple business means China can't risk threatening him. Billionaire Foxconn founder Terry Gou is running for president of Taiwan and says he will not bow to China's threats, as any political pressure would disrupt sales to Apple, Tesla and others. Gou previously announced a run for Taiwanese president in 2019 and, as part of that, said he would step down from running Foxconn. 2023-08-28 11:16:03
海外TECH Engadget The Morning After: 'GTA VI' hacker leaked game footage with a Fire TV Stick https://www.engadget.com/the-morning-after-gta-vi-hacker-leaked-game-footage-with-a-fire-tv-stick-111524232.html?src=rss The Morning After: 'GTA VI' hacker leaked game footage with a Fire TV Stick. All you need to leak footage from a highly anticipated multimillion-dollar game is an Amazon Fire TV Stick and a cheap UK hotel. That massive Grand Theft Auto VI leak came from Arion Kurtaj, a member of the hacking group Lapsus$, and he managed to do so while already on bail for allegedly hacking NVIDIA. The teenager infiltrated GTA creator Rockstar Games, even announcing himself as an "attacker" in the company's Slack channel. While on bail he was not allowed internet access, but he circumvented that with a Fire TV Stick, as well as a newly purchased smartphone and keyboard, from a hotel just outside Oxford, UK. Further details of the attack became public following a seven-week trial in which he was found guilty of hacking Rockstar, Revolut and Uber. Another teenager was also convicted but, unlike Kurtaj, is still out on bail. Lapsus$ comprises mostly teenagers from Brazil and the UK; Kurtaj and the other teenager are two of seven members arrested in the UK. Between 2021 and 2022, Lapsus$ also allegedly hacked Samsung, T-Mobile and Microsoft. The group's motives seem to vary from attack to attack but appear to be a mix of financial gain through blackmail and sheer amusement. It's also unclear how much Lapsus$ has made from its cybercrimes; no companies have publicly admitted to paying the hackers. (Mat Smith)

You can get these reports delivered daily, direct to your inbox. Subscribe right here.

The biggest stories you might have missed:
- Is War Games Homeworld 3's secret weapon?
- The best cheap phones for 2023
- Hitting the Books: Why AI needs regulation and how we can do it
- The best password managers for 2023
- Engadget Podcast: Is Sony's PlayStation Portal a huge mistake?

Another PlayStation handheld? This week Sony announced the PlayStation Portal, a handheld that can only stream games from your PS5. In this episode, Devindra and producer Ben Ellman try to figure out what the heck Sony is doing. Is the Portal something gamers actually want, or did Sony completely miss an opportunity to build a better portable? Also, we discuss why we're excited for Armored Core VI. Listen here.

Dune: Part Two delayed until March following writer strikes. It'll likely be one of many movie launches pushed back this year. The release of Dune: Part Two has been pushed back to March 15th amid ongoing writer and actor strikes. The film was originally scheduled for November 3rd, but Warner Bros. and producer Legendary Entertainment agreed to delay it over four months, likely because the film wouldn't meet its full box-office potential without publicity and support from the star-studded cast. Along with Part Two, Godzilla x Kong: The New Empire and Lord of the Rings: The War of the Rohirrim have been pushed back, to April and December respectively, largely to accommodate Dune: Part Two.

The Solar Orbiter spacecraft may have discovered what powers solar winds. The spacecraft has imaged picoflare jets for the first time. You've probably heard of solar winds, but the origin of these streams of charged particles remains a mystery even decades after their discovery. The images captured last year by the Extreme Ultraviolet Imager (EUI) instrument aboard ESA's and NASA's Solar Orbiter, however, may have finally given us the knowledge to explain what powers these winds. In a paper published in Science, a team of researchers described a large number of jets coming out of a dark region of the sun. They're called picoflare jets because they contain around one trillionth of the energy the largest solar flares can generate. These picoflare jets reach speeds of around ... kilometers per second, lasting between ... and ... seconds. The researchers believe they have the power to emit enough high-temperature plasma to be a substantial source of our system's solar winds.

This article originally appeared on Engadget. 2023-08-28 11:15:24
医療系 医療介護 CBnews 福祉用具 安全な利用促進などへの対応方針案を了承-ヒヤリ・ハット情報の共有など 厚労省検討会 https://www.cbnews.jp/news/entry/20230828165110 判断基準 2023-08-28 20:25:00
ニュース BBC News - Home Air traffic control: Airlines warn of flight delays over technical fault https://www.bbc.co.uk/news/uk-66637156?at_medium=RSS&at_campaign=KARANGA busiest 2023-08-28 11:47:08
ニュース BBC News - Home Covid in Scotland: Families demand apology over care home ban https://www.bbc.co.uk/news/uk-scotland-66617352?at_medium=RSS&at_campaign=KARANGA pandemic 2023-08-28 11:35:35
ニュース BBC News - Home Luis Rubiales: Spanish FA president's mother on hunger strike over kiss row https://www.bbc.co.uk/sport/football/66637880?at_medium=RSS&at_campaign=KARANGA Luis Rubiales: Spanish FA president's mother on hunger strike over kiss row. The mother of Spanish football federation president Luis Rubiales goes on a hunger strike because of the inhuman hunt against her son. 2023-08-28 11:20:43
IT 週刊アスキー 『サンバDEアミーゴ:パーティーセントラル』でピーナッツくんのコスチュームをダウンロードコンテンツで配信! https://weekly.ascii.jp/elem/000/004/152/4152566/ nintendo 2023-08-28 20:05:00
