Posted: 2023-08-31 06:22:04  RSS feed roundup for 2023-08-31 06:00 (27 items)

Category / Site / Article title, trending keyword / Link URL / Frequent words, summary, search volume / Date registered
AWS AWS Compute Blog Introducing Intra-VPC Communication Across Multiple Outposts with Direct VPC Routing https://aws.amazon.com/blogs/compute/introducing-intra-vpc-communication-across-multiple-outposts-with-direct-vpc-routing/ Introducing Intra-VPC Communication Across Multiple Outposts with Direct VPC Routing. This blog post is written by Jared Thompson, Specialist Solutions Architect, Hybrid Edge. Today we announced AWS Outposts rack support for intra-VPC communication across multiple Outposts. You can now add routes in your Outposts rack subnet route table to forward traffic between subnets within the same VPC, spanning across multiple Outposts, using the Outpost local … 2023-08-30 20:00:35
AWS AWS Database Blog Improve performance of real-time analytics and mixed workloads using the Database In-Memory option on Amazon RDS for Oracle https://aws.amazon.com/blogs/database/improve-performance-of-real-time-analytics-and-mixed-workloads-using-the-database-in-memory-option-on-amazon-rds-for-oracle/ Improve performance of real-time analytics and mixed workloads using the Database In-Memory option on Amazon RDS for Oracle. In this post, I demonstrate how to improve the performance of real-time analytics and mixed workloads without impacting Online Transaction Processing (OLTP), using the Oracle Database In-Memory option for workloads running on Amazon Relational Database Service (Amazon RDS) for Oracle. The demand for real-time analytics requires analytic queries to be run in real time concurrently … 2023-08-30 20:53:32
AWS AWS Desktop and Application Streaming Blog Infor accelerates time to market by converting application to SaaS with Amazon AppStream 2.0 https://aws.amazon.com/blogs/desktop-and-application-streaming/infor-accelerates-time-to-market-by-converting-application-to-saas-with-amazon-appstream-2-0/ Infor accelerates time to market by converting application to SaaS with Amazon AppStream 2.0. Infor, founded in … and headquartered in New York City, is a global leader in ERP and industry-specific cloud software products. With over … customers in … countries across the globe, Infor uses Amazon AppStream 2.0 to cost-effectively deliver a Software as a Service (SaaS) version of their construction payroll application to customers. Opportunity: Meet customer … 2023-08-30 20:37:26
AWS AWS Machine Learning Blog Deploy self-service question answering with the QnABot on AWS solution powered by Amazon Lex with Amazon Kendra and large language models https://aws.amazon.com/blogs/machine-learning/deploy-self-service-question-answering-with-the-qnabot-on-aws-solution-powered-by-amazon-lex-with-amazon-kendra-and-large-language-models/ Deploy self-service question answering with the QnABot on AWS solution, powered by Amazon Lex with Amazon Kendra and large language models. Powered by Amazon Lex, the QnABot on AWS solution is an open-source, multi-channel, multi-language conversational chatbot. QnABot allows you to quickly deploy self-service conversational AI into your contact center, websites and social media channels, reducing costs, shortening hold times and improving customer experience and brand sentiment. In this post, we introduce the new generative AI features for QnABot and walk through a tutorial to create, deploy and customize QnABot to use these features. We also discuss some relevant use cases. 2023-08-30 20:07:52
海外TECH MakeUseOf 9 Ways to Customize Your YouTube Channel for Better Engagement https://www.makeuseof.com/customize-youtube-channel-better-engagement/ engagement 2023-08-30 20:00:26
海外TECH DEV Community Every Project Deserves its CI/CD pipeline, no matter how small https://dev.to/aws-builders/every-project-deserves-its-cicd-pipeline-no-matter-how-small-19j9 Every Project Deserves its CI/CD pipeline, no matter how small

TL;DR: In today's tech industry, setting up a CI/CD pipeline is quite easy. Creating a CI/CD pipeline even for a simple side project is a great way to learn many things. Today we will be working on one of my side projects, using Portainer, GitLab and Docker for the setup.

My sample project: As the founder of Apoti Development Association (A.D.A.), an NGO, I like organizing technical events in the Buea area (SW Region of Cameroon, Africa). I was frequently asked if there was a way to know about all the upcoming events: the meetups, the JUGs, the ones organized by the local associations, etc. After taking some time to look into it, I realized there was no single place which listed them all, so I came up with a simple web page which tries to keep an up-to-date list of all the events. The project is available on GitLab.

Disclaimer: Even though this is a simple project, its complexity is not important here. The components of the CI/CD pipeline we will detail can be used in almost the same way for more complicated projects; they are also a nice fit for microservices.

A look at the code: To keep things as simple as possible, we have an events.json file to which every new event is added. A snippet of it:

{
  "events": [
    {
      "title": "Let's Serve Day",
      "desc": "Hi everyone! We're back with …-minute practitioner-led sessions and live Q&A on Slack. Our tracks include CI/CD, Cloud Native Infrastructure, Cultural Transformations, DevSecOps and Site Reliability Engineering. … hours, … speakers. Free, online.",
      "date": "October …, online event",
      "ts": "…T…",
      "link": "…",
      "sponsors": [ { "name": "all day devops" } ]
    },
    {
      "title": "Creation of a Business Blockchain lab & introduction to smart contracts",
      "desc": "Come with your laptop! We invite you to join us to create the first prototype of a Business Blockchain Lab and get an introduction to smart contracts.",
      "ts": "…T…",
      "date": "October … at … pm, at CEEI",
      "link": "…",
      "sponsors": [ { "name": "ibm" } ]
    },
    …
  ]
}

Our mustache template is applied to this file to generate the final web assets.

Docker multi-stage build: Once the web assets have been generated, they are copied into an nginx image, the image that is deployed on our target machine. Thanks to Docker's multi-stage build, the build is done in two parts: creation of the assets, then generation of the final image containing those assets. Let's look at the Dockerfile used for the build:

# Generate the assets
FROM node:alpine AS build
COPY . /build
WORKDIR /build
RUN npm i
RUN node clean.js
RUN node_modules/mustache/bin/mustache events.json index.mustache > index.html

# Build the final Docker image used to serve them
FROM nginx
COPY --from=build /build/*.html /usr/share/nginx/html
COPY events.json /usr/share/nginx/html
COPY css /usr/share/nginx/html/css
COPY js /usr/share/nginx/html/js
COPY img /usr/share/nginx/html/img

Local testing: Before we proceed, we need to test the generation of the site. Just clone the repository and run the test script, which creates an image and runs a container out of it.

# First, clone the repo
git clone git@gitlab.com:lucj/ada-events.git
# Next, cd into the repo
cd sophia-events
# Now run the test script
./test.sh

The output shows each Dockerfile step running (the npm install, the clean.js and mustache steps, then the COPY steps into the nginx image) and ends with:

Successfully built …
Successfully tagged registry.gitlab.com/ada/ada-events:latest
=> web site available on http://localhost:…

We can now access the web page using the URL provided at the end of the output.
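The test.sh script itself is not shown in the excerpt; a minimal sketch of what such a script could do, reusing the image tag seen in the build output above and picking an arbitrary container name and local port (both are illustrative choices, not the project's actual values):

#!/bin/sh
# Sketch of a local test script: build the image, then serve it on a local port
set -e
IMAGE=registry.gitlab.com/ada/ada-events:latest
docker image build -t "$IMAGE" .
# remove any leftover container from a previous run, then start a fresh one
docker container rm -f ada-events-test 2>/dev/null || true
docker container run -d --name ada-events-test -p 8000:80 "$IMAGE"
echo "=> web site available on http://localhost:8000"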
Our target environment, provisioning a virtual machine on a cloud provider: As you have probably noticed, this web site is not critical (barely a few dozen visits a day), so it only runs on a single virtual machine. This one was created with Docker Machine on AWS.

Using Docker swarm: We configured the VM (virtual machine) above so that it runs the Docker daemon in swarm mode, which allows us to use the stack, service, secret and config primitives and the great, very easy to use orchestration abilities of Docker swarm.

The application running as a Docker stack: The file below (ada.yml) defines the service which runs our nginx web server containing the web assets:

version: "3"
services:
  www:
    image: registry.gitlab.com/lucj/sophia-events
    networks:
      - proxy
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        parallelism: 1
        delay: …s
      restart_policy:
        condition: on-failure
networks:
  proxy:
    external: true

Let's break this down:
- The Docker image is in our private registry on gitlab.com.
- The service is in replicated mode with 2 replicas: 2 tasks (containers) of the service are always running at the same time.
- A virtual IP address (VIP) is associated with the service by Docker Swarm, so each request targeted at the service is easily load balanced between the two replicas.
- Every time an update is done to the service (like deploying a new version of the website), one of the replicas is updated first and the 2nd one a few seconds after. This makes sure the website stays available even during the update process.
- The service is attached to the external proxy network, so our TLS termination service (which runs in another service deployed on the swarm, but outside of this project) can always send requests to the www service.

The stack is deployed with the command:

docker stack deploy -c ada.yml ada_events

Portainer, one tool to manage them all: Portainer is a really great web UI which helps us manage all our Docker hosts and Docker Swarm clusters very easily. Let's take a look at its interface, where it lists all the stacks available in the swarm. Our current setup has 3 stacks:
- portainer itself,
- ada-events (or in this case sophia-events), which contains the service that runs our web site,
- tls, our TLS termination service.
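Not covered in the article, but useful once the stack is up: the Swarm CLI can confirm the replica count and show a rolling update in progress. A small sketch, assuming the stack name ada_events used in the deploy command above:

# list the services of the stack with their replica counts (the www service should report 2/2)
docker stack services ada_events
# show the individual tasks; during an update they are replaced one at a time
docker service ps ada_events_www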
If we list the details of the www service in the ada-events stack, we can see that the Service webhook is activated. This is a new feature available in recent versions of Portainer; it allows us to define an HTTP POST endpoint which can be called to trigger an update of the service. As we will see later on, our GitLab runner is in charge of calling this webhook.

Note: as the screenshot above shows, I use localhost to access Portainer. Since I don't want to expose the Portainer instance to the external world, access is done through an ssh tunnel, which we open with the command below:

ssh -i ~/.docker/machine/machines/labs/id_rsa -NL <local_port>:localhost:<portainer_port> $USER@$HOST

Once we have done this, all requests targeted at our local machine on the local port (localhost:<local_port>) are forwarded over ssh to the port Portainer listens on on our VM. That port is not opened to the outside world; we used a security group in our AWS config to block it.

NB: in the command above, the ssh key used to connect to the VM is the one generated by Docker Machine during the creation of the VM.

GitLab runner: A GitLab runner is a continuous integration tool that helps automate the process of testing and deploying applications. It works with GitLab CI to run the jobs defined in the project's .gitlab-ci.yml file. In our project, the GitLab runner is in charge of executing all the actions we defined in .gitlab-ci.yml. On GitLab you can choose between using your own runners and using the shared runners available; in this project we used a VM on AWS as our runner.

First we register the runner, providing a couple of options:

CONFIG_FOLDER=/tmp/gitlab-runner-config
docker run --rm -t -i \
  -v $CONFIG_FOLDER:/etc/gitlab-runner \
  gitlab/gitlab-runner register \
    --non-interactive \
    --executor docker \
    --docker-image docker:stable \
    --url https://gitlab.com/ \
    --registration-token $PROJECT_TOKEN \
    --description "AWS Docker Runner" \
    --tag-list docker \
    --run-untagged \
    --locked=false \
    --docker-privileged

As you can see above, PROJECT_TOKEN is one of the needed options; we get it from the project's page on GitLab used to register new runners.

Once we have registered our GitLab runner, we can start it:

CONFIG_FOLDER=/tmp/gitlab-runner-config
docker run -d --name gitlab-runner --restart always \
  -v $CONFIG_FOLDER:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

Once our VM has been set up as a GitLab runner, it is listed in the CI/CD page under the settings of our project.
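A quick sanity check before pushing anything (not part of the article): confirm the runner container is up and still registered, using the container name from the docker run command above:

# is the runner container running?
docker ps --filter name=gitlab-runner
# ask the runner to verify its registration with GitLab
docker exec gitlab-runner gitlab-runner verify
# follow the runner logs while a pipeline executes
docker logs -f gitlab-runner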
Now that we have a runner, it can receive work to do every time we commit and push to the git repo: it will sequentially run the different stages defined in our .gitlab-ci.yml file, the file that configures our GitLab CI/CD pipeline:

variables:
  CONTAINER_IMAGE: registry.gitlab.com/$CI_PROJECT_PATH
  DOCKER_HOST: tcp://docker:2375

stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: node:alpine
  script:
    - npm i
    - npm test

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker image build -t $CONTAINER_IMAGE:$CI_BUILD_REF -t $CONTAINER_IMAGE:latest .
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker image push $CONTAINER_IMAGE:latest
    - docker image push $CONTAINER_IMAGE:$CI_BUILD_REF
  only:
    - master

deploy:
  stage: deploy
  image: alpine
  script:
    - apk add --update curl
    - curl -XPOST $WWW_WEBHOOK
  only:
    - master

Let's break down the stages:
- The test stage runs some pre-checks, ensuring that the events.json file is well formed and that no image is missing.
- The build stage uses Docker to build the image and then pushes it to our GitLab registry.
- The deploy stage triggers the update of our service via a webhook sent to our Portainer app.

Note that the WWW_WEBHOOK variable is defined in the CI/CD settings of our project page on gitlab.com.

Some notes: Our GitLab runner runs inside a container on our Docker swarm. As mentioned before, we could have used a shared runner instead (publicly available runners which share their time between the jobs of different projects hosted on GitLab), but since the runner must have access to our Portainer endpoint to send the webhook, and because I don't want our Portainer app to be publicly accessible, I preferred to have it run inside the cluster. It is also more secure this way. In addition, because our runner is in a Docker container, it can send the webhook to the IP address of the Docker bridge network and reach Portainer through the port Portainer exposes on the host. The webhook therefore has the format http://<bridge-network-IP>:<Portainer-port>/api/webhooks/<webhook-id>.

The deployment: Updating the app to a new version follows the workflow below. It starts with a developer pushing some changes to our GitLab repo; the changes mainly involve adding or updating one or more events in the events.json file, and sometimes adding a sponsor's logo. The GitLab runner then performs all the actions defined in the .gitlab-ci.yml file and calls the webhook defined in Portainer. Upon receiving the webhook, Portainer deploys the newest version of the www service by calling the Docker Swarm API; Portainer has access to that API because the /var/run/docker.sock socket is bind-mounted when it is started. Our users can then access the newest version of the events website.

Let's test: Let's test the pipeline by making a couple of changes in the code and then committing and pushing those changes:

git commit -m "Fix image"
git push origin master

As the screenshot below shows, those changes triggered our pipeline. Breaking down the steps: on the Portainer side, the webhook was received and the service update was performed. Although we cannot see it clearly here, one replica was updated first, which still left the website accessible through the other replica; the other replica was updated a couple of seconds later.

Summary: Although this was a tiny project, setting up a CI/CD pipeline for it was a good exercise. First, it helped me get more familiar with GitLab, which had been on my to-learn list for quite some time; having done this project, I can say that it is an excellent professional product. This project was also a great opportunity to play with the long-awaited webhook feature available in updated versions of Portainer. Lastly, choosing Docker Swarm for this project was a real no-brainer: so cool and easy to use. I hope you found this project as interesting as I did. No matter how small your project is, it is a great idea to build it using CI/CD. What projects are you working on, and how has this article inspired you? Please comment below. 2023-08-30 20:30:00
海外TECH DEV Community #DEVDiscuss: Marketing for Developers https://dev.to/devteam/devdiscuss-5504 #DEVDiscuss: Marketing for Developers. Time for #DEVDiscuss, right here on DEV. Inspired by inovak's Top post "Marketing for Developers: The Unconventional Guide" (Ivan Novak, Aug; startup, webdev, coding, tutorial), tonight's topic is Marketing for Developers. For developers, marketing is a blend of technical expertise and communication skills aimed at showcasing a product, tool or service. Unlike traditional marketing, where the focus might be broader and less specialized, marketing for developers requires an understanding of the specific needs, languages and problems that developers face. Questions: What role does social media play in marketing for developers? Can you share examples of developers who have excelled at marketing? How can developers work on building their personal brand alongside their technical skills? Any triumphs, fails or other stories you'd like to share on this topic? 2023-08-30 20:13:28
海外TECH DEV Community Combining Delta Lake With MinIO for Multi-Cloud Data Lakes https://dev.to/sashawodtke/combining-delta-lake-with-minio-for-multi-cloud-data-lakes-cki Combining Delta Lake With MinIO for Multi-Cloud Data Lakes

By Matt Sarrel, Director of Technical Marketing, MinIO

Delta Lake is an open source storage framework that is used to build data lakes on top of object storage in a lakehouse architecture. Delta Lake supports ACID transactions, scalable metadata handling, and unified streaming and batch data processing. It is commonly used to provide reliability, consistency and scalability to Apache Spark applications. Delta Lake runs on top of existing data lake storage, such as MinIO, and is compatible with the Apache Spark APIs.

The original Delta Lake paper (Delta Lake: High-Performance ACID Table Storage over Cloud Object Stores) describes how it was built for cloud object storage. When Vertica tested the use of Delta Lake for external tables, they relied on MinIO. HPE Ezmeral Runtime Enterprise customers run Delta Lake on MinIO. MinIO supports Delta Lake's requirements for durability because MinIO follows strict read-after-write and list-after-write consistency models for all i/o operations, both in distributed and standalone modes, and is widely acknowledged to run Delta Lake workloads.

Many organizations rely on cloud-native object stores such as MinIO and AWS S3 to house large structured, semi-structured and unstructured datasets. Each table is stored as a set of objects that are Parquet or ORC and arranged into partitions. Queries over large files are basically scans that execute quickly.

Without Delta Lake, more complex Spark workloads, particularly those that modify, add or remove data, face challenges to performance and correctness under heavy multi-user, multi-app loads. Multi-object updates are not atomic and queries are not isolated, meaning that if a delete is conducted in one query, other concurrent queries will get partial results as the original query updates each object. Rolling back writes is tricky, and a crash in the middle of an update can result in a corrupted table. The real performance killer is metadata: for massive tables with millions of objects that are Parquet files holding billions or trillions of records, metadata operations can bring the applications built on a data lake to a dead stop.

Delta Lake was designed to combine the transactional reliability of databases with the horizontal scalability of data lakes. It was built to support OLAP-style workloads with an ACID table storage layer over cloud-native object stores such as MinIO. As described in the paper, "the core idea of Delta Lake is simple: we maintain information about which objects are part of a Delta table in an ACID manner, using a write-ahead log that is itself stored in the cloud object store." Objects are encoded in Parquet and can be read by any engine that understands Parquet. Multiple objects can be updated at once "in a serialized manner, while still achieving high parallel read and write performance." The log contains metadata such as min/max statistics for each file, "enabling order of magnitude faster metadata searches" than searching files in the object store directly.

Delta Lake provides the following ACID guarantees. Delta Lake ensures that all changes to data are written to storage and committed for durability while being available to users and apps atomically; there are no partial or corrupted files sitting in your data lake anymore.
Scalable data and metadata handling: All reads and writes using Spark or another distributed processing engine can scale to petabytes. Unlike most other storage formats and query engines, Delta Lake leverages Spark to scale out all the metadata processing and can efficiently handle metadata of billions of files for petabyte-scale tables.

Audit history and time travel: The Delta Lake transaction log records details about every modification made to data, including a full audit trail of the changes. Data snapshots enable developers to access and revert to earlier versions of data for audits, rollbacks or any other reason.

Schema enforcement and schema evolution: Delta Lake automatically prevents the insertion of data that does not match the existing table schema. However, the table schema can be explicitly and safely evolved to accommodate changes to data structure and format.

Support for deletes, updates and merges: Most distributed processing frameworks do not support atomic data modification operations on data lakes. In contrast, Delta Lake supports merge, update and delete operations for complex use cases such as change data capture, slowly changing dimension operations and streaming upserts.

Streaming and batch unification: A Delta Lake table has the ability to work in batch mode and as a streaming source and sink. Delta Lake works across a wide variety of latencies, including streaming data ingest and batch historic backfill, to provide real-time interactive queries. Streaming jobs write small objects into the table at low latency, later transactionally combining them into larger objects for better performance.

Caching: Relying on object storage means that the objects in a Delta table and its log are immutable and can be safely cached locally, wherever across the multicloud "locally" is.

Lakehouse architecture, and Delta Lake in particular, brings key new functionality to data lakes built on object storage. Delta Lake works with a large and growing list of applications and compute engines such as Spark, Starburst, Trino, Flink and Hive, and also includes APIs for Scala, Java, Rust, Ruby and Python. Built for the cloud and Kubernetes-native, MinIO enables performant, resilient and secure data lake applications everywhere: at the edge, in the data center and in the public/private cloud.

Delta Lake Files: A Delta table is a collection of files that are stored together in a directory (for a file system) or bucket (for MinIO and other object storage). To read and write from object storage, Delta Lake uses the scheme of the path to dynamically identify the storage system and use the corresponding LogStore implementation to provide ACID guarantees. For MinIO you will use s3a (see Storage configuration in the Delta Lake documentation). It is critical that the underlying storage system used for Delta Lake is capable of concurrent atomic reads and writes, as MinIO is.

Creating Delta tables is really writing files to a directory or bucket. Delta tables are created and opened by writing and reading a Spark DataFrame and specifying the delta format and path. In Scala, for example:

// Create a Delta table on MinIO
spark.range(…).write.format("delta").save("s3a://<your-minio-bucket>/<path-to-delta-table>")

// Read a Delta table on S3
spark.read.format("delta").load("s3a://<your-minio-bucket>/<path-to-delta-table>").show()

Delta Lake relies on a bucket per table, and buckets are commonly modeled after file system paths. A Delta Lake table is a bucket that contains data, metadata and a transaction log. The table is stored in Parquet format, and tables can be partitioned into multiple files.
MinIO supports S3 LIST to efficiently list objects using file-system-style paths. MinIO also supports byte-range requests in order to more efficiently read a subset of a large Parquet file.

MinIO makes an excellent home for Delta Lake tables due to industry-leading performance. MinIO's combination of scalability and high performance puts every workload, no matter how demanding, within reach. MinIO is capable of tremendous performance: a recent benchmark achieved … GiB/s on GETs and … GiB/s on PUTs with just … nodes of off-the-shelf NVMe SSDs. MinIO more than delivers the performance needed to power the most demanding workloads on Delta Lake.

It's likely that Delta Lake buckets will contain many Parquet and JSON files, which aligns really well with the small-file optimizations we've built into MinIO for use as a data lake: small objects are saved inline with metadata, reducing the IOPS needed both to read and write small files like Delta Lake transactions.

Most enterprises require multi-cloud functionality for Delta Lake. MinIO includes active-active replication to synchronize data between locations: on-premise, in the public or private cloud, and at the edge. Active-active replication enables enterprises to architect for multi-geo resiliency and fast hot-hot failover. Each bucket, or Delta Lake table, can be configured for replication separately, for the greatest security and availability.

ACID Transactions with Delta Lake: Adding ACID (Atomicity, Consistency, Isolation and Durability) transactions to data lakes is a pretty big deal, because now organizations have greater control over, and therefore greater trust in, the mass of data stored in the data lake. Previously, enterprises that relied on Spark to work with data lakes lacked atomic APIs and ACID transactions; Delta Lake now makes this possible. Data can be updated after it is captured and written, and with support for ACID, data won't be lost if the application fails during the operation. Delta Lake accomplishes this by acting as an intermediary between Spark and MinIO for reading and writing data.

Central to Delta Lake is the DeltaLog, an ordered record of transactions conducted by users and applications. Every operation (like an UPDATE or an INSERT) performed on a Delta Lake table by a user is an atomic commit composed of multiple actions or jobs. When every action completes successfully, the commit is recorded as an entry in the DeltaLog. If any job fails, the commit is not recorded in the DeltaLog. Without atomicity, data could be corrupted in the event of a hardware or software failure that resulted in data being only partially written.

Delta Lake breaks operations into one or more of the following actions:
- Add file: adds a file
- Remove file: removes a file
- Update metadata: records changes to the table's name, schema or partitioning
- Set transaction: records that a streaming job has committed data
- Commit info: information about the commit, including the operation, user and time
- Change protocol: updates the DeltaLog to the newest software protocol

It's not as complicated as it appears. For example, if a user adds a new column to a table and adds data to it, Delta Lake breaks that down into its component actions (update metadata to add the column, and add file for each new file added) and adds them to the DeltaLog when they complete.

Delta Lake relies on optimistic concurrency control to allow multiple readers and writers of a given table to work on the table at the same time. Optimistic concurrency control assumes that changes to a table made by different users can complete without conflicting.
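These commits can be inspected directly in the bucket, since each one is a JSON file under the table's _delta_log prefix. A sketch using the MinIO Client (not from the article; the minio alias and the delta-lake/demo path reuse the names from the tutorial further down):

# list the commit files of the table's transaction log
mc ls minio/delta-lake/demo/_delta_log/
# print the first commit (version 0; on disk the name is zero-padded to 20 digits);
# each line is one action, such as commitInfo, protocol, metaData or add
mc cat minio/delta-lake/demo/_delta_log/00000000000000000000.json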
As the volume of data grows, so does the likelihood that users will be working on different tables. Delta Lake serializes commits and follows a rule of mutual exclusion should two or more commits take place at the same time. In doing so, Delta Lake achieves the isolation required for ACID, and the table will look the same after multiple concurrent writes as it would if those writes had occurred serially and separately from each other.

When a user runs a new query on an open table that has been modified since the last time it was read, Spark consults the DeltaLog to determine if new transactions have posted to the table and updates the user's table with those new changes. This ensures that a user's version of a table is synchronized with the master table in Delta Lake up to the most recent operation, and that users cannot make conflicting updates to a table. The DeltaLog, optimistic concurrency control and schema enforcement, combined with the ability to evolve the schema, ensure both atomicity and consistency.

Digging into the DeltaLog: When a user creates a Delta Lake table, that table's transaction log is automatically created in the _delta_log subdirectory. As the user modifies the table, each commit is written as a JSON file into the _delta_log subdirectory in ascending order, i.e. 000000.json, 000001.json, 000002.json and so on. Let's say we add new records to our table from two data files, say 1.parquet and 2.parquet; that transaction is added to the DeltaLog and saved as the file 000000.json. Later we remove those files and add a new file, 3.parquet, instead; those actions are recorded as the next file, 000001.json. Even though the first two Parquet files were added and then removed, the transaction log contains both operations. Delta Lake retains all atomic commits to enable a complete audit history and time-travel features that show users how a table looked at a specific point in time. Furthermore, the files are not quickly removed from storage until a VACUUM job is run. MinIO versioning provides another layer of assurance against accidental deletion.
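The versioning mentioned above is a per-bucket setting; a minimal sketch with the MinIO Client, reusing the illustrative minio alias and delta-lake bucket from the tutorial below:

# keep previous versions of objects in the Delta Lake bucket
mc version enable minio/delta-lake
# check the bucket's versioning status
mc version info minio/delta-lake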
Durability with Delta Lake and MinIO: Delta Lake achieves durability by storing tables and transaction logs on persistent media. Files are never overwritten and must be actively removed. All data changes written to storage are available to users atomically as they occur; partial and corrupt files become a thing of the past. Delta Lake does not hold tables and logs in RAM for very long and writes them directly to MinIO. As long as the commit data was recorded in the DeltaLog and the JSON files were written to the bucket, data is durable in the event of a system or job crash.

MinIO guarantees durability, after a table and its components are written, through multiple mechanisms:
- Erasure coding splits data files into data and parity blocks and encodes them so that the primary data is recoverable even if part of the encoded data is not available. Horizontally scalable, distributed storage systems rely on erasure coding to provide data protection by saving encoded data across multiple drives and nodes. If a drive or node fails, or data becomes corrupted, the original data can be reconstructed from the blocks saved on other drives and nodes.
- Bit rot protection captures and heals corrupted objects on the fly to remove this silent threat to data durability.
- Bucket and object immutability protects data saved to MinIO from deletion or modification, using a combination of object locking, retention and other governance mechanisms. Objects written to MinIO are never overwritten.
- Bucket and object versioning further protects objects. MinIO maintains a record of every version of every object, even if it is deleted, enabling you to step back in time much like Delta Lake's time travel. Versioning is a key component of data lifecycle management that allows administrators to move buckets between tiers, for example to use NVMe for performance-intensive workloads, and to set an expiration date on versions so they are purged from the system to improve storage efficiency.

MinIO secures Delta Lake tables using encryption, and regulates access to them using a combination of IAM and policy-based access controls. MinIO encrypts data in transit with TLS and data on drives with granular object-level encryption, using modern, industry-standard algorithms such as AES-256-GCM, ChaCha20-Poly1305 and AES-CBC. MinIO integrates with external identity providers such as Active Directory, LDAP, Okta and Keycloak for IAM; users and groups are then subject to AWS IAM-compatible PBAC as they attempt to access Delta Lake tables.

Delta Lake and MinIO Tutorial: This section explains how to quickly start reading and writing Delta tables on MinIO using single-cluster mode.

Prerequisites:
- Download and install Apache Spark.
- Download and install MinIO. Record the IP address, TCP port, access key and secret key.
- Download and install the MinIO Client (mc).
- The following jar files are required; you can copy the jars to any required location on the Spark machine, for example /home/spark:
  - hadoop-aws: Delta Lake needs the org.apache.hadoop.fs.s3a.S3AFileSystem class from the hadoop-aws package, which implements Hadoop's FileSystem API for S3. Make sure the version of this package matches the Hadoop version with which Spark was built.
  - aws-java-sdk

Set up Apache Spark with Delta Lake: Start the Spark shell (Scala or Python) with Delta Lake and run code snippets interactively. In Scala:

bin/spark-shell \
  --packages io.delta:delta-core_<scala-version>:<delta-version> \
  --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" \
  --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog"

Configure Delta Lake and AWS S3 on Apache Spark: Run the following command to launch a Spark shell with Delta Lake and S3 support for MinIO:

bin/spark-shell \
  --packages io.delta:delta-core_<scala-version>:<delta-version>,org.apache.hadoop:hadoop-aws:<hadoop-version> \
  --conf spark.hadoop.fs.s3a.access.key=<your-MinIO-access-key> \
  --conf spark.hadoop.fs.s3a.secret.key=<your-MinIO-secret-key> \
  --conf spark.hadoop.fs.s3a.endpoint=<your-MinIO-IP:port> \
  --conf "spark.databricks.delta.retentionDurationCheck.enabled=false" \
  --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" \
  --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog"

Create a bucket in MinIO: Use the MinIO Client to create a bucket for Delta Lake:

mc alias set minio http://<your-MinIO-IP:port> <your-MinIO-access-key> <your-MinIO-secret-key>
mc mb minio/delta-lake

Create a test Delta Lake table on MinIO: Try it out and create a simple Delta Lake table using Scala:

// Create a Delta table on MinIO
spark.range(…).write.format("delta").save("s3a://delta-lake/demo")

You will see output indicating that Spark wrote the table successfully. Open a browser to log into MinIO at http://<your-MinIO-IP:port> with your access key and secret key; you'll see the Delta Lake table in the bucket.
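The same check can be done from the command line with the MinIO Client (not part of the tutorial text), reusing the minio alias and the delta-lake bucket created above:

# list everything Spark wrote for the demo table:
# the Parquet data files plus the _delta_log commit files
mc ls --recursive minio/delta-lake/demo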
MinIO and Delta Lake for High-Performance ACID Transactions on Data Lakes: The combination of MinIO and Delta Lake enables enterprises to have a multi-cloud data lake that serves as a consolidated single source of truth. The ability to query and update Delta Lake tables provides enterprises with rich insights into their businesses and customers. Various groups access Delta Lake tables for their own analytics or machine learning initiatives, knowing that their work is secure and the data timely. To go deeper, download MinIO and see for yourself, or spin up a marketplace instance on any public cloud. Do you have questions? Ask away on Slack or via hello@min.io. 2023-08-30 20:02:23
Apple AppleInsider - Frontpage News Apple is eliminating the social media support roles from Twitter and others https://appleinsider.com/articles/23/08/30/apple-is-eliminating-the-social-media-support-roles-from-twitter-and-others?utm_medium=rss Apple is eliminating the social media support roles from Twitter and others. Apple is reportedly looking to cut back on providing human support on various social media outlets like YouTube and Twitter. (Image: the Apple Support app.) The official AppleSupport account was launched in … and is primarily used to provide tips for Apple products and address customers directly. The account earned an award from Twitter that same year thanks to its high level of engagement. 2023-08-30 20:14:48
海外TECH Engadget Baidu opens up its ERNIE generative AI to the public https://www.engadget.com/baidu-opens-up-its-ernie-generative-ai-to-the-public-200655940.html?src=rss Baidu opens up its ERNIE generative AI to the public. Another ChatGPT rival is out in the wild: Baidu has made ERNIE Bot, its generative AI product and large language model, generally available to the public through various app stores and its website. Alongside ERNIE (Enhanced Representation through Knowledge Integration), the company plans to release a string of AI apps it says will allow folks to fully experience the four core abilities of generative AI: understanding, generation, reasoning and memory. Opening up ERNIE Bot, which is focused on the Chinese market, to the public will enable Baidu to obtain much more human feedback, according to CEO Robin Li. The company notes that this will help it iterate on ERNIE Bot more quickly and improve the user experience. Baidu announced the chatbot back in March, demonstrating capabilities such as summarizing a sci-fi novel and offering suggestions on how to continue the story in an expanded universe. It can generate images and videos based on text inputs too. Earlier this month, Baidu said ERNIE Bot's training throughput had increased three-fold since March and that it's now capable of data analysis and visualization, generating results more quickly and handling image inputs. As of August …, Chinese companies need to obtain approval from authorities before they can release generative AI experiences to the public, and Baidu was one of the first to get the green light, according to Bloomberg. The report suggests officials see AI as a business and political imperative given the transformative nature of the technology. Beijing is said to want guardrails in place to keep a tight lid on content while still enabling Chinese companies to compete with overseas rivals. This article originally appeared on Engadget. 2023-08-30 20:06:55
ニュース BBC News - Home Pret a Manger fined over London worker stuck in freezer https://www.bbc.co.uk/news/uk-england-london-66663556?at_medium=RSS&at_campaign=KARANGA freezer 2023-08-30 20:33:56
ニュース BBC News - Home Blue supermoon: World gazes at rare lunar phenomenon https://www.bbc.co.uk/news/in-pictures-66662857?at_medium=RSS&at_campaign=KARANGA phenomenon 2023-08-30 20:29:51
ニュース BBC News - Home England v New Zealand: Hosts comfortably win first T20 by seven wickets https://www.bbc.co.uk/sport/cricket/66657233?at_medium=RSS&at_campaign=KARANGA England v New Zealand: Hosts comfortably win first T20 by seven wickets. England comprehensively beat New Zealand by seven wickets with a professional performance in the first T20 at Chester-le-Street. 2023-08-30 20:45:07
ニュース BBC News - Home Chelsea 2-1 AFC Wimbledon: Hosts progress in Carabao Cup https://www.bbc.co.uk/sport/football/66589697?at_medium=RSS&at_campaign=KARANGA carabao 2023-08-30 20:57:03
ビジネス ダイヤモンド・オンライン - 新着記事 ジャニー喜多川氏の「性嗜好異常」認定は、ジャニーズ事務所への“死刑宣告”だ - 情報戦の裏側 https://diamond.jp/articles/-/328422 辞任 2023-08-31 05:30:00
ビジネス ダイヤモンド・オンライン - 新着記事 「年内入試」に強い高校【首都圏・関西を除く全国編】地方校が推薦入試と好相性な理由とは? - 大学 地殻変動 https://diamond.jp/articles/-/327494 中高一貫校 2023-08-31 05:25:00
ビジネス ダイヤモンド・オンライン - 新着記事 東大・早慶…「ITベンダー就職者数」大公開!【トップ11大学・23年版】東大出身はNTTデータ24人、日本IBMが34人 - コンサル大解剖 https://diamond.jp/articles/-/328434 2023-08-31 05:20:00
ビジネス ダイヤモンド・オンライン - 新着記事 大阪万博危うし!ゼネコン業界団体トップが語るパビリオン「工期」と「コスト」の実情 - Diamond Premium News https://diamond.jp/articles/-/328425 大阪万博危うしゼネコン業界団体トップが語るパビリオン「工期」と「コスト」の実情DiamondPremiumNewsパビリオンの工事に向けた準備が混乱し、開催さえ危ぶまれる年の大阪・関西万国博覧会。 2023-08-31 05:15:00
ビジネス ダイヤモンド・オンライン - 新着記事 トヨタに反旗!盟友パナソニックがテスラと組み米国市場「EV電池争奪戦」で一歩リード - トヨタ 史上最強 https://diamond.jp/articles/-/328201 史上最強 2023-08-31 05:10:00
ビジネス ダイヤモンド・オンライン - 新着記事 円安と海外物価高の弊害を認識、家計が感じ始めた現預金以外を「持たざるリスク」 - 政策・マーケットラボ https://diamond.jp/articles/-/328433 外貨建て 2023-08-31 05:05:00
ビジネス 電通報 | 広告業界動向とマーケティングのコラム・ニュース Whyから始めない~「パーパス」再考~ https://dentsu-ho.com/articles/8665 行動 2023-08-31 06:00:00
ビジネス 電通報 | 広告業界動向とマーケティングのコラム・ニュース 「コテンラジオ」深井氏に聞く、日本人が“パフォーマンス”を上げるカギ https://dentsu-ho.com/articles/8659 存在意義 2023-08-31 06:00:00
ビジネス 東洋経済オンライン 「コロナ後も咳に悩む人」が見逃す"鼻の異変″ 「後遺症の原因」は自覚症状のないアノ疾患? | 医療・病院 | 東洋経済オンライン https://toyokeizai.net/articles/-/698012?utm_source=rss&utm_medium=http&utm_campaign=link_back 東洋経済オンライン 2023-08-31 05:55:00
ビジネス 東洋経済オンライン トヨタ新型アル/ヴェル乗って感じた明らかな差 スタイル以上に走りも違う!一押しモデルは? | 西村直人の乗り物見聞録 | 東洋経済オンライン https://toyokeizai.net/articles/-/697177?utm_source=rss&utm_medium=http&utm_campaign=link_back 東洋経済オンライン 2023-08-31 05:50:00
ビジネス 東洋経済オンライン 住宅ローン「残価設定型」がじわり広がる理由 銀行やハウスメーカーのビジネスも変わる? | 金融業界 | 東洋経済オンライン https://toyokeizai.net/articles/-/697849?utm_source=rss&utm_medium=http&utm_campaign=link_back 住宅ローン 2023-08-31 05:40:00
ビジネス 東洋経済オンライン この10年で「初任給がグンと伸びた」トップ50社 10万円近く増えて初任給30万円超の企業も | 就職四季報プラスワン | 東洋経済オンライン https://toyokeizai.net/articles/-/697778?utm_source=rss&utm_medium=http&utm_campaign=link_back 就職四季報 2023-08-31 05:30:00
ビジネス 東洋経済オンライン 「時価総額2兆円」エムスリーの子会社がステマ 「法人向けは対象外」10月開始ステマ規制の欠陥 | インターネット | 東洋経済オンライン https://toyokeizai.net/articles/-/697852?utm_source=rss&utm_medium=http&utm_campaign=link_back 広告出稿 2023-08-31 05:20:00
