TECH |
Engadget Japanese |
Twitter for iOS may soon add emoji reactions and a downvote system; possibly tied to reply sorting |
https://japanese.engadget.com/twitter-ios-reactions-downvote-091524741.html
|
twitter |
2021-11-28 09:15:24 |
IT |
ITmedia All Articles |
[ITmedia Business Online] Is reassigning workers because they are unvaccinated OK? Drawing the line on validity is difficult |
https://www.itmedia.co.jp/business/articles/2111/28/news052.html
|
itmedia |
2021-11-28 18:03:00 |
python |
New posts tagged Python - Qiita |
Streamfunction and velocity potential in meteorology, and how to compute them in Python |
https://qiita.com/wm-ytakano/items/92570f11a0e39eac2a54
|
Because of these properties, short-range forecasting often focuses on vertical vorticity and horizontal divergence to capture fine-scale disturbances, whereas seasonal forecasting more often uses the streamfunction and velocity potential to capture the larger-scale circulation. |
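A minimal Python sketch of the computation the post describes, using the windspharm package (an assumption; the post's own code may differ). The random fields stand in for real gridded winds:

    # Compute streamfunction/velocity potential (seasonal-scale view) and
    # vorticity/divergence (short-range view) from global u/v winds.
    import numpy as np
    from windspharm.standard import VectorWind

    nlat, nlon = 73, 144             # regular global grid, ordered north-to-south
    u = np.random.randn(nlat, nlon)  # placeholder zonal wind (m/s)
    v = np.random.randn(nlat, nlon)  # placeholder meridional wind (m/s)

    w = VectorWind(u, v)             # spherical-harmonic analysis of (u, v)
    sf, vp = w.sfvp()                # streamfunction and velocity potential
    vrt, div = w.vrtdiv()            # vertical vorticity and horizontal divergence
    print(sf.shape, vp.shape, vrt.shape, div.shape)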
2021-11-28 18:58:16 |
python |
New posts tagged Python - Qiita |
Running the PyQt5 tutorial (1) |
https://qiita.com/hoshianaaa/items/21b17088e8e6dc3481c3
|
Running the PyQt tutorial (1). Introduction: I tried running the PyQt tutorial by following the site below. |
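A minimal sketch of the kind of first program such a tutorial starts with (assumed, not taken from the post): create the application object, show an empty window, and enter the event loop:

    import sys
    from PyQt5.QtWidgets import QApplication, QWidget

    app = QApplication(sys.argv)   # one QApplication per program
    window = QWidget()             # a bare top-level window
    window.setWindowTitle("PyQt5 tutorial")
    window.resize(320, 240)
    window.show()
    sys.exit(app.exec_())          # start the event loop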
2021-11-28 18:35:09 |
python |
New posts tagged Python - Qiita |
[AtCoder Editorial] Conquer problems A, B, C, D and E of ABC229 with Python! |
https://qiita.com/u2dayo/items/faed7352ba3fbc4e6c8c
|
Problem D has parts in common with the knapsack problem 'Simple Knapsack' (an earlier ABC D problem). Problem D, 'Longest X' (problem page: D - Longest X). Input: a string S consisting of 'X' and '.', and K, the upper limit on the number of times the operation "replace a '.' with 'X'" may be performed. Observation: if we fix the r-th character of S as the right end, we can work out how far to the left the interval of consecutive X's can be extended with at most K rewrites. |
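The observation leads to the standard two-pointer solution; a hedged Python sketch (the function name is mine, not the article's):

    # Longest run of 'X' obtainable with at most K replacements of '.':
    # grow a window to the right, shrink from the left while it holds
    # more than K dots, and track the best width seen.
    def longest_x(S: str, K: int) -> int:
        best = left = dots = 0
        for right, ch in enumerate(S):
            if ch == '.':
                dots += 1
            while dots > K:          # too many rewrites needed: shrink
                if S[left] == '.':
                    dots -= 1
                left += 1
            best = max(best, right - left + 1)
        return best

    print(longest_x("XX...X.X", 2))  # -> 4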
2021-11-28 18:33:43 |
python |
New posts tagged Python - Qiita |
conda, pip and TensorFlow [Python] |
https://qiita.com/ktokey/items/f46b7a97b08c060f077c
|
conda, pip and TensorFlow [Python]. Introduction: I had been writing code that uses TensorFlow in an Anaconda environment, but I wanted to use functions added in a later TensorFlow release, so I searched for the versions that can be installed with conda, as shown below. |
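A hedged Python sketch of the version check that motivates such a search (the 2.6.0 threshold is an illustrative assumption, not from the post):

    # Verify at runtime whether the installed TensorFlow is new enough
    # for a feature introduced in a later release.
    from packaging.version import Version
    import tensorflow as tf

    REQUIRED = Version("2.6.0")          # assumed minimum for the wanted function
    installed = Version(tf.__version__)
    print("OK" if installed >= REQUIRED else f"too old: {installed}")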
2021-11-28 18:27:44 |
Program |
List of new questions for [all tags] | teratail |
"Incorrect string value" error |
https://teratail.com/questions/371358?rss=all
|
"Incorrect string value" error. I am a beginner studying MySQL. |
2021-11-28 18:50:58 |
Program |
List of new questions for [all tags] | teratail |
I want to access Google Drive from an Android app |
https://teratail.com/questions/371357?rss=all
|
I want to access Google Drive from an Android app. Background / what I want to achieve: I am building an Android app. |
2021-11-28 18:40:10 |
Program |
List of new questions for [all tags] | teratail |
I want to implement a Runnable that uses a Thread in a Fragment |
https://teratail.com/questions/371356?rss=all
|
I want to implement a Runnable that uses a Thread in a Fragment. |
2021-11-28 18:39:08 |
Program |
List of new questions for [all tags] | teratail |
I want to make a 3D gun-action adventure: how should I build such a game in Unity (like the photos below), and in particular what kind of programming do I need? |
https://teratail.com/questions/371355?rss=all
|
I want to make a 3D gun-action adventure like 'Soul of Seventh' or 'Gunslinger Girl' (PS…). What steps should I follow in Unity to build a game like this (like the photos below), and in particular what kind of programming would I have to write? Please explain in detail. |
2021-11-28 18:38:01 |
Program |
List of new questions for [all tags] | teratail |
Arduino: simultaneous presses of up to two buttons, judging which combination was pressed, and entering the result |
https://teratail.com/questions/371354?rss=all
|
Arduino: simultaneous presses of up to two buttons, judging which combination was pressed, and entering the result. Please teach me about handling two buttons pressed at the same time, and the case analysis, on an Arduino Micro. |
2021-11-28 18:32:56 |
Program |
List of new questions for [all tags] | teratail |
How to POST an image from node.js the same way a form does |
https://teratail.com/questions/371353?rss=all
|
Currently I POST an image from the HTML below to a Python Flask app for image analysis. |
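For context, a hedged sketch of a form-style multipart upload done from a script, here in Python with requests (the endpoint URL and the field name "file" are assumptions; the asker's receiving side is Flask):

    import requests

    with open("sample.jpg", "rb") as f:
        resp = requests.post(
            "http://localhost:5000/analyze",                  # hypothetical Flask route
            files={"file": ("sample.jpg", f, "image/jpeg")},  # same field an HTML form would send
        )
    print(resp.status_code, resp.text)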
2021-11-28 18:30:06 |
Program |
List of new questions for [all tags] | teratail |
I want to implement asynchronous processing in PHP |
https://teratail.com/questions/371352?rss=all
|
I want to implement asynchronous processing in PHP. Background / what I want to achieve: I want to implement a "like" feature in PHP using Ajax. |
2021-11-28 18:27:39 |
Program |
List of new questions for [all tags] | teratail |
In C++, when a function with pointer-type parameters is called, I want to set the pointers passed to those parameters to nullptr |
https://teratail.com/questions/371351?rss=all
|
In C++, when a function with pointer-type parameters is called, I want to set the pointers passed to those parameters to nullptr. Background / what I want to achieve: given a function like void Collision(Bullet* p, Enemy* q), I want the pointers passed as arguments to become nullptr when this function is called, but once the function returns, p and q are no longer nullptr. |
2021-11-28 18:12:38 |
Program |
List of new questions for [all tags] | teratail |
I get an error in a program that counts the minimum number of coins for an input amount |
https://teratail.com/questions/371350?rss=all
|
I get an error in a program that counts the minimum number of coins for an input amount. Background / what I want to achieve: I started learning Python about … weeks ago. |
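A hedged sketch of the usual greedy approach such a program takes (the denominations are an assumption; this is not the asker's code):

    # Minimum number of coins for an amount, using standard Japanese coins.
    def min_coins(amount: int) -> int:
        count = 0
        for coin in (500, 100, 50, 10, 5, 1):   # assumed denominations
            count += amount // coin             # take as many of this coin as possible
            amount %= coin
        return count

    print(min_coins(2764))  # -> 13 (5*500 + 2*100 + 1*50 + 1*10 + 4*1)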
2021-11-28 18:06:30 |
Program |
List of new questions for [all tags] | teratail |
I want to resolve the error "unknown attribute 'recruitment_id' for Order" |
https://teratail.com/questions/371349?rss=all
|
I want to resolve the error "unknown attribute 'recruitment_id' for Order". Background / what I want to achieve: implementing a purchase feature using a form object. |
2021-11-28 18:06:24 |
Program |
List of new questions for [all tags] | teratail |
I am analyzing Vue + TypeScript code. I cannot figure out where a particular function is executed. |
https://teratail.com/questions/371348?rss=all
|
I am analyzing Vue + TypeScript code. |
2021-11-28 18:05:19 |
Docker |
New posts tagged docker - Qiita |
Running a web app with Jetty or Tomcat in a Docker environment |
https://qiita.com/oyahiroki/items/9de427af967e2c7e643d
|
Running a web app with Jetty or Tomcat in a Docker environment. |
2021-11-28 18:00:34 |
Overseas TECH |
DEV Community |
Typesafe F# configuration binding |
https://dev.to/symbolica/typesafe-f-configuration-binding-16gp
|
Typesafe F# configuration binding. At Symbolica we're building a symbolic execution service that explores every reachable state of a user's program and verifies assertions at each of these states to check that the program is correct. By default it will check for common undefined behaviours, such as out-of-bounds memory reads or divide by zero, but it can also be used with custom application-specific assertions too, just like the kind you'd write in a unit test. Seen from this perspective it's kind of like FsCheck, or Haskell's QuickCheck, or Python's Hypothesis, but much more exhaustive and without the randomness.

As much as we like finding bugs with Symbolica, we prefer to not write any in the first place. Our first line of defence is a strong type system, so that we can try to design types that make invalid states impossible and let the compiler tell us off when we make a mistake. For that reason we've opted to build our service using F#, as it also interops nicely with the core part of our symbolic executor, which is written in C#. One of the many things we love about F# is that by default it doesn't permit null as a regular value. This feature eliminates a whole class of errors caused by null values, most notably the cursed NullReferenceException. Another feature that we like is having access to the vast wealth of .NET libraries that exist. However, many of these are written in C#, and so they are often places where null values can sneak into an F# program through the backdoor at runtime. One area where this was frequently biting us was the binding of configuration data using the Microsoft.Extensions.Configuration library. Due to this, and other problems that we'll go into below, we created a safer alternative for configuration binding for F# projects, called Symbolica.Extensions.Configuration.FSharp, and open sourced it on GitHub. It provides a safe API for binding an F# type from the dotnet IConfiguration interface, as an F#-friendly alternative to the reflection-based ConfigurationBinder.Bind.

The problems with Microsoft.Extensions.Configuration.Binder

The best way to highlight the shortcomings of the de facto config binder is with an example. Let's say we want to model some logging options in our code. We might start out with a simple record type like this to represent the options:

    type LoggingOptions = { Level: string; Sink: string }

Where Level represents how verbose we want the logging output to be (e.g. Debug or Error etc.) and Sink is where we want to send the logs (for example, it might be Console or File). Let's test this out with a little fsx script that we can run with FSI:

    #r "nuget: Microsoft.Extensions.Configuration.Binder"

    open Microsoft.Extensions.Configuration

    type LoggingOptions = { Level: string; Sink: string }

    let config =
        ConfigurationBuilder()
            .AddInMemoryCollection(
                [ "Logging:Level", "Debug"; "Logging:Sink", "Console" ] |> Map.ofList
            )
            .Build()

    config.GetSection("Logging").Get<LoggingOptions>()

Here we're just seeding the in-memory configuration provider with a dictionary of config data and then attempting to retrieve and bind the LoggingOptions.

Problem 1: Mutable data and null values

If we run the above script you might be expecting it to print out a LoggingOptions with a Level of Debug and a Sink of Console. However, we actually hit a different problem. The above script throws the following exception:

    System.InvalidOperationException: Cannot create instance of type 'FSI…LoggingOptions' because it is missing a public parameterless constructor.

That's because an F# record doesn't contain a parameterless constructor, because all of the record's properties must be properly initialised and null isn't an allowed value. To make matters worse, the de facto binder mandates that the properties of the type being bound must be settable too, breaking immutability and making the use of a record to model options kind of pointless. There are two typical workarounds to this:

1. Define a mutable class instead of a record for the options type, like we would in C#.
2. Add the [<CLIMutable>] attribute to the LoggingOptions record.

Neither of these is particularly pleasing. The first one means we have to give up on having immutable options types, and the rest of the code base has to deal with the added complexity of potential mutability. The second is basically a hack which provides a mutable backdoor at runtime to our immutable type. Using [<CLIMutable>] actually opens up a can of worms, because our types are now deceiving us. Our simple record purports to be immutable and never contain null values, and so in the rest of the code base we program as if this is the case. On the other hand, the config binder isn't abiding by these compile-time invariants and may in fact initialise the record's properties as null at runtime. To see this in action, let's rerun the above example, but this time with the [<CLIMutable>] attribute added to the LoggingOptions and a missing value for the Level in the raw config. The modified script looks like this:

    #r "nuget: Microsoft.Extensions.Configuration.Binder"

    open Microsoft.Extensions.Configuration

    [<CLIMutable>]
    type LoggingOptions = { Level: string; Sink: string }

    let config =
        ConfigurationBuilder()
            .AddInMemoryCollection([ "Logging:Sink", "Console" ] |> Map.ofList)
            .Build()

    config.GetSection("Logging").Get<LoggingOptions>()

Running it produces this output:

    val it: LoggingOptions = { Level = null; Sink = "Console" }

We see that the type system has lied to us, because the value of Level was actually null at runtime. In this case it's relatively harmless, but in a real application it's likely that we'll have a more complex hierarchy of option types, and so we'd end up trying to dereference a potentially null object, leading to the dreaded NullReferenceException. When working in F# we'd rather the config binder returned a Result if the config couldn't be parsed, and allowed us to use an Option type for config data that is, well, optional. Which leads us to the next problem.

Problem 2: No native support for binding DUs

As the de facto binder uses reflection to bind the raw config to strongly typed objects, it only has support for a limited set of types. This includes all the primitive types, like int and string, and a few of the common BCL collection types, like List and Dictionary. This is frustrating for both C# and F# developers that wish to use more complex types to model their options. Particularly frustrating for F# developers, though, is that this means it doesn't support discriminated unions (DUs) and therefore doesn't support types like Option. To highlight this, let's imagine we wanted to improve our LoggingOptions so that the Level was restricted to a discrete set of values. To do this we'll create a DU called LogLevel and use it as the type for the Level property:

    #r "nuget: Microsoft.Extensions.Configuration.Binder"

    open Microsoft.Extensions.Configuration

    [<RequireQualifiedAccess>]
    type LogLevel =
        | Debug
        | Info
        | Warning
        | Error

    [<CLIMutable>]
    type LoggingOptions = { Level: LogLevel; Sink: string }

    let config =
        ConfigurationBuilder()
            .AddInMemoryCollection(
                [ "Logging:Level", "Debug"; "Logging:Sink", "Console" ] |> Map.ofList
            )
            .Build()

    config.GetSection("Logging").Get<LoggingOptions>()

We're now supplying a config dictionary that looks correct (it has properties for both of Logging:Level and Logging:Sink), so let's run it and see what the output is:

    val it: LoggingOptions = { Level = null; Sink = "Console" }

So we can see here that the binder has silently failed to bind the Level property now that its type is LogLevel. If we want to bind more complex types, we'll first have to bind to a simple type, like a string, and then write a parser ourselves to turn that into a LogLevel. That's a slippery slope, because it then probably means having something like a ParsedLoggingConfig which we create from the more loosely typed LoggingConfig, resulting in us needing to define a fair amount of config parsing "boilerplate" anyway.

Problem 3: Parse, don't validate

The de facto binder doesn't really give us much help when our configuration is faulty. We can write some options validators and wire these up with DI, but as Alexis King has taught us: parse, don't validate. In short, "parse, don't validate" tells us that it's better to parse data into a type that, once constructed, must be valid, than it is to read the data into a more loosely typed object and then run some post-validation actions over the values to make sure they're correct. The primary reason being that if we know that our type only permits valid values, then we no longer have to wonder whether or not it's already been validated. The de facto configuration binder doesn't make it easy to adhere to this. It's easy to forget to register a validator for the options, and then when they're accessed at runtime we instead get a rather unhelpful null value, like we observed earlier. What we'd prefer is for the compiler to prevent us from making such a mistake by enforcing validation through the type system. To give a specific example, let's imagine we want to be able to restrict the logging level to only have the values Info, Debug, Warning and Error. We've already seen we can't use a DU to model this, so we have no way of knowing whether or not Level is valid when we come to use it; all we know is that it's a string. So if we want to be sure, we're forced to keep validating the logging level at every point of use.

A better binder for F#

Given these shortcomings, we decided to write our own config binder with the following design goals in mind:

- Binding failures should be expected and be reflected in the type returned from the binder. We should be made to deal with the unhappy path.
- Binding should not break immutability.
- Binding should work for all types, including complex user-defined types.
- Binding should be composable, such that if I can bind a type X which is then later used within Y, I should be able to reuse the binder for X when defining the binder for Y.
- Error reporting should be greedy and descriptive, so that developers can quickly fix as many errors as possible when binding fails.

To that end, we opted to write a binder that didn't use any reflection. The trade-off we're making here is that we're forced to be much more explicit when we bind a type, and so we end up with what some people might consider to be boilerplate. However, we'd personally rather have code that is explicit than have to read through documentation to discover the implicit behaviours of something magic, because when the magic thing breaks we usually spend more time debugging that than we would have spent writing the explicit boilerplate to begin with. Also, thanks to the composable nature of functional programming languages and the power of F#'s computation expressions, it's possible to be both explicit and terse. It's probably best appreciated with an example, so let's see how we'd bind the above LoggingOptions using our new approach:

    #r "nuget: Symbolica.Extensions.Configuration.FSharp"

    open Microsoft.Extensions.Configuration
    open Symbolica.Extensions.Configuration.FSharp

    [<RequireQualifiedAccess>]
    type LogLevel =
        | Debug
        | Info
        | Warning
        | Error

    module LogLevel =
        let bind =
            Binder(fun (s: string) ->
                match s.ToLowerInvariant() with
                | "info" -> Success LogLevel.Info
                | "debug" -> Success LogLevel.Debug
                | "warning" -> Success LogLevel.Warning
                | "error" -> Success LogLevel.Error
                | _ -> Failure(ValueError.invalidType<LogLevel>))

    type LoggingOptions = { Level: LogLevel; Sink: string }

    let bindConfig =
        Bind.section "Logging" (
            bind {
                let! level = Bind.valueAt "Level" LogLevel.bind
                and! sink = Bind.valueAt "Sink" Bind.string
                return { Level = level; Sink = sink }
            }
        )

    let config =
        ConfigurationBuilder()
            .AddInMemoryCollection(
                [ "Logging:Level", "Debug"; "Logging:Sink", "Console" ] |> Map.ofList
            )
            .Build()

    bindConfig
    |> Binder.eval config
    |> BindResult.mapFailure (fun e -> e.ToString())

Running this script produces the following output:

    val it: BindResult<LoggingOptions, string> = Success { Level = Debug; Sink = "Console" }

From this example we can see that it's successfully bound our more complex LoggingOptions type that contains a DU. There's also zero magic; the binding process is clear to see and simple to customise. Let's check that it's met our design goals:

- Failures are expected. We can see this by the fact that right at the end, after we've called eval on the Binder, it's produced a BindResult.
- Binding doesn't break immutability. No [<CLIMutable>] required here.
- Binding works for complex types. Binding a DU was no problem. We were also able to make it case insensitive just through a little function composition with ToLowerInvariant.
- Binding is composable. We defined the binder for the LogLevel in isolation to the overall config binder.
- Error reporting is greedy and informative. Let's simulate some failures and see what happens.

Let's run the script again, but this time with the following input config, so that the Level is invalid and the Sink is missing: [ "Logging:Level", "Critical" ]. We get the following output:

    Failure:
      Logging, all of these:
        Level: Value "Critical". Error: Could not parse value as type LogLevel.
        Sink: The key was not found.

It's shown us all of the paths in the config for which it found errors, and what those errors are.

The Implementation Details

At the heart of all of this is a Binder<'config, 'value, 'error> type. This type is just a wrapper around a function of the form 'config -> BindResult<'a, 'error>. For the category-theory inclined, it's just a reader monad whose return type has been specialised to a BindResult. The BindResult type is very similar to a regular F# Result, except that its applicative instance will accumulate errors, whereas the regular Result will typically short-circuit on the first error it encounters. Binder and BindResult are defined generically to keep them as flexible as possible. However, at some point we want to provide some specialisations for the common binding scenarios. There are really two primary specialisations to consider: one for binding sections and another for binding values. Section binders are of the form Binder<IConfiguration, 'a, Error> and value binders are of the form Binder<string, 'a, ValueError>. By fixing the error to the custom types Error and ValueError, it's easy to compose Binders and also ensure that the errors can be properly accumulated in both applicative and alternative computations. One of the primary specialisations comes from the bind applicative computation expression. We saw in the example above how bind lets us compose a Binder for an IConfigurationSection by binding its properties using existing Binders, and at the same time ensures all binding errors from this section are accumulated. The bind CE gives us a declarative-looking DSL for defining new binders for our application-specific config objects. In the Bind module, the library also provides various combinators for building new Binders, such as Bind.section and Bind.valueAt, which take an existing Binder and bind it to a section or a value at a particular key, and which are typically used inside a bind CE. It also contains many binders for types like int, bool, System.DateTime and System.Uri, as well as more complex structures like List and IDictionary.

Try it out

The code is available on GitHub and you can install the library via NuGet. If you want to see even more sophisticated examples that show how to do things like handle optional values, deal with alternatives, and bind units of measure, then check out the IntegrationTests. Of course, if there's something that you think is missing, then open an issue or a pull request. I'm sure there are plenty of other Binders that we can add to the Bind module to cover other common .NET types.

Future Improvements

If you want to use things like IOptionsSnapshot, then it requires interaction with the IServiceCollection and a call to Configure<MyOptionsType>(configureAction). Unfortunately, the way that Microsoft have designed this means that a parameterless public constructor is required on the options type being configured, so that an instance can be passed to configureAction, which goes against our design principles here. So currently this library won't play nicely with things like reactive options updates. If this is something that you'd like, then it should be possible to provide a way around this by providing an alternative IOptionsFactory, so please open an issue and let us know. See the README for more details. |
2021-11-28 09:34:51 |
Overseas TECH |
DEV Community |
small Google Search Sheet Cheat |
https://dev.to/annequinkenstein/small-google-search-sheet-cheat-3gk5
|
Small Google search cheat sheet. define: returns a definition. filetype: returns only search results that match a particular file extension. site: returns only search results from a particular website. to: convert measurements from one unit to another. "do a barrel roll". Other search engines: DuckDuckGo, MetaGer. |
2021-11-28 09:33:07 |
Overseas TECH |
DEV Community |
Cost Modeling Data Lakes for Beginners | AWS White Paper Summary |
https://dev.to/awsmenacommunity/cost-modeling-data-lakes-for-beginners-aws-white-paper-summary-kna
|
Cost Modeling Data Lakes for Beginners | AWS White Paper Summary

Introduction

Customers want to realize the value held in the data their organization generates. Common use cases include helping them expedite decision-making, publishing data externally to foster innovation, or creating new revenue streams by monetizing the data. Organizations that successfully generate business value from their data will outperform their peers. An Aberdeen survey saw organizations who implemented a data lake outperforming similar companies in organic revenue growth. These leaders were able to do new types of analytics, like machine learning, over new sources (like log files, data from click-streams, social media, and internet-connected devices) stored in the data lake. This helped them to identify and act upon opportunities for business growth faster, by attracting and retaining customers, boosting productivity, proactively maintaining devices, and making informed decisions.

To best realize this value using data lakes, customers need a technology cost breakdown for their budget to build a solution. But without building the solution, they don't know how much it will cost. This is a common paradox that delays many customers from starting their data lake projects. Customers need a platform that increases agility, lowers the cost of experimentation, and provides the technical breadth to support all their use cases through an innovative platform. Ideally the platform can rapidly validate or discount solutions against business objectives. This encourages a culture of failing fast, which enables further experimentation to optimize solution matching against business imperatives. A data lake is a common way to realize these goals. There are many considerations along this journey, such as team structure, data culture, technology stack, governance, risk, and compliance. Costing data lakes requires a different approach than delivering them. Customers must focus on identifying and measuring business value early on, so that they can start their projects quickly and demonstrate the value back to the business quickly and incrementally.

What should the business team focus on?

- Think big. It's important to create a vision that your organization can rally around to help guide decision-making in line with a common goal. At Amazon we have a phrase, "Stubborn on vision, flexible on details". This sums up the importance of creating an environment that allows autonomous decision-making while everyone pulls in the same direction and is flexible about the implementation details.
- Measure business value. Without measuring business value, it's hard to justify any expenditure or drive any value from your testing early on in the project.
- Prototype rapidly. Focus your energy on driving business outcomes with any experiments you run.
- Understand what influences costs. Analytics projects generally have similar stages: ingestion, processing, analytics, and visualization. Each of these stages has key factors that influence the cost.
- Cost model a small set of experiments. Your analytics project is a journey. As you expand your knowledge, your capabilities will change. The sooner you can start experimenting, the sooner your knowledge will grow. Build a cost model that covers the smallest amount of work to impact your business outcomes, and iterate.
- Avoid wasting energy building technology stacks. Building solutions from the ground up is expensive, time-consuming, and very rarely provides any direct value to your organization.

Defining the approach to cost modeling data lakes

IT projects historically have well-defined milestones that make cost modeling a fairly simple process. Selected software is usually a commercial off-the-shelf (COTS) product, which can be costed (including licenses) based on predictable metrics such as the number of users and the number of CPUs. The effort to implement the software is well within the standard skill sets of the IT department, with plenty of experience in similar deployment models to draw on to get an accurate picture of the implementation time. This will all feed into a large design document that can be used to calculate costs. You might therefore expect to use the same cost modeling for an analytics project. The challenge here is that an analytics project often doesn't have a clear end. It's a journey where you explore and prepare data in line with some aspirational business outcomes. This makes it difficult to know the following:

- How much data will be ingested
- Where that data will come from
- How much storage or compute capacity will be needed to make sense of that data
- Which services are required to exploit the data to realize and deliver value throughout the business

To experience the business value of an analytics project, you must first get started. An on-premises data lake requires a large upfront expenditure, which can be hard to justify when you are unclear on the return on investment. However, with the AWS Cloud you can deploy AWS services in minutes, run your experiments, and then shut down the infrastructure, paying only for what you use. With no capacity constraints and a broad array of analytics services, you can run many exploratory experiments concurrently to find the right solution. This, coupled with the ability to turn off your experiments, lowers the cost of experimentation while increasing speed.

Measuring business value

The first project on your data lake journey starts by stating what your business goals are. These are often extracted from your business strategy. Business goals are high-level aspirations that support the business strategy, such as "improve customer intimacy". Once these business goals are identified, together with your team, write a set of business outcomes that would have a positive impact against these goals. Outcomes are targeted and measurable, such as "reduce cost of customer acquisition". Finally, we identify a number of metrics that can be measured to validate the success of the experiments. This helps ensure that the right business value is being achieved.

A good example of this is Hyatt Hotels, a leading global hospitality company. Hyatt wanted to improve customer loyalty, improve the success rate of upsells and add-ons, and better guide the users with accommodation recommendations. This fed directly into their business goal to improve their American Customer Satisfaction Index (ACSI). To achieve this, the team at Hyatt identified a requirement to build personalized connections with their customers. Some example business outcomes could be:

- Number of return visits per customer per month
- Number of upsells per customer per month
- Number of add-ons per customer per month
- Number of bookings made per day
- Number of negative customer reviews per week

This is only an example of one piece of work a data lake could support. After the initial piece of work is delivered, the team can then iterate. For example, as a by-product of delivering the previous feature, they could have identified important information. Perhaps Hyatt discovered that commonly searched-for add-on purchases (for example, specific spa treatments) were not offered in the chosen location. Or they might have discovered that customers were searching for accommodation in areas that Hyatt doesn't yet have a footprint in. The team could go on to develop new features and service offerings that would help them deliver a better customer experience, or help them make decisions that would improve their chances of choosing the right location to scale their footprint globally, to help them deliver against their business goal.

Establishing an agile delivery process

Once the measurable metrics are captured, teams can start running a number of experiments to assess technology and methodologies and, most importantly, explore the data to indicate whether a solution to deliver the business goals is possible, and if so, a possible design of that solution. The breadth of AWS services helps remove the complexity and burden of designing and operationalizing the solution. This enables organizations to rapidly deploy solutions that typically would take months. It also allows organizations to focus on solving the business needs through deriving value from their data. The AWS serverless services, such as Amazon Athena, Amazon Kinesis, and AWS Glue, allow for data manipulation and exploration without the need to deploy anything. These services can be consumed immediately on an on-demand basis. You can also deploy capacity-provisioning services, where clusters can be provisioned in a matter of minutes. These clusters are then ready to process customer data for supporting analytic services such as Amazon EMR (Hadoop), Amazon Elasticsearch Service, Amazon Managed Streaming for Apache Kafka (Amazon MSK), and Amazon Redshift. If the experiment fails, you can shut down the services, or just stop using the services, to prevent incurring ongoing costs. This allows you to experiment rapidly. This was the case with Xylem, a leading water technology company. Xylem used AWS to increase innovation, allowing them to support their customers by creating smart technologies that meet the world's water, wastewater, and energy needs.

Building data lakes

Because the aim is to get started on your data lake project, let's break down your experiments into the phases that are typical in data analytics projects:

- Data ingestion
- Processing and transformation
- Analytics
- Visualization, data access, and machine learning

By breaking down the problem into these phases, you reduce the complexity of the overall challenge. This makes the number of variables in each experiment lower, enabling you to model costs more quickly and accurately. We recommend that you start your analytics project by implementing the foundation of a data lake. This gives you a good structure to tackle analytics challenges and allows great flexibility for the evolution of the platform. A data lake is a single store of enterprise data that includes raw copies from various source systems and processed data that is consumed for various analytics and machine learning activities that provide business value.

Choosing the right storage to support a data lake is critical to its success. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data. Amazon S3 provides easy-to-use management features, so you can organize your data and configure finely tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9s) of durability, and stores data for millions of applications for companies all around the world.

Data lakes generally support two types of processing: batch and real-time. It is common for more advanced users to handle both types of processing within their data lake. However, they often use different tooling to deliver these capabilities. We will explore common architectures for both patterns and discuss how to estimate costs for both.

Batch processing

Batch processing is an efficient way to process large volumes of data. The data being processed is typically not time-critical and is usually processed over minutes, hours, and in some cases days. Generally, batch processing systems automate the steps of gathering the data, extracting it, processing it, enriching it, and formatting it in a way that can be used by business applications, machine learning applications, or business intelligence reports. Before we get started, let's look at a common set of services that customers use to build data lakes for processing batch data.

Figure: Common services used to build data lakes for batch data.

The following example architecture is relatively common. It uses AWS Glue, Amazon Athena, Amazon S3, and Amazon QuickSight.

Figure: Example architecture for batch processing.

The preceding example shows a typical pipeline to ingest raw data from CSV files. AWS Glue automatically infers a schema to allow the data to be queried. AWS Glue jobs are used to extract, clean, curate, and rewrite the data in an optimized format (Parquet) before exposing visualizations to end users. This is all achieved using serverless technologies that reduce the operational burden on the analytics team. We are going to explore each of these steps in more detail, in addition to the things you need to consider along the way. But before we do, let's take a quick look at the other form of processing.

Real-time processing

Real-time processing is a way of processing an unbounded stream of data in order to generate real-time or nearly real-time alerts or business decisions. The response time for real-time processing can vary from milliseconds to minutes. Real-time processing has its own ingestion components and has a streaming layer to stream data for further processing. Examples of real-time processing are:

- Processing IoT sensor data to generate alerts for predictive maintenance
- Trading data for financial analytics
- Identifying sentiment using a real-time Twitter feed

Before we get started, let's look at a common set of services that customers use to build data lakes for processing real-time data.

Figure: Common services used to build data lakes for real-time data.

Our example architecture is relatively simple and uses the following services: Amazon Kinesis, AWS Lambda, AWS Glue, Amazon Athena, Amazon S3, and Amazon QuickSight.

Figure: Example architecture for real-time processing.

The figure shows that many IoT devices send their telemetry to AWS IoT Core. AWS IoT allows users to securely manage billions of connected devices and route those messages to other AWS endpoints. In this case, AWS IoT Core passes the messages to Amazon Kinesis, which ingests streaming data at any scale. The raw data is split into two streams: one that writes the raw data to Amazon S3, and a second that uses AWS Lambda (a serverless compute service) to filter, aggregate, and transform the data before again storing it on Amazon S3. The manipulated data is then cataloged in AWS Glue and made available to end users to run ad hoc queries using Amazon Athena and create visualizations using Amazon QuickSight.

Understanding what influences data lake costs

Across both real-time and batch processing, data flows through different stages. For each stage there is an option to use managed services from AWS, or to use compute, storage, or network services from AWS with third-party or open-source software installed on top of them. In the case of managed services, AWS provides service features like high availability, backups, and management of the underlying infrastructure at additional cost. In some cases the managed services are on-demand, serverless-based solutions where the customer is charged only when the service is used. More details on what influences data lake costs can be found here.

Monitoring data lake costs

Once the data lake is built to provide its intended features, we recommend that you measure the cost and tie it back to the business value it provides. This enables you to perform a return-on-investment analysis on your analytics portfolio. To track the cost utilization for your analytic workloads, you need to define your cost allocation strategy. Cost allocation tagging ensures that you tag your AWS resources with metadata (key-value pairs) that reflect the business unit the data lake pipeline was built for. Tags enable you to generate billing reports for the resources associated with a particular tag. This lets you either do charge-back or return-on-investment analysis. Another strategy to track your costs is to use multiple AWS accounts and manage them using AWS Organizations. In this approach, every business unit owns their AWS account and provisions and manages their own resources. This lets them track all costs associated with that account for their data lake needs. By tracking costs and tying them back to your business value, you can complete your cost modeling for your first analytics workload. This process also lets you iterate and repeat the cycle of deciding business use cases, defining KPIs, and building data lake features on top of your already-built data lake foundation, while monitoring the cost associated with it.

Cost modeling your first experiments

To help give you confidence to cost model your first analytics project, we are going to do the following:

- Generate a couple of fictitious customer scenarios for analytics experiments
- Walk through our very lightweight cost model
- Build the actual solution and demonstrate how close we got to the actual costs

AWS offers a free tier for most of its services. This lets you experiment at no cost. For the cost modeling that follows, we discuss both the free tier and the actual costs.

Scenario 1: Improving healthcare KPIs

In this scenario we are a health trust company with a goal to improve the health of the community we are serving. The following are a few key performance indicators (KPIs) that we want to achieve in the near future:

- Reduce the health-related crime rate by …% over … months
- Reduce the health-related crime rate by …% over … months

To achieve these KPIs, we decide to analyze:

- Drugs that have a strong correlation to criminal activity
- Health conditions that have a strong correlation to criminal activity
- Unique citizens who are using the drugs that have the top five correlations to criminal activity
- Unique citizens who have the health conditions that have the top five correlations to criminal activity

Our plan is to use these findings to work with the identified citizens and provide them alternate drugs, counseling, or any other treatments as applicable, to prevent them from committing any crime. We believe doing this will improve the overall mental health of our community, in addition to other activities we have been working on.

Figure: Example architecture for Scenario 1. More details on this scenario are available here.

Scenario 2: Improving manufacturing turnaround time

In this second scenario we are a manufacturing company with a goal to improve the timing of our deliveries. The purchasing team of this manufacturing company is looking to improve the turnaround time of the parts required for their orders by …%. To achieve this KPI, we decided to analyze:

- Customer orders and line items, to identify parts that are frequently ordered
- Supplier data, to identify suppliers that kept orders waiting
- Customer data, to identify large-volume customers
- Supplier shipping modes and order priority
- Correlation of any returned items with the supplier

Our plan is to identify the suppliers with a good track record and use them for parts requested by large-volume customers. This should bring down the turnaround time for the majority of the parts requested overall.

Figure: Example architecture for Scenario 2. More details on this scenario are available here.

Conclusion

Customers struggle with starting their analytics projects because it is difficult to estimate costs when you have no knowledge or foresight of your unique requirements as an organization. Without a cost estimate, projects fail to get funding, and organizations miss the enormous value that making data-driven decisions can offer. You can use the scenarios as templates to build out a picture of what your analytics experiment will look like. The scenarios can help your organization start a journey towards making data-based decisions and drive business value, offering benefits for your organization and its customers.

Reference: Original paper |
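The cost-allocation tagging strategy described above can be sketched with boto3 (not from the whitepaper; the bucket name and tag values are illustrative assumptions):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_tagging(
        Bucket="example-data-lake-raw",  # hypothetical data lake bucket
        Tagging={
            "TagSet": [
                {"Key": "BusinessUnit", "Value": "analytics"},  # cost-allocation tags
                {"Key": "Project", "Value": "data-lake-poc"},
            ]
        },
    )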
2021-11-28 09:19:37 |
News |
BBC News - Home |
Covid: Face masks to be compulsory in England from Tuesday, says Javid |
https://www.bbc.co.uk/news/uk-59449480?at_medium=RSS&at_campaign=KARANGA
|
omicron |
2021-11-28 09:23:45 |
News |
BBC News - Home |
Covid-19: Measures to tackle Omicron variant begin this week and countries reveal more restrictions |
https://www.bbc.co.uk/news/uk-59449975?at_medium=RSS&at_campaign=KARANGA
|
coronavirus |
2021-11-28 09:09:36 |
News |
BBC News - Home |
Nine-man team with two keepers are 7-0 down at half-time - farcical match in Portugal abandoned |
https://www.bbc.co.uk/sport/football/59448376?at_medium=RSS&at_campaign=KARANGA
|
Nine-man team with two keepers are 7-0 down at half-time - farcical match in Portugal abandoned. The Portuguese top-flight match between Belenenses and Benfica is abandoned after the hosts are reduced to six men. |
2021-11-28 09:04:17 |
News |
BBC News - Home |
What are the rules and guidance for face masks and coverings? |
https://www.bbc.co.uk/news/health-51205344?at_medium=RSS&at_campaign=KARANGA
|
england |
2021-11-28 09:16:28 |
News |
BBC News - Home |
The African countries on the red list - and other UK travel rules |
https://www.bbc.co.uk/news/explainers-52544307?at_medium=RSS&at_campaign=KARANGA
|
african |
2021-11-28 09:04:33 |
LifeHack |
Lifehacker Japan |
[Amazon Black Friday] Black Friday is the day you buy a Fire HD tablet, right? There is no better time to buy! |
https://www.lifehacker.jp/2021/11/amazon-blackfriday2021-firehd-tablet-sale.html
|
[Amazon Black Friday] Black Friday is the day you buy a Fire HD tablet, right? There is no better time to buy! Fire HD tablets get cheaper at every Amazon sale, but this Black Friday is on a scale that could fairly be called the biggest sale of the year. |
2021-11-28 18:15:00 |
Hokkaido |
The Hokkaido Shimbun |
Motegi backs rebuilding of Okinawa tourism: dialogue with local operators in an appeal ahead of the election |
https://www.hokkaido-np.co.jp/article/616540/
|
Okinawa City, Okinawa Prefecture |
2021-11-28 18:10:00 |
Hokkaido |
The Hokkaido Shimbun |
Expo promoted at Kobukuro live show; attendance limited to the vaccinated |
https://www.hokkaido-np.co.jp/article/616539/
|
limited |
2021-11-28 18:06:00 |