IT |
気になる、記になる… |
Docomo Online Shop to revise the "5G WELCOME Discount" and other discounts from July 14 - "iPhone 12/12 mini" gets 20,000 points back on new contracts |
https://taisy0.com/2021/07/09/142864.html
|
gwelcome |
2021-07-09 14:11:57 |
IT |
気になる、記になる… |
Kaspersky launches "Kaspersky Internet Security for iOS" |
https://taisy0.com/2021/07/09/142860.html
|
iphone |
2021-07-09 14:03:10 |
AWS |
AWS Government, Education, and Nonprofits Blog |
How one Caribbean university digitally transformed and saved money by migrating to the cloud |
https://aws.amazon.com/blogs/publicsector/how-one-caribbean-university-digitally-transformed-saved-money-lift-shift-cloud-migration/
|
How one Caribbean university digitally transformed and saved money by migrating to the cloud. Moving to AWS helped the University of the West Indies Open Campus (UWIOC) improve performance of systems and operational efficiency while optimizing costs. Learn how UWIOC migrated more than virtual machines, applications, and five networks, plus their Moodle learning management system (LMS) and the UWIOC website, while saving percent total cost of ownership along the way. |
2021-07-09 14:47:09 |
python |
New posts tagged Python - Qiita |
残プロ Part 38: Displaying an aa (ASCII art) video in the console |
https://qiita.com/R1nY1x1/items/52cda53cefd3a7471905
|
|
2021-07-09 23:49:00 |
js |
New posts tagged JavaScript - Qiita |
JavaScript caveats when using Pay.jp |
https://qiita.com/yf-steven/items/ea83e60ce80f7ba55563
|
JavaScript caveats when using Pay.jp. Overview: while building a flea-market app, I ran into quite a few pitfalls when implementing credit-card functionality with Pay.jp, so I would like to write them down. |
2021-07-09 23:14:06 |
Program |
New questions for [all tags] | teratail |
I am building a bot using Discord.py |
https://teratail.com/questions/348636?rss=all
|
I am building a bot using Discord.py. Background / what I want to achieve: when a text command is used, the bot adds a reaction, and when that reaction is pressed it should grant a role or perform some other processing. |
2021-07-09 23:59:02 |
Program |
New questions for [all tags] | teratail |
I want to resolve a React-Rails + TypeScript compile error. |
https://teratail.com/questions/348635?rss=all
|
Background / what I want to achieve: an error occurred while building a reservation system with Ruby on Rails. |
2021-07-09 23:43:18 |
Program |
New questions for [all tags] | teratail |
About inverting the Camera in an FPS. |
https://teratail.com/questions/348634?rss=all
|
About inverting the Camera in an FPS. |
2021-07-09 23:22:25 |
Program |
New questions for [all tags] | teratail |
Build fails: CssSyntaxError in plugin 'gulp-cssnext' |
https://teratail.com/questions/348633?rss=all
|
Build fails: CssSyntaxError in plugin 'gulp-cssnext'. Background / what I want to achieve: I am a gulp beginner. |
2021-07-09 23:20:58 |
Program |
New questions for [all tags] | teratail |
Summing the values of matching keys in a list |
https://teratail.com/questions/348632?rss=all
|
Summing the values of matching keys in a list. Background / what I want to achieve: I want to do the following in JavaScript. |
2021-07-09 23:13:16 |
Program |
New questions for [all tags] | teratail |
Hamburger menu stays open / responsive design |
https://teratail.com/questions/348631?rss=all
|
The default is a hamburger menu, and I want it to turn into an X mark when clicked. |
2021-07-09 23:04:55 |
Ruby |
New posts tagged Ruby - Qiita |
[Rails] About has_many : , through: |
https://qiita.com/katuo0011/items/9d2801c17316431eab61
|
The purpose of associations: as described in the Rails Guides, when author and book classes exist and no association has been set up, the model declarations look like the following. |
2021-07-09 23:44:56 |
Ruby |
New posts tagged Ruby - Qiita |
The difference between RSpec's let and let! |
https://qiita.com/tkygtr6/items/b0e0caac71ac9263521c
|
In other words, even when using let, no error occurs if a let (or let!) variable that has not yet been evaluated or declared appears during the evaluation of a let variable. |
2021-07-09 23:22:58 |
Ruby |
New posts tagged Ruby - Qiita |
"Blocked host:-" error with Rails on Cloud9 / Rails Tutorial, day one |
https://qiita.com/rocketpeace1753/items/adade6bd085669307784
|
"Blocked host" error with Rails on Cloud9 / Rails Tutorial, day one. Problem: running rails server on Cloud9 gives "Blocked host: <domain name>" and the site cannot be displayed. Fix: add the following line to the config/environments/development.rb file: config.hosts << ".amazonaws.com". This permits every domain that includes the part after the dot (in this case amazonaws.com). After adding and saving this and running rails s again, the site displayed successfully. Note: the change does not take effect unless you save, close the tab, and run "rails s" again. |
2021-07-09 23:06:52 |
Azure |
New posts tagged Azure - Qiita |
Gather round! Azure Digital Twins, MR, and the Metaverse |
https://qiita.com/motoJinC25/items/0a43eda24f02e60c9734
|
The connection between Edge and Cloud is handled through Digital Twins, but we have to recognize that IoT is also involved as a technology that is not directly visible. |
2021-07-09 23:58:07 |
Azure |
New posts tagged Azure - Qiita |
Let's add Azure Maps onto the learning path "Build a Mixed Reality digital twin with Azure Digital Twins and Unity" |
https://qiita.com/miyaura/items/dde60667a8e72fa14377
|
About the DWG package file and StateSet: to go along with the learning path, this time I am building an indoor map modeled on a wind-turbine power plant. |
2021-07-09 23:57:52 |
Azure |
New posts tagged Azure - Qiita |
Thoughts after watching the Build Opening (Elixir) |
https://qiita.com/torifukukaiou/items/7138e81e2ceb5573641d
|
|
2021-07-09 23:26:42 |
Ruby |
New posts tagged Rails - Qiita |
[Rails] About has_many : , through: |
https://qiita.com/katuo0011/items/9d2801c17316431eab61
|
The purpose of associations: as described in the Rails Guides, when author and book classes exist and no association has been set up, the model declarations look like the following. |
2021-07-09 23:44:56 |
Ruby |
New posts tagged Rails - Qiita |
"Blocked host:-" error with Rails on Cloud9 / Rails Tutorial, day one |
https://qiita.com/rocketpeace1753/items/adade6bd085669307784
|
"Blocked host" error with Rails on Cloud9 / Rails Tutorial, day one. Problem: running rails server on Cloud9 gives "Blocked host: <domain name>" and the site cannot be displayed. Fix: add the following line to the config/environments/development.rb file: config.hosts << ".amazonaws.com". This permits every domain that includes the part after the dot (in this case amazonaws.com). After adding and saving this and running rails s again, the site displayed successfully. Note: the change does not take effect unless you save, close the tab, and run "rails s" again. |
2021-07-09 23:06:52 |
Tech Blog |
Developers.IO |
Editing / individually deleting AWS Management Console switch-role history with EditThisCookie |
https://dev.classmethod.jp/articles/edit-and-delete-switch-role-history-on-aws-management-console-with-editthiscookie/
|
devel |
2021-07-09 14:59:09 |
Tech Blog |
Developers.IO |
In search of lightweight, natively running Java: trying Quarkus 2.0 and deploying it to Lambda |
https://dev.classmethod.jp/articles/quarkus-2-0-getting-started/
|
lambda |
2021-07-09 14:27:09 |
Tech Blog |
Developers.IO |
A summary of machine learning training methods |
https://dev.classmethod.jp/articles/about-machine-learning-methods/
|
A summary of machine learning training methods. Hello, this is Kim Jaewook from Classmethod. Following my previous blog post "Deep learning even beginners can understand - up to installing TensorFlow," this time I have put together a summary of machine learning training methods. Deep le… |
2021-07-09 14:15:57 |
Tech Blog |
Developers.IO |
Does changing the warehouse size in Snowflake change load time? I actually tried it |
https://dev.classmethod.jp/articles/snowflake-execution-time-by-warehouse/
|
snowflake |
2021-07-09 14:00:35 |
Overseas TECH |
DEV Community |
How to Game Dev Metrics w/ Ray Elenteny |
https://dev.to/conorbronsdon/how-to-game-dev-metrics-w-ray-elenteny-5h3l
|
How to Game Dev Metrics w/ Ray Elenteny. What leads teams to game metrics within their organization? On this week's episode of Dev Interrupted we speak with agile expert Ray Elenteny, Principal Owner at Solutech Consulting, about how people game dev metrics and the underlying issues in culture & leadership that lead to it. So whether you're trying to game your own metrics (don't do it) or solve culture issues that have led to this issue at your organization, give this episode a listen. Listen to the full episode. Episode highlights include: which metrics are easiest to game, the long-term implications of gaming metrics, and how poor culture and leadership lead engineering teams to game dev metrics. Join the Dev Interrupted Discord Server: with over members, the Dev Interrupted Discord Community is the best place for Engineering Leaders to engage in daily conversation. No sales people allowed. Join the community >> |
2021-07-09 14:52:28 |
Overseas TECH |
DEV Community |
A Gentle Introduction to Reinforcement Learning |
https://dev.to/satwikkansal/a-gentle-introduction-to-reinforcement-learning-75h
|
A Gentle Introduction to Reinforcement Learning A gentle introduction to Reinforcement LearningIn AplhaGo a program developed for playing the game of Go made headlines when it beat the world champion Go player in a five game match It was a remarkable feat because the number of possible legal moves in Go are of the order of × To put this in context this number is far far greater than the number of atoms in the observable universe which are of the order of Such a high number of possibilities make it almost impossible to create a program that can play effectively using brute force or somewhat optimized search algorithms A part of the secret sauce of AlphaGO was the usage of Reinforcement Learning to improve its understanding of the game by playing against itself Since then the field of Reinforcement Learning has seen increased interest and much more efficient programs have been developed to play various games at a pro human efficiency Although you would find Reinforcement Learning discussed in the context of Games and Puzzles in most places including this post the applications of Reinforcement Learning are much more expansive The objective of this tutorial is to give you a gentle introduction to the world of Reinforcement Learning ️First things first This post was written in collaboration with Alexey Vinel Professor Halmstead University Some ideas and visuals are borrowed from my previous post on Q learning written for Learndatasci Unlike most posts you ll find on Reinforcement learning we try to explore Reinforcement Learning here with an angle of multiple agents So this makes it slightly more complicated and interesting at the same time While this will be a good resource to develop intuitive understanding of Reinforcement Learning Reinforcement Q learning to be specific it is highly recommended to visit the theoretical parts some links shared in the appendix if you re willing to explore Reinforcement Learning beyond this post I had to fork openAIs gym library to implement a custom environment The code can be found on this github repository If you d like to explore an interactive version you can check out this google colab notebook We use Python to implement the algorithms if you re not familiar with Python you can simply pretend that those snippets don t exist and read through the textual part including code comments Alright time to get started What is Reinforcement Learning Reinforcement learning is a paradigm of Machine Learning where learning happens through the feedback gained by an agent s interaction with its environment This is also one of the key differentiators of Reinforcement Learning with the other two paradigms of Machine learning Supervised learning and Unsupervised learning Supervised learning algorithms require fully labelled training data and Unsupervised learning algorithms need no labels On the other hand Reinforcement learning algorithms utilize feedback from the environment they re operating in to get better at the tasks they re being trained to perform So we can say that Reinforcement Learning lies somewhere in the middle of the spectrum It is inevitable to talk about Reinforcement Learning with clarity without using some technical terms like agent action state reward and environment So let s try to gain a high level understanding of Reinforcement Learning and these terms through an analogy Understanding Reinforcement learning through BirbingLet s watch the first few seconds of this video first Pretty cool isn t it And now think about how did someone manage to teach 
this parrot to reply with certain sounds on certain prompts And if you carefully observed part of the answer lies in the food the parrot is given after every cool response The human asks a question and the parrot tries to respond in many different ways and if the parrot s response is the desired one it is rewarded with food Now guess what The next time the parrot is exposed to the same cue it is likely to answer similarly expecting more food This is how we reinforce certain behaviours through positive experiences If I had to explain the above process in terms of Reinforcement learning concepts it d be something like The agent learns to take desired for a given state in the environment where The agent is the parrotThe state is questions or cues the parrot is exposed toThe actions are the sounds it is uttering The reward is the food he gets when he takes the desired actionAnd the environment is the place where the parrot is living or in other words everything else than the parrot The reinforcement can happen through negative experiences too For example if a child touches a burning candle out of curiosity s he is unlikely to repeat the same action So in this case instead of a reward the agent got a penalty which would disincentivize the agent to repeat the same action in future again If you try to think about it there are countless similar real world analogies This suggests why Reinforcement Learning can be helpful for a wide variety of real world applications and why it might be a path to create General AI Agents think of a program that can not just beat a human in the game of Go but multiple games like Chess GTA etc It might still take a lot of time to develop agents with general intelligence but reading about programs like MuZero one of the many successors of Alpha Go hints that Reinforcement learning might have a decent role to play in achieving that After reading the analogies a few questions like below might have come into your mind Real world example is fine but how do I do this reinforcement in the world of programs What are these algorithms and how do they work Let s start answering such questions as switch gears and dive into certain technicalities of Reinforcement learning Example problem statement Self driving taxiWouldn t it be fantastic to train an agent i e create a computer program to pick up from a location and drop them at their desired location In the rest of the tutorial we ll try to solve a simplified version of this problem through reinforcement learning Let s start by specifying typical steps in a Reinforcement learning process Agent observes the environment The observation is represented in digital form and also called state The agent utilizes the observation to decide how to act The strategy agent uses to figure out the action to perform is also referred to as policy The agent performs the action in the environment The environment as a result of the action may move to a new state i e generate different observations and may return feedback to the agent in the form of rewards penalties The agent uses the rewards and penalties to refine its policy The process can be repeated until the agent finds an optimal policy Now that we re clear about the process we need to set up the environment In most cases what this means is we need to figure out the following details The state spaceTypically a state will encode the observable information that the agent can use to learn to act efficiently For example in the case of self driving taxi the state information could contain the 
following information The current location of the taxiThe current location of the passengerThe destinationThere can be multiple ways to represent such information and how one ends up doing it depends on the level of sophistication intended The state space is the set of all possible states an environment can be in For example if we consider our environment for the self driving taxi to be a two dimensional x grid there are possible locations for the taxi possible locations for the passengerand possible destinationThis means our state space size becomes x x i e at any point in time the environment must be in either of these states The action spaceAction space is the set of all possible actions an agent can take in the environment Taking the same D grid world example the taxi agent may be allowed to take the following actions Move NorthMove SouthMove EastMove WestPickupDrop offAgain there can be multiple ways to define the action space and this is just one of them The choice also depends on the level of complexity and algorithms you d want to use later The rewardsThe rewards and penalties are critical for an agent s learning While deciding the reward structure we must carefully think about the magnitude direction positive or negative and the reward frequency every time step based on specific milestone etc Taking the same grid environment example some ideas for reward structure can be The agent should receive a positive reward when it performs a successful passenger drop off The reward should be high in magnitude because this behaviour is highly desired The agent should be penalized if it tries to drop off a passenger in the wrong locations The agent should get a small negative reward for not making it to the destination after every time step This would incentivize the agent to take faster routes There can be more ideas for rewards like giving a reward for successful pickup and so on The transition rulesThe transition rules are kind of the brain of the environment They specify the dynamics of the above discussed components state action and reward They are often represented in terms of tables a k a state transition tables which specify that For a given state S if you take an action A the new state of the environment becomes S and the reward received is R StateActionRewardProbabilityNext StateSpAqRpqSp An example row could be when the taxi s location is in the middle of grid the passenger s location in in the bottom right corner The agent takes the Move North action it gets a negative reward and the next state becomes the state that represents the taxi in its new position Note In the real world the state transitions may not be deterministic i e they can be either Stochastic which means the rules operate by probability i e if you take an action there s an X chance you ll end up in state S and Xn chance you d end up in a state Sn Unknown which means it is not known in advance what all possible states the agent can get into if it takes action A in a given state S This might be the case when the agent is operating in the real world Implementing the environmentImplementing a computer program that represents the environment can be a bit of a programming effort Apart from deciding the specifics like the state space transition table reward structure etc we need to implement other features like creating a way to input actions into the environment and getting feedback in return More often than not there s also a requirement to visualize what s happening under the hood Since the objective of this tutorial 
is Introduction to Reinforcement Learning we will skip the how to program a Reinforcement learning environment part and jump straight to using it However if you re interested you can check the source code and follow the comments there Specifics of the environmentWe ll use a custom environment inspired by OpenAI gym s Taxi v environment We have added a twist to the environment Instead of having a single taxi and a single passenger we ll be having two taxis and a passenger The intention behind the mod is to observe interesting dynamics that might arise because of the presence of another taxi This also means the state space would comprise an additional taxi location and the action space would comprise of actions of both the taxis now Our environment is built on OpenAI s gym library making it a bit convenient to implement environments to evaluate Reinforcement learning algorithms They also include some pre packaged environment Taxi v is one of them and their environments are a popular way to practice Reinforcement Learning and evaluate Reinforcement Learning algorithms Feel free to check out their docs to know more about them Exploring the environmentIt s time we start diving into some code and explore the specifics of the environment we ll be using for Reinforcement learning in this tutorial Let s first install the custom gym module which contains the environment pip uninstall gym ypip install git git github com satwikkansal gym dual taxi git egg gym amp subdirectory gym import gymenv gym make DualTaxi v env render PS If you re using jupyter notebook and get env not registered error you have to restart your kernel after install the custom gym package in the last step In the snippet above we initialize our custom DualTaxi v environment and rendered its current state In the rendered output The yellow and red rectangles represents both taxis on the x gridR G B and Y are the possible pick up or drop off locations for the passengerThe character “ represents a wall which the taxis can t crossThe blue colored letter represents the pick up location of the passengerThe purple letter represents the drop off location Any taxi that gets the passenger aboard would turn green in color gt gt gt env observation space env action space Discrete Discrete You might have noticed that the only information that s printed is their discrete nature and the size of the space The rest of the details are abstracted This is an important point and as you ll realize by the end of the post our RL algorithm won t need any more information However if you re still curious to know how the environment functions feel free to check out the enviroment s code and follow the comments there Another thing that you can do is peek into the state transition table check the code in the appendix if you re curious how to do it The objectiveThe objective of the environment is pick up the passenger from the blue location and drop to the violet location as fast as possible An intelligent agent should be able to do this with consistency Now let s see what information to we have for the environment s state space a k a observation space and action space But before we dive into implementing that intelligent agent let s see how a random agent would perform in this kind of enviromnet def play random env num episodes Function to play the episodes for i in range num episodes state env reset done False while not done next action env action space sample state reward done env step next action Trying the dumb agentprint frames play random env num episodes 
check github for the code for print framesYou can see the episode number at the top In our case an episode is the timeframe between the steps where the taxis make the first move and the step where they drop a passenger at the desired after picking up When this happens the episode is over and we have to reset the environment to start all over again You can see different actions at the bottom and how the state keeps changing and the reward the agent gets after every action As you can might have realized these taxis are taking a while to finish even a single episode So our random approach is very dumb for sure Our intelligent agent definitely will have to perform this task better Introducing Q learningQ learning is one among several Reinforcement Learning algorithms The reason we are picking Q learning is because it is simple and straightforward to understand We ll use Q learning to make our agent somewhat intelligent Intuition behind Q learningThe way Q learning works is by storing what we call Q values for every state action combination The Q value represents the quality of an action taken from that state Of course the initial q values are just random numbers but the goal is to iteratively update them in the right direction After enough iterations these Q values can start to converge i e the size of update in upcoming iterations gets so small that it has a negligible impact Once that is the case we can safely say that For a given state the higher the Q value for the state action pair the higher would be the expected long term reward of taking that particular action So long story short the developing intelligence part of Q learning lies in how the Q values after agent s ineteraction with the environment which requires discussion of two key concepts The bellman equationAttached below is the bellman equation in the context of updating Q values this is the equation we use to update Q values after agent s interaction with the environment The Q value of a state action pair is the sum of the instant reward and the discounted future reward of the resulting state Where st represents the state at time tat represents action taken at time t the agent was in state st at this point in time rt is the reward received by performing the action at in the state st st is the next state that our agent will transition to after performing the action at in the state st The discount factor γ gamma determines how much importance we want to give to future rewards A high value for the discount factor close to captures the long term effective award whereas a discount factor of makes our agent consider only immediate reward hence making it greedy The alpha alpha is our learning rate Just like in supervised learning settings alpha here is representative of the extent to which our Q values are being updated in every iteration Epsilon greedy methodWhile we keep updating Q values every iteration there s an important choice the agent has to make while taking an action The choice it faces is whether to explore or exploit So with time the Q values get better at representing the quality of a state action pair But to reach that goal the agent has to try different actions how can it know if a state action pair is good if it hasn t tried it So it becomes critical for agent to explore i e take random actions to gather more knowledge about the environment But there s a problem if the agent only explores Exploration can only get the agent so far Imagine that the environment agent is in is like a maze Exploration can put agent on 
unknown path and give feedback to make q values more valuable But if the agent is only taking random actions at every step it is going to have a hard time reaching the end state of the maze That s why it is also important to exploit The agent should also consider using what it has already learned i e the Q values to decided what action to take next That s all to say the agent needs to balance exploitation and exploration There are many ways to do this Once common way to do it with Q learning is to have a value called epsilon which denotes the probability by which the agent will explore A higher epsilon value results in interactions with more penalties on average which is obvious because we are exploring and making random decisions We can add more sophistication to this method and its a common practice that people start with a high epsilon value and keep reducing it as time progresses This is called epsilon decay The intution is that as we keep adding more knowledge to Q values through exploration the exploitation becomes more trustworthy which in turn means we can explore at a lower rate Note There s usually some confusion around if epsilon represents probability of exploration or exploitation You ll find it used both ways on the internet and other resources I find the first way more comfortable as it fits the terminology epsilon decay If you see it other way around don t get confused the concept is still the same Using Q learning for our environmentOkay enough background about Q learning Now how do we apply it to our DualTaxi v environment Because of the fact that we have two taxis in our environment we can do it in a couple of ways Cooperative approachIn this approach we can assume that there s a single agent with a single Q table that controls both the taxis think of it like a taxi agency The overall goal of this agent would be to maximize the reward these taxis receive combined Competitive approachIn this approach we can train two agents one for each taxi Every agent has its own Q table and gets its own reward Of course the next state of the environment still depends on the actions of both the agents This creates an interesting dynamic where each taxi would be trained to maximize its own individual rewards Cooperative approach in actionBefore we see the code let us specify the steps we d have to take Initialize the Q table size of the Q table is state space size x action space size by all zeros Decide between exploration and exploitation based on the epsilon value Exploration For each state select any one among all possible actions for the current state S Exploitation For all possible actions from the state S select the one with the highest Q value Travel to the next state S as a result of that action a Update Q table values using the update equation If the episode is over i e goal state is reached reset the environment for next iteration Keep repeating steps to until we start seeing decent results in agent s performance from collections import Counter dequeimport random def bellman update q table state action next state reward Function to perform q value update as per bellman equation Get the old q value old q value q table state action Find the maximum q value for the actions in next state next max np max q table next state Calculate the new q value as per the equation new q value alpha old q value alpha reward gamma next max Finally update the q value q table state action new q valuedef update q table env state Selects an action according to epsilon greedy method performs it and the 
calls bellman update to update the Q values if random uniform gt epsilon action env action space sample else action np argmax q table state next state reward done info env step action bellman update q table state action next state reward return next state reward done infodef train agent q table env num episodes log every running metrics len evaluate every evaluate trials This is the training logic It takes input as a q table the environment The training is done for num episodes episodes The results are logged preiodcially We also record some useful metrics like average reward in last k timesteps the average length of last episodes and so on These are helpful to gauge how the algorithm is performing over time After every few episodes of training We run evaluation routine where we just exploit i e rely on the q table so far and see how well the agent has learned so far Over the time the results should get better until the q table starts converging after which there s negligible change in the results rewards deque maxlen running metrics len episode lengths deque maxlen total timesteps metrics for i in range num episodes epochs state env reset num penalties reward done False while not done state reward done info update q table env state rewards append reward epochs total timesteps if total timesteps log every rd Counter rewards avg ep len np mean episode lengths zeroes fill percent calculate q table metrics q table print f Current Episode i print f Reward distribution rd print f Last episode lengths avg avg ep len print f zeroes Q table zeroes fill percent percent filled episode lengths append epochs if i evaluate every print print f Running evaluation after i episodes finish percent avg time penalties evaluate agent q table env evaluate trials print rd Counter rewards avg ep len float np mean episode lengths zeroes fill percent calculate q table metrics q table metrics i train reward distribution rd train ep len avg ep len fill percent fill percent test finish percent finish percent test ep len avg time test penalties penalties print Training finished return q table metricsdef calculate q table metrics grid This function counts what perecentage of cells in the q table are non zero Note There are certain state action combinations that are illegal so the table might never be full r c grid shape total r c count for row in grid for cell in row if cell count fill percent total count total return count fill percentdef evaluate agent q table env num trials The routine to evaluate an agent It simply exploits the q table and records the performance metrics total epochs total penalties total wins for in range num trials state env reset epochs num penalties wins done False while not done next action np argmax q table state state reward done env step next action if reward lt num penalties elif reward gt wins epochs total epochs epochs total penalties num penalties total wins wins average penalties average time complete percent compute evaluation metrics num trials total epochs total penalties total wins print evaluation metrics average penalties average time num trials total wins return complete percent average time average penaltiesdef print evaluation metrics average penalties average time num trials total wins print Evaluation results after trials format num trials print Average time steps taken format average time print Average number of penalties incurred format average penalties print f Had total wins wins in num trials episodes def compute evaluation metrics num trials total epochs total penalties 
total wins average time total epochs float num trials average penalties total penalties float num trials complete percent total wins num trials return average penalties average time complete percentimport numpy as np The hyper parameters of Q learningalpha learning rategamma discout factorepsilon env gym make DualTaxi v num episodes Initialize a q table full of zeroesq table np zeros env observation space n env action space n q table metrics train agent q table env num episodes Get back trained q table and metricsTotal encoded states are Running evaluation after episodesEvaluation results after trialsAverage time steps taken Average number of penalties incurred Had wins in episodes Skipping intermediate output Running evaluation after episodesEvaluation results after trialsAverage time steps taken Average number of penalties incurred Had wins in episodes Current Episode Reward distribution Counter Last episode lengths avg Q table zeroes percent filledTraining finished I have skipped the intermediate output on purpose you can check this pastebin if you re interested in full output Competitive ApproachThe steps for this are similar to the cooperative approach with the differnce that now we have multiple Q tables to update Initialize the Q table and for both the agents by all zeros The size of each Q table is state space size x sqrt action space size Decide between exploration and exploitation based on the epsilon value Exploration For each state select any one among all possible actions for the current state S Exploitation For all possible actions from the state S select the one with the highest Q value in the Q tables of respective agents Transition to the next state S as a result of that combined action a a Update Q table values for both the agents using the update equation and respective rewards amp actions If the episode is over i e goal state is reached reset the environment for next iteration Keep repeating steps to until we start seeing decent results in the performance def update multi agent q table q table env state Same as update method discussed in the last section just modified for two independent q tables if random uniform gt epsilon action env action space sample action action env decode action action else action np argmax q table state action np argmax q table state action env encode action action action next state reward done info env step action reward reward reward bellman update q table state action next state reward bellman update q table state action next state reward return next state reward done infodef train multi agent q table q table env num episodes log every running metrics len evaluate every evaluate trials Same as train method discussed in the last section just modified for two independent q tables rewards deque maxlen running metrics len episode lengths deque maxlen total timesteps metrics for i in range num episodes epochs state env reset done False while not done Modification here state reward done info update multi agent q table q table env state rewards append sum reward epochs total timesteps if total timesteps log every rd Counter rewards avg ep len np mean episode lengths zeroes fill percent calculate q table metrics q table zeroes fill percent calculate q table metrics q table print f Current Episode i print f Reward distribution rd print f Last episode lengths avg avg ep len print f zeroes Q table zeroes fill percent percent filled print f zeroes Q table zeroes fill percent percent filled episode lengths append epochs if i evaluate every print print f 
Running evaluation after i episodes finish percent avg time penalties evaluate multi agent q table q table env evaluate trials print rd Counter rewards avg ep len float np mean episode lengths zeroes fill percent calculate q table metrics q table zeroes fill percent calculate q table metrics q table metrics i train reward distribution rd train ep len avg ep len fill percent fill percent fill percent fill percent test finish percent finish percent test ep len avg time test penalties penalties print Training finished n return q table q table metricsdef evaluate multi agent q table q table env num trials Same as evaluate method discussed in last section just modified for two independent q tables total epochs total penalties total wins for in range num trials state env reset epochs num penalties wins done False while not done Modification here next action env encode action np argmax q table state np argmax q table state state reward done env step next action reward sum reward if reward lt num penalties elif reward gt wins epochs total epochs epochs total penalties num penalties total wins wins average penalties average time complete percent compute evaluation metrics num trials total epochs total penalties total wins print evaluation metrics average penalties average time num trials total wins return complete percent average time average penalties The hyperparameter of Q learningalpha gamma epsilon env c gym make DualTaxi v competitive True num episodes q table np zeros env c observation space n int np sqrt env c action space n q table np zeros env c observation space n int np sqrt env c action space n q table q table metrics c train multi agent q table q table env c num episodes Total encoded states are Running evaluation after episodesEvaluation results after trialsAverage time steps taken Average number of penalties incurred Had wins in episodes Skipping intermediate output Running evaluation after episodesEvaluation results after trialsAverage time steps taken Average number of penalties incurred Had wins in episodes Current Episode Reward distribution Counter Last episode lengths avg Q table zeroes percent filled Q table zeroes percent filled Running evaluation after episodesEvaluation results after trialsAverage time steps taken Average number of penalties incurred Had wins in episodes Current Episode Reward distribution Counter Last episode lengths avg Q table zeroes percent filled Q table zeroes percent filledCurrent Episode Reward distribution Counter Last episode lengths avg Q table zeroes percent filled Q table zeroes percent filledTraining finished I have skipped the intermediate output on purpose you can check this pastebin if you re interested in full output Evaluating the performanceIf you observed the code carefully the train functions returned q tables as well as some metrics We can use the q table now for taking agent s actions and see how intelligent it has become Also we ll try to plot these metrics to visualize how the training progressed from collections import defaultdictimport matplotlib pyplot as plt import seaborn as pltdef plot metrics m Plotting various metrics over the number of episodes ep nums list m keys series defaultdict list for ep num metrics in m items for metric name metric val in metrics items t type metric val if t in float int np float series metric name append metric val for m name values in series items plt plot ep nums values plt title m name plt xlabel Number of episodes plt show def play q table env num episodes for i in range num episodes state 
env reset done False while not done next action np argmax q table state state reward done env step next action def play multi q table q table env num episodes Capture frames by playing using the two q tables for i in range num episodes state env reset done False while not done next action env encode action np argmax q table state np argmax q table state state reward done env step next action plot metrics metrics frames play q table env print frames frames plot metrics metrics c print frames play multi q table q table env c Some observationsWhile Q learning agent commits errors initially during exploration but once it has explored enough seen most of the states it starts to act wisely Both the approaches did fairly well However in relative comparison the cooperative approach seem to perform better The plots of competitive approach are more volatile It took around episodes for agents to explore most of the possible state action pairs Note that not state action pairs are feasible because some states aren t legal for example states where both the taxis are at same location aren t possible As the training progressed the number of penalties reduced They didn t reduce completely because of the epsilon we re still exploring based on the epsilon value during training The episode length kept decreasing which means the taxis were able to pickup and drop the passenger faster because of the new learned knowledge in q tables So to summarize the agent is able to get around the walls pick the passengers take less penalties and reach the destination timely And the fact that the code where q learning update happens is merely around lines of Python code makes it even more impressive From what we ve discussed so far in the post it s likely that you have a fair bit of intution about how Reinforcement Learning works Now in the last few sections we will dip our toes in some broader level ideas and concepts that might be relevant to you when exploring Reinforcement Learning further Let s start with the common challenges of Reinforcement Learning first Common challenges while applying Reinforcement learning Finiding the right HyperparametersYou might be wondering how did I decide values of alpha gamma and epsilon In the above program it was mostly based on intuition from my past experience and some hit and trial This goes a long way but there are also some techniques to come up with good values The process in itself is sometimes referred to as Hyperparamter tuning or Hyperparameter optimization Tuning the hyperparametersA simple way to programmatically come up with the best set of values of the hyperparameter is to create a comprehensive search function that selects the parameters that would result in best agent performance A more sophisticated way to get the right combination of hyperparameter values would be to use Genetic Algorithms Also it is a common practice to make these parameters dynamic instead of fixed values For example in our case all of the three hyperparmeters can be configured to decrease over time because as the agent continues to learn it builds up more resilient priors Choosing the right algorithmsQ learning is just one of the many Reinformcement Learning algorithms out there There are multiple ways to classify Reinforcement Learning algorithms The selection depends on various factors including the nature of the environment For example if the state space of action space is continuous instead of discrete imagine that the environment now expects continuous degree values instead of discrete north 
east etc directions as actions and the state space consists of more precise lat lng location of taxis instead of grid coordinates tabular Q learning can t work There are hacks to get around continuous spaces like bucketing their range and making it discrete as a result but these hacks fail too if the state space and action space gets too large In those cases it is preferred to use more generic algorithms usually the ones that involve approximators like Neural Networks More often than not in practice the agent is trained with multiple algorithms initially to decide which algorithm would fit the best Reward StructureIt is important to think strategically about the rewards to be given to the agent If the rewards are too sparse the agent might have difficulty in learning Poorly structured rewards can also lead to cases of non convergence and situations in which agent gets stuck in local minima For example let s say the environment gave reward for successfully picking up passenger and no penalty for dropping the passenger So it might happen that the agent might end up repeatedly picking up and dropping a passenger to maximise it rewards Similary if we there was very high negative reward for picking up passenger agent would eventually learn to not pick a passenger at all and hence would never finish successfully The challenges of real world environmentsTraining an agent on an openAI gym environment is realtively easy because you get a lot of things out of the box The real world however is a bit more unorganised We sensors to ingest environment information and mechanism to translate it into something that can be fed to a Machine Learning algorithm So such systems involve a lots of techniques overall aside from the learning algorithm As a simple example consider a general Reinforcement Learning agent that is being trained to play ATARI games The information this agent needs to be passed is pixels on the screen So we might have to use deep learning techniques like Convolutional Neural Networks to interpret the pixels on the screen and extract information out of the game like scores to enable the agent to interpret the game There s also a challenge of sample efficiency Since the state spaces and action spaces might be continuous and have big ranges it becomes critical to achieve a decent sample efficiency that makes Reinforcement Learning feasible If the algorithm needs high number of episodes high enough that we cannot make it to produce results in reasonable amount of time then Reinforcement Learning becomes impractical Respecting the theoretical boundariesIt is easy to sometimes get carried away and see Reinforcement Learning to be the solution of most problems It helps to have a theoretical understanding of how these algorithm works and fundamental concepts like Markov Decision Processes and awareness of the state of the art algorithms to have a better intution about what can and what can t be solved using present day Reinforcement Learning algorithms Wrapping upIn this tutorial we began with understanding Reinforcement Learning with the help of real world analogies Then we learned about some fundamental conepts like state action and rewards Next we went over the process of framing a problem such that we can traing an agent through Reinforcement Learning algorithms to solve it We took Self driving taxi as our reference problem for the rest of the tutorial We then used OpenAL s gym module in python to provide us with a related environment where we can develop our agent and evaluate it Then we 
observed how terrible our agent was without using any algorithm to play the game so we went ahead to implement the Q learning algorithm from scratch We then introduced Q learning and went over the steps to use it for our environment We came up with two approaches cooperative and competitive We then evaluated the Q learning results and saw how the agent s performance improved significantly after Q learning As mentioned in beginning Reinforcement learning is not just limited to openAI gym environments and games It is also used for managing portfolio and finances for making humanoid robots for manufacturing and inventory management to develop general AI agents agents that can perform multiple things with a single algorithm like same agent playing multiple Atari games Appendix Further reading Reinforcement Learning An Introduction Book by Andrew Barto and Richard S Sutton Most popular book about Reinforcement Learning out there Highly recommended if you re planning to dive deep into the field Lectures by David Silver also available on YouTube Another great resource if you re more into learning from videos than books Tutorial series on medium on Reinforcement learning using Tensorflow by Arthur Juliani Some interesting topics related to Multi Agent environments Friend and foe Q learning in general sum gamesGame theory concepts likeStrictly dominant strategiesNash equilibriumShapely values for reward distribution Visualising the transition table of our dual taxi enviromentThe following is an attempt to visualize the internal tranistion table of our environment in a human readable way The source of this information is the env P object which contains a mapping of the formcurrent state action taken transition prob next state reward done this is all the info we need to simulate the environment and this is what we can use to create the transition table env P First let s take a peek at this object False True True True True True True False True False False False True True False True True True True False True False False False True False True False False False True False True False False False False True True True True True True False True False False False True True False True True True True False True False False False True False True False False False True False True False False False omitting the whole output because it s very long Now let s put some code together to convert this information in more readable tabular form pip install pandasimport pandas as pdtable env c gym make DualTaxi v competitive True def state to human readable s passenger loc R G B Y T T s destination R G B Y s return f Taxi s Taxi s Pass passenger loc Dest destination def action to human readable a actions NSEWPD return actions a actions a for state num transition info in env c P items for action possible transitions in transition info items transition prob next state reward done possible transitions table append State state to human readable list env decode state num Action action to human readable env decode action action Probablity transition prob Next State state to human readable list env decode next state Reward reward Is over done pd DataFrame table State Action Probablity Next State Reward Is over Taxi Taxi Pass R Dest R N N Taxi Taxi Pass R Dest R False Taxi Taxi Pass R Dest R N S Taxi Taxi Pass R Dest R True Taxi Taxi Pass R Dest R N E Taxi Taxi Pass R Dest R True Taxi Taxi Pass R Dest R N W Taxi Taxi Pass R Dest R True Taxi Taxi Pass R Dest R N P Taxi Taxi Pass R Dest R True Taxi Taxi Pass T Dest Y D S Taxi Taxi 
Pass T Dest Y True Taxi Taxi Pass T Dest Y D E Taxi Taxi Pass T Dest Y False Taxi Taxi Pass T Dest Y D W Taxi Taxi Pass T Dest Y True Taxi Taxi Pass T Dest Y D P Taxi Taxi Pass T Dest Y False Taxi Taxi Pass T Dest Y D D Taxi Taxi Pass T Dest Y False rows × columns BloopersIn retrospect the hardest part of writing this post was to get the dual taxi environment working There were so many moments like below It took a lot of trial and errors tweaking rewards updating rules for situations like collision reducing state space to get to a stage where the solutions for competitive set up were converging The feeling when the solution converges for the first time is very cool So if you have some free time I d recommend you to hack up an environment yourself the first time I tried q learning was with a snake apple game I developed using pygame and try to solve it with Reinforcement Learning Trust me you ll be humbled and learn lots of interesting things along the way |
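A minimal sketch of the two core pieces the article describes, the Bellman update Q(s,a) <- (1-alpha)*Q(s,a) + alpha*(r + gamma*max Q(s', .)) and epsilon-greedy action selection, assuming generic hyperparameter values and a toy environment exposing reset() and step(); this is not the article's own DualTaxi code, whose snippets above lost their operators and numbers in extraction.

import random
import numpy as np

# Assumed hyperparameter values, for illustration only; the article tunes its own.
ALPHA = 0.1    # learning rate
GAMMA = 0.6    # discount factor
EPSILON = 0.1  # probability of exploring instead of exploiting

def bellman_update(q_table, state, action, next_state, reward):
    # Q(s,a) <- (1-alpha)*Q(s,a) + alpha*(r + gamma*max_a' Q(s',a'))
    old_q = q_table[state, action]
    next_max = np.max(q_table[next_state])
    q_table[state, action] = (1 - ALPHA) * old_q + ALPHA * (reward + GAMMA * next_max)

def choose_action(q_table, state, n_actions):
    # Epsilon-greedy: explore with probability EPSILON, otherwise exploit the Q table.
    if random.random() < EPSILON:
        return random.randrange(n_actions)     # explore: random action
    return int(np.argmax(q_table[state]))      # exploit: best known action

def train(env, n_states, n_actions, num_episodes=1000):
    # Generic training loop against any env with reset() and step(a) -> (s', r, done, info).
    q_table = np.zeros((n_states, n_actions))
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = choose_action(q_table, state, n_actions)
            next_state, reward, done, _info = env.step(action)
            bellman_update(q_table, state, action, next_state, reward)
            state = next_state
    return q_table

Once trained, acting greedily with np.argmax(q_table[state]) at every step corresponds to the evaluation routine the article runs periodically during training.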
2021-07-09 14:36:20 |
Overseas TECH |
DEV Community |
How to Build a Stock Trading Bot with Python |
https://dev.to/codesphere/how-to-build-a-stock-trading-bot-with-python-b1
|
How to Build a Stock Trading Bot with Python. Earlier this week we explored how code has drastically changed financial markets through the use of autonomous trading algorithms. Surprisingly, building your own trading bot is actually not that difficult. In this tutorial we're going to be using Python to build our own trading bot. Keep in mind that this tutorial is not about how to make billions off of your trading bot. If I had an algorithm that sophisticated, I probably wouldn't be giving it away. Rather, I'm going to show you how you can read market data, buy and sell stocks, and program the logic of your trading algorithm, all with some relatively simple Python code. And of course: this article is for information purposes only. It is not intended to be investment advice. Seek a duly licensed professional for investment advice. You can open up a quick demo of the project on Codesphere here. However, you will need an API key before you can actually start trading with our bot. More on that later. Some Helpful Terms: Before we get started, it'll be helpful to define a couple of terms. Paper Trading: the trading of securities with fake money for educational or testing purposes. Backtesting: testing a trading algorithm against past market data in order to evaluate its effectiveness. Moving Average: the average of a certain amount of recent entries in a set of data. S&P: a stock market index composed of the largest companies listed on US stock exchanges. Closing Price: the final price of a security during a unit of time. Good Til Cancel (GTC): when you place a trade it may not be met right away; a broker will continue to try and execute a GTC trade until you cancel it. Setup: The trading API we're going to be using is called Alpaca, and it is by far one of the most intuitive trading APIs I've found. In its free tier, Alpaca includes both Paper and Real Trading, and both Historical and Live market data. It also has an incredibly clean user interface and Python library. In addition, unless you're willing to leave your Python script running on your computer, you're going to need to deploy your trading bot in the cloud. For this we're going to use Codesphere. Since Codesphere's front end is an IDE, we can develop our bot directly on the platform. If you wish to do the coding on your local machine, however, you can connect your GitHub repo to Codesphere and deploy afterward. The only environment setup we really need before we can start coding is to create our pip environment (pipenv shell) and then install the Alpaca API (pipenv install alpaca-trade-api). We are also going to need to make a free Alpaca account and then navigate to our Paper Trading Account. Notice your API Key on the right-hand side. When you first open your account you will be prompted to generate a key, and both public and private key will be shown to you. We're going to need those for later. Buying and Selling Stocks: We can then set up our Alpaca Trading library and buy and sell stocks in Python like so. Our Strategy: The strategy we're going to use is to buy and sell whenever the minute moving average crosses our price. Now, this is FAR from a good trading strategy, but the logic is relatively simple and will allow us to focus on the general structure of a trading bot. In the above example, the red line is the stock price and the blue line is the moving average. When the moving average crosses under our price, we are going to buy a share of our stock. We are then going to hold the stock until the moving average crosses again and goes above the price. When that happens, we are going to sell our share and then wait
for the next buying signal. In this article we'll be trading SPY, which is an index that tracks the S&P, and we will only be trading one stock at a time. Keep in mind that if you were to make these trades with real money, you would have to comply with day trading regulations and brokerage fees, which would likely offset your gains. Reading Market Data: Now let's go over how to read market data using the Alpaca API in Python. If you're looking for more in-depth information for when you build your strategy, check out Alpaca's documentation. Executing Our Strategy: Now let's finally put all of this together for our complete trading algorithm. And there we have it, we just built a trading bot in lines of code. Now, if we leave this running on Codesphere throughout the day, we should see our Alpaca dashboard update throughout the day. Backtesting a Strategy: Now, if you don't want to wait around to see if your algorithm is any good, we can use Alpaca's market data API to backtest our Python algorithm against historical data. Next Steps: So there you have it, we just created a rudimentary trading bot with some fairly simple Python. Here is the full repo. While I highly encourage you guys to play around with the Alpaca API for educational purposes, be extremely careful if you are going to trade real securities. One bug in your code could have disastrous effects on your bank account. On a lighter note, this is a great opportunity to put those statistics classes you took to work. Comment down below if you're going to build your own trading algorithm. Happy Coding from your folks at Codesphere, the next generation cloud provider. |
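A minimal sketch of the moving-average crossover signal described above, assuming a plain pandas price series rather than the Alpaca API (no Alpaca calls are reproduced here); the 20-bar default window, the column names, and the toy closing prices are illustrative assumptions, not values from the article.

import pandas as pd

def crossover_signals(prices: pd.Series, window: int = 20) -> pd.DataFrame:
    # Buy when the price moves above its moving average (the MA crosses under the price),
    # sell when the price drops back below it, mirroring the rule described in the article.
    ma = prices.rolling(window).mean()
    above = prices > ma                                 # True while price is above its MA
    buy = above & ~above.shift(1, fill_value=False)     # crossed upward on this bar
    sell = ~above & above.shift(1, fill_value=False)    # crossed downward on this bar
    return pd.DataFrame({"close": prices, "ma": ma, "buy": buy, "sell": sell})

# Toy usage with made-up closing prices (not real market data):
closes = pd.Series([430.0, 431.5, 429.8, 432.2, 433.0, 431.0, 434.5, 435.1])
print(crossover_signals(closes, window=3))

In a real bot these boolean signals would still have to be turned into orders through whatever brokerage API is in use, and backtested against historical data first, as the article recommends.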
2021-07-09 14:21:16 |
Overseas TECH |
DEV Community |
New to node.js and struggling with socket.io |
https://dev.to/fletch0132/new-to-node-js-and-struggling-with-socket-io-o75
|
New to node.js and struggling with socket.io. Hi all, nervous first post but I really need some help. I'm working on a web application for p2p communication, both video and text. The video communication works (some teething issues), but my main issue is getting a user's socket id, specifically for the user that has just connected. I have tried many things, including socket.on('connected', ...) with console.log(socket.id). All I get is undefined. Yet if I run the same console.log code after the page loads, I can get it displayed. Not sure how to work with that. I want to store the socket id and username in an object array. Thank you. |
2021-07-09 14:02:10 |
Apple |
AppleInsider - Frontpage News |
Apple ad spot highlights Ping iPhone capabilities of Apple Watch |
https://appleinsider.com/articles/21/07/09/apple-ad-spot-highlights-ping-iphone-capabilities-of-apple-watch?utm_medium=rss
|
Apple ad spot highlights Ping iPhone capabilities of Apple Watch. Apple has shared a new ad spot highlighting how an Apple Watch could help users find a misplaced iPhone, even if it's literally lost in a haystack. The short ad features a local rancher and his dog driving down a country road. When they arrive at a haystack, the rancher uses his Apple Watch to ping his iPhone, which is hidden within the pile of hay. |
2021-07-09 14:52:35 |
Apple |
AppleInsider - Frontpage News |
First M1 battery tests lasted so long, Apple thought indicator was buggy |
https://appleinsider.com/articles/21/07/09/first-m1-battery-tests-lasted-so-long-apple-thought-indicator-was-buggy?utm_medium=rss
|
First M1 battery tests lasted so long, Apple thought the indicator was buggy. Apple's marketing vice president says that staff suspected a bug in macOS when the battery indicator results came in from the first M1 Apple Silicon Mac. The Apple executive says that, to the amusement of engineers, staffers didn't believe the first battery life results from the M1 processor. "When we saw that first system and then you sat there and played with it for a few hours and the battery didn't move, we thought, 'Oh man, that's a bug, the battery indicator is broken,'" Bob Borchers, VP of worldwide product marketing for Apple, told Tom's Guide. |
2021-07-09 14:38:07 |
Apple |
AppleInsider - Frontpage News |
Ireland plays defense as overhaul to global corporate tax rate looms |
https://appleinsider.com/articles/21/07/09/ireland-plays-defense-as-overhaul-to-global-corporate-tax-rate-looms?utm_medium=rss
|
Ireland plays defense as overhaul to global corporate tax rate looms. The Irish government is on the defensive because a new global tax plan could threaten its status as a tax haven for multinational corporations like Apple. Earlier this year, a group of leading economies agreed to close tax loopholes leveraged by global companies by enforcing a global minimum corporate tax rate. Now The New York Times reports that Ireland plans to put up a fight. |
2021-07-09 14:10:56 |
Apple |
AppleInsider - Frontpage News |
Mobeewave domain now owned by Apple one year after acquisition |
https://appleinsider.com/articles/21/07/09/mobeewave-domain-now-owned-by-apple-one-year-on-from-company-acquisition?utm_medium=rss
|
Mobeewave domain now owned by Apple one year after acquisition. The ownership of Mobeewave.com has shifted to Apple, a sign that the acquisition made a year ago has progressed internally. Mobeewave is a startup seeking to make peer-to-peer payments easier by letting users tap NFC-enabled credit cards against the backs of smartphones. Apple acquired the company in August 2020, and that process seems to be moving forward. |
2021-07-09 14:19:33 |
Apple |
AppleInsider - Frontpage News |
Biden Big Tech anti-competition order imminent, will call for return of net neutrality |
https://appleinsider.com/articles/21/07/09/president-biden-to-sign-big-tech-anti-competition-order-call-for-return-of-net-neutrality?utm_medium=rss
|
Biden Big Tech anti-competition order imminent, will call for return of net neutrality. A new executive order will contain multiple measures aimed at protecting the ability of small businesses to compete against Big Tech firms, including asking the FCC to restore net neutrality. Following his call for the FTC to step up right-to-repair regulations, President Biden will now also require the commission and other bodies to increase their anti-competition role. |
2021-07-09 14:20:06 |
Apple |
AppleInsider - Frontpage News |
Lockdown thriller 'Charon' shot entirely on iPhone 8 Plus |
https://appleinsider.com/articles/21/07/09/lockdown-thriller-charon-shot-entirely-on-iphone-8-plus?utm_medium=rss
|
Lockdown thriller 'Charon' shot entirely on iPhone 8 Plus. Writer-director Jennifer Zhang's new movie was made on an iPhone and is now heading to Cannes. Prolific writer-director Jennifer Zhang's new film, Charon, is now being shown to distributors at this year's Cannes Film Festival. Shot exclusively on Zhang's iPhone 8 Plus, set entirely in her own home, and with no budget at all, it is reportedly a compelling thriller. |
2021-07-09 14:23:59 |
海外TECH |
Engadget |
'PUBG Mobile' update adds a self-driving Tesla Model Y |
https://www.engadget.com/pubg-mobile-tesla-update-145129031.html?src=rss
|
'PUBG Mobile' update adds a self-driving Tesla Model Y. PUBG Mobile probably isn't the first game you'd expect to have an electric vehicle tie-in, but it's here all the same. Krafton and Tencent Games have rolled out an update for the phone-focused shooter that includes a raft of not-so-subtle plugs for Tesla and its cars. Most notably, you can find a Model Y on Erangel that can drive itself when you activate an autopilot mode on the highway, not that far off from the real Autopilot mode. You'll also find a Gigafactory on Erangel where you can build the Model Y by activating switches, and self-driving Semi trucks roam around the map, dropping supply crates when you damage the vehicles. No, despite the imagery, you can't drive a Cybertruck or Roadster (not yet, at least). The additions are part of a larger "technological transformation" for Erangel that includes an overhaul of the buildings and new equipment, including an anti-gravity motorcycle. As is often the case, you shouldn't expect these updates in regular PUBG; the battle royale brawler for consoles and PCs has a more realistic atmosphere. The PUBG Mobile update is really a not-so-subtle way for Tesla to advertise its EVs in countries where it doesn't already have strong word-of-mouth working in its favor. |
2021-07-09 14:51:29 |
海外TECH |
Engadget |
WhatsApp is adding a 'best quality' setting for sending photos and videos |
https://www.engadget.com/whatsapp-photo-video-best-quality-settings-143506690.html?src=rss
|
WhatsApp is adding a 'best quality' setting for sending photos and videos. WhatsApp is working on a setting that will let users more easily bypass its iffy image compression and send photos and videos in the highest available fidelity. The "best quality" option will likely join "auto" and "data saver" choices in a future version of the app. It appears users will eventually have the choice of whether to compress photos and videos (to perhaps save on their data allowance), send them in the best available quality, or let WhatsApp automatically select the optimal level of compression for files. The settings are present in an update WhatsApp submitted to the Google Play Beta Program, as spotted by WABetaInfo. The options will probably arrive in the public Android build of the app, though it's not clear when; they're currently in development. It's likely the additional image quality options will come to iOS as well, since WhatsApp generally maintains the same features across both platforms. This could come as welcome news for those who don't use the stock messaging apps on iOS or Android and often share photos and videos of their loved ones. Apple Messages retains the original image quality most of the time. Meanwhile, multi-device support is also on the way to WhatsApp. |
2021-07-09 14:35:06 |
海外TECH |
Engadget |
Arsenal is the latest soccer team to feature in Amazon's 'All or Nothing' docuseries |
https://www.engadget.com/amazon-prime-video-arsenal-soccer-documentary-142131434.html?src=rss
|
Arsenal is the latest soccer team to feature in Amazon's 'All or Nothing' docuseries. Amazon is reportedly turning back to the UK's Premier League for the focus of its next All or Nothing sports documentary. Deadline has learned the new series will cover Arsenal as it plays the upcoming Premier League season. The documentary deal hasn't been finalized, according to the site's sources, but Amazon, Arsenal and the production company involved all confirmed the plans. The Gunners documentary will debut on Prime sometime after the season ends. Whether or not it's an exciting series is unclear. Arsenal is a legendary team, but its current roster hasn't produced top-tier results: it finished last season in eighth place and bowed out of the UEFA Europa League (the tier below the Champions League) in the semis. It does have a rising star in the form of midfielder Bukayo Saka, though, and Deadline pointed out that the team finished the Premier League season on a strong note. Arsenal might make for a good comeback story, then. Amazon has diversified the scope of All or Nothing over the years to include three soccer teams, New Zealand's All Blacks rugby squad, a range of American football teams and, soon, a veteran hockey team, the Toronto Maple Leafs. The strategy, however, has remained the same: Amazon is determined to be a go-to source for sports shows and give you a reason to subscribe to Prime Video instead of (or alongside) rivals like Netflix. |
2021-07-09 14:21:31 |
海外TECH |
Engadget |
Biden's wide-ranging executive order covers Big Tech, net neutrality and more |
https://www.engadget.com/biden-executive-order-call-for-reinstating-net-neutrality-141029471.html?src=rss
|
Biden's wide-ranging executive order covers Big Tech, net neutrality and more. The movement to get the FCC to restore net neutrality just gained some serious traction. The White House just announced that President Joe Biden will be signing a new executive order today that will establish a "whole-of-government effort to promote competition in the American economy." In other words, it's targeting anticompetitive practices across a wide range of industries, including internet services and tech. The order contains proposals and actions, among which it specifically says "the President encourages the FCC to restore Net Neutrality rules undone by the prior administration." It also asked the agency to consider limiting early termination fees and preventing internet service providers from making deals with landlords that limit tenant choices. In addition, it urged the FCC to revive the Broadband Nutrition Label that was developed under the Obama administration, which would offer greater price transparency. The order also looked at how "dominant tech firms are undermining competition and reducing innovation" and announced an administration policy of greater scrutiny of mergers. It would focus on "dominant internet platforms," especially around "the acquisition of nascent competitors, serial mergers, the accumulation of data, competition by 'free' products, and the effect on user privacy." As part of its crackdown on Big Tech, the order called on the Federal Trade Commission to "establish rules on surveillance and the accumulation of data," along with banning "unfair methods of competition on internet marketplaces" and "anticompetitive restrictions on using independent repair shops or doing DIY repairs of your own devices and equipment." In other industries, like banking and personal finance, the order similarly asked for more robust scrutiny of mergers. It also urged the Consumer Financial Protection Bureau (CFPB) to "issue rules allowing customers to download their banking data and take it with them." Similar notions of price transparency, consumer rights, increased scrutiny of mergers and prevention of excessive fees were prevalent across the other industries covered. Under agriculture, for example, the order also highlighted the need to give consumers the right to repair their tractors and equipment. Proposals for the healthcare sector include allowing hearing aids to be sold over the counter, supporting price transparency rules, preventing surprise hospital billing, and standardizing plan options in the National Health Insurance Marketplace for easier comparison shopping. In the transportation section, airlines were the focus of the suggestions: the order called for rules around greater transparency and disclosure over baggage, change and cancellation fees, as well as better guidelines on when a company must issue refunds over delayed baggage or non-working services like in-flight WiFi or entertainment. After the order is signed later today, the administration will have plenty of work to do to get these initiatives moving. It's not a guarantee that all the suggestions announced here will eventually happen, but it's a clear sign that the Biden team is paying attention to the issues of anticompetition, a lack of transparency in multiple industries, and other unfair practices. |
2021-07-09 14:10:29 |
海外TECH |
Engadget |
TCL’s Nxtwear G cinema glasses could have been great |
https://www.engadget.com/tcl-nxtwear-g-wearable-display-140057578.html?src=rss
|
TCL's Nxtwear G cinema glasses could have been great. Let me ask you a question: do you really want to buy a pair of personal cinema glasses? As cool as they could be, they always feel like an artefact from a dystopia that's yet to engulf us. When the air burns and the seas boil, you won't be able to fit a big-screen HDTV into your existence support pod, so these will have to do. It hardly screams "aspirational." It doesn't help that nobody, not Sony, Avegant, Royole nor others, has managed to make this concept work. Personal cinemas, then, have replaced VR as the go-to whenever anyone needs to talk about a product that's perennially on the edge of breaking through and never has. But despite them being a solution in search of a problem, and their historical suckiness, things may be about to change. You see, TCL has been banging against this particular door for years, and now it's gearing up to launch its first model. The Nxtwear G Wearable Display Glasses solve many of the problems that dogged those earlier attempts. They're not perfect, and you'll probably not want to buy a pair now, but this is the closest anyone has gotten to making this concept work.

TCL's Nxtwear G puts two tiny displays close to your eyes in order to trick you into thinking you're looking at a bigger screen. Rather than cram the glasses full of tech, TCL put two displays, a pair of speakers and positioning hardware inside. That keeps the weight down to a very manageable level, much kinder to your neck for long-term wear. Everything else, including power, is handled by the device you plug this into, and the list of compatible hardware is pretty long: you can use major phones from Samsung, LG and OnePlus, as well as a long list of laptops, tablets and 2-in-1s. Essentially, TCL made a plug-and-play external display for your head that should play nice with any compatible USB-C device with DisplayPort support. The company decided to swim against much of the received wisdom that we've seen with other personal cinemas. Rather than trying to enclose the user in a black void, all the better to replicate that tenth-screen-in-a-mall-multiplex feeling, TCL wants you to see the outside world. Even when I tried an earlier prototype, its representatives said that you should feel comfortable wearing this on public transport, interacting with people as you do.

With every device I've tried them with, you simply need to plug the Nxtwear G in and everything starts. If you're using a compatible TCL phone, you'll get a pop-up asking if you want to use mirror mode or PC mode, which sets you up inside Android's desktop mode. The phone then acts as a touchpad for you to navigate around with your finger, although if you want to do more than hunt and peck, buy a Bluetooth keyboard and mouse. Connecting it to my MacBook Pro, too, the machine recognized it as an external display, and I was able to work and watch TV with my primary displays turned off. In fact, I wrote a chunk of this piece while inside this thing, even if I had to turn the zoom up to mad levels to make sure everything was readable. The Nxtwear G packs a pair of micro OLED displays that the company says are equivalent to a far bigger screen. That requires the usual suspension of ocular disbelief, but the effect works here, and the speakers do their job well enough. It's worth saying that they are essentially blasting audio in every direction, so grab your Bluetooth headphones if, say, your partner gets really annoyed when they can hear you watching Columbo when you're both in bed. I don't know if you should expect pixel-perfect video quality from a pair of screens this tiny, but be advised that they won't beat your smartphone. Certainly HD video looks fine, but the smallness of the screens means it's really tough to see good detail. Colors were washed out, certainly compared to the footage that was playing back on the TCL Pro 5G and MacBook Pro I was connected to during testing.

TCL's pitch is to say that as well as passive viewing you can also use the glasses to work, and it's here that I think TCL may have some success. As I said, it's possible to work with these on, and it would make sense to use them if you had to view sensitive documents. When you're working, say, on a train, this is the perfect antidote to shoulder surfers and other drive-by snoopers. Of course, for whoever makes the inevitable joke about watching adult content with nobody noticing: have a cookie. What TCL has managed to do is, several times over, solve the riddle as to why you could ever want to use a personal cinema. There are times and places where you could do so, both for work (more or less) and play, in some circumstances. Unfortunately, while the company was making great strides to solve the technical issues, it didn't have a huge amount of time to devote to making this experience comfortable. Your mileage may vary, but I found using these glasses to be a delightful experience right up until the moment it became painful. It is, right now, impossible to use these for a prolonged period of time before something starts hurting, either inside or outside your skull.

One of the more problematic design decisions that TCL took was to include a trio of nose pads that push the screen up and higher. The idea is to keep the screens in line with your eyes, but the unfortunate result is that you need to put the nose pads way down your nose. Like, to the point where, no matter the size, it feels like you're wearing those wire grips professional swimmers use to close their nostrils during sporting events. Then there are the temple tips, the part of the glasses' arms which bend down to hook over your ears. Whereas with regular glasses those tips are semi-plastic and can be adjusted by an optician, or at home with a hairdryer and some guile, the Nxtwear G's arms are rigid. Prolonged periods of wear mean that you'll get two slices of hard plastic sticking into the soft, fleshy bit of your head behind your ears. The solution I found to alleviate both of those issues, at least for a bit, was to pull out the nose pads entirely and wear them as I would regular glasses. After all, as a seasoned specs wearer, I accepted that the experience might not be as good, but found that this was actually better: I got a full view of the screen, and it was significantly more comfortable to wear for longer periods of time. Unfortunately, the reason the nose pads stand the glasses off your schnozz is to avoid it getting warm, since the Nxtwear G does generate a decent amount of warmth (not heat, warmth, mind you). And then, finally, there's the issue of eye strain, which, no matter how I wore these things, still meant I had to give up for significant rest periods. Maybe it's because I'm short-sighted, and so my eyes are already weak and feeble compared to the average personal cinema enjoyer. But I doubt it, and I suspect that lots of people may run the risk of an eye-strain headache if they use this for too long at once.

Now, I bet you're thinking: gee, if these were priced like an accessory, I'd grab a pair just to see what the fuss is about. I don't blame TCL for needing to recoup some of the development costs for these things, but boy. These glasses are going on sale in Australia first, at a price that works out to a serious chunk of money in the US. Heck, you could buy TCL's new Pro 5G phone instead, hold it near your face, and pat yourself on the back for your thriftiness. Facetiousness aside, I think TCL deserves enormous credit for making what can only be described as the best wearable display ever made. And if you're able, I'd say you should go and try these out, because my comfort-related dealbreakers may not affect you. And TCL deserves a fair crack at making these things cheaper and a little less prone to pinching, because we're so damn close. Sincerely, if personal cinemas are going to become a success, it'll be because they follow the template that TCL has laid down. It just needs a few tweaks. |
2021-07-09 14:00:57 |
海外科学 |
NYT > Science |
The Sea of Marmara, a ‘Sapphire’ of Turkey, Is Choking From Pollution |
https://www.nytimes.com/2021/07/09/world/europe/istanbul-sea-of-marmara-pollution.html
|
warming |
2021-07-09 14:22:42 |
海外TECH |
WIRED |
Joe Biden Wants You to Be Able to Fix Your Own Damn iPhones |
https://www.wired.com/story/biden-executive-order-right-to-repair
|
devices |
2021-07-09 14:05:04 |
金融 |
金融庁ホームページ |
Published financial response measures following the confirmation of livestock infected with CSF (classical swine fever). |
https://www.fsa.go.jp/news/r3/ginkou/20210709-2.html
|
Detail Nothing |
2021-07-09 15:27:00 |
ニュース |
@日本経済新聞 電子版 |
G20 finance ministers discuss full policy mobilization, with consideration for price stability as well
https://t.co/c77reH1oi6 |
https://twitter.com/nikkei/statuses/1413507710759292929
|
consideration |
2021-07-09 14:37:54 |
ニュース |
@日本経済新聞 電子版 |
Third dose to boost immunity: two companies to seek approval as the UK and others weigh the idea
https://t.co/rCtcZKAUL1 |
https://twitter.com/nikkei/statuses/1413507709748486160
|
approval application |
2021-07-09 14:37:54 |
ニュース |
@日本経済新聞 電子版 |
Avoiding a debt crisis: lessons from postwar America, with growth strategy in question
https://t.co/8FcCSfcRSR |
https://twitter.com/nikkei/statuses/1413507707491938310
|
growth strategy |
2021-07-09 14:37:53 |
ニュース |
@日本経済新聞 電子版 |
NY Dow opens with a rebound, briefly up 300 dollars as financial stocks rise
https://t.co/0yy47i5HwZ |
https://twitter.com/nikkei/statuses/1413502665993031683
|
financial stocks |
2021-07-09 14:17:51 |
ニュース |
@日本経済新聞 電子版 |
Government retracts Nishimura's remark about having "financial institutions press restaurants"
https://t.co/KUvE8Zp0XQ |
https://twitter.com/nikkei/statuses/1413498650018271238
|
outreach |
2021-07-09 14:01:54 |
ニュース |
BBC News - Home |
Coronavirus: Keep using NHS Covid app people urged |
https://www.bbc.co.uk/news/uk-57781115
|
transmission |
2021-07-09 14:39:28 |
ニュース |
BBC News - Home |
Southern Water fined record £90m for dumping raw sewage |
https://www.bbc.co.uk/news/uk-england-kent-57777935
|
sussex |
2021-07-09 14:22:49 |
ニュース |
BBC News - Home |
Covid vaccines do work well in clinically vulnerable |
https://www.bbc.co.uk/news/health-57781073
|
people |
2021-07-09 14:47:39 |
ニュース |
BBC News - Home |
Calls grow for extra bank holiday if England win |
https://www.bbc.co.uk/news/business-57774782
|
england |
2021-07-09 14:05:18 |
LifeHuck |
ライフハッカー[日本版] |
Ready for blazing sun or sudden downpours: this easy-to-carry umbrella for both sun and rain is just too convenient! |
https://www.lifehacker.jp/2021/07/amazon-ogawa-linedrops-zeroand.html
|
ogawa |
2021-07-09 23:30:00 |