Posted: 2023-10-30 00:04:09. RSS feed digest as of 2023-10-30 00:00 (1 item)
Category | Site name | Article title | Link URL | Frequent words / summary | Date registered |
---|---|---|---|---|---|
Overseas TECH | Engadget | What the evolution of our own brains can tell us about the future of AI | https://www.engadget.com/hitting-the-books-a-brief-history-of-intelligence-max-bennett-mariner-books-143058118.html?src=rss | The explosive growth in artificial intelligence in recent years, crowned with the meteoric rise of generative AI chatbots like ChatGPT, has seen the technology take on many tasks that formerly only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right. In this week's Hitting the Books excerpt, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, AI entrepreneur Max Bennett explores the quizzical gap in computer competency by exploring the development of the organic machine AIs are modeled after: the human brain. Focusing on the five evolutionary breakthroughs, amidst myriad genetic dead ends and unsuccessful offshoots, that led our species to our modern minds, Bennett also shows that the same advancements that took humanity eons to evolve can be adapted to help guide development of the AI technologies of tomorrow. In the excerpt below, we take a look at how generative AI systems like GPT are built to mimic the predictive functions of the neocortex, but still can't quite get a grasp on the vagaries of human speech. (HarperCollins) Excerpted from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett. Published by Mariner Books. Copyright by Max Bennett. All rights reserved. Words Without Inner Worlds: GPT is given word after word, sentence after sentence, paragraph after paragraph. During this long training process, it tries to predict the next word in any of these long streams of words. And with each prediction, the weights of its gargantuan neural network are nudged
ever so slightly toward the right answer. Do this an astronomical number of times, and eventually GPT can automatically predict the next word based on a prior sentence or paragraph. In principle, this captures at least some fundamental aspect of how language works in the human brain. Consider how automatic it is for you to predict the next symbol in the following phrases: "One plus one equals ___"; "Roses are red, violets are ___." You've seen similar sentences endless times, so your neocortical machinery automatically predicts what word comes next. What makes GPT impressive, however, is not that it just predicts the next word of a sequence it has seen a million times; that could be accomplished with nothing more than memorizing sentences. What is impressive is that GPT can be given a novel sequence that it has never seen before and still accurately predict the next word. This, too, clearly captures something that the human brain can ___. Could you predict that the next word was "do"? I'm guessing you could, even though you had never seen that exact sentence before. The point is that both GPT and the neocortical areas for language seem to be engaging in prediction. Both can generalize past experiences, apply them to new sentences, and guess what comes next. GPT and similar language models demonstrate how a web of neurons can reasonably capture the rules of grammar, syntax, and context if given sufficient time to learn. But while this shows that prediction is part of the mechanisms of language, does this mean that prediction is all there is to human language? Try to finish these four questions: 1) If x …, then x equals ___. 2) I am in my windowless basement, and I look toward the sky, and I see ___. 3) He threw the baseball 100 feet above my head; I reached my hand up to catch it, jumped, and ___. 4) I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally ___. Here, something different happens. In the first question, you likely paused and performed some mental arithmetic before being able to answer the question.
In the other questions, you probably, even if only for a split second, paused to visualize yourself in a basement looking upward and realized what you would see is the ceiling. Or you visualized yourself trying to catch a baseball a hundred feet above your head. Or you imagined yourself one hour past Chicago and tried to find where you would be on a mental map of America. With these types of questions, more is happening in your brain than merely the automatic prediction of words. We have, of course, already explored this phenomenon: it is simulating. In these questions, you are rendering an inner simulation, either of shifting values in a series of algebraic operations or of a three-dimensional basement. And the answers to the questions are to be found only in the rules and structure of your inner simulated world. I gave the same four questions to GPT; here are its responses (GPT's responses were bolded and underlined in the original): 1) If x …, then x equals …. 2) I am in my windowless basement, and I look toward the sky, and I see a light, and I know that it is a star, and I am happy. 3) He threw the baseball 100 feet above my head; I reached my hand up to catch it, jumped, and caught it. It was a lot of fun. 4) I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally get to see the Pacific Ocean. All four of these responses demonstrate that GPT, as of June, lacked an understanding of even simple aspects of how the world works. If x …, then x equals …, not …. If you were in a basement and looked toward the sky, you would see your ceiling, not stars. If you tried to catch a ball 100 feet above your head, you would not catch the ball. If you were driving to LA from New York and you'd passed through Chicago one hour ago, you would not yet be at the coast. GPT's answers lacked common sense. What I found was not surprising or novel; it is well known that modern AI systems, including these new supercharged language models, struggle with such questions. But that's the point: even a model trained on the entire corpus of the
internet, running up millions of dollars in server costs, requiring acres of computers on some unknown server farm, still struggles to answer common-sense questions, ones presumably answerable by even a middle-school human. Of course, reasoning about things by simulating also comes with problems. Suppose I asked you the following question: "Tom W. is meek and keeps to himself. He likes soft music and wears glasses. Which profession is Tom W. more likely to be: a) librarian, or b) construction worker?" If you are like most people, you answered librarian. But this is wrong. Humans tend to ignore base rates. Did you consider the base number of construction workers compared to librarians? There are probably one hundred times more construction workers than librarians. And because of this, even if a far larger share of librarians than of construction workers are meek, there still will be far more meek construction workers than meek librarians. Thus, if Tom is meek, he is still more likely to be a construction worker than a librarian. The idea that the neocortex works by rendering an inner simulation, and that this is how humans tend to reason about things, explains why humans consistently get questions like this wrong. We imagine a meek person and compare that to an imagined librarian and an imagined construction worker. Who does the meek person seem more like? The librarian. Behavioral economists call this the representative heuristic. This is the origin of many forms of unconscious bias. If you heard a story of someone robbing your friend, you can't help but render an imagined scene of the robbery, and you can't help but fill in the robbers. What do the robbers look like to you? What are they wearing? What race are they? How old are they? This is a downside of reasoning by simulating: we fill in characters and scenes, often missing the true causal and statistical relationships between things. It is with questions that require simulation that language in the human brain diverges from language in GPT. Math is a
great example of this. The foundation of math begins with declarative labeling: you hold up two fingers, or two stones, or two sticks, engage in shared attention with a student, and label it two. You do the same thing with three of each and label it three. Just as with verbs (e.g., running and sleeping), in math we label operations (e.g., add and subtract). We can thereby construct sentences representing mathematical operations: three add one. Humans don't learn math the way GPT learns math. Indeed, humans don't learn language the way GPT learns language. Children do not simply listen to endless sequences of words until they can predict what comes next. They are shown an object, engage in a hardwired nonverbal mechanism of shared attention, and then the object is given a name. The foundation of language learning is not sequence learning but the tethering of symbols to components of a child's already present inner simulation. A human brain, but not GPT, can check the answers to mathematical operations using mental simulation. If you add one to three using your fingers, you notice that you always get the thing that was previously labeled four. You don't even need to check such things on your actual fingers; you can imagine these operations. This ability to find the answers to things by simulating relies on the fact that our inner simulation is an accurate rendering of reality. When I mentally imagine adding one finger to three fingers, then count the fingers in my head, I count four. There is no reason why that must be the case in my imaginary world, but it is. Similarly, when I ask you what you see when you look toward the ceiling in your basement, you answer correctly because the three-dimensional house you constructed in your head obeys the laws of physics (you can't see through the ceiling), and hence it is obvious to you that the ceiling of the basement is necessarily between you and the sky. The neocortex evolved long before words, already wired to render a simulated world that captures an incredibly vast
and accurate set of physical rules and attributes of the actual world. To be fair, GPT can, in fact, answer many math questions correctly. GPT will be able to answer "one plus one equals two" because it has seen that sequence a billion times. When you answer the same question without thinking, you are answering it the way GPT would. But when you think about why, when you prove it to yourself again by mentally imagining the operation of adding one thing to another thing and getting back two things, then you know it in a way that GPT does not. The human brain contains both a language prediction system and an inner simulation. The best evidence for the idea that we have both these systems comes from experiments pitting one system against the other. Consider the cognitive reflection test, designed to evaluate someone's ability to inhibit her reflexive response (e.g., habitual word predictions) and instead actively think about the answer (e.g., invoke an inner simulation to reason about it). Question 1: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? If you are like most people, your instinct, without thinking about it, is to answer ten cents. But if you thought about this question, you would realize this is wrong; the answer is five cents. Similarly, Question 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? Here again, if you are like most people, your instinct is to say "one hundred minutes," but if you think about it, you would realize the answer is still five minutes. And indeed, as of December, GPT got both of these questions wrong in exactly the same way people do: GPT answered ten cents to the first question and one hundred minutes to the second question. The point is that human brains have an automatic system for predicting words (one probably similar, at least in principle, to models like GPT) and an inner simulation. Much of what makes human language powerful is not the syntax of it, but its ability to give us the necessary information to render a
simulation about it and, crucially, to use these sequences of words to render the same inner simulation as other humans around us. This article originally appeared on Engadget. | 2023-10-29 14:30:58 |
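The training objective the excerpt opens with (read word after word, predict the next one, nudge toward the right answer) can be caricatured in a few lines. The sketch below is an assumption-laden toy, not GPT: it counts bigram transitions instead of training a neural network, and the corpus and the `train`/`predict_next` helpers are invented for illustration.

```python
from collections import Counter, defaultdict

def train(sentences):
    """Count word -> next-word transitions; each observed pair
    'nudges' the counts, a crude stand-in for weight updates."""
    model = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for word, nxt in zip(words, words[1:]):
            model[word][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed follower, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "one plus one equals two",
    "roses are red violets are blue",
]
model = train(corpus)
print(predict_next(model, "roses"))   # "are": the only word seen after "roses"
print(predict_next(model, "equals"))  # "two": the only observed follower
```

Real language models replace these counts with a neural network whose weights are adjusted by gradient descent, which is what lets them generalize to sequences they have never seen before.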
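The excerpt's two quantitative arguments, the Tom W. base-rate problem and the cognitive reflection test, reduce to a few lines of arithmetic. The populations and percentages below are illustrative assumptions (the excerpt's exact figures did not survive the feed); the bat-and-ball and widget numbers are the classic versions of the CRT questions, which the instinctive answers quoted in the text ("ten cents", "one hundred minutes") imply.

```python
# Base rates (Tom W.): even if meekness is far more common among
# librarians, the much larger worker population dominates.
librarians = 1_000                           # illustrative population
construction_workers = 100 * librarians      # "one hundred times more"
meek_librarians = 0.95 * librarians          # assumed: 95% of librarians meek
meek_workers = 0.05 * construction_workers   # assumed: 5% of workers meek
print(meek_librarians, meek_workers)         # 950.0 5000.0 -> worker more likely

# CRT question 1: bat + ball = $1.10 and bat = ball + $1.00,
# so ball + (ball + 1.00) = 1.10  =>  ball = $0.05, not the reflexive $0.10.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

# CRT question 2: 5 machines make 5 widgets in 5 minutes, i.e. one widget
# per machine per 5 minutes, so 100 machines make 100 widgets in parallel
# in the same 5 minutes, not 100 minutes.
minutes_needed = 5
```

The fast verbal system produces the memorized-sounding answer; the inner simulation, played here by the explicit algebra, is what catches the error.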