
There's this online lecture by Robert Sapolsky about "Chaos and Reductionism". It says things that sound similar to stuff I've heard before, and it doesn't seem to me that these people are getting things right. So I'll try to see if I can figure out what's going on here. Is he wrong about stuff? Am I? What's he even saying?


Table of contents of the lecture video

The contents of his lecture:

  • 0:07, intro, "I'm not sure if I completely understand what I'm talking about"
  • 2:20, start with a framework for "the standard western approach to understanding complicated systems with scientific bases",
    • 2:20, history: dark ages, transitive property in logic
    • 9:45, "the single most important concept in all of science in the last 500 years" is "reductionism"
    • 11:03, "it came with a bunch of corollaries", if you know the starting state you will 100% know what the full complex mature system will look like (and vice versa), something about algebra
    • 13:45, fancy systems need blueprints, something...[basically "top-down" design stuff]
    • 14:50, variability is bad "noise" in the system you want to get rid of
  • 19:50, "so where does that begin to cause problems?", "reductionism has to fail when you're looking at biological systems",
    • neurobiology "grandmother neurons", @ about 26:11 "grandmother's face at this angle", @ about 28:25 says there do sometimes seem to be neurons like this, "sparse coding" neurons.
    • 30:43, most searches for grandmother neurons failed "for a very simple reason" [he doesn't mention the difficulty of testing millions of individual neurons as one of those reasons]: "you run out of neurons". Says each layer of neurons needs MORE neurons than the last layer to pull off the processing, seemingly because of all the different ways you can combine different signals from the "lower" layer, as if one neuron had to respond to each unique combination for it to work this way. I want to check either those papers or machine learning to see if this is actually the case; I would have thought you can still get this orderly "grandmother-neuron" progression of layers but with fewer and fewer neurons in higher layers as they become more abstract (see the sketch after this list).
    • 32:06, says the now-dominant approach in that field is "an explicitly non-reductive approach", which is looking at "neural networks", and complex information is not encoded in "a single protein, a single synapse, a single neuron"; instead it is encoded "in patterns", patterns of activation of many many neurons, networks that are interacting [this is somehow not exactly like the layers approach?? I can only guess it is just not necessarily like the layers approach, but could include it]. @32:57 "you can't solve the problem of recognizing faces by using reductive component part neurobiology"
    • 33:07, [bifurcating system, scale-free complexity] [@35:45 says one neuron's branches can have the exact same complexity as the entire circulatory system, lol, seems exaggerated]. Supposedly the same problem with genes: you'd run out of genes [basically, you can't code a fractal shape if you need one gene to specify each point/vertex/bifurcation on that shape; there are infinite points, or at least a very high number, and point-for-point requires too many] ["will bifurcate to millions of capillaries and there's only 20,000 genes"]
    • 38:23, chance and Brownian motion throw off... the point-for-point system... [claims a reductive view would mean twins are "the same all the way down to single molecules"...]... chance throws off the ability to know the complex system (I'd say the future state) based on the starting state
    • 40:24, the fish hierarchy experiment
    • 42:32, kind of a word-salad summary, and some clearer-ish summary, and "it's got to be something else" whatever that "it" means
  • 43:46, chaotic systems, "a system that is not reductive", "where there is non-linear non-additivity", contrasts with fixing a clock, and trying to understand why a cloud isn't raining
  • 5 minute break
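
Here's a minimal sketch of what I mean by that hunch (toy layer sizes I made up, untrained random weights, nothing from the papers he cites): a feed-forward network whose layers get narrower as they get more abstract. It doesn't prove anything about real cortex, it just shows that "each layer needs MORE neurons" isn't forced by the layered architecture itself.

```python
# A toy feed-forward network where each layer is NARROWER than the one below it,
# the opposite of "each layer needs more neurons". Layer sizes and weights are
# made up (untrained, random); this only shows the shape of the computation,
# not how real cortex or any particular published model works.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [1024, 256, 64, 8]          # e.g. "pixels" -> features -> ... -> a few abstract units

# one random weight matrix per layer transition
weights = [rng.standard_normal((n_out, n_in)) * 0.01
           for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Pass an input vector up through the layers (linear combination + ReLU)."""
    for w in weights:
        x = np.maximum(w @ x, 0.0)
    return x

activity = forward(rng.standard_normal(layer_sizes[0]))
print(activity.shape)                     # (8,) -- far fewer units at the top than at the bottom
```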

Does Sapolsky construct opposing sides, and if so what are they?


seemingly,

Side A:

  • "reductive science"
  • defines reductionism @10:00, "if you want to understand a complex system, you break it down into its component parts. And when you understand the individual parts, you will be able to understand the complex system"
  • says "the concept of linearity"/"additivity" is "intrinsic to that", when you have the parts "all you need to do" is "add them together" and they will "increase in their complexity in a linear manner", ("and you will produce the whole complex system")
  • "this is westernized reductionism"
  • [something about top-down design "needs" to be done]
  • variability is noise in the system, it's bad, stuff you want to get rid of, "noise represents instrument error"
  • to avoid [error? noise?], become more reductive. the more detail you see, the closer you are to seeing what's actually going on. as you look closer and closer, variability should disappear.
  • underneath the noise is an idealized "norm" as to what the answer is...@18:12 he says this means if someone's temperature isn't the perfect human temperature, then the measurement device is just wrong. lol
  • all noise is discrepancy from what is truly going on
  • [point for point relationships are reductive, or reductionism assumes them, or whatever]["point for point reductionism" @24:47 ish]
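
For reference, here's a minimal sketch of what "linearity"/"additivity" means in the ordinary mathematical sense (my gloss, not his wording): a linear relation obeys superposition, a nonlinear one generally doesn't.

```python
# "Additivity" / superposition in the ordinary mathematical sense (my gloss):
# a linear relation satisfies f(a + b) == f(a) + f(b); a nonlinear one generally doesn't.
def linear(x):
    return 3.0 * x            # output scales with input; contributions simply add

def nonlinear(x):
    return 3.0 * x * x        # the "part" interacts with itself; contributions don't add

a, b = 1.0, 2.0
print(linear(a + b), linear(a) + linear(b))           # 9.0 9.0   -> additive
print(nonlinear(a + b), nonlinear(a) + nonlinear(b))  # 27.0 15.0 -> not additive
```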


Side...2:

  • not reductive?
  • "chaoticism" apparently


Does he describe the sorta principles of science correctly?

Pretty sure you can throw out all the stuff he lumps together, and just stick to "if you want to understand a complex system, you break it down into its component parts". The section below is basically a bunch of different ways of saying that the other stuff he lumps in doesn't belong together; it isn't all the same stuff.

My experience in science education (mechanical engineering):

  • both linear and nonlinear systems are taught
  • the need for iterative methods if you want to solve nonlinear systems (because there is generally no closed-form analytic solution, you have to converge on the answer numerically); see the first sketch after this list
  • in control theory, both stable (including cyclically stable or whatever?) and unstable (and chaotic?) systems are taught
  • I didn't see anything like "everything is linear and simple to predict" or "everything is point for point", whatever that is
  • contrary to what he says at 44:50 ("it doesn't work that way"), this sounds a lot like a technique known as "finite element analysis", and that does in fact work better the more detailed you make your simulation (see the second sketch after this list). The method can be used for fluids like clouds just as well as for solids; it's just a detailed simulation. In fact it is often the best method for calculating non-linear deformations, fluid flows, and the like. Sapolsky gives no workable alternative. His example is also conveniently garbled [what are we trying to do? blaming one cloud for a drought??? why would "reductive" scientists do that?], so there's not much way for me to even discuss it.
  • cutting up a system into parts using "control volumes" (or a "Free Body Diagram", or a "Gaussian Surface") always requires taking into account how things outside of that part impact it. The moment a conceptual "cut" is made, a conceptual "join" also has to be made in that same place. So if you conceptually cut up a support beam, you need to include how each side of that cut is transmitting forces of tension/compression and torsion to the other side. Those forces don't disappear on one side of a merely conceptual cut. See also the split between individual part and context on my page "How Do Things Happen?".
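
First sketch: the iterative idea for nonlinear equations. The equation cos(x) = x is my own toy example, not something from the lecture; it has no tidy closed-form solution, but Newton's method converges on it in a few steps.

```python
# Newton's method: start from a guess and iteratively improve it until the
# residual is (numerically) zero.
import math

def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Iteratively refine a root estimate of f using its derivative dfdx."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: math.cos(x) - x,      # f(x) = cos(x) - x
              lambda x: -math.sin(x) - 1.0,   # f'(x)
              x0=1.0)
print(root, math.cos(root) - root)            # ~0.7390851332..., residual ~0
```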
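
Second sketch: the "more detail works better" point. This is a plain 1D finite-difference solver rather than real finite element analysis, and the equation is my own toy example, but it shows the basic behaviour: refine the grid and the numerical solution gets measurably closer to the exact one.

```python
# Solve u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0 (exact answer:
# u(x) = sin(pi x)) on a coarse grid and a fine grid.
import numpy as np

def solve_poisson_1d(n):
    """Solve -u'' = f with u(0) = u(1) = 0 on n interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)                                    # interior points
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # -d^2/dx^2
    f = np.pi**2 * np.sin(np.pi * x)                                  # right-hand side
    return x, np.linalg.solve(A, f)

for n in (10, 100):
    x, u = solve_poisson_1d(n)
    error = np.max(np.abs(u - np.sin(np.pi * x)))
    print(f"{n:4d} grid points: max error = {error:.2e}")   # error shrinks by roughly 100x
```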


Obviously, variability is known to come from BOTH the measuring instrument AND the thing being measured. He uses the example of human temperature, but in reality science does the opposite of what he says: we know that variation from the typical human temperature can mean the person has a fever (and that the measuring instrument is accurately measuring it; this is not just measurement noise).
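
A quick back-of-the-envelope version of that point (the numbers are mine, not from the lecture):

```python
# If a thermometer's known instrument noise is about +/- 0.1 degC and it reads
# 39.0 degC on a person, the 2 degC gap from the textbook 37.0 degC is roughly
# 20 standard deviations of instrument noise -- the sensible conclusion is
# "fever", not "the measurement device is just wrong".
instrument_sigma = 0.1     # typical thermometer noise, degC
reading = 39.0             # measured value, degC
textbook_norm = 37.0       # idealized "normal" body temperature, degC

deviation_in_sigmas = abs(reading - textbook_norm) / instrument_sigma
print(deviation_in_sigmas)  # ~20 -- far too big to blame on the instrument
```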

I think he mostly relies on equivocating "point for point" ideas with everything else, which actually contradicts his insistence that this same paradigm collapses complex noisy systems into one simple measurement like "temperature". When point-for-point fails, he says it's reductionism that failed; when one-point-for-many-points fails (grandmother neurons etc., which he calls point-for-point anyway), he says it's reductionism that failed yet again. And then he says the butterfly effect (which is one point having a large impact on another far-away point) also contradicts reductionism (I might be stretching my criticism with that one, but it might be worth considering how different this really is). [And his scale invariance with his testosterone-effect graph also seems like he himself is trying to set up a point-for-point relation of some kind while selling it as yet another failure of reductionism?]

Perhaps this is how I would have characterized science (at least, the aspects of it that are related to what he's talking about); compare it to what he says:

  • it's true that "if you want to understand a complex system, you break it down into its component parts", and it does help you make models to predict how those systems will behave, just not with "perfect, immediate, forever" accuracy.
  • learn about parts and their relations to each other.
    • equations are about relations, not just parts
    • these equations don't just use the plus sign, they use multiplication and logarithms and whatever else
    • the equations of science can be non-linear (see the sketch after this list)
  • variability is known to come from BOTH the measuring instrument AND the thing being measured
    • all noise is part of what is truly going on, but you can't assume it's coming from the system rather than the device; it might largely be what's truly going on inside the measurement device, and we know roughly how much noise a given measurement device produces, so we can take this into account
    • the more detail you see, the more accurate your knowledge is. More accurate is more accurate. That includes being able to see more variability where before there seemed to be uniformity (like how a smooth surface looks rough under a microscope).
      • where you will find that variability disappears: when comparing more than one measurement (using precise measuring instruments) of exactly the same state. Not all things are changing over time, or if they are, you can make two measurements simultaneously, and the two measurements will match (leaving aside relativity, but even then the different perspectives can be transformed into each other and thus do match).
  • literally no one makes any blanket statements about how much complexity comes from combining things, because it always depends upon what things you combine and how you combine them. no idea where he heard otherwise
  • there is none of whatever this "point-for-point" stuff is that he thinks is so pervasive and intrinsic to western science. It's not a precept of western science. He's making that up or something, and it's really convenient for him to have something so ridiculous so he can say "it fails" in any random example he can pull out of a hat. Seems like he's using an equivocation fallacy to create a straw-enemy to attack
  • yes there's difficulty predicting complex systems perfectly, but no other methods are capable of predicting those systems any better
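
Here's the sketch mentioned above, a standard textbook example (the logistic map, my choice, not something from this part of the lecture): a model built from simple, fully known parts can still be nonlinear and practically unpredictable far into the future, while remaining perfectly usable for short-range prediction.

```python
# The logistic map x -> r*x*(1-x): the "parts" are one number and one short
# rule, yet for r = 3.9 two starting states that differ by only 1e-6 diverge
# within a few dozen steps. Reductive modelling still works; it just doesn't
# buy perfect long-range prediction.
def logistic_trajectory(x0, r=3.9, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)          # starting state nudged by 1e-6
for step in (0, 5, 15, 30):
    print(step, abs(a[step] - b[step]))    # the gap grows by orders of magnitude
```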

more about how he lumps some things together that really aren't the same:

  • whether reductive methods work is not the same as whether there is a quick point-for-point way to find the answer using those methods
  • the usefulness of modeling systems in terms of parts does not depend on those parts being non-interacting ("linear")

Reductionism and emergence


Is the fish experiment reductionist at all?

If you take one emergent property of a system and say you understand the system, is that "Reductionism"? Two fish, interacting, each with millions of neurons, and you conceptually just lump them together as one immutable thing... no, this is the opposite of "if you want to understand a complex system, you break it down into its component parts".

This seems quite clearly to be based on the other meaning of the word "reductive" (the meaning that means something like "too simplistic"). Yes, it's weird and annoying that this word has two opposite meanings, but that's how the English language is sometimes; look up "words that are their own antonyms" and you'll find several. You can't equivocate between two meanings of a word just because they sound the same and are spelled the same, any more than baseball players use winged mammals to hit baseballs.

He throws around terms like "starting states" in ways that don't seem to fit their definition. The results of the two-by-two experiments are not in any way what is meant by the starting state of the larger-group experiment. The starting state of the larger-group experiment is something like "all the locations and states of every atom when you begin the large-group experiment" (and obviously the small amount of information about who dominated whom in the two-by-two experiments is not going to tell you that, not even close).

Is his graph of testosterone effect variance meaningful at all?

what he initially says about the paper before showing the graph

what he later says about the graph, and what the graph actually has on it

what about it?

Is he implying true things about how the brain recognizes people?

@32:57 he says "you can't solve the problem of recognizing faces by using reductive component part neurobiology"

Except you can, just not the way he must be imagining. See stuff like "control volumes", a "Free Body Diagram", or a "Gaussian Surface"; you can use techniques like that on anything. Just model what the parts are like and how they interact.
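
Here's a minimal free-body-diagram sketch of that cut-and-join idea (my own toy numbers, nothing from the lecture, and transverse loading rather than the tension/torsion case described earlier): a simply supported beam, conceptually cut at some position. The internal forces transmitted across the cut come out the same whether you compute them from the left half or the right half.

```python
# A simply supported beam with a single point load, conceptually "cut" at x_cut.
# The internal shear and bending moment at the cut are computed twice, once from
# each half of the beam, and agree -- the conceptual cut only works together
# with the conceptual join.
L, P, a = 4.0, 10.0, 1.5        # beam length (m), point load (kN), load position (m)
R_left = P * (L - a) / L        # support reactions from whole-beam equilibrium
R_right = P * a / L

def internal_forces_from_left(x):
    """Shear and sagging moment at the cut, from equilibrium of the left half."""
    shear = R_left - (P if x > a else 0.0)
    moment = R_left * x - (P * (x - a) if x > a else 0.0)
    return shear, moment

def internal_forces_from_right(x):
    """Same quantities, from equilibrium of the right half."""
    shear = -(R_right - (P if x < a else 0.0))
    moment = R_right * (L - x) - (P * (a - x) if x < a else 0.0)
    return shear, moment

x_cut = 2.0
print(internal_forces_from_left(x_cut))    # (-3.75, 7.5)
print(internal_forces_from_right(x_cut))   # (-3.75, 7.5) -- same forces cross the cut
```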
