Miscellaneous thoughts on genius

    In this post, I am going to discuss some miscellaneous ideas about genius.  I will be working within the framework explored in Bruce Charlton and Edward Dutton's book The Genius Famine.           

    The main idea of this book is that breakthroughs, whether in science or any other area (there can be geniuses in any field of human endeavor), don't "just happen"; they are the product of individual geniuses.  Another idea in this book is the "invisibility of genius": genius can go unrecognized because once a genius makes a breakthrough, the breakthrough becomes "obvious."  Many technologies that we take for granted, such as the needle and the wheel, were at one time the product of genius.

    Geniuses work at different levels of generality.  The geniuses that are most often recognized are those who work at a middle level, combining theory with application, as in Newton's work on physics.  However, the geniuses who work at the most general and the most specific levels often go unrecognized.  I would call them meta-geniuses and micro-geniuses, respectively.

    A meta-genius is someone who begins a paradigm that others then follow.  They make a breakthrough at a conceptual level, and once the idea exists, it can be elaborated on.  Often, a meta-genius's ideas may only be appreciated by a few, and so they are considered dreamers rather than people who made important breakthroughs.  Two examples would be Rudolf Steiner and Paracelsus.  Paracelsus's ideas motivated people to develop different approaches to medicine, and Rudolf Steiner's ideas about the evolutionary development of consciousness were also a conceptual breakthrough.

    By contrast, micro-geniuses figure out how to make the details work together.  They take an idea or an existing technology and improve it.  Many people think that this is mere "tinkering," not genius at all.  But it certainly is: there is a great deal of creativity involved in making something work.  An example of this kind of genius is Presper Eckert, who worked on the ENIAC computer with John Mauchly.  Mauchly was the "idea man" and Eckert figured out how to put the ideas into practice.  Here is what Jean Bartik, one of the programmers of the ENIAC, said about the two:

    "And they complemented each other so well, because John [Mauchly] said tht the thing about Presper was when he would give him an idea, Presper would say 'Well, we can do it if we're careful.'  And he said 'He never pooh-poohed any of my ideas.' He's always considered them.  And John says that he believed that Presper was the greatest component engineer in the country at that time, in terms of components."

and also

    Bartik: The thing that made Pres Eckert such a great engineer was that - and I guess nobody had really thought of it up to then - those decade counters were made up of a series of flip-flops, and they just flipped back and forth, like the binary system; but the working of this machine did not depend that much on the amplitude or the cleanness of the signal, because he arranged it so these signals only had to act like a trigger: either it triggered or it didn't trigger.  So you didn't have to have that good a signal to do it.  Everybody said, 'Oh, well, these vacuum tubes won't work, because the signals would fluctuate'; but he designed it so that they didn't have to work very well for them to still work.


So it was robust.


Robust? [laughs.] Well, I guess you could say that!  It was robust, but it was clever, and nobody thought it would work.  But Pres said, 'If we're careful, it will work.'  He was a brilliant man.  

    Another commonly held idea about genius is discussed in this post by Bruce Charlton: 

    "During the 1800s it was generally recognised that 'great men' - including geniuses - were essential to the survival, problem-solving ability and progress of societies.  If there was an insufficient supply of geniuses, then society would be static at best, and would crumble and collapse as soon as it encountered a novel threat which tradition or trial and error was incapable of solving. 

    But through the twentieth century the idea emerged, especially in science, that no individual person made an essential contribution - and that if Professor A had not made his big discovery, then one or several of Professors B, C, or D would have made essentially the same breakthrough within a short space of time.  This suggested that science was primarily a process, and that no individual was indispensable.

This idea was propagated even among some geniuses, and even when arguing for the existence of exceptions - for example Paul Dirac (himself a genius) said in praising Einstein for the uniquely personal breakthrough of General Relativity that all other breakthroughs in physics (including his own) merely accelerated the progress of the subject by a few years at most.

    But I believe this view was an artefact of the extremely-unusual high prevalence of geniuses in science during the couple of centuries leading up to the mid-twentieth century; the fact that many were working in certain specific areas such as physics, and the sudden pooling of talent resulting from fast international travel and communication.  For a while, a short while in fact, just a few decades, there were more physics geniuses than were strictly needed - and any one of them (except probably Einstein) had 'back-up' from one or more individuals of similar ability and interests."

    This is a very important point.  When one considers the history of science in the West, from the early 1600s up until 1970 or so, it is amazing that even though certain geniuses stick out from others (Newton and Leibniz, for instance, in the late 17th century), there are many others, any one of whom would stand out tremendously were they to live today, such as Christopher Wren, Hooke, Halley, and Huygens; and, even among non-geniuses, many people of tremendous knowledge and technical skill.  Over the broad scope of history this is a very unusual situation.  In fact, it has happened only once in world history so far as we know.

    So, looking back on the history of the West, people tend to think that genius is something that "just happens," that there is always a baseline level of genius and that, given the opportunity, science will occur.  But that is not the case.  This situation lasted for approximately four centuries, but is now gone.  This is discussed in more detail in The Genius Famine, but essentially what has happened in many instances is that new ideas have dried up and old ideas are being mined for smaller and smaller scraps.

    In his book Meditations on the Tarot, Valentin Tomberg discusses an interesting idea about the horizontal and vertical aspects of the human being.  The horizontal comprises the influences of one's ancestors, whether genetic or spiritual, while the vertical is the influence of the individual human being.

    This concept applies exactly to genius.  Any breakthrough of a genius takes place within a broader paradigm (the horizontal influence), and the genius's individual breakthrough is the vertical influence.  It is commonly believed that breakthroughs (especially technological breakthroughs) are neutral, but this is not true.  Some are; for example, the knife is an invention that is very close to neutral.  But in general, the circumstances of a breakthrough determine whether it is good, bad, or neutral.

    Both the horizontal and vertical influences have an effect.  The horizontal comprises the ideas and influences drawn on by the genius, and the vertical the character and motivations of the genius.  For example, social media isn't neutral.  It was deliberately designed as a form of ersatz socializing.  Further, the inventors of social media were, at best, people duped into believing it was inevitable and, at worst, people expecting huge amounts of money and influence.  Under those circumstances, a good invention will not come about.

    By contrast, think about the dog.  The dog is one of humankind's greatest "inventions" in the sense that there had to be a first person or group of people, maybe a family, who first conceived of the idea of domesticating wolves.  Maybe multiple people had this idea independently, but nonetheless, the domestication of the dog was the product of a genius.  And, since the dog has maintained its loyalty for many thousands of years, I would guess it was the product of one of the greatest geniuses who ever lived.

    The importance of this idea is that we need to recognize that the situation we are in now is not inevitable.  Because technological change is impersonal, as opposed to cultural change, which manifests itself in a personal manner, it is easy to consider technological and scientific change "background," something that just happens and couldn't be otherwise.  But this is not true.  The last two centuries could have been entirely different had people made better choices, and then the present would be something almost unimaginable.  Recognizing this fact is very important for the present time.  It allows us to recognize that there is always the option to make a better choice.  The system is not inevitable and so, at least spiritually, we cannot give in.

Is it possible to upload human consciousness to a computer?

    Short answer: No.  

    However, this is a topic that is worth discussing further because it touches on some important issues, especially the question of how the soul relates to the body.  

    There are two means suggested for "uploading" human consciousness.  The first is based on the idea that human thought is something like software: since the same software can run on multiple machines, if an individual's thought processes could be perfectly simulated in a computer, the computer would then contain an identical copy of their consciousness.  The second is based on the idea that thought is more like hardware: it is not sufficient to simulate the processes of thought; in addition, it is necessary to make a new brain, whether biological, technological, or a mixture of both.

    The first thing to notice is that the way these arguments are usually presented involves a philosophical sleight of hand.  That is to say, in the typical scenario, the body of the person whose consciousness is to be transferred is destroyed in the process, or the individual is removed from the picture in some other way.  We are presented with a situation that starts with a person and ends with either a computer or a robot that acts like the person.  But removing the person is not necessary.  If it is possible to build an electronic brain or to simulate someone's consciousness, then why is it necessary to destroy their body in the process?

    And if we imagine the scenario in this way, then we can envision a human being and a computer or robot side by side.  And then it's quite clear that consciousness has not been transferred at all.  Even if we assume for the sake of argument that the computer or robot is conscious, the human is clearly not inside the computer or the robot.  The human being has the same consciousness as before; it has just been mimicked.  If the computer or robot were moved to Antarctica, the person would not suddenly feel cold.  Even the word "upload" implies that consciousness has not actually been transferred.  When a file is uploaded from one computer to another, it's not like sending a letter, since the file does not leave the computer it originated on; the information contained in the file is simply copied by the second machine.

    Likewise, even if we assume consciousness can be copied, that is all that has happened in these scenarios.  This means that if the person whose consciousness was copied dies, then their consciousness goes wherever it would normally go after death, which is not into a machine.  So, this idea of uploading cannot cheat death.

    Also, notice that in the revised scenario where the human being and computer both appear together, what has not been copied is the subjective sense of self, the "I," as it is referred to by Rudolf Steiner. This suggests that the subjective sense of self has an important relation to consciousness.  

    The second thing to notice is that arguments for the possibility of uploading consciousness are based on an analogy between the mind and either software or hardware.  Ironically, materialist computer scientists argue by analogy all the time: almost all of their wild futurist speculations are based on analogies.  Yet they automatically rule out religious arguments by analogy.  Argument by analogy is neither automatically good nor automatically bad; it depends on the analogy in question.

    In his Meditations on the Tarot, Valentin Tomberg has the following to say about analogy: 

    "Now 'pure induction' is founded on simple enumeration and is essentially only conclusion based on the experience of given statistics. Thus one could say: 'As John is a man and is dead, and as Peter is a man and is dead, and as Michael is a man and is dead, therefore man is mortal.' The force of this argument depends on number or on the quantity of facts known through experience. The method of analogy, on the other hand, adds the qualitative element, i.e. that which is of intrinsic importance, to the quantitative. Here is an example of an argument by analogy: 'Andrew is formed from matter, energy and consciousness. As matter does not disappear with his death, but only changes its form, and as energy does not disappear but only modifies the mode of its activity, Andrew's consciousness, also, cannot simply disappear, but must merely change its form and mode (or plane) of activity. Therefore Andrew is immortal.' This latter argument is founded on the formula of Hermes Trismegistus: that which is below (matter) (energy) is as that which is above (consciousness). Now, if there exists a law of conservation of matter and energy (although matter transforms itself into energy and vice versa), there must necessarily exist also a law of conservation of consciousness, or immortality."

    So, we must ask whether the analogy between consciousness and hardware or software is a good one.  I will first consider the software analogy.  This analogy misses the subjective and qualitative element of consciousness.  What is it like to run an algorithm?  Well, we know what it's like, because everyone who has done long division or multiplication has run an algorithm.  But the experience does not come from the long division algorithm itself; the consciousness is already there, and the experience of doing long division is one thing among many that can be experienced.  If anything, this example shows that consciousness can run programs: it puts the shoe on the other foot.

    Thus, consciousness is something extra that goes beyond a program.  We know consciousness can generate programs.  Indeed, all of the computer programs we know of have come about by precisely this means.  But there is no reason to assume that programs generate consciousness.  A program is just an abstract procedure, with no subjective element inherent in it.  So, the analogy fails for this reason.

    The problem with the hardware analogy is that it assumes that if we mimic the human body and brain, then consciousness will automatically happen.  But this is just kicking the can down the road.  Even if it were possible, the designers of the hypothetical robot would not be creating consciousness; they would just be taking advantage of a natural (or perhaps supernatural) process that gives rise to consciousness.  It is similar to setting a broken bone: the body heals itself; the cast only helps the body heal properly.  But since we have no idea how consciousness connects to the body, there is no reason to believe that we can make it happen by mimicking the body.  So this analogy fails as well.

The Logic of Leftism: Outsourcing our Thinking

    What is leftism?  It has taken many different forms over the course of time, but in 2021, the essence of leftism is simple: outsourcing one's thinking to the System, in particular the mass media.  

    One of Bruce Charlton's crucial insights is that the mass media (which includes social media) is the driving force of leftism.  It sets the direction that everything else follows.  And if the media changes, then leftism changes.  

    Another feature of Leftism that Bruce Charlton has pointed out is that Leftism is no longer based on a particular ideology or political theory, but is purely oppositional.  It opposes traditional Western culture, Christianity, and in general anything good, but leftism really has no positive program.  Anything can be taken up, used and then discarded.  

    One good example of this is the New Atheist movement.  Had the New Atheists not been picked up by the media, the movement would have remained fairly academic: known among people interested in philosophy and religion but not widely recognized by the general public.  However, the movement is now gone, and this too was the media's doing: the New Atheists came to be either ignored or attacked by the media.  I believe this was because they were ineffective.  In fact, by attacking religion directly, the New Atheists caused more people to think about religious beliefs.  After all, if you have a debate about the existence of God, some of the people listening might come to the wrong conclusion.  Not only that, for every atheist book written, there were multiple religious books written in response.

    Hence, New Atheism as a movement was started by the media and ended by the media.  It was just one tool among many.  

    Bruce Charlton's insight is important because it is easy to fall into the trap of believing that if we can just come up with the perfect argument against Leftism, then we can steer people away from it.  After all, leftism seems to be concerned with ideas: equality, tolerance, and so on.  But these aren't the essence of leftism.  Tolerance and equality may be bumper stickers, but they aren't what is driving the car.  Any words or ideas may be used, but those words and ideas will mean whatever the media wants them to mean.

    Now, this is not to say that argumentation and logic are ineffective.  Rather, their power does not lie primarily in the arguments themselves.  It lies in the person considering the arguments.  There are many ways out of leftism and people can be reached in both logical and emotional ways, but the crucial point is that it must come from within.  It is vanishingly rare to leave leftism passively.   

    Leftism will never go away until people stop outsourcing their thinking.  The media can give canned responses to anything.  And people who want to believe will grasp at them, not so much because they are plausible as because those people are no longer thinking for themselves.  Bruce Charlton gives an example of this in his recent post "The ongoing collapse of brain-thinking":

    "All that happens now is an ignorant 'parroting' of the superficial forms of brain-thinking - such as managerialist flow-charts and checklists - whose application is rigid but whose content is increasingly arbitrary and incoherent."

    But leaving leftism is only the beginning of thinking.  For those of us who are not leftists, the task is not only to think for ourselves, but to think from ourselves, which is intuition.  (Both Bruce Charlton and William Wildblood have written many posts on this subject).  Rather than thinking by means of methodologies that we have adopted, we need to know and understand things for ourselves.  

Summary and Discussion of Ecological Formulas

     This post is a continuation of the previous two posts, which came from William James Tychonievich's excellent post "Calculating beta diversity."  I want to write this post to describe these issues in an intuitive and "big picture" manner, in such a way that the earlier posts, which give the details, can be skipped.  In those posts, I derived two formulas:

    Formula 1:  γ = (α + (F − 1)δ) / F

    Formula 2:  γ ≥ (α + (F − 1)(β² + Var(B))) / F

    each of which relates γ (the probability of picking two different trees from a population of trees sorted into forests) to the other variables.  α is the average probability of picking two different trees from the same forest, δ is the average probability of picking two different trees if each one comes from a different forest, and F is the number of forests.  δ was also described by William James Tychonievich in the original post, where it was called Approach 3.  β is the average proportion by which the compositions of a pair of forests do not overlap.  For example, this picture by William James Tychonievich shows an example where β = 0.5 for two forests:
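    These quantities can be sketched in code.  The following is a minimal illustration (my own, not from the original posts), assuming equal-sized forests, trees drawn with replacement, and each forest represented as a dict of species proportions:

```python
# Each forest maps species -> proportion of the forest (proportions sum to 1).
# Assumes all forests are the same size and trees are drawn with replacement.
forests = [
    {"red": 0.5, "blue": 0.5},      # forest 1
    {"blue": 0.5, "yellow": 0.5},   # forest 2
]

def alpha(f):
    """Probability that two trees drawn from one forest are of different species."""
    return 1 - sum(p * p for p in f.values())

def delta(f1, f2):
    """Probability that one tree drawn from each of two forests differ in species."""
    return 1 - sum(p * f2.get(s, 0) for s, p in f1.items())

def beta(f1, f2):
    """Non-overlapping proportion of a pair of forests (slice-matching)."""
    return 1 - sum(min(p, f2.get(s, 0)) for s, p in f1.items())

def gamma(forests):
    """Probability that two trees drawn from the pooled population differ."""
    species = set().union(*forests)
    F = len(forests)
    pooled = {s: sum(f.get(s, 0) for f in forests) / F for s in species}
    return 1 - sum(p * p for p in pooled.values())
```

    For the two forests pictured, alpha of each forest is 0.5, beta is 0.5, delta is 0.75, and gamma works out to 0.625.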

    Here is another picture showing the same situation in a different way: 
    B is the spectrum of values that beta can take over all pairs of forests, so Var(B), the variance of B, is a measure of the spread of the values of B, i.e., how close these values are to β, the mean.

    Both of these formulas were derived under the assumption that every forest has the same number of trees, but they do not depend on the number of species of trees or on the particular composition of each forest.  If we allow forests of different sizes, then we will have to follow the suggestion given in the comments of the original post by John Goes and William James's brother Luther and use weighted averages.  This is because the alphas of larger forests will have a greater influence on gamma than those of smaller forests; likewise, the beta and delta values of pairs of large forests will have a greater influence than those of pairs of small forests.
    Discussion of Formulas: 
    In this part of the post, I am going to discuss what these formulas tell us about the relationships of these measures of ecological diversity.    

    Formula 1: 
    This formula expresses γ in terms of α and δ and this makes sense because of how γ is defined.  γ is the probability of selecting two different trees from the population as a whole and there are two ways to do this.  We can either select two different trees from the same forest or from two different forests.   And our equation has terms expressing the average probabilities of both of these options.  

    Aside: If we wanted to know the probability of selecting three different trees, then I suspect that there would be an equation with three terms.  There are three ways to select three different trees: all three trees from the same forest, each tree from a different forest, or two trees from one forest and one from another.  We would probably have terms expressing the average probability of these three options, somehow related to the number of forests.  

    The derivation of this formula involves first writing out expressions for γ, for the α of each forest, and for the δ of each pair of forests, and then using algebra to substitute the expressions for the individual alphas and deltas into the formula for gamma.
    This formula tells us that if the number of forests stays constant, then as either α or δ increases, γ increases as well.  And this makes sense, because if there is a greater chance of selecting two different trees, whether from a single forest or from a pair of forests, then the probability of selecting two different trees from the population as a whole should rise.
    Also, if α and δ are held constant, then as the number of forests increases, γ moves toward δ, since the formula can be rewritten as γ = δ + (α − δ)/F.  In other words, with many forests, the variety of the population as a whole is determined almost entirely by the comparisons between forests rather than by the variety within any single forest.

    Lastly, we see that the coefficient of δ is F − 1, so as the number of forests increases, α has a much smaller effect on γ than δ.  We would expect this, because if there are a large number of forests, then there will be many more ways to pick two different trees from different forests than from the same forest.
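    Formula 1 can be confirmed numerically.  This sketch (my own check, not from the original posts) builds random equal-sized forests and verifies that γ equals (α + (F − 1)δ)/F exactly:

```python
import random
from itertools import combinations

random.seed(0)
species = ["oak", "pine", "birch", "elm"]

def random_forest():
    """A forest with random proportions of the four species."""
    weights = [random.random() for _ in species]
    total = sum(weights)
    return {s: w / total for s, w in zip(species, weights)}

forests = [random_forest() for _ in range(5)]
F = len(forests)

# Average within-forest probability of drawing two different trees.
alpha_bar = sum(1 - sum(p * p for p in f.values()) for f in forests) / F

# Average between-forest probability, over all pairs of forests.
pairs = list(combinations(forests, 2))
delta_bar = sum(1 - sum(f1[s] * f2[s] for s in species) for f1, f2 in pairs) / len(pairs)

# Pooled probability over the whole population (equal-sized forests).
pooled = {s: sum(f[s] for f in forests) / F for s in species}
gamma = 1 - sum(p * p for p in pooled.values())

# Formula 1: gamma = (alpha_bar + (F - 1) * delta_bar) / F
assert abs(gamma - (alpha_bar + (F - 1) * delta_bar) / F) < 1e-12
```

    The check passes for any random composition, which is what we expect of an exact identity rather than an approximation.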

    Formula 2: 

    This is a somewhat strange formula.  I was not expecting it and certainly did not expect to see the variance of the different betas involved.  This is an inequality, not an equation, but it shows that γ is at least as large as the expression on the right.  The main idea is that if we calculate β according to William James Tychonievich's slice-matching method, then there is a relationship between beta for a pair of forests and delta for a pair of forests.  Given a pair of forests, if we pick one tree from the non-overlapping region of one forest and one tree from the non-overlapping region of the other, then we are guaranteed that these two trees will be of different species.  For the pair forest j and forest k, the relationship can be expressed by the inequality:

    δ_jk ≥ (β_jk)²
    where the left-hand side is delta for this pair and the right-hand side contains beta for this pair.  After this, one has to get an expression in terms of δ to substitute into the original equation.

    This inequality tells us that β has a direct relationship with γ: as β increases or decreases, the bound on γ increases or decreases as well.  And this makes sense, because if the compositions of the forests become more similar, then the probability of picking two different trees will decrease.  In addition, as β decreases, the contribution of the second term shrinks, so α has more influence on γ.  Likewise, Var(B) has a direct relationship with γ: if there is a larger spread among the different beta values, then the bound on γ will increase.  And as Var(B) decreases, the second term shrinks and the effect of α on γ becomes greater.
    What is interesting about this inequality is that it shows that one of the formulas given in the original post, β = γ - α, which we can rewrite as γ = α + β, correctly expresses the relationships among these variables: α and β both have a direct relationship with γ, and γ depends on both α and β.  The above inequality helps by allowing us to calculate more precisely the effect of these variables on one another.
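    The inequality can also be checked numerically.  Below is a sketch (again my own, not from the original posts) that builds random forests, computes the slice-matching beta for every pair along with its mean and variance, and confirms that γ is at least (α + (F − 1)(β² + Var(B)))/F:

```python
import random
from itertools import combinations

random.seed(1)
species = ["oak", "pine", "birch", "elm", "ash"]

def random_forest():
    """A forest with random proportions of the five species."""
    weights = [random.random() for _ in species]
    total = sum(weights)
    return {s: w / total for s, w in zip(species, weights)}

forests = [random_forest() for _ in range(6)]
F = len(forests)
pairs = list(combinations(forests, 2))

alpha_bar = sum(1 - sum(p * p for p in f.values()) for f in forests) / F

# Slice-matching beta (non-overlap proportion) for every pair of forests.
betas = [1 - sum(min(f1[s], f2[s]) for s in species) for f1, f2 in pairs]
beta_bar = sum(betas) / len(betas)
var_b = sum(b * b for b in betas) / len(betas) - beta_bar ** 2

pooled = {s: sum(f[s] for f in forests) / F for s in species}
gamma = 1 - sum(p * p for p in pooled.values())

# Formula 2: gamma >= (alpha_bar + (F - 1) * (beta_bar^2 + Var(B))) / F
assert gamma >= (alpha_bar + (F - 1) * (beta_bar ** 2 + var_b)) / F
```

    The key averaging step is that the mean of the squared betas equals the squared mean plus the variance, which is how Var(B) enters the bound.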


An inequality involving beta

     Continued from the previous post and following William James Tychonievich's post "Calculating beta diversity."

    In the previous post, a formula was described that related alpha, gamma, delta, and the number of forests, under the assumption that all forests are the same size.  The formula is:

    γ = (α + (F − 1)δ) / F
     Notice that α is being divided by F, so as the number of forests increases, the effect of α on γ diminishes.  On the other hand, δ is multiplied by (F − 1) and divided by F.  So, as the number of forests increases, the effect of δ on γ will be much larger than that of α.  And this makes sense, because δ is the probability of picking two different trees given that each tree comes from a different forest.  As the number of forests becomes larger, there will be many more ways to pick different trees from different forests than there are ways to pick different trees from the same forest.

     γ is the probability of selecting two different trees from the population as a whole, and there are two ways to do this: select two different trees from the same forest or two different trees from two different forests.  The equation consists of one term for the average probability of the first and one term for the average probability of the second.

    If we wanted to know the probability of selecting three different trees from a population as a whole, we would probably have an equation with three terms because there are three ways to select three different trees: they may all be from the same forest, all three may be from different forests, or two may be from the same forest and one from a different forest.  We would then have terms representing the average probability of all three of these events.  

    In his previous post, William James Tychonievich wondered if there was a formula relating alpha, gamma, beta, and the number of forests, where β is determined using his method of slice-matching.  I have not been able to find an equation, but there is an inequality involving β, because β and δ are related.  To see this, let's first look at some easy cases.  If every forest is the same, then β = 0 and α = δ, because picking one tree from each of two identical forests is probabilistically the same as picking two trees from a single forest.  We can then substitute α for δ in the above formula and determine that in this case γ = α.  Or, if every forest is completely different, then β = 1 and the probability of picking two different trees from two different forests is 1, so we can substitute δ = 1 into the formula and it simplifies to:
    γ = (α + F − 1) / F
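    These two easy cases can be checked directly in code.  A small sketch (my own, with made-up species names), assuming equal-sized forests and trees drawn with replacement:

```python
F = 3

def gamma(forests):
    """Probability that two trees drawn from the pooled population differ."""
    species = set().union(*forests)
    pooled = {s: sum(f.get(s, 0) for f in forests) / len(forests) for s in species}
    return 1 - sum(p * p for p in pooled.values())

alpha = 0.5  # every forest below is an even split of two species

# Case 1: identical forests -> beta = 0 and delta = alpha, so gamma = alpha.
same = [{"oak": 0.5, "pine": 0.5}] * F
assert abs(gamma(same) - alpha) < 1e-12

# Case 2: no shared species -> beta = 1 and delta = 1, so gamma = (alpha + F - 1) / F.
disjoint = [{"a": 0.5, "b": 0.5}, {"c": 0.5, "d": 0.5}, {"e": 0.5, "f": 0.5}]
assert abs(gamma(disjoint) - (alpha + F - 1) / F) < 1e-12
```

    In the second case the formula gives γ = 2.5/3, i.e. 5/6, which matches the pooled calculation of six species at 1/6 each.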
    We see that β affects δ.  Consider the following example: 

    In this situation, we have two forests where exactly half of the tree species overlap, so β = 0.5.  We can also calculate δ.  There are four ways to pick one tree from the first forest and one from the second.  We can pick a red tree from the first and a blue from the second, a red from the first and a yellow from the second, a blue from the first and a blue from the second, and a blue from the first and a yellow from the second.  Of these, three are ways to pick two different trees and one is a way to select two of the same type of trees.  The probability of any of these four cases is (0.5)(0.5) = 0.25.  So, we can determine δ for this pair either by adding up the three instances that give two different trees or subtracting from 1 the instance that gives two of the same trees.  Either way, δ for this pair is 0.75.  
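    The same arithmetic can be done in code, enumerating the four equally likely picks (a small sketch using the colors from the picture):

```python
forest1 = {"red": 0.5, "blue": 0.5}
forest2 = {"blue": 0.5, "yellow": 0.5}

# Sum the probabilities of the picks that yield two different species:
# (red, blue), (red, yellow), and (blue, yellow), each with probability 0.25.
delta = sum(
    p1 * p2
    for s1, p1 in forest1.items()
    for s2, p2 in forest2.items()
    if s1 != s2
)
assert delta == 0.75

# Equivalently, subtract from 1 the one pick that matches: (blue, blue).
assert 1 - forest1["blue"] * forest2["blue"] == 0.75
```

    Either way of counting gives δ = 0.75 for this pair, as in the text.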

    And this relationship holds for any pair of forests.  Suppose we have a pair of forests, forest j and forest k.  β for this pair of forests is the fraction of each forest which remains once we have removed all overlap.  

    In other words, given any pair of forests, we first find the largest possible region which is identical in both forests (denoted here by X).  Then whatever remains (here denoted by Y and Z) takes up a proportion β of each forest.  (If both forests are the same, then X is the entire forest, while if they are completely different, Y and Z are the entire forests, respectively.)  We know that Y and Z are completely dissimilar in terms of their tree species, because if there were any overlap between Y and Z, we could remove it and add it to X.

    There are four ways to select one tree from one forest and one from another.  We could select one tree from Y and one from Z, a tree from Y and a tree from X, a tree from Z and a tree from X, or a tree from X in one forest and a tree from X in the other forest.  

    If we select a tree from Y and a tree from Z, we know that the two trees will be of different species, because Y and Z do not overlap at all.  Since Y and Z each take up proportion beta of their respective forests, delta for this pair is at least as large as beta squared.  We can express this symbolically as:

    δ_jk ≥ (β_jk)²
    In other words, the probability of selecting two different trees from both forests is at least as big as the probability of selecting one tree from Y and one from Z.  The subscripts emphasize that this is delta and beta specifically for the pair forest j and forest k.   

    In addition to selecting a tree from Y and one from Z, any of the other three ways of selecting trees could also yield two different trees, so we want to calculate the probability of that.  Consider the left-hand circle, representing forest j.  Alpha for forest j is the probability of selecting two different trees from forest j.  There are three ways this could happen: both trees could come from Y, both from X, or one from Y and one from X.  We can express this in the following equation:

    α_j = α_X + α_Y + 2α_XY
    In this equation, α_j is the probability of selecting two different trees from forest j as a whole, α_X is the probability that both trees come from X and differ, α_Y the same for Y, and α_XY the probability that one tree comes from X, one from Y, and they differ.  We need to multiply α_XY by 2 because we could select the tree from X first and then the tree from Y, or the other way around.

    We can make a similar equation for forest k: 

α_k = α_X + α_Z + 2α_XZ

    Based on these two equations, we can express delta for forest j and k: 

    So, delta for this pair of forests depends on the alpha of each forest as well.  I am not sure how to deal with these alpha terms.  Maybe they cancel nicely when you add up all the pairs.  But, even without those, we can form an inequality which relates beta to alpha and gamma.  So, we return to the inequality: 

δ_jk ≥ (β_jk)²


     We want to find an expression in terms of δ, the average of the deltas of all the pairs of forests.  Given that there are F forests, the number of pairs of forests is: 

F(F-1)/2

    So, for both sides of the inequality, we will add up all F(F-1)/2 terms (one for each pair of forests) and then divide by F(F-1)/2.  On the left hand side, this is just δ, but what do we get on the right hand side?  

    In order to form a simple expression for the right hand side, first we must conceptualize α, δ, and β in a somewhat different way.  Suppose we have a collection of forests, each of which has its own α value, and for which each pair of forests has its own β and δ values.  Consider the entire distribution of values over all forests.  There are three distributions, one for α, one for β, and one for δ.   

    We will call these three distributions (or populations) A, B, and D, respectively.  All three have a mean, which is the average of all the values: of the alphas of each forest, or of the betas and deltas of each pair.  We already know the means: they are α, β, and δ.  But all of these populations also have a variance, which measures the spread of values around the mean.  Do the different forests have widely varied α diversity values, or are the α values similar?  We can ask the same question for the β and δ values.  The more famous standard deviation is the square root of the variance.  

    There is a formula for the variance of a general population X, which states that: 

Var(X) = average(X²) − (average(X))²

    In other words, to find the population variance, first we square each value in the population X and take the average of these squares.  Next, we take the average of the values of the population and square this average.  Subtracting the second of these numbers from the first gives the population variance. 
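The variance identity is easy to verify numerically.  Below is a short Python check (the values are my own arbitrary example):

```python
# A numerical check (values are my own arbitrary example) of the
# population-variance identity: Var(X) = mean(X^2) - mean(X)^2,
# and hence mean(X^2) = Var(X) + mean(X)^2.
values = [0.2, 0.5, 0.1, 0.7, 0.4]
n = len(values)
mean = sum(values) / n
mean_sq = sum(v * v for v in values) / n          # average of squares
var = sum((v - mean) ** 2 for v in values) / n    # population variance
assert abs(var - (mean_sq - mean ** 2)) < 1e-12   # the identity
assert abs(mean_sq - (var + mean ** 2)) < 1e-12   # solved for mean(X^2)
```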

    But we can solve this for the first term on the right-hand side and write: 

average(X²) = Var(X) + (average(X))²

     Now we have a simple way to express the right-hand side of the above inequality relating β for a pair and δ for a pair.  Averaging the (β_jk)² terms over all pairs gives the average of B², which is: 

average(B²) = Var(B) + β²

    And so, using δ = (Fγ - α)/(F-1) (from the formula γ = α/F + δ(F-1)/F), we can finally substitute this into our above formula to find an inequality relating beta to alpha and gamma:

(Fγ - α)/(F-1) ≥ β² + Var(B)

    Simplifying, our final inequality is: 

β² ≤ (Fγ - α)/(F-1) - Var(B)


Calculating delta diversity

     William James Tychonievich's recent post "Calculating beta diversity" presents a fascinating problem of trying to find a formula relating three types of ecological diversity: gamma diversity, alpha diversity, and beta diversity.  He presents these concepts lucidly, so read the above post before reading on.   

    Tychonievich presents five ways of calculating beta (there are four approaches, but Approach 2 contains two formulas).  In the second and third, beta is expressed in terms of gamma and alpha, so it is not necessary to have an independent idea of between-forest diversity to calculate beta.  The fourth and fifth give methods for calculating beta independently, so it is not necessary to know alpha or gamma before calculating beta.  For the reasons given in the post, Tychonievich shows that the first four methods fail to capture the intuitive idea of between-forest diversity.  The fifth method best captures this concept.  

    At the end of the post, Tychonievich postulates that it should be possible to derive beta from alpha and gamma, possibly with a fourth variable.  

    Let us call the type of diversity in Approach 3 delta diversity, represented by the Greek letter δ.  Also, let N represent the total number of trees in the environment, S the size of each forest (the number of trees it contains), and F the total number of forests.  If we assume that each forest has the same size, then it is possible to find a formula relating gamma, alpha, delta, and F.  The formula is: 

γ = α/F + δ(F-1)/F.  

Here is a link to a pdf with the details of the derivation of this formula.   Perhaps there is a way to derive it conceptually using minimal algebra.    
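The formula can also be checked numerically.  Here is a hedged Python sketch (the forests, species proportions, and the with-replacement probability model are my own assumptions, not taken from the linked derivation), where alpha, delta, and gamma are each computed as probabilities of drawing two trees of different species:

```python
# A numerical check of gamma = alpha/F + delta*(F-1)/F, under my own modeling
# assumption that each diversity is a with-replacement probability:
#   alpha = average over forests of P(two trees from that forest differ)
#   delta = average over pairs of P(one tree from each forest differ)
#   gamma = P(two trees from the pooled environment differ)
# Forests have equal size, so pooled proportions are the plain average.

forests = [
    {"oak": 0.5, "pine": 0.3, "elm": 0.2},
    {"oak": 0.2, "pine": 0.2, "fir": 0.6},
    {"oak": 0.1, "elm": 0.5, "fir": 0.4},
]
F = len(forests)
species = set().union(*forests)

def p(f, s):
    return f.get(s, 0.0)

alpha = sum(1 - sum(p(f, s) ** 2 for s in species) for f in forests) / F
pairs = [(i, j) for i in range(F) for j in range(F) if i < j]
delta = sum(1 - sum(p(forests[i], s) * p(forests[j], s) for s in species)
            for i, j in pairs) / len(pairs)
pooled = {s: sum(p(f, s) for f in forests) / F for s in species}
gamma = 1 - sum(q ** 2 for q in pooled.values())

# The formula holds exactly in this model:
assert abs(gamma - (alpha / F + delta * (F - 1) / F)) < 1e-12
```

In this model the identity is exact, since pooling equal-sized forests averages their proportions, and squaring that average splits into same-forest terms (weighted 1/F) and different-forest terms (weighted (F-1)/F).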

    Even though this is not what was asked for, since delta does not express the intuitive idea of between-forest diversity, it is interesting that Tychonievich's hunch was borne out: there is a relatively simple formula relating alpha, gamma, the number of forests, and a third measure based on picking trees from different forests.  

    Relating beta, alpha, and gamma is a different type of problem.  It might be solved by finding a formula for beta first, rather than trying to derive beta from a formula involving alpha and gamma.  

    One relationship is that when the forests are the same size and beta is 0, alpha must be equal to gamma.  We can see this because if beta = 0, then every forest is the same in terms of the proportions of the different species of trees.  So, the macrocosm is completely reflected in the microcosm.  It seems that beta is a parameter that tells us to what extent we can expect the microcosm of diversity of individual forests to reflect the macrocosm.  

Myths and Dreams

    In William James Tychonievich's post "The magician: preliminary thoughts", he cites a comment by Bruce Charlton: 

    "By analogy consider a myth: what is The myth of King Arthur, or Robin Hood or Merlin?  The answer is that there is no canonical or definitive myth, but only many different versions; yet somehow we feel that behind all the versions is a true myth, which operates without words or pictures but at a level of feelings. 

So the idea would be that that is the true meaning of a dream: the myth behind the dream - the same deep myth might lead to many different surface dreams."   

    In Tychonievich's post, this is in the context of the tarot cards.  He writes: 

    "A Tarot card like the Magician may also be considered analogous to a legendary figure like Arthur or Robin Hood.  It exists in many different versions, some of which constitute a more serious contribution to the myth than others. 


Yet, as Bruce says, behind all the versions lies a single myth - and, despite its unhistorical nature, a true one."  

    This is a very interesting idea.  In this post, I want to explore this idea further.  Bruce Charlton has also written about three levels of consciousness: deep sleep, dreaming sleep, and normal waking consciousness.  Here is a representative post.  Also, one could search "Deep Sleep" on Bruce Charlton's blog.  Deep sleep is when there is no dreaming going on, when sleepers are least responsive to the external world.  

    Bruce Charlton has also written about Rudolf Steiner's idea that when interpreting dreams, "We should understand dreams by the feelings they evoke."  This makes perfect sense, because interpreting dreams according to pre-selected symbols does not take account of the personal character of dreams.  In one individual's dream, a dog may be frightening, while in another person's dream, a dog may be friendly.  So, the role of any element in a dream depends on the person experiencing the dream and on that element's place in the dream as a whole.  

     And these ideas help us to make sense of myths as well.  Just as with dreams, a myth will have certain details that all feel significant in their place in the story.  It is similar to folk tales, in which certain numbers, such as 3, and certain themes, such as helping animals or unassuming people, show up repeatedly.  C.S. Lewis wrote about this in his book An Experiment in Criticism: 

    "There is, then a particular kind of story which has a value in itself - a value independent of its embodiment in any literary work.  The story of Orpheus strikes and strikes deep, of itself; the fact that Virgil and others have told it in good poetry is irrelevant.  To think about it and be moved by it is not necessarily to think about those poets or to be moved by them.  It is true that such a story can hardly reach us except in words.  But this is logically accidental.  If some perfected art of mime or silent film or serial pictures could make it clear with no words at all, it would still affect us in the same way. 


    It is difficult to give such stories any name except myths." 

    Lewis also says that myths differ greatly in quality.  And the same is true of dreams.  Many dreams are simply incoherent, a jumble of images and events with no apparent meaning, and some myths likewise have elements like this in them.  

    Now, if dreams are associated with myths, then is deep sleep associated with the feeling behind the dream?  In other words, is deep sleep a simple consciousness of pure feeling where we apprehend the feeling behind the dream, which then is shaped into the forms of a dream?  The other question is, is deep sleep or the true myth a lower or higher form of consciousness than normal consciousness?  I think for the true myths and true dreams it is probably both.  

    For example, myths are probably the most ancient forms of literature.  They arise out of a simple consciousness.  It is not necessary to be a great wordsmith to tell a mythic story; it is only necessary to be in touch with the deep springs from which such myths arise.  But on the other hand, even with our highly intellectual consciousness, we have not succeeded in plumbing all the depths of myths. 

Randomness, Determinism, and Free Will

    What is randomness?  To say something is random simply means that we "let the chips fall where they may;" we impose no additional structure on an event beyond what is intrinsic to the event.  So, to say we toss a die randomly simply means that we toss the die without trying to influence the roll in any way.  If this is a balanced die, then the probability of any number coming up is 1/6 because all that matters is the intrinsic structure of the die (having six sides).  

    It is not necessary for the die to be balanced to toss it randomly.  An unbalanced die where different sides are weighted differently can still be tossed randomly if we do nothing to influence which face comes up on the die.  This means that randomness is a negative condition; it is the absence of restrictions.  Hence, randomness cannot be a cause of anything.  Bruce Charlton has a post about randomness, saying similar things.  
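A tiny simulation illustrates this point (the example and its weights are my own, not from the post): an unbalanced die tossed "randomly" still lands according to its intrinsic weights, because randomness imposes nothing beyond the structure of the die itself.

```python
# A tiny illustration (my own example, not from the post): randomness is a
# negative condition.  An unbalanced die can still be tossed "randomly";
# doing nothing to influence the toss means its intrinsic weights alone
# determine the outcome frequencies.
import random

weights = [0.3, 0.1, 0.1, 0.1, 0.1, 0.3]  # an unbalanced six-sided die
rng = random.Random(42)                    # seeded for reproducibility
rolls = rng.choices(range(1, 7), weights=weights, k=100_000)
freq_one = rolls.count(1) / len(rolls)
# freq_one lands near the intrinsic weight 0.3, not the "fair" 1/6
assert abs(freq_one - 0.3) < 0.02
```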

    If random means nothing outside influences the experiment, then the important thing in randomness is the underlying universe of possibilities of the experiment.  In the case of a die, this would be the six sides.  In the case of a survey, this would be a population.  So, the concept of randomness implies an underlying universe of possibilities.  

    Determinism, on the other hand, considers only one possibility.  It may be that the single possibility is found at the end of a long chain of reasoning or calculation, but there was only one to begin with.  

    In some discussions about free will, it is stated that the only possibilities are randomness and determinism, so how can freedom come in?  But I think this sidesteps the issue.  

    Pierre Simon Laplace wrote: 

    "We may regard the present state of the universe as the effect of its past and the cause of its future.  An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these date to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes."

    This hypothetical being is known as Laplace's Demon.  The idea is that given the laws of physics, there is only one possible future and one possible past.  

    On the other hand, the quantum multiverse theory (also known as the Everett interpretation) states that every time there are multiple possibilities for something to happen, the universe splits and each one happens in a different copy of the universe.  In the previous case, only one thing can happen, but in this case, everything happens.  

    But the problem is that neither of these options is actually a solution to the problem.  The common sense interpretation of events is that at any given time multiple things could happen, but only one thing does happen.  In order to explain this, we need some means to explain how one option among many is chosen; we need a means of choice.  

    Determinism avoids the problem by saying, "Well, it looked like multiple things could happen, but actually there was only one possibility to begin with."  The quantum multiverse avoids the problem by saying, "It appears that only one thing happens, but actually everything happens in some universe."  Both sidestep the problem by denying the common sense interpretation.  

    But who is to say the common sense interpretation is wrong?  I think what we have here is that we cannot model choice, so it appears that the only two possibilities are determinism and randomness, because we have models that incorporate those concepts.  But the common sense interpretation, that we have free will and that there are multiple possibilities for events in the universe of which only one happens, is confirmed by our experience.  Models are also confirmed by experience, when they make predictions, so free will is on the same epistemic footing as science.  

    What's more, we not only have empirical experience of free will, we also have the internal experience of choosing, which is as real as anything else in our consciousness.  

    Because science has been successful, people have been conditioned to believe that the only things that are real are the things we can model.  But this is in fact not true.  The laws of nature exist in nature, not on a piece of paper.  We have models for them, but those are not the same as the laws themselves.  In order to really understand free will, we have to move past models to reality.

Is it still possible to be a pagan?

    In this post I want to examine this question and use it to think through some other things as well.  

    We can distinguish between two types of paganism: devotional paganism and philosophical paganism.  Devotional paganism is paganism as experienced by ordinary people, involving rituals, visiting temples, oracles, etc.  Philosophical paganism is the philosophy of such figures as Plato, Pythagoras, Plotinus, etc.  Philosophical paganism in many cases involves religious practices, such as contemplation, purification, or austerities, but differs from devotional paganism in that it is primarily based on philosophical doctrines rather than tradition.  

    A specific individual can practice both types of paganism.  Indeed, many of the philosophers were pious in the devotional sense.  It makes moderns uncomfortable to think about, but Socrates and Pythagoras really did believe in the gods.  They probably did not think of the gods in a mythological way, but the key point is that the philosophers believed in devotional piety: they just thought philosophy was a higher form than its popular manifestation.  For example, Pythagoras did not sacrifice animals; he burned frankincense instead.  But this was not because Pythagoras disbelieved in sacrifice; rather, he viewed bloodless sacrifices as a purer form of devotion. 

   I would say that it is not possible to be a devotional pagan nowadays.  This is because devotional paganism depended crucially on tradition.  And that tradition is now long gone.  Imagine the Greco-Roman world, with its temples and priests in every major city, as well as many local cults.  Not to mention oracles and the personal piety of individuals, such as the household gods of the Romans.  

    C.S. Lewis gave a good illustration of this in his inaugural Cambridge lecture "De Descriptione Temporum": 

    "One thing I know: I would give a great deal to hear any ancient Athenian, even a stupid one, talking about Greek tragedy. He would know in his bones so much that we seek in vain. At any moment some chance phrase might, unknown to him, show us where modem scholarship had been on the wrong track for years."  

    Devotional paganism took place within this atmosphere; it was not individual, but crucially partook of the surrounding environment, and there is simply no way to reconstruct that environment.  In addition, the world of paganism arose from a different form of human consciousness.  So, even if people wanted to reconstruct this environment and tried, they would not be able to.  We simply cannot see the world in the same way as the ancients did.  

    Although devotional paganism is not possible today, philosophical paganism is possible.  However, it is very difficult.  One difficulty is that all the Hellenistic philosophers lived within the culture of devotional paganism.  Their philosophy was intended to be studied within that context.  Furthermore, throughout the entire lifetime of Hellenistic philosophy (roughly 1200 years, from the Seven Wise Men of Greece to Simplicius), the philosophy was primarily passed down orally.  Texts were studied, but almost always with a teacher to explain them and to instruct the student in other matters.  

    In fact, this oral tradition was key to the success of Hellenistic philosophy.  Although at any given time there were probably always some corrupt teachers of philosophy, there were also always wise and good teachers.  Many of these teachers gave public lectures, but they also carefully selected other students to pass on philosophy in greater depth and hence the lineage continued uncorrupted.  

    Thus, anyone who wants to be a philosophical pagan in these days is starting with a major handicap: he cannot avail himself of the oral tradition, which vanished with the final generation of paganism.  Such an individual would have to pursue his own path to some extent.  Three examples are Gemistus Pletho, Thomas Taylor, and Bronson Alcott.  

    Pletho (c. 1355-1454) was a Byzantine scholar who essentially read himself into heresy.  He was known to tell certain people, who were naturally horrified, that he believed the religion of the future would be paganism.  Pletho also wrote a book, Nomoi, found after his death, which detailed his philosophical system; unfortunately, it was burned.  

    Thomas Taylor (1758-1835) made it his life's work to translate Plato, Aristotle, and the Neoplatonists into English.  But he was not just a translator; Taylor's religion actually was Platonism.  Taylor was highly motivated in working on his translations.  Although later in life he had more financial stability, for a period of time he would work at his job, then come home and work into the night translating.  

    Bronson Alcott (1799-1888) was one of the New England Transcendentalists and a close friend of Ralph Waldo Emerson.  Alcott was also a Platonist.  When he had to sell some books after financial difficulties, Alcott said that the books he was most disappointed to part with were the dialogues of Plato, but he felt that he could bear it because he had absorbed their spirit through repeated readings.  Alcott also followed a vegetarian diet precisely because he wanted to emulate the ancient Pythagoreans.  The book Thomas Taylor, the Platonist: Selected Writings has two excellent introductory essays, by Kathleen Raine and George Mills Harper.  Raine's essay discusses Taylor's influence in Britain, while Harper's discusses the influence of Taylor's translations in America, particularly among the Transcendentalists.  Harper suggests that when Emerson first encountered Plato in the Harvard library in the 1820s, it was likely in a translation by Taylor.  

    Alcott also read Taylor's translations of Plato.  Incidentally, Alcott was the father of the writer Louisa May Alcott (they also shared a birthday).  In Little Women, which is semi-autobiographical, the father of the March family is described as follows: 

    "To outsiders the five energetic women seemed to rule the house, and so they did in many things, but the quiet scholar, sitting among his books, was still the head of the family, the household conscience, anchor, and comforter, for to him the busy, anxious women always turned in troublous times, finding him, in the truest sense of those sacred words, husband and father.

The girls gave their hearts into their mother's keeping, their souls into their father's, and to both parents, who lived and labored so faithfully for them, they gave a love that grew with their growth and bound them tenderly together by the sweetest tie which blesses life and outlives death."

Incidentally, the "usual suspects" have tried to co-opt Louisa May Alcott and her most famous book.  However, the book is about nothing more than the importance of family.  Furthermore, there is a chapter when the main character of Little Women, Jo, (based on the author) goes to a party in New York City and is horrified by the bad behavior and irreligiousness of literary and cultural elites.  Anyone who has any doubt what side either of the Alcotts would be on if they were around today has only to read that chapter.  

So, what is the significance of considering this question?  Well, the main reason is simply that it is an interesting question to ponder.  However, it is also significant because all three of these figures show that a modern pagan must follow an individual path.  Furthermore, all three viewed their philosophy as a way of life, not just a lifestyle; they took it seriously.  In addition, all the true teachers of Platonic and Pythagorean philosophy viewed this philosophy as incompatible with a self-indulgent life.  Philosophy required struggle as well as an austere mode of life.  Indeed, a true Platonist cannot be a sexual revolutionary or an Epicure (in terms of food).  

These three figures provide a good example for Christians in the West today.  We can draw much from the tradition of Christianity that has been preserved in writing and that exists in churches today, but we no longer live in a Christian society.  Hence, we must do what we can individually to keep the faith.  Also, just like Alcott, we have to maintain the moral standards of Christianity.  If it was possible to follow a Pythagorean diet in the 1800s, it is possible to follow Christian morality in the 21st century.

True and false charisma

     The previous post on celebrity naturally leads to a discussion of true and false charisma.  True charisma is internal; it is intrinsic to the person and based on personality.  False charisma is external and based on persona.  

    Personality is natural: an individual's personality is expressed in almost anything he does, while a persona is an artificial construction.

    Here is an example from Tom Simon, quoting a passage from G.K. Chesterton's Autobiography: 

    "I can still remember old Yeats, [the father of the famous poet] that graceful greybeard, saying in an offhand way about the South African War, 'Mr. Joseph Chamberlain has the character, as he has the face, of the shrewish woman who ruins her husband by her extravagance; and Lord Salisbury has the character, as he has the face, of the man who is so ruined.'  That style, or swift construction of a complicated sentence, was the sign of a lucidity now largely lost.  You will find it in the most spontaneous explosions of Dr. Johnson.  Since then some muddled notion has arisen that talking in that complete style is artificial; merely because the man knows what he means and means to say it.  I know not from what nonesense world the notion first came; that there is some connection between being sincere and being semi-articulate.  But it seems to be a notion that a man must mean what he says, because he breaks down even in trying to say it; or that he must be a marvel of power and decision, because he discovers in the middle of a sentence that he does not know what he was going to say.  Hence the conversation of current comedy; and the pathetic belief that talk may be endless, so long as no statement is allowed to come to an end."

    The idea here is that the elder Yeats's and Johnson's eloquence was an expression of their own personality and understanding; they were speaking for themselves.  It is one thing if someone is naturally articulate, but what Chesterton is writing about here is inarticulateness as a persona: pretending to be overcome by emotion or pretending to be less fluent than one naturally is. 

    Another example would be Pythagoras.  There is a story that Pythagoras had traveled to Phoenicia, near Mount Carmel.  A ship came by, and Pythagoras asked if the sailors were going to Egypt.  They replied that they were, and so he went aboard the ship.  The sailors then had the idea to sell Pythagoras into slavery when they got to Egypt.  But Pythagoras simply sat silently meditating, without drinking or eating.  The sailors then worried that they might have picked up a god and decided not to sell Pythagoras into slavery.  When they got to Egypt, they fed him fruit and he simply walked away.  This story is taken from Iamblichus's Life of Pythagoras.  Other stories about Pythagoras also indicate that he was an impressive person in his bearing and demeanor. 

    Bruce Charlton has discussed Charles Williams and Ralph Waldo Emerson, stating that their personal impact, the influence of meeting them in person, was greater than that of their writings.  Another Transcendentalist of whom this could be said is Bronson Alcott, who was, incidentally, a vegetarian because he wanted to emulate Pythagoras.  Here is what Thoreau had to say about Alcott in Walden: 

    "I should not forget that during my last winter at the pond there was another welcome visitor, who at one time came through the village, through snow and rain and darkness, till he saw my lamp through the trees, and shared with me some long winter evenings. One of the last of the philosophers,—Connecticut gave him to the world,—he peddled first her wares, afterwards, as he declares, his brains. These he peddles still, prompting God and disgracing man, bearing for fruit his brain only, like the nut its kernel. I think that he must be the man of the most faith of any alive. His words and attitude always suppose a better state of things than other men are acquainted with, and he will be the last man to be disappointed as the ages revolve. He has no venture in the present. But though comparatively disregarded now, when his day comes, laws unsuspected by most will take effect, and masters of families and rulers will come to him for advice.—

'How blind that cannot see serenity!'

A true friend of man; almost the only friend of human progress. An Old Mortality, say rather an Immortality, with unwearied patience and faith making plain the image engraven in men’s bodies, the God of whom they are but defaced and leaning monuments. With his hospitable intellect he embraces children, beggars, insane, and scholars, and entertains the thought of all, adding to it commonly some breadth and elegance. I think that he should keep a caravansary on the world’s highway, where philosophers of all nations might put up, and on his sign should be printed, “Entertainment for man, but not for his beast. Enter ye that have leisure and a quiet mind, who earnestly seek the right road.” He is perhaps the sanest man and has the fewest crotchets of any I chance to know; the same yesterday and tomorrow. Of yore we had sauntered and talked, and effectually put the world behind us; for he was pledged to no institution in it, freeborn, ingenuus. Whichever way we turned, it seemed that the heavens and the earth had met together, since he enhanced the beauty of the landscape. A blue-robed man, whose fittest roof is the overarching sky which reflects his serenity. I do not see how he can ever die; Nature cannot spare him."

    Contrast this with the modern idea of charisma, which is almost entirely persona.  Alcott, Emerson, Williams, Pythagoras, and Johnson all interacted with other people and the world organically, based on their own personal qualities and capabilities.  In contrast, many in the modern world who are called charismatic interact with the world through the media.  Their entire interaction is mediated, and situations are carefully constructed to portray a certain image.  Furthermore, in the modern world, charisma is often viewed as inherently concerned with manipulating people's instincts.  It operates within the modern social context, which unfortunately is viewed as a Machiavellian free-for-all.  

    In contrast, real charisma can inspire and motivate others.  Furthermore, in the modern world it will not be found primarily in the media or public discourse; it will be primarily local and personal.  Also, the varieties of true charisma are as varied as individual personalities.   

The Strangeness of Celebrity

     These two posts on Junior Ganymede and this post by Bruce Charlton discuss celebrity.  I have always found celebrity to be a strange phenomenon: why should the lives of entertainers whom one does not know personally have any relevance to one's life?  And yet, it is a very powerful phenomenon.  

    Celebrity is a distinctly modern phenomenon.  It really began in the 20th century.  This is because celebrity and fame, or being well-known, are two distinct phenomena.  For example, everyone in Palestine in Jesus' day knew who Pontius Pilate was.  But this was because of his role as governor; it was the role he filled that was important.  Colin Wilson tells an anecdote in his book Rudolf Steiner: The Man and his Vision about a time when Charles Dickens was on a train and was recognized by the conductor but not bothered by anyone.  Dickens was a popular writer, but popularity is distinct from celebrity.  A popular writer is widely read and may draw many people if he lectures, but he is known from his writing, for his work, not because of a persona.  

    It is somewhat difficult to give a description that encapsulates the phenomenon of celebrity.  But it involves someone who is well-known and is also felt to be someone whom a person wants to meet.  A celebrity is felt to be both personal and elevated.  Everyone who lives in the modern West knows what celebrity is without a definition, but I think the difficulty of describing it is because celebrity is instinctive.  It operates below the rational level, so it is a feeling rather than an opinion or belief.      

    Our culture has increasingly gone off the rails.  We can't refer to where we are now as a development from the past, any more than we can say of someone who, while walking to a destination, falls down a hill and lands in a ditch that the ditch is his destination.  On an earlier post of mine, Francis Berger left a comment about art that describes this well:

    "Once we get into the twentieth century, the signs of degeneration are everywhere.  Unwilling to make something new from the creativity of the past, art beings to deconstruct and destroy itself under the imperative to be original at all costs and 'make it new' (sorry, Ezra Pound).  Everything begins to splinter and roll back upon itself - art becomes a destructive feedback loop.  The great is considered mundane; the mundane, great.  After a while, the kaleidoscope revelry ends and everything just stagnates.

    The idea is that modern art isn't a continuation of what came before but a perversion, a degeneration.  Likewise, celebrity is presented as if it was the most natural thing in the world, a natural development from popularity, but it is not.  

    Celebrity comes from a confluence of two factors: electronic communications technology and the flattening of hierarchy.  Radio seems to be less powerful in this respect, but with the introduction of film and then television, as well as the ability to amplify music so that larger crowds could be accommodated, a performer could perform for a much larger number of people than had previously been possible.  Also, with film and television, the performance is now mediated.  Rather than the show being sustained by the individual performer's ability, we introduce the external factor of the performer being placed on a screen.  This seems to have a very powerful effect on some people.  In addition, with the flattening of hierarchy increasing in the 20th century, these performers were now viewed as people who could be known personally.  

    I think the reason that Wilson told this story about Charles Dickens in his book is because Steiner started lecturing just as celebrity culture was about to begin.  However, Steiner was not viewed as a celebrity.  Many people wanted to meet him, but as a wise man or guru, not because of a persona.  

  So, that is one reason why celebrity is so weird.  It is something abnormal that only happened through a special confluence of factors.  
