Sunday, August 23, 2009

The Probability of Information -- Part 3

A thoughtful person once asked the question: “Does entropy apply to information?” The answer is obviously “yes”, but why? The short answer is that it’s because the probability of information is extremely low. This blog entry attempts to explain that answer.

Consider the highly organized sequence of characters below.

Mathematics is the language of science. To succeed in science, one must use mathematics. Thus high quality science depends on high quality mathematics.

Now if we were to apply a random single point mutation to the above paragraph, we might get the paragraph below.

Mathematics is the language of science. To succeed iK science, one must use mathematics. Thus high quality science depends on high quality mathematics.

In this case an “n” was replaced by a “K”. The resulting character sequence is less organized than the original paragraph; in other words, the entropy of the character sequence increased. We could apply another random single point mutation to the sequence and happen to get the original back. This would be an increase in organization and therefore a decrease in entropy. However, given the 55-character set used in The Probability of Information, there are 16,665 ways to apply a single point modification to the character sequence, as counted below, and only one of these mutations would represent a decrease in entropy.

Any of the 151 characters in the sequence can be changed to one of the other 54 characters (151 x 54).

Any character can be deleted (151).

Any of the 55 characters can be inserted at any of the 152 positions in the sequence, including appending to the end (152 x 55).

(151 x 54) + 151 + (152 x 55) = 16,665
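The count above can be checked with a few lines of Python (a sketch; the function name is my own, and the arguments match the post's 151-character sentence and 55-character alphabet):

```python
def single_point_mutations(length, alphabet_size):
    """Count the single-point mutations available to a character sequence."""
    substitutions = length * (alphabet_size - 1)   # change a character to any other
    deletions = length                             # delete any one character
    insertions = (length + 1) * alphabet_size      # insert at any of length+1 positions
    return substitutions + deletions + insertions

print(single_point_mutations(151, 55))  # 16665
```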

So while it is possible for a random mutation to decrease entropy (increase organization), it is much more likely that a random mutation will increase entropy (decrease organization). In this case, the chances of increasing the organization of the second character sequence are only 1 in 16,665. If we apply just three single point mutations, there are over 4.6 trillion (16,665 x 16,665 x 16,665) ways to do so, and the chances of increasing the organization of the character sequence are considerably slimmer. In other words, if you randomly modify information there is a very strong tendency for it to become disorganized.
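For scale, the three-mutation count is just the single-step count cubed (plain arithmetic, using the 16,665 figure derived above):

```python
n = 16665              # single-point mutations available at each step
paths = n ** 3         # ordered sequences of three mutations
print(f"{paths:,}")    # 4,628,240,879,625 -- roughly 4.6 trillion
```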

If you continue to apply random modifications to the character sequence, it will wander away from information into a vast sea of nonsense, never to return. Is it possible that the sequence will ever again turn into something that resembles a coherent thought? Well … yes. However, we showed in The Probability of Information that the chances of a character sequence of similar size representing information are less than 1 in 8.5E+90. So you could make trillions of single point modifications per microsecond for a length of time comparable to the age of the universe and probably still not happen upon another sequence representing information.
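The drift described above is easy to simulate. The sketch below (the alphabet and helper name are my own stand-ins, not from the original posts) applies repeated random single-point mutations to a sentence:

```python
import random
import string

# Stand-in alphabet (an assumption -- the original post uses a 55-character set).
ALPHABET = string.ascii_letters + " ."

def mutate(seq):
    """Apply one random single-point mutation: substitution, deletion, or insertion."""
    op = random.choice(["sub", "del", "ins"])
    if op == "ins" or not seq:
        i = random.randrange(len(seq) + 1)
        return seq[:i] + random.choice(ALPHABET) + seq[i:]
    i = random.randrange(len(seq))
    if op == "sub":
        others = ALPHABET.replace(seq[i], "")  # one of the *other* characters
        return seq[:i] + random.choice(others) + seq[i + 1:]
    return seq[:i] + seq[i + 1:]               # deletion

text = "Mathematics is the language of science."
for _ in range(100):
    text = mutate(text)
print(text)  # typically unreadable after 100 random mutations
```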

5 Comments:

At 3:38 AM, August 29, 2009, Blogger Jeffrey Shallit said...

I don't think you understand the meaning of the term "information". "Information", as it is generally understood by mathematicians and computer scientists, has nothing to do with "meaning", and a completely random string of letters is likely to have the maximum or nearly-maximum amount of information.

I recommend getting a textbook on Kolmogorov complexity and reading it, such as the one by Li and Vitanyi. After that, you will understand why nearly everything you have said is incorrect.

 
At 10:01 PM, August 29, 2009, Blogger Intelligent Designer said...

Hi Jeffrey,

I know where you are coming from, but just to make it clear to anyone else who might read this blog, please answer the following question. Which sequence of characters contains more information: the sequence comprising this paragraph, or the random sequence of characters below?

7HJe9brianf2fR S2oR Ok1VPDLWRovveVJl8G MSDeW 8KyMS12YWg9I6D h65HkZsRquQR3phAo 1DOt35xX RLy5q3 2OiUcG8 unfRG665gahT FVIO1I1TzIyCohq ZbFja2 BrhLQkqa 2qKVS8S7PB oqr5Bek7tu 2wS3c Kwl9e 7lxKq jZXPD6V3 9PDqos idNIm7CYsz GDzcXHzQ1FU11YgQ7Fm vXhjaqixnxm tu8J44Lw Vx1QP DVed 96b L3Cj l2APSbn Le96K o4EKA 56rAmPr vv2Bqsd 3SmIZiofq OjtIQ dnTTWIBp8B1Yb OSVqSI Bw4wbxTPE JF6csNIgidn AWdP FSbAs3mW7k IgY4nW cj3IKvTcojg f9bvlzXQPgGU TKh64uqjErAp7f7jGBo yXmCrcSe ekvtRh GYmd8SZw EKjBWKbBz kkEjSm nveBhUnbA eZn ngZ 5jjh DXjIB QYQUi

 
At 9:12 PM, September 03, 2009, Blogger Stripe said...

You cannot use maths to define meaning. But it is certainly impossible to mutate something in order to make it meaningful. :)

 
At 9:55 PM, September 17, 2009, Blogger Intelligent Designer said...

Hi Jeffrey,

I think I have given you sufficient time to answer my question, and since you haven't, I'll speculate about what your answer would be. If you were to answer in a way consistent with your comment, you would tell me that the random sequence of characters has more (or probably has more) information. Now since you have a PhD in mathematics, you ought to know that if your line of reasoning leads to an absurd conclusion, somewhere along the line you’ve made a blunder.

So where did you go wrong? Well, you assumed that the only relevant definition for information is that which is used by computer scientists and mathematicians in information theory – a definition which basically concerns itself with the number of bits required to transmit a message. It should be obvious from the context of my blog entries that I am talking about information as it is generally understood by almost everyone – information that has meaning.

And it is a fact that if you apply successive random mutations to meaningful information, it turns to nonsense and its entropy -- as understood in information theory -- increases.
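The distinction at issue here can be made concrete. In the Shannon sense, information is measured from symbol statistics rather than meaning; a zeroth-order (character-frequency) estimate can be computed as below (a sketch -- the function name is mine, and the sample strings are short stand-ins for the two sequences in this thread):

```python
from collections import Counter
from math import log2

def entropy_per_char(s):
    """Zeroth-order Shannon entropy of a string, in bits per character."""
    n = len(s)
    return -sum(c / n * log2(c / n) for c in Counter(s).values())

english = "mathematics is the language of science to succeed in science one must use mathematics"
scrambled = "q7Zr kX2vN pLw9 TgB4 eYc1 oMh8 uJd5 aSf3 iQn6 zWx0"

# Random-looking text spreads probability over more symbols, so its
# per-character entropy is typically higher than ordinary English text.
print(entropy_per_char(english), entropy_per_char(scrambled))
```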

 
At 3:39 PM, August 05, 2010, Anonymous Anonymous said...

What you are saying seems obvious to me. What is harder to understand is why you are saying it. Are you aware of someone who believes that randomly changing information might be expected to add information somehow? I am not.

If you are trying to relate this in some way to evolution, I don't quite understand how or why. Natural selection is not random; it is the opposite of random. It doesn't add information either. If certain objects exist that (1) contain bits of information that affect their manner of reproduction and (2) reproduce, and those bits of information are allowed to change slightly, then the objects will naturally change in ways that produce the object in higher numbers. The information in that object might get more complex or less complex, whatever that even means.

Since more complexity in the organism is likely to provide for better adaptation to a changing environment, the information may well change in ways that cause the object to become more complex and adapt in some way to its surroundings, but this may not be the case at all. In any case it is the organism, not the information, that is increasing in complexity.

Or if the environment changes too fast, it is likely the object can't adapt quickly enough and will die out completely.

No information is increasing in the process of evolution. Humans have fewer genes than some types of fungus, but most people think humans are more complex.

 