A Strange Recursive Relation, Automatic

Hofstadter mentions the following recursive relation in his great book "Gödel, Escher, Bach": \begin{align} g(0) &= 0;\\ g(n) &= n-g(g(n-1)). \end{align} I claim that $$g(n) = \left\lfloor \phi\cdot (n+1) \right\rfloor$$, where $$\phi = \frac{-1+\sqrt{5}}{2}$$, and I'll show this using a technique that makes proving many identities of this type nearly automatic.

Let $$\phi\cdot n = \left\lfloor \phi\cdot n \right\rfloor + e$$, where $$0 < e < 1$$ since $$\phi$$ is irrational; for the same reason $$e \neq 1-\phi$$. Note also that $$\phi$$ satisfies $${\phi}^2 + \phi - 1 = 0$$, i.e. $${\phi}^2 = 1-\phi$$. Some algebra gives \begin{align} n-\left\lfloor \left( \left\lfloor \phi\cdot n \right\rfloor + 1 \right) \cdot \phi \right\rfloor &= n-\left\lfloor \left( n\cdot \phi - e + 1 \right) \cdot \phi \right\rfloor \\ &= n-\left\lfloor n\cdot {\phi}^2 - e\cdot \phi + \phi \right\rfloor \\ &= n-\left\lfloor n\cdot \left(1-\phi\right) - e\cdot \phi + \phi \right\rfloor \\ &= n-n-\left\lfloor -n\cdot \phi - e\cdot \phi + \phi \right\rfloor \\ &= -\left\lfloor -n\cdot \phi - e\cdot \phi + \phi \right\rfloor \\ &= -\left\lfloor -n \cdot \phi + e - e - e\cdot \phi + \phi \right\rfloor \\ &= \left\lfloor \phi\cdot n \right\rfloor -\left\lfloor - e - e\cdot \phi + \phi \right\rfloor. \end{align}
Now note that \begin{align} 0 < e < 1-\phi &\implies 0 < - e - e\cdot \phi + \phi < \phi;\\ 1-\phi < e < 1 &\implies -1 < - e - e\cdot \phi + \phi < 0. \end{align}
Hence $n-\left\lfloor \left( \left\lfloor \phi\cdot n \right\rfloor + 1 \right) \cdot \phi \right\rfloor$ equals $\left\lfloor \phi\cdot n \right\rfloor$ when $0 < e < 1-\phi$ and $\left\lfloor \phi\cdot n \right\rfloor + 1$ when $1-\phi < e < 1$. These are exactly the two values of $\left\lfloor \phi\cdot (n+1) \right\rfloor = \left\lfloor \left\lfloor \phi\cdot n \right\rfloor + e + \phi \right\rfloor$ in the same two cases, so $n-\left\lfloor \left( \left\lfloor \phi\cdot n \right\rfloor + 1 \right) \cdot \phi \right\rfloor = \left\lfloor \phi\cdot (n+1) \right\rfloor .$ This is precisely the inductive step: if $g(k) = \left\lfloor \phi\cdot (k+1) \right\rfloor$ for all $k < n$, then $g(n) = n - g(g(n-1)) = n-\left\lfloor \left( \left\lfloor \phi\cdot n \right\rfloor + 1 \right) \cdot \phi \right\rfloor = \left\lfloor \phi\cdot (n+1) \right\rfloor$. Since $$\left\lfloor \phi\cdot (0+1) \right\rfloor = 0 = g(0)$$, we're done.

The point of the algebra was to move all terms involving $$n$$ out of the floor, and then to check how the remaining term varies with $$e$$. A simple idea, but a very useful one.
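The identity is also easy to check numerically. Here's a short Python sketch (the function and variable names are mine) comparing the recursion against the closed form:

```python
from functools import lru_cache
from math import floor, sqrt

@lru_cache(maxsize=None)
def g(n):
    """Hofstadter's recursion: g(0) = 0, g(n) = n - g(g(n-1))."""
    return 0 if n == 0 else n - g(g(n - 1))

phi = (sqrt(5) - 1) / 2  # the positive root of x^2 + x - 1 = 0

# Compare against the closed form g(n) = floor(phi * (n + 1)).
assert all(g(n) == floor(phi * (n + 1)) for n in range(1000))
print("identity holds for n = 0..999")
```

Evaluating $$n$$ in increasing order keeps the memoized recursion shallow, so no recursion-limit tricks are needed.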

A Bayes' Solution to Monty Hall

For any problem involving conditional probabilities, one of your greatest allies is Bayes' Theorem. Bayes' Theorem says that for two events A and B, the probability of A given B can be computed from the probability of B given A together with the individual probabilities of A and B.

Standard notation:

probability of A given B is written $$\Pr(A \mid B)$$
probability of B is written $$\Pr(B)$$

Bayes' Theorem:

Using the notation above, Bayes' Theorem can be written: $\Pr(A \mid B) = \frac{\Pr(B \mid A)\times \Pr(A)}{\Pr(B)}.$

Let's apply Bayes' Theorem to the Monty Hall problem. If you recall, we're told that behind three doors there are two goats and one car, all randomly placed. We initially choose a door, and then Monty, who knows what's behind the doors, always shows us a goat behind one of the remaining doors. He can always do this as there are two goats; if we chose the car initially, Monty picks one of the two doors with a goat behind it at random.

Assume we pick Door 1 and then Monty sho…
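The calculation is mechanical enough to script. Here's a Python sketch for the concrete case where we pick Door 1 and Monty opens Door 3 (the specific doors are my choice for illustration), using exact fractions:

```python
from fractions import Fraction

# We pick Door 1; the car is equally likely behind each of the three doors.
prior = {door: Fraction(1, 3) for door in (1, 2, 3)}

# Likelihoods Pr(Monty opens Door 3 | car location). Monty never opens our
# door or the car's door, and picks at random between the two goat doors
# when we happened to choose the car.
likelihood = {1: Fraction(1, 2),  # car behind 1: he opens 2 or 3 at random
              2: Fraction(1),     # car behind 2: he must open 3
              3: Fraction(0)}     # car behind 3: he never reveals the car

# Bayes' Theorem: Pr(car = d | opens 3) = Pr(opens 3 | car = d) * Pr(car = d) / Pr(opens 3)
evidence = sum(likelihood[d] * prior[d] for d in prior)
posterior = {d: likelihood[d] * prior[d] / evidence for d in prior}

print(f"Pr(car behind Door 2 | Monty opens Door 3) = {posterior[2]}")
```

Switching to Door 2 wins with probability 2/3 while staying with Door 1 wins with probability 1/3, the classical answer.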

What's the Value of a Win?

In a previous entry I demonstrated one simple way to estimate an exponent for the Pythagorean win expectation. Another nice consequence of a Pythagorean win expectation formula is that it makes it simple to estimate the run value of a win in baseball, the point value of a win in basketball, the goal value of a win in hockey, and so on.

Let our Pythagorean win expectation formula be $w=\frac{P^e}{P^e+1},$ where $$w$$ is the win fraction expectation, $$P$$ is the ratio of runs scored to runs allowed (or the analogous ratio) and $$e$$ is the Pythagorean exponent. How do we get an estimate for the run value of a win? The expected number of games won in a season with $$g$$ games is $W = g\cdot w = g\cdot \frac{P^e}{P^e+1},$ so for one estimate we only need to compute the value of the partial derivative $$\frac{\partial W}{\partial P}$$ at $$P=1$$. Note that $W = g\left( 1-\frac{1}{P^e+1}\right),$ and so $\frac{\partial W}{\partial P} = g\frac{eP^{e-1}}{(P^e+1)^2}$ and it follows $\frac{\partial W}{\partial P}(P=1) = …$

Solving a Math Puzzle using Physics

The following math problem, which appeared on a Scottish maths paper, has been making the internet rounds. The first two parts require students to interpret the meaning of the components of the formula $$T(x) = 5 \sqrt{36+x^2} + 4(20-x)$$, and the final "challenge" component involves finding the minimum of $$T(x)$$ over $$0 \leq x \leq 20$$. Usually this would require a differentiation, but if you know Snell's law you can write down the solution almost immediately. People normally think of Snell's law in the context of light and optics, but it's really a statement about least time across media permitting different velocities. One way to phrase Snell's law is that least travel time is achieved when $\frac{\sin{\theta_1}}{\sin{\theta_2}} = \frac{v_1}{v_2},$ where $$\theta_1, \theta_2$$ are the angles to the normal and $$v_1, v_2$$ are the travel velocities in the two media.

In our puzzle the crocodile has an implied travel velocity of 1/5 in the water …
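The shortcut is easy to sanity-check numerically. In the sketch below I read the two velocities off the coefficients of $$T$$: 1/5 in the water and 1/4 along the bank. The water-leg angle satisfies $$\sin\theta_1 = x/\sqrt{36+x^2}$$ and travel along the bank has $$\sin\theta_2 = 1$$, so Snell's law predicts $$x/\sqrt{36+x^2} = (1/5)/(1/4) = 4/5$$, i.e. $$x = 8$$:

```python
from math import sqrt

def T(x):
    """Total travel time: 5 per unit distance in the water, 4 along the bank."""
    return 5 * sqrt(36 + x ** 2) + 4 * (20 - x)

# Snell's law predicts the minimizer at x = 8; check against a fine grid.
xs = [i / 100 for i in range(0, 2001)]   # 0.00, 0.01, ..., 20.00
best = min(xs, key=T)
print(best, T(best))                     # 8.0 98.0
```

The brute-force minimum lands at $$x = 8$$ with $$T(8) = 98$$, agreeing with the Snell's-law prediction and with what differentiating $$T$$ would give.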