### Solving problems with games

This post is mostly about things that I don’t know, but which are very interesting. The basic topic is threefold:

1) sometimes computers are not capable of doing things that humans can do instead. I’m not speaking about having feelings or eating, but rather about things that computers should be able to do better than humans, namely running simulations, solving complicated problems, etc. Examples follow below.

2) on the other hand, humans are less easy (or cheap) to convince to do tasks than computers are, more or less by definition of a computer. They seem to much prefer playing videogames, looking at Facebook, or illegally downloading music and books.

3) on the other hand (i.e. on the first hand), one can make those “unproductive” preferred activities a tiny bit useful for the community (or rather for some specific people) by using wicked tricks. One can actually use the same wicked tricks to make humans more useful to themselves, too.

Examples

A) One example where computers were beaten by humans is Foldit, a game devised to simulate protein folding. The recreational aspect hides a deep scientific importance, since no efficient way to simulate the folding process of proteins is yet known, the available computational power being one side of the limitation, and the lack of an algorithm adapted to the problem being the other. However, online gamers have an edge on the machines, as shown in this article in Nature.

B) A similar project with RNA instead of proteins is called EteRNA.

C) Another project of a similar nature helps people place transistors on microchips efficiently; the game is called FunSAT.

D) If you are interested in the sequencing of the human genome, you might also know that the genome does not contain all the answers about humans that we would have liked. In fact, it looks more like a huge (sequenced) mess than like nice programming code. One method to get some hints is Multiple Sequence Alignment, which is the topic of a game called Phylo.

E) A more boring (too scientific-looking) one is EyeWire, where you help a program detect neurons of the retina.

F) During activities other than gaming, like illegally downloading books, or during legal pseudo-activities like Facebook, you are sometimes asked to “prove that you’re human” by deciphering a text composed of two words: a so-called captcha. (click link for a picture)

In some captchas (for example the ones called reCAPTCHA) one of the words looks much more like a real, typed word than the other. That is because the website you are accessing does not know that word, and you are actually helping some company transcribe scanned books from paper to digital format. The true verification is done through the other, “artificial-looking” word.

If you write a random thing instead of the “book word”, they will still think you are a human, while making errors in the other, “computer-made word” is not forgiven (the computer knows only that part of the information). Here is the guy who got the idea. He also got many other nice ideas. And a blog.

I wanted to write more about other aspects of this, but I’ll leave that for another post.

### branched trees with few leaves

I just thought of the following question. I’ve no idea whether it’s a well-known riddle, but it seems a nice story, and it should not prove too difficult to answer:

God is tired before the 6th day of creation, so he gives you a task while he goes to take a nap: you have to build the Amazon forest. (That’s because people in the 21st century are going to need to chop trees in huge quantities in order to make paper, so the plan is that you have to fill the whole of South America with trees before midnight.) You take an energy drink and prepare to do the job…

Basically the technique for making one of these trees is quite simple: you are given some sticks and you have to glue them together at their ends, making a caricature of a tree; then you plant it in the ground, and magically a leaf sprouts at each free end of a stick, while the rest becomes a realistic wooden tree.

You notice that the sticks become quite nice trunk pieces, while you don’t like so much the way the leaves come out; they seem boring to you (and you imagine that, after all, 21st-century people will look more for wood than for leaves, so you want to concentrate on that part). So you end up making trees with many sticks but as few leaves as possible. The result is not very nice, because you end up gluing all the sticks in a row, and you get a long tree with just one leaf at the end. This makes you wonder what would happen if you introduced a lot of branchings. And here is the question:

If you impose on yourself that no gluing involves only 2 stick ends (so that your tree has only branch points, possibly with more than 2 branches), what is the least number of leaves you can get if you use 4 sticks? What about 15 sticks? Is there a formula valid for an arbitrary number of sticks?
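To experiment with small cases, one can brute-force over all labeled trees via Prüfer sequences. This is my own sketch, not part of the riddle, and it makes one assumption: every degree-1 stick end counts as a leaf, including the one planted in the ground. Sticks are edges; a gluing point is an internal vertex, required to have degree at least 3:

```python
from itertools import product

def min_leaves(sticks):
    """Minimum number of leaves over all trees with `sticks` edges in which
    no vertex has degree exactly 2 (every gluing joins at least 3 stick ends).
    All labeled trees on n vertices are enumerated via Prüfer sequences."""
    n = sticks + 1                     # a tree with e edges has e + 1 vertices
    best = None
    for seq in product(range(n), repeat=n - 2):
        # degree of vertex v = 1 + number of occurrences of v in the sequence
        deg = [1 + seq.count(v) for v in range(n)]
        if any(d == 2 for d in deg):
            continue                   # degree 2 = two sticks glued in a row
        leaves = sum(1 for d in deg if d == 1)
        if best is None or leaves < best:
            best = leaves
    return best

for e in range(1, 7):
    print(e, min_leaves(e))            # 2 sticks admit no valid tree at all
```

For 15 sticks the enumeration is hopeless (16^14 sequences), so the brute force can only suggest the pattern behind a general formula.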

Epilogue: lost in those dreams you fall asleep, leaving an Amazon forest with a total of about 7 trees for posterity. But God is forgiving, so he offers you a beer.

### Kochen-Specker theorem

I learned from E. Kowalski’s blog a nice theorem of Kochen and Specker:

There does not exist a function $f:\mathbb S^2\to\{0,1\}$, such that for any $x,y,z\in\mathbb S^2\subset\mathbb R^3$ which are orthogonal to each other, $f(x)+f(y)+f(z)=2$.

The statement was at first counter-intuitive to me, so I tried to prove it. I found some kind of geometrical proof; I think it’s nice (and I hope it’s right!):

Proof: Label each $p\in\mathbb S^2$ by $f(p)$ and identify points of $\mathbb S^2$ with vectors of $\mathbb R^3$. For every orthogonal triple, one and only one of the labels is $0$. Therefore, for every “zero” label (call it a “north pole”), the corresponding “equator” is made of “one”s, and the “south pole” carries a “zero”.

Consider two “zero” labels at an angle less than $\pi/2$ from each other: they give two intersecting (non-orthogonal!) equators of “one”s. Now, while a point $a$ moves on one of the equators, there is exactly one point $b$ on the other equator which is orthogonal to it: as $a$ describes the first equator, $b$ will therefore describe the second. The third point $c$ completing an orthogonal basis $(a,b,c)$ will then also describe a curve of “zero”s: it is a circle having the two “zero”s as the endpoints of a diameter!

Iterating the above construction, it is easy to see that the set of “zero”s is open. But since the set of “one”s depends continuously on the set of “zero”s (through the “equator-pole” construction), it is also open! Contradiction (since the sphere is connected). $\square$

### Banach and Mazur game

I want to rewrite here a part of my degree thesis about a very nice way to describe residual sets. I start with the abstract Bourbakist definition anyway.
Let $E$ be a topological space.

Definition
A set $S\subset E$ is called nowhere dense if it is not dense in any nonempty open subset of $E$: that is, for every nonempty open $U\subset E$ there is a nonempty open $V\subset U\setminus S$.

$S$ is then nowhere dense if and only if its closure $\bar{S}$ has empty interior; a finite union of nowhere dense sets is also nowhere dense; the topological boundary of a closed set or of an open set is nowhere dense.

Definition

A set $A\subset E$ is called of first (Baire) category if $A$ is a countable union of nowhere dense sets.
A set that is not of first category is called of second category.
The complement of a first category set is called residual.

Residual sets are then exactly the countable intersections of sets with dense interior. If a set is of first category, then so are all its subsets. In some sense the sets of first category are “small” (they are also called meagre sets for this reason) and the sets of second category are “large”. We will make this precise using a description introduced by Banach and Mazur.
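A standard example (my addition, just to fix intuition): in $E=\mathbb R$, the rationals are of first category and the irrationals are residual, since
$\mathbb{Q}=\bigcup_{n\in\mathbb{N}}\{q_n\}$
is a countable union of singletons (closed sets with empty interior, hence nowhere dense), while its complement
$\mathbb{R}\setminus\mathbb{Q}=\bigcap_{n\in\mathbb{N}}\left(\mathbb{R}\setminus\{q_n\}\right)$
is a countable intersection of dense open sets.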

The Banach-Mazur game
Let $X$ be a metric space.

Suppose that there is given a subclass $\mathcal{E}$ of the subsets of $X$ having nonempty interior, such that each nonempty open set contains some member of $\mathcal{E}$. Suppose that there are given two sets $A\subset X$ and $B=X\setminus A$. The game $[A,B]$ is played according to the following rules: two players (A) and (B) alternately choose sets
$U_1\supset V_1\supset\ldots\supset U_n\supset V_n\supset\ldots$
from the class $\mathcal{E}$, with (A) choosing the $U_i$’s and (B) choosing the $V_i$’s, countably many times. Such a nested sequence is called a play of the game. The player (B) is declared the winner if
$\bigcap_{i=1}^{\infty}V_i\subset B$,
while otherwise (that is, if $\bigcap_{i=1}^{\infty}V_i\cap A\neq\emptyset$) the winner is (A).

To be more precise, we will call a strategy for (B) a sequence of functions $\beta=\{\beta_n\}$ giving, for any sequence $(U_1,V_1,\ldots , U_n)$ of members of $\mathcal{E}$ as above, a new member

$V=\beta_n(U_1,V_1,\ldots, V_{n-1},U_n)\in\mathcal{E},\; V\subset U_n$

A play of the game $(U_i,V_i)_{i=1}^{\infty}$ is called consistent with a strategy $\beta$ if for all $n$,
$V_n= \beta_n(U_1,V_1,\ldots, V_{n-1},U_n)$
A strategy $\beta$ is winning for (B) if every play of the game consistent with $\beta$ ends with a victory of (B). If such a strategy exists, the game is said to be determined in favour of (B).
Clearly, in order to win, (A) will hope that the set $A$ is large, while (B) will have the same hopes for the set $B$. The right meaning for the word “large” in this context was (conjectured by Mazur and) proved by Banach to be the same as “residual”:

Theorem (Banach-Mazur)
The game $[A,B]$ is determined in favour of player (B) if and only if the set $B$ is residual in $X$.

Proof:
“if” part: Let $B\supset\bigcap_{i=1}^{\infty}G_i$ with $G_i$ open dense sets. We then choose
$V_n:=\beta_n(U_1,\ldots, U_n)\subset U_n\cap G_n,$
which is possible by the properties of $\mathcal{E}$ and of the $G_n$. The resulting strategy $\beta=\{\beta_n\}$ is then easily seen to be winning, by the definition of the $G_i$’s.
“only if” part: Let $\beta=\{\beta_n\}$ be a winning strategy for (B). We will call a consistent sequence of $n$ members of $\mathcal{E}$ a $\beta$-chain of order $n$, and the interior of $V_n$ will be called the interior of the chain. We will now inductively construct a sequence of dense open sets $\{G_n\}$ whose intersection is contained in the set $B$.
Let $F_1$ be a maximal subfamily of the $\beta$-chains of order 1 whose members have disjoint interiors, and let $G_1$ be the union of the interiors of the members of $F_1$. This open set is dense by maximality.
Then, among all the $\beta$-chains of order 2 that are continuations of some member of $F_1$ we choose a maximal subfamily $F_2$ with disjoint interiors, and let $G_2$ be the union of these interiors. The open set $G_2$ is again dense by maximality.
Iterating this procedure, we get a sequence of dense open sets $\{G_n\}$. If $x\in \bigcap_{i=1}^{\infty}G_i$ then there is a unique sequence $\{C_n\}$ of $\beta$-chains $C_n\in F_n$ such that for each $n$, $x$ is in the interior of $C_n$. These $\beta$-chains are ordered by continuation and the limit play is consistent with $\beta$, so it must be a winning one. Thus $x\in B$, and since $x$ was arbitrary, $B\supset\bigcap_i G_i$. $\square$

As an application, we cite the following, which is usually proved using directly the definitions. We give instead a proof using the Banach-Mazur game:

Corollary

If the metric space $X$ is complete and $B\subset X$ is a dense $G_{\delta}$ set (i.e. a countable intersection of open sets), then $B$ is residual.

Proof:
Let $B=\bigcap_m A_m$, where the sets $A_m$ are open. We take $\mathcal{E}$ to be the family of all closed balls, and we describe a winning strategy for player (B) in the game $[X\setminus B, B]$: let $U_n$ be the closed ball chosen by (A) at his $n$-th move. Since $B$ is dense, $B\cap\mathrm{int}\,U_n$ is nonempty, so $\mathrm{int}\,U_n\cap A_n$ is a nonempty open set. By regularity there exists a ball of radius less than $2^{-n}$ whose closure is contained in this set; we choose that ball as $V_n$. The sequence $(U_{i+1})$ then refines the Cauchy filter basis $(V_i)$, and by completeness its intersection is a single point $x$, contained in each $A_n$ and thus in $B$. $\square$
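As a small computational illustration of this kind of strategy (entirely my own sketch, not from the thesis): play finitely many rounds of the game on $[0,1]$ with $B$ the set of irrationals, letting player (B) dodge the $n$-th rational at round $n$. With exact rational arithmetic one checks that every dodged rational is excluded from the final interval, so the single point in the intersection of the nested intervals avoids them all:

```python
from fractions import Fraction

def rationals():
    """Enumerate rationals in (0, 1): 1/2, 1/3, 2/3, 1/4, ... (with repeats)."""
    q = 2
    while True:
        for p in range(1, q):
            yield Fraction(p, q)
        q += 1

def move_A(a, b):
    """Player (A) may play any subinterval; here, the middle half."""
    w = (b - a) / 4
    return a + w, b - w

def move_B(a, b, q):
    """Player (B)'s strategy: pick a subinterval of (a, b) that avoids q."""
    m = (a + b) / 2
    if q <= m:                         # dodge into the right half
        return m + (b - m) / 4, b - (b - m) / 4
    return a + (m - a) / 4, m - (m - a) / 4

a, b = Fraction(0), Fraction(1)
gen = rationals()
dodged = []
for _ in range(40):                    # 40 rounds of the game
    a, b = move_A(a, b)
    q = next(gen)
    a, b = move_B(a, b, q)
    dodged.append(q)

# once a rational is dodged, it stays outside all later (nested) intervals,
# so the intersection point is not among the first 40 enumerated rationals
assert all(q <= a or q >= b for q in dodged)
print(float(a), float(b))
```

The `fractions` module keeps the interval endpoints exact, so the assertion is a genuine check and not a floating-point accident.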

### inscribed triangle and square problem

I found this problem (or conjecture, if you wish) today:

Does every Jordan curve $\Gamma$ have 4 points on it which form the vertices of a square?

There is a survey of partial results on the Quomodocumque blog, but I have no access to a library from where I am now, so I think trying to figure it out alone might be a nice idea.

Jordan’s theorem. When I mention it, I mean this: any Jordan curve $\Gamma$ has an interior and an exterior, which are arcwise-connected open sets such that the one called the interior is bounded, and together they form the complement of $\Gamma$. In particular, any path (i.e. continuous curve) starting inside $\Gamma$ and ending outside it must pass through $\Gamma$.

Easy part: the triangle case

I had already come across the analogous problem where one looks for inscribed equilateral triangles instead of squares, and that one is easier: the answer is yes, by a topological argument, as follows.

There is a ($2$-valued) function $c$ with values in $\mathbb R^2$, depending continuously on $a, b\in\Gamma$, such that $[a,b,c]$ is an equilateral triangle. Using this and Jordan’s theorem, (with some patience) one verifies that there are pairs $(a,b), (a',b')$ such that $c, c'$ are respectively in the interior and in the exterior of $\Gamma$. Moving the pair $(a,b)$ to $(a',b')$, by Jordan’s theorem again one obtains a nice equilateral triangle $[a,b,c'']$ with all vertices on $\Gamma$.

The procedure which works for equilateral triangles also works for any kind of triangle (you just have to use the ordering on edge lengths carefully in each step, and after some trial and error you find the right combination).
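The $2$-valued function $c$ and the intermediate-value argument can be made concrete numerically. The sketch below is my own (the ellipse is an arbitrary test curve): pin $a$, slide $b$ along the curve, and bisect on the sign of $c$ (inside versus outside) until $c$ lands on the curve too.

```python
import math

A, B = 2.0, 1.0                       # test curve: the ellipse x^2/A^2 + y^2/B^2 = 1

def point(t):
    return (A * math.cos(t), B * math.sin(t))

def f(p):
    """Ellipse equation: negative inside the curve, positive outside."""
    x, y = p
    return x * x / (A * A) + y * y / (B * B) - 1.0

def third_vertex(a, b):
    """One of the two points c making [a, b, c] equilateral
    (rotate the edge a -> b by 60 degrees around a)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    c60, s60 = math.cos(math.pi / 3), math.sin(math.pi / 3)
    return (a[0] + c60 * dx - s60 * dy, a[1] + s60 * dx + c60 * dy)

a = point(0.0)                        # pin one vertex of the triangle

def g(t):                             # sign of c: inside (< 0) or outside (> 0)
    return f(third_vertex(a, point(t)))

lo, hi = 0.1, math.pi
assert g(lo) < 0 < g(hi)              # c crosses the curve somewhere in between
for _ in range(60):                   # bisection: the intermediate value step
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)

b, c = point(lo), third_vertex(a, point(lo))
print(c, f(c))                        # f(c) is numerically zero: c is on the curve
```

The triangle is equilateral by construction (a rotation preserves the edge length), and $a, b$ sit on the curve by construction, so the bisection only has to put $c$ there.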

The square case

This case is really square. The problem with the above continuity methods based on Jordan’s theorem seems to be that, as 2 vertices wander on $\Gamma$, the other two may pass independently from one connected component of $\mathbb R^2\setminus \Gamma$ to the other.

1) If you suppose the curve is smooth (say $C^1$), then you might try to take an inscribed “half-square” triangle $[a,b,c]$ (i.e. isosceles with a $\pi/2$ angle at $b$), which exists by the above theorem, and try to perturb it while leaving it inscribed, watching what happens to the $4$th vertex $d$ of the square. To help intuition, suppose that near $a$ and $b$ the curve is a straight line segment (recall you assumed $\Gamma\in C^1$, so this is not far from true). Then perturbations of $a,b$ give a $2$-dimensional set of perturbations of $c$ if and only if the lines representing $\Gamma$ near $a,b$ do not form an angle of $3\pi/4$. If this is the case, then by the inverse function theorem one can find for each $a$ a $b$ such that the point $c$ completing the half-square is still on $\Gamma$.

Then you could try to reason as before, asking yourself whether there are two squares $[abcd], [a',b',c',d']$ with $a,b,c, a',b',c'\in\Gamma$ and $d, d'$ in different connected components of $\mathbb R^2\setminus \Gamma$. You can construct them as follows:

- if $a,b\in \Gamma$ are very close, then there are 2 vertices completing the square inside $\Gamma$ (this uses the regularity of $\Gamma$). Then you can move $b$ along $\Gamma$ until the first time one of $c,d$ touches $\Gamma$.

- if $a',b'\in \Gamma$ realize the diameter of $\Gamma$, then you can move $b'$ towards $a'$ until the first hitting, as above, and you get the second square.

Now you move $a, a'$ closer together until they become the same point, and let $b,c,b',c'$ follow them, staying on $\Gamma$ and forming half-squares as discussed earlier, while $d,d'$ follow them, completing at each moment the respective squares. During this whole procedure, by Jordan’s theorem, $d, d'$ have kept to opposite components of the complement of $\Gamma$. However, nothing says there is a general way of moving one onto the other while $3$ vertices of the squares stay on the curve the whole time.

2) Let’s use the $3\pi/4$ angle found above to try to produce a counterexample. The first idea I get is to start from a regular octagon and move two adjacent sides parallel to themselves towards the center, leaving all the other sides supported on the same lines. Exercise: find a square inscribed in this figure!

(There is no square of the original octagon that “survives” in the new figure. However:)

3) Lemma: every convex curve has an inscribed square. Just take any small square completely inside and “blow it up”, imagining that the curve represents a “rigid hole” outside of which the square cannot pass (but it can slide, if this helps it continue to expand). At a certain point you cannot blow any more: the square has at least 3 vertices on the curve and has reached its maximal size. Now suppose 3 vertices $a,b,c$ of $[abcd]$ are on $\Gamma$ and $d$ isn’t. This implies that the tangent cones to the interior of $[abcd]$ at $a$ and $c$ do not contain the sides $[ad],[cd]$, else by convexity $d\in\Gamma$. Then (it is easy to see that) it is not true that you cannot blow any more. $\square$
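The lemma can be sanity-checked numerically on a concrete convex curve. The following sketch is my own (the ellipse and all thresholds are arbitrary choices): grid-search over two adjacent square vertices on the curve and measure how far the other two vertices of the completed square are from it.

```python
import math

A, B = 2.0, 1.0                        # test curve: the ellipse x^2/A^2 + y^2/B^2 = 1

def point(t):
    return (A * math.cos(t), B * math.sin(t))

def off_curve(p):
    """Squared residual of the ellipse equation at p (0 iff p is on the curve)."""
    x, y = p
    return (x * x / (A * A) + y * y / (B * B) - 1.0) ** 2

def square_residual(t1, t2):
    """Put two adjacent square vertices on the curve at parameters t1, t2,
    complete the square, and measure how far the remaining two vertices
    are from the curve.  Tiny squares are rejected as degenerate."""
    (x1, y1), (x2, y2) = point(t1), point(t2)
    dx, dy = x2 - x1, y2 - y1
    if dx * dx + dy * dy < 0.25:
        return float("inf")
    p3 = (x2 - dy, y2 + dx)            # rotate the edge by 90 degrees
    p4 = (x1 - dy, y1 + dx)
    return off_curve(p3) + off_curve(p4)

N = 400
grid = [2 * math.pi * k / N for k in range(N)]
best = min((square_residual(t1, t2), t1, t2) for t1 in grid for t2 in grid)
print(f"best residual on the grid: {best[0]:.2e}")
```

By symmetry the ellipse really does have an inscribed square, with vertices $(\pm s,\pm s)$ where $s^2(1/A^2+1/B^2)=1$, and the grid search finds a nearby approximate one; the residual only shrinks toward zero as $N$ grows, so this is evidence, not a proof.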

### Hi!

I have started this blog in order to keep some track of what I’m doing, and to have a way to easily share it with other people. :)