Overly Sharpened Blog

They never told me what happens when you sharpen the saw too much...

Other Games of Life

I've been playing with duplicating some of the results on the emergence of cooperation in variations of the iterated prisoner's dilemma.

An interesting (if somewhat trivial) thing to note is how easily behaviour emerges that is similar to, but distinct from, the Game of Life. A small grid randomly populated only with Defectors and Cooperators, for instance, often produces glider-like patterns.

My goal is to implement some dynamic programming approach to generating strategies, and the attached source betrays that goal with some complexity that is unnecessary for the behaviour I'm talking about, but anyways.
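
For reference, here's a minimal sketch (not the attached source) of the kind of spatial prisoner's dilemma that produces those glider-like patterns, along the lines of Nowak and May: every cell plays its eight neighbours, then adopts the strategy of the best-scoring cell in its neighbourhood. The payoff values and grid size are just assumptions for illustration.

    import random

    N = 30                              # grid size (assumed)
    T, R, P, S = 1.9, 1.0, 0.0, 0.0     # temptation, reward, punishment, sucker (assumed payoffs)

    def payoff(me, other):
        if me == 'C':
            return R if other == 'C' else S
        return T if other == 'C' else P

    def neighbours(i, j):
        # Moore neighbourhood on a wrapping grid
        return [((i + di) % N, (j + dj) % N)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]

    def step(grid):
        # Every cell plays all of its neighbours...
        scores = {(i, j): sum(payoff(grid[i][j], grid[x][y])
                              for x, y in neighbours(i, j))
                  for i in range(N) for j in range(N)}
        # ...then copies the strategy of the best-scoring cell nearby (itself included).
        new = [row[:] for row in grid]
        for i in range(N):
            for j in range(N):
                bi, bj = max(neighbours(i, j) + [(i, j)], key=lambda c: scores[c])
                new[i][j] = grid[bi][bj]
        return new

    grid = [[random.choice('CD') for _ in range(N)] for _ in range(N)]
    for _ in range(100):
        grid = step(grid)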

Minting is Rare

Or, "How to resolve revocation with an immutable capability-secure world."

A path is stored not by minting a new reference to the target (a hardlink), but rather by storing the path itself (a symlink). Each segment of a path represents a node that serves as a proxy for the rest of the path (onion routing).
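
To make that concrete, here's a hypothetical sketch (all names invented here): a link stores the path rather than a minted reference, and resolution walks the path one segment at a time, with each node proxying the remainder. Revoking a segment then breaks every path routed through it without touching the target.

    class Node:
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or {}

        def resolve(self, path):
            """Resolve a stored path (a tuple of segment names) relative to this node."""
            if not path:
                return self
            head, rest = path[0], path[1:]
            child = self.children.get(head)
            if child is None:
                # Revoking a segment breaks every path routed through it.
                raise KeyError(head)
            # Delegate the remainder of the path to the next node, onion-style.
            return child.resolve(rest)

    # A "symlink" is just the stored path; nothing new is minted.
    root = Node('root', {'docs': Node('docs', {'draft': Node('draft')})})
    link = ('docs', 'draft')
    target = root.resolve(link)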

Now, where does this leave me if I want permissions to be baked into a capability, and generally immutable?

On the one hand, I can now manage mutability in a sane way in the UI layer, because the size of the local neighborhood is now under the control of the node. This is a bit of a return to the heavier-weight approach to linking that I was originally considering, although it still only requires write access to one side of the link, plus mint access (which could be a limited form of write in the mutable case, but it doesn't have to be).

On the other hand, immutability is really really nice. Specifically, being able to decode the permissions and determine if an operation is allowable offline is a big win, as it allows for some fairly aggressive caching even in the worst case, and in the best case may actually reduce the time-complexity through memoization.
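
As a rough illustration of that last point (assuming permissions are baked into an immutable capability; the names below are invented), the check becomes a purely local decode that can be memoized:

    from collections import namedtuple
    from functools import lru_cache

    # An immutable capability: the stored path plus frozen permission bits.
    Capability = namedtuple('Capability', ['path', 'permissions'])

    @lru_cache(maxsize=None)
    def allows(cap, operation):
        # Decoding is purely local -- no lookup against a mutable store --
        # so the answer can be cached for the lifetime of the capability.
        return operation in cap.permissions

    cap = Capability(path=('docs', 'draft'), permissions=frozenset({'read'}))
    assert allows(cap, 'read') and not allows(cap, 'write')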

Notes on the Implementation of a Blog

Publishing is currently a three-step process: write an entry perfectly, publish it to a staging point, and then commit the contents of that staging point to the real blog. This is causing me a small amount of grief:

  • Links don't work on the staging site

  • Comments are published to the staging site rather than the front page (this is not quite a feature)

Approaches I've considered to fix this:

  • Just use Blogger as intended

    Not going to happen because of principles I'll explain some other time

  • Rewrite the content as Blogger uploads it to the staging site

    A bit more tempting, although it's brittle. More importantly, it opens me up to all sorts of security vulnerabilities that I'd rather avoid in what should be a rock solid "deploy" script.

  • Replace Blogger with something I write myself

    Now we're getting somewhere!

However, understand that when I say "replace Blogger", I'm not starting from square one. Some time ago, I actually used a framework of my own invention based on a capability security model inspired by Richard Kulisz (don't ever tell me I didn't give credit where credit is due :p). The problem was that I wasn't following my own advice, and didn't have a backup of the material in a usable form when the inevitable happened. Ah well, live and learn; it wasn't that fun to work with anyway.

The idea here is to do it again, but with a focus on building something on top of that architecture that ends up feeling more like a blog than a wiki.

So, how do you create a blog on top of a CSM architecture? Why, I'm glad you asked...

A question about PyPy's JIT

Although I'm sure this is already obvious to the PyPy people, I'm quite interested to see how close they are to a system that would be capable of efficiently executing interpreters written on top of the existing system.

PyPy is a Python implementation written in Python. The translation and JIT architecture (as I understand it) uses manually inserted hints [pdf] to indicate which variables belong to the interpreter versus the interpreted program, so that the JIT can accurately determine when the interpreted program has looped (as opposed to the interpreter itself). This is important because, in general, optimizing the code of the interpreter itself has fairly limited gains: you get a faster interpreter, but execution is still interpreted. The hints allow the system to distinguish the accidental work of interpretation from the essence of what is being interpreted.
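
For the curious, this is roughly what those hints look like in a toy RPython interpreter. JitDriver, the greens/reds split, jit_merge_point and can_enter_jit are the real rlib.jit interface; the bytecode and variable names are made up for illustration.

    # greens: the position in the *interpreted* program, so the JIT can tell
    # when that program (and not merely the interpreter) has looped.
    # reds: everything else the dispatch loop touches.
    from rpython.rlib.jit import JitDriver   # older releases spell this pypy.rlib.jit

    driver = JitDriver(greens=['pc', 'bytecode'], reds=['acc'])

    DECREMENT, JUMP_IF_NONZERO, HALT = 0, 1, 2

    def interpret(bytecode, acc):
        pc = 0
        while True:
            driver.jit_merge_point(pc=pc, bytecode=bytecode, acc=acc)
            op = bytecode[pc]
            if op == DECREMENT:
                acc -= 1
                pc += 1
            elif op == JUMP_IF_NONZERO:
                target = bytecode[pc + 1]
                if acc != 0:
                    if target < pc:
                        # A backward jump: the *interpreted* program has looped.
                        driver.can_enter_jit(pc=target, bytecode=bytecode, acc=acc)
                    pc = target
                else:
                    pc += 2
            else:  # HALT
                return acc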

But it's not recursive. Even though the runtime has the required logic, it is missing the hints, and so an interpreter running on top of this stack will run faster, but it won't make the jump to direct execution.

The question that intrigues me is this:

Would it be possible to generate those hints dynamically to make the gains available to higher level interpreters?

Wait, I know what happens next...

Interesting service being launched today by Wolfram Research

Any teenager is capable of making, from scratch, a machine that can seriously injure a person. The difference between a motor that you can stop with a bare hand and one that will simply take your hand off can be remarkably incremental.

I wonder how close we have to get to that threshold in order to have enough experience to see how close we are.

No, I don't think the imminent launch implies an imminent hard takeoff. I just find myself wondering if we'll recognize the potential of the technique, or whether we'll just make two big lumps of fissile uranium by accident ("Oooo! Shiny!") and get on with the business of bashing them together.

Kelly criterion

Aside from the obvious issue of compensating for errors, I don't have a good intuitive understanding of why one wouldn't want to keep a bet as close to the Kelly bet as possible.

The usual complaint seems to be that you want to minimize your downside risk in the short term, while the Kelly bet is concerned only with maximizing long-term gains.
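
For concreteness, the Kelly fraction for a bet paying b:1 with win probability p is f* = p - (1 - p)/b. Here's a quick toy simulation (my own made-up numbers, nothing more) comparing bankroll growth at half, full, and double Kelly:

    import random

    def kelly_fraction(p, b):
        """Optimal fraction of bankroll for a bet paying b:1 with win probability p."""
        return p - (1.0 - p) / b

    def simulate(fraction, p=0.55, b=1.0, rounds=10000, seed=1):
        rng = random.Random(seed)
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * fraction
            bankroll += stake * b if rng.random() < p else -stake
        return bankroll

    f = kelly_fraction(0.55, 1.0)        # 0.10 for a 55% even-money bet
    for frac in (0.5 * f, f, 2.0 * f):   # half, full, and double Kelly
        print(frac, simulate(frac))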

What doesn't sit well:
  • "Short term" is already well known to be a bad measuring stick. You measure your progress over months and years, not days and weeks. Optimizing for the short term isn't much different than sitting down at a $1-$2 no limit poker game with your last $50, hoping that you can play so conservative that you somehow stave off the inevitable ruin.

  • I suspect that the simple explanations are applying a rule of thumb to model the needs of a career player: the competing constraints of paying the bills while also growing the bankroll. My intuition is that including this explicitly, or baking the effect into an "effective bankroll" (instead of a fudge factor on the ideal bet), would give a better overall result, or at the very least a more intuitive explanation.

I need to think about this more.

Card-counting and evolutionary algorithms

I'm a bad blackjack player. Bad enough that I refrain from playing anywhere except a friendly home situation with no money at stake, where I definitively demonstrate how bad I actually am.

That said, I find myself intrigued:

Given a population of players based on existing counting techniques, with crossover and mutation, and a fitness function that included optimizing for function size and minimum worst-case runtime memory usage, what sort of counting technique might we turn up?
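
Just to pin the idea down, here's a rough sketch of the experiment, assuming a counting technique is encoded as per-rank count weights (Hi-Lo being +1 for 2-6, 0 for 7-9, -1 for 10-A). The fitness function below is a placeholder: a real one would simulate blackjack EV and penalise memory usage, as described above.

    import random

    RANKS = 13  # 2..10, J, Q, K, A

    HI_LO = [1] * 5 + [0] * 3 + [-1] * 5   # reference scheme: +1 for 2-6, 0 for 7-9, -1 for 10-A

    def random_scheme(rng):
        return [rng.choice([-2, -1, 0, 1, 2]) for _ in range(RANKS)]

    def crossover(a, b, rng):
        cut = rng.randrange(1, RANKS)
        return a[:cut] + b[cut:]

    def mutate(scheme, rng, rate=0.1):
        return [rng.choice([-2, -1, 0, 1, 2]) if rng.random() < rate else w
                for w in scheme]

    def fitness(scheme):
        # Placeholder: reward agreement with Hi-Lo, penalise the number of
        # distinct weights as a crude stand-in for "function size".
        accuracy = -sum((w - h) ** 2 for w, h in zip(scheme, HI_LO))
        complexity = len(set(scheme))
        return accuracy - complexity

    def evolve(pop_size=50, generations=200, seed=0):
        rng = random.Random(seed)
        pop = [random_scheme(rng) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]
            children = [mutate(crossover(rng.choice(survivors),
                                         rng.choice(survivors), rng), rng)
                        for _ in range(pop_size - len(survivors))]
            pop = survivors + children
        return max(pop, key=fitness)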