Well-Posed Problems

August 23, 2009

To continue along with the mathematical analogies that I built in my previous essay on iterative processes, I thought I would examine and riff on the concept of ‘well-posed problems.’ To be honest, this isn’t something I have much experience with in mathematics, because in every math class I’ve taken, the problems are ALWAYS well-posed. That is, they’re always something that can be solved.

But life isn’t like math class. Typically, the majority of problems in my life aren’t well-posed. Heck, half of them aren’t even POSED. And if that’s the case, how can I expect to get anywhere with them?

To go back to the math metaphor again, what does it even mean for a problem to be well-posed? It means that a solution exists. That you can get there from here. (Strictly speaking, Hadamard’s definition also asks that the solution be unique and depend continuously on the data, but existence is the part I care about here.) So, on the flip-side, what does it mean if a problem isn’t well-posed? What does it mean for a problem to be ill-posed? It means that there IS NO SOLUTION. No matter how hard you try, you can’t get there from here. You can scratch your head, jump on one foot while patting your belly, or even bang your head against the wall continuously. None of that will get you to the solution. Because quite simply, that solution doesn’t exist.

And in life? You can go about life half asleep. That’s analogous to always dealing with ill-posed problems. And yes, it has its advantages. Like the fact that you don’t have to think quite so much. Because thinking is, like, hard. And if all you ever do is try to solve one ill-posed problem after another, you’ll always feel like you’re doing something. And if just doing ‘something’ is what you’re after, then damn if you’re not doing a good job!

But if you’re interested in doing anything more than ‘something,’ say ‘something in particular,’ then it’s going to require that you find a well-posed problem. Although life has the slight advantage over math in that sometimes you might start with an ill-posed problem and still get an answer (even a ‘correct’ answer, if that terminology has any meaning in the real world). But then you’re just being lucky, you’re not being smart. And although those two domains are not strictly exclusive, one is well within my control while the other is, almost by definition, not.

So, to move things forward you need to pose the ‘thing’ well. I guess that’s analogous to setting up real, honest, concrete goals. And I would imagine that setting those goals must also involve writing the thing down. Because the mind has a silly way of making it seem like you ‘know’ and ‘remember’ things, just because you happen to be able to keep a fuzzy concept of those things in your mind’s eye for a few seconds. Like this whole ‘well-posed problems’ analogy. It was stuck in my mind when I first thought of it, but it certainly didn’t look anything like it does now.

And I don’t really have any concrete details yet on how to tell whether a ‘life’ problem is well-posed. That will require more thinking (eek!) on my part.

Time to pose some problems. And iterate.

I was reading something by Eliezer today. And he mentioned something about morality being an iterative process. This got me thinking about iterative processes and life in general. I mean, I spent the greater part of this summer working with an iterative process (namely, Levenberg-Marquardt) for minimizing a certain cost function in order to fit a model to data. And if I spent so much time with it, it would only stand to reason that I SHOULD be able to apply that idea somehow to real life. I mean, it isn’t a one-to-one correspondence. I’m not literally going to use some mathematical algorithm to ‘optimize’ my life. Though that would really be cool.
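For the curious, the core of Levenberg-Marquardt is simple enough to sketch by hand. Here’s a toy illustration of my own (not anything from that summer project): fitting a one-parameter exponential model to made-up data, where every function name and number is invented for the example. The damping parameter lam blends between a cautious gradient-descent-like step (large lam) and an aggressive Gauss-Newton step (small lam).

```python
import math

def levenberg_marquardt_1d(model, dmodel, xs, ys, k0, iters=50):
    """Fit a one-parameter model to data by minimizing the sum of
    squared residuals with a scalar Levenberg-Marquardt update."""
    k, lam = k0, 1e-3

    def cost(k):
        return sum((y - model(x, k)) ** 2 for x, y in zip(xs, ys))

    for _ in range(iters):
        # Residuals r_i = y_i - model(x_i, k) and their derivative in k.
        r = [y - model(x, k) for x, y in zip(xs, ys)]
        J = [-dmodel(x, k) for x in xs]
        g = sum(Ji * ri for Ji, ri in zip(J, r))  # gradient term J^T r
        H = sum(Ji * Ji for Ji in J)              # Gauss-Newton term J^T J
        step = -g / (H + lam)                     # damped Newton-ish step
        if cost(k + step) < cost(k):
            k, lam = k + step, lam / 10           # accept: act more like Newton
        else:
            lam *= 10                             # reject: act more like gradient descent
    return k

# Hypothetical data generated from y = exp(0.7 * x).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * x) for x in xs]
k_fit = levenberg_marquardt_1d(
    model=lambda x, k: math.exp(k * x),
    dmodel=lambda x, k: x * math.exp(k * x),
    xs=xs, ys=ys, k0=0.1,
)
```

Real work would, of course, lean on a library implementation (SciPy’s `scipy.optimize.least_squares` offers Levenberg-Marquardt via `method='lm'`) rather than a hand-rolled loop like this.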

Here’s my thought process on this: for some iterative method (take Newton’s method in one dimension, since that’s pretty standard fare in any Calc I course and is pretty easy to think about), the goal is to find some value for x, call it x*, that stands as the ‘answer’ to some problem. Usually, at least with Newton’s method, that x* represents a solution, or zero, of some function. But you could just as easily use Newton’s method to minimize a 1D function by finding the zero of the derivative of the function (say the original function is something nasty that you can’t solve analytically, like f(x) = x * exp(x) - 5*x^2). But that’s just a random digression into numerical analysis.
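Here’s roughly what that digression looks like in code, a sketch of my own using the same f(x) = x * exp(x) - 5*x^2 from above: to find a minimum, run Newton’s method on the zero of f′ (so f″ plays the role of the derivative in the update). The function names and starting guess are mine, invented for illustration.

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton's method: find x* with f(x*) = 0, starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge; try a better initial iterate")

# To minimize f(x) = x*exp(x) - 5x^2, find a zero of its derivative.
fprime = lambda x: (1 + x) * math.exp(x) - 10 * x   # f'(x)
fsecond = lambda x: (2 + x) * math.exp(x) - 10      # f''(x)

x_star = newton(fprime, fsecond, x0=2.0)  # initial guess near the minimum
```

Since f″(x*) comes out positive, the stationary point found here really is a local minimum, not a maximum.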

My point is, with all these methods, the first thing you have to do is determine an initial iterate. With a high-power method like Newton’s method (which has quadratic convergence, for Jebus’ sake!), you want your initial guess to be within the neighborhood of the correct answer. Otherwise, you’re going to diverge like whoa and never get to the answer you were looking for. If you want to get there more slowly, but also more surely, you might want to use something like the bisection method. That only has linear convergence, but you’ll DEFINITELY get there.
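For contrast, here’s a sketch of the bisection method applied to the same f′ as in the Newton example (again my own illustrative code). It only halves the bracket each step, which is why the convergence is linear, but as long as the endpoints straddle a sign change it cannot miss:

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Bisection: slow but sure. Requires f(a) and f(b) to have
    opposite signs, which guarantees a root inside [a, b]."""
    fa = f(a)
    assert fa * f(b) < 0, "need a sign change on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m               # root is in the left half
        else:
            a, fa = m, f(m)     # root is in the right half
    return (a + b) / 2

# Bracket the minimum of x*exp(x) - 5x^2 by finding where f' crosses zero.
fprime = lambda x: (1 + x) * math.exp(x) - 10 * x
root = bisect(fprime, 1.0, 3.0)
```

No neighborhood-of-the-answer requirement here: any bracket with a sign change will do, which is exactly the slow-but-sure trade-off the paragraph above describes.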

Again, that’s a bit of a divergence (but NOT del dot F!) from the main thrust of what I’m thinking, which is this: you need that initial iterate to get the process started. It doesn’t matter how good the method is; if you don’t pick a first guess and then plug it into the technique, you stand 0% chance of getting to where you’re going. Which I guess is just a really fancy-ass way of saying, “You miss 100% of the shots you don’t take.” But somehow at the moment that I thought of this analogy, it sounded really profound.

Anyway, what’s the takeaway message? In numerical analysis, as in life, the main thing you can do to make sure something gets done is to take the first step, pick the initial iterate, and then see what happens. It might be the case that you picked something that doesn’t fulfill any of the conditions for convergence (damn you, fixed point method!). In that case, you note the failure of the method (or the iterate), evaluate your situation, and try again. And again. And again. You may have to try a hundred different iterates and a dozen different methods before you converge to the right answer. But that’s okay. You’re living anyway. Might as well make the most of it and ride the gradient to that optimal solution.
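That fixed-point gripe is easy to demonstrate with a toy example of my own: rewrite the same equation x^2 = 2 in two different ways as x = g(x), and one choice of g bounces between 1 and 2 forever while the other contracts straight to sqrt(2). Same problem, same starting iterate, and only the second g satisfies the convergence condition (|g′| < 1 near the fixed point).

```python
def fixed_point(g, x0, iters):
    """Iterate x_{n+1} = g(x_n) and return the final iterate."""
    x = x0
    for _ in range(iters):
        x = g(x)
    return x

# Two rewrites of x^2 = 2 as a fixed-point problem x = g(x):
bad = fixed_point(lambda x: 2 / x, x0=1.0, iters=50)             # oscillates 1, 2, 1, 2, ...
good = fixed_point(lambda x: (x + 2 / x) / 2, x0=1.0, iters=50)  # contracts to sqrt(2)
```

When the first one fails, you do exactly what the paragraph above says: note the failure, switch methods (or rewrite g), and try again.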

And take heed: the first extremum you find might not be global. You may need some sort of momentum term built into your algorithm to make sure you don’t get stuck in a rut. But all things considered, that’s rarely the problem with your problems. More often than not, it’s the simple fact that you don’t seem to want to get started out of fear that you’ll ‘do it wrong.’ But there is no wrong iterate other than no iterate. Anything you do will give you feedback on what you could be doing better. Except doing nothing. That gives you feedback too, but all it tells you is that you should be doing something!
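For concreteness, here’s what a momentum term looks like in its simplest (heavy-ball) form, a minimal sketch of my own rather than any particular library’s API. The velocity remembers past gradients, which is what can carry the iterate through a shallow rut, though whether it actually escapes any given rut depends entirely on the tuning of lr and beta.

```python
def momentum_descent(grad, x0, lr=0.1, beta=0.9, iters=500):
    """Gradient descent with a heavy-ball momentum term:
    v accumulates a decaying memory of past gradients."""
    x, v = x0, 0.0
    for _ in range(iters):
        v = beta * v - lr * grad(x)  # update the velocity
        x = x + v                    # move with the accumulated velocity
    return x

# Minimize the simple convex f(x) = x^2 (gradient 2x) from x0 = 5.0.
x_min = momentum_descent(lambda x: 2 * x, x0=5.0)
```

On this convex toy problem momentum just spirals into the minimum; on a bumpier landscape the same velocity term is what gives you a chance of coasting past the first shallow extremum.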

So start iterating!

Sidenote: I wonder if I’m going to start thinking more in these sorts of terms the more I get into applied math. I would find that both amusing and terrifying. This is both a new toolkit of metaphors to look at life with and a scary way to sound ridiculous to the rest of humanity. Let’s hope I do a lot of the former and not much of the latter.