Today I finished run #10 from yesterday, in which I tried whitening the data before doing inference. The results were inconclusive but not encouraging.
I need to clean up the code, which has gained some cruft from yesterday's tests. I'm removing the whitening option, since it doesn't help with synthetic data, but I'll keep the whitening function for testing on real data later.
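For reference, a minimal sketch of the whitening I have in mind, assuming simple per-dimension standardization (the actual function may do full covariance whitening instead, and the signature here is hypothetical):

    // Hypothetical sketch; not the actual whitening function in the codebase.
    // Standardizes each dimension to zero mean and unit variance.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // data[i][d] is sample i, dimension d.
    void whiten(std::vector<std::vector<double> >& data)
    {
        if (data.empty()) return;
        const std::size_t n = data.size();
        const std::size_t dims = data[0].size();

        for (std::size_t d = 0; d < dims; ++d)
        {
            double mean = 0.0;
            for (std::size_t i = 0; i < n; ++i) mean += data[i][d];
            mean /= n;

            double var = 0.0;
            for (std::size_t i = 0; i < n; ++i)
            {
                const double diff = data[i][d] - mean;
                var += diff * diff;
            }
            var /= n;

            const double sd = std::sqrt(var);
            for (std::size_t i = 0; i < n; ++i)
            {
                data[i][d] -= mean;
                if (sd > 0.0) data[i][d] /= sd;
            }
        }
    }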
Description: Introduce piecewise continuity constraints. Re-run and evaluate prediction error.
Method: Add a special case to Piecewise_linear_model. If b.size() == 1, assume a continuous model and infer the other b's on the fly. The initial model estimate uses the same code as the non-continuous model, and simply resizes b.x.b to 1 afterward.
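A sketch of the special case, assuming the model stores per-segment slopes m and right breakpoints x alongside the intercepts b (m and x are my shorthand here, not necessarily the real member names). Continuity at each breakpoint pins down every intercept after the first:

    // Sketch only; not the real Piecewise_linear_model.
    #include <cstddef>
    #include <vector>

    struct Piecewise_linear_model
    {
        std::vector<double> m;  // slope of each segment (assumed member)
        std::vector<double> x;  // right breakpoint of each segment (assumed member)
        std::vector<double> b;  // intercepts; b.size() == 1 => continuous model

        // Intercept of segment i. With b.size() == 1 only b[0] is free;
        // the rest follow from continuity at each breakpoint:
        //   m[j-1]*x[j-1] + b[j-1] = m[j]*x[j-1] + b[j]
        //   => b[j] = b[j-1] + (m[j-1] - m[j]) * x[j-1]
        double intercept(std::size_t i) const
        {
            if (b.size() > 1) return b[i];  // unconstrained model

            double bi = b[0];
            for (std::size_t j = 1; j <= i; ++j)
            {
                bi += (m[j - 1] - m[j]) * x[j - 1];
            }
            return bi;
        }
    };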
Results:
Ground Truth
------------
Training error: 88.6756
Prediction error: 90.3623
Initial Model
-------------
Training error: 88.6303
Prediction error: 90.4582
Best Model
----------
Training error: 88.6268
Prediction error: 90.4576
Discussion:
The prediction error of the estimated model is extremely close to that of the ground truth. The continuity constraint makes this a simpler model with fewer free parameters, which probably explains the improved prediction accuracy.
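To make "simpler" concrete: assuming k segments with given breakpoints (I'm not counting breakpoint parameters here), the unconstrained model has 2k free parameters (k slopes and k intercepts), while the continuous model has only k + 1, because continuity at each breakpoint determines every intercept after the first:

    \[
        m_i x_i + b_i = m_{i+1} x_i + b_{i+1}
        \quad\Longrightarrow\quad
        b_{i+1} = b_i + (m_i - m_{i+1})\, x_i ,
        \qquad i = 1, \dots, k-1 .
    \]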
It's notable that the best model is now different from the initial estimate. That's because the continuity constraints make the problem no longer solvable analytically: the initial estimate is no longer optimal by construction, so the subsequent search can improve on it.
Also notable is that the training error of the estimated models is better than that of the ground truth. This is expected with noisy data: the fit minimizes training error directly, so it can undercut the generating parameters by absorbing some of the noise.