For the most part, they do. That's known as "hindcasting," and it's done with virtually all forward-looking models. However, it's not an entirely fair test, because if a model is demonstrably wrong at hindcasting, it will be corrected (one way or another) until it gives accurate results. In effect, the historical record becomes part of the tuning data, so agreement with it no longer counts as an independent check.
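The fairer version of this test is to hold some of the historical record out of the tuning process entirely and only score the model against it afterward. Here is a minimal sketch of that idea using a synthetic temperature-like series and a plain least-squares trend fit; the data, cutoff year, and model are all illustrative stand-ins, not any particular climate model:

```python
import random

random.seed(0)
# Synthetic "historical" series: a linear trend plus noise,
# standing in for real observations.
years = list(range(1900, 2021))
temps = [0.01 * (y - 1900) + random.gauss(0, 0.1) for y in years]

# Circular check (what the text warns about): tune on ALL the data,
# then "verify" on the same data -- the fit is guaranteed to look good.
# Fairer check: fit only on data up to a cutoff, then score the model
# on the held-out years it never saw.
cutoff = 1980
train = [(y, t) for y, t in zip(years, temps) if y <= cutoff]
test = [(y, t) for y, t in zip(years, temps) if y > cutoff]

def fit_linear(points):
    """Ordinary least-squares fit of t = a*y + b."""
    n = len(points)
    sx = sum(y for y, _ in points)
    sy = sum(t for _, t in points)
    sxx = sum(y * y for y, _ in points)
    sxy = sum(y * t for y, t in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_linear(train)

# Out-of-sample error on the held-out years is the honest skill measure.
mse = sum((a * y + b - t) ** 2 for y, t in test) / len(test)
print(f"held-out mean squared error: {mse:.4f}")
```

The key point is that the post-1980 data here played no role in fitting the model, so a low held-out error is genuine evidence of skill rather than an artifact of tuning.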