Events since then have been a complete vindication for the IS-LM types, Krugman and the other Keynesians included.
If you are curious about how economists actually work nowadays and how we would approach the IS-LM model in particular, you might be interested in this post.
There is a whole host of problems with the IS-LM model that make it unworkable: it does not take foresight seriously, it lacks a production sector and a labor market, and, obviously, government spending does not materialize out of thin air. Something a professor of mine did for bachelor students was to combine the IS-LM model with a production sector and a labor market, and to make sure the government followed a budget. The result still has a lot of problems, but you can at least ask how well it fits the data. The trouble is that your primary sources of cyclical behavior are monetary shocks and spending shocks, and that is a HUGE problem. Why? Because it means the cyclical components of investment and consumption will move in opposite directions, whereas a robust feature of macroeconomic data in the US, as well as abroad, is that this correlation is positive. That is the essence of the Barro-King curse.
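The sign flip is easy to see in a textbook linear IS-LM. A minimal sketch (all parameter values below are illustrative, not calibrated to anything): with only government-spending shocks, a rise in G pushes output and the interest rate up, so consumption rises while investment is crowded out, and the two move in opposite directions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative textbook parameters (made up for this sketch)
a, b, T_tax = 100.0, 0.6, 50.0   # consumption: C = a + b*(Y - T)
e, d = 150.0, 20.0               # investment:  I = e - d*r
k, h, M = 0.5, 10.0, 200.0       # money demand: M = k*Y - h*r

def solve(G):
    # IS: Y = a + b*(Y - T) + e - d*r + G   =>  (1-b)*Y + d*r = a - b*T + e + G
    # LM: M = k*Y - h*r                     =>  k*Y - h*r = M
    A = np.array([[1 - b, d], [k, -h]])
    rhs = np.array([a - b * T_tax + e + G, M])
    Y, r = np.linalg.solve(A, rhs)
    return a + b * (Y - T_tax), e - d * r   # (C, I)

# Feed in only spending shocks and look at the C-I correlation
G_shocks = 80.0 + rng.normal(0, 5.0, size=500)
C, I = np.array([solve(G) for G in G_shocks]).T
print(np.corrcoef(C, I)[0, 1])   # negative: C and I move in opposite directions
```

Under money-supply shocks instead, a fall in the interest rate would raise both C and I together, flipping the sign; that is why the mix of shocks a model relies on matters so much for this one statistic.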
If you are curious, this problem also plagued many New Keynesian DSGE models until the work of Ascari, Phaneuf, and Sims (2019). For how well these models fit the data, see:
APS March 2019.pdf
On page 38, they show what we call a cross-correlogram. Essentially, you take variables (Y := GDP, C := consumption, I := investment, L := labor) and filter out the trend components; if I am not mistaken, they used growth rates. Once you have the detrended data, you compute correlations of the type COR(Y(t), C(t-k)) with k = 0, ..., 4, for example. In other words, you want your model to be correct not only about the sign of how Y and C move AT THE SAME TIME; you also want the MAGNITUDE to be correct, AND you want this to hold for lagged values of C. You do this for all combinations of variables in the data. Then, because DSGE models are essentially systems of stochastic difference equations, you can simulate data: you simulate, say, a thousand histories of exactly the same length as the data, compute those statistics in each of them, and report the averages across your thousand runs as the model-implied, or theoretical, values. If your model is good, those averages should be close to the data.
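The whole procedure fits in a few lines. In the sketch below, the "model" is just a toy stand-in for a DSGE simulator (a persistent AR(1) shock driving two trending series, with made-up coefficients); the detrending, cross-correlation, and averaging steps are the same as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def growth_rates(x):
    # One way to filter out the trend component: log growth rates
    return np.diff(np.log(x))

def cross_correlogram(y, c, max_lag=4):
    # COR(y(t), c(t-k)) for k = 0, ..., max_lag
    return np.array([np.corrcoef(y[k:], c[:len(c) - k])[0, 1]
                     for k in range(max_lag + 1)])

T = 200  # same length as the observed sample

def simulate():
    # Toy stand-in for a DSGE simulator: a persistent AR(1) shock drives
    # both series, plus a deterministic trend in levels. All coefficients
    # here are invented for illustration.
    shock = np.zeros(T)
    eps = rng.normal(size=T)
    for t in range(1, T):
        shock[t] = 0.9 * shock[t - 1] + eps[t]
    log_y = np.cumsum(0.005 + 0.010 * shock + rng.normal(0, 0.005, T))
    log_c = np.cumsum(0.005 + 0.008 * shock + rng.normal(0, 0.005, T))
    return growth_rates(np.exp(log_y)), growth_rates(np.exp(log_c))

# Model-implied values: average the statistic over many simulated histories
runs = [cross_correlogram(*simulate()) for _ in range(1000)]
theoretical = np.mean(runs, axis=0)
```

Computing the same five numbers on the actual data, for every pair of variables, and laying them next to `theoretical` is exactly the comparison the figure makes.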
So, what the figure shows is (1) a typical NK DSGE, (2) their NK DSGE (the benchmark), and (3) the data. Anyone who knows what a correlation is is more than knowledgeable enough to understand that figure and, if you do check it out, for the first time in your life you will have a real idea of how well up-to-date economic theory fits the data. What you see in the figure is 100 correlations used to evaluate where the model does well and where it fails. The crux of the Barro-King curse is in the second row, third column: typical NK models understate those cross-correlations specifically.