Olsen points out that there is value in researching bold new user interface systems and danger in trying to evaluate such research using the same approaches taken with more incremental work. He doesn’t exactly provide slam-dunk alternative evaluation methodologies for such research, but he does enumerate a number of ways in which judging UI systems research can go awry. This checklist may in turn help reviewers to realize that the reason a square peg is not fitting into their round hole is not (necessarily) a problem with the peg.
As someone who has spent most of my career beholden to the market, I find this whole evaluation business to be strange. I don’t necessarily have a better idea for how to decide what’s “good research” and what’s not, but when you’re dealing with questions like “what’s the best way to interact with a general-purpose computing device,” objective measures are in short supply.