Static analysis tools attempt to detect faults in code without executing it. Understanding the strengths and weaknesses of such tools, and performing direct comparisons of their effectiveness, is difficult: it requires either manual examination of differing warnings on real code or the bias-prone construction of artificial test cases. This paper proposes a novel automated approach to comparing static analysis tools, based on producing mutants of real code and comparing detection rates over these mutants. In addition to making tool differences quantitatively observable without extensive manual effort, this approach offers a new way to detect and fix omissions in a static analysis tool's set of detectors. We present an extensive comparison of three smart contract static analysis tools, and show how our approach allowed us to add three effective new detectors to the best of these. We also evaluate popular Java and Python static analysis tools and discuss their strengths and weaknesses.
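The core idea, comparing tools by the fraction of mutants whose warnings differ from the original program's, can be illustrated with a deliberately tiny sketch. Everything here is hypothetical and invented for illustration: `mutants` implements a single toy mutation operator (operand duplication), and `self_cmp_tool` / `silent_tool` stand in for real analyzers with and without a relevant detector; actual mutation engines and static analyzers are far richer than this.

```python
import re

# Matches a simple equality comparison between two identifiers, e.g. "x == y".
CMP = re.compile(r'\b(\w+)\s*==\s*(\w+)\b')

def mutants(source):
    """Operand-duplication mutants: rewrite each 'a == b' as 'a == a'."""
    out = []
    for m in CMP.finditer(source):
        out.append(source[:m.start()] + f"{m.group(1)} == {m.group(1)}" + source[m.end():])
    return out

def self_cmp_tool(source):
    """Toy 'analyzer' whose only detector flags self-comparisons (x == x)."""
    return re.findall(r'\b(\w+)\s*==\s*\1\b', source)

def silent_tool(source):
    """Toy 'analyzer' that lacks a self-comparison detector."""
    return []

def detection_rate(tool, original, muts):
    """Fraction of mutants on which the tool reports more warnings than on the original."""
    base = len(tool(original))
    return sum(1 for m in muts if len(tool(m)) > base) / len(muts) if muts else 0.0

code = "if x == y:\n    total = total + 1\n"
ms = mutants(code)
print(detection_rate(self_cmp_tool, code, ms))  # 1.0: its detector kills the mutant
print(detection_rate(silent_tool, code, ms))    # 0.0: the missing detector is exposed
```

The gap between the two rates is exactly the signal the paper exploits: a mutant class that one tool kills and another does not points to a missing detector in the weaker tool, without any manual triage of warnings.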