We present simple randomized and exchangeable improvements of Markov's inequality, as well as of the related Chebyshev and Chernoff inequalities (including Hoeffding's and Bernstein's). Our variants are never worse, and typically strictly more powerful, than the original inequalities, which can be recovered as simple corollaries. The proofs are short and somewhat elementary. We provide several simple statistical applications involving e-values, such as uniformly improving the power of universal inference (https://arxiv.org/abs/1912.11436, PNAS'20) and tighter nonparametric confidence intervals based on betting (https://arxiv.org/abs/2010.09686, JRSSB'23 discussion paper). This is joint work with Tudor Manole, a PhD student at CMU. A preprint is not yet on arXiv, but it builds on our previous work in https://arxiv.org/abs/2103.09267 (IEEE Trans. Info. Theory, 2023). The talk requires only a beginning graduate student's understanding of statistics and probability.
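The abstract does not spell out the form of the randomized variant, so the following is a hedged sketch of one known randomized Markov inequality: for a nonnegative random variable X and an independent U ~ Uniform(0,1), P(X >= a*U) <= E[X]/a, i.e., the classical threshold a is replaced by the random threshold a*U while the bound is unchanged. The talk's exact statement may differ. A quick Monte Carlo check (the distribution, seed, and sample size are illustrative choices, not from the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a = 5.0

x = rng.exponential(1.0, size=n)  # nonnegative, E[X] = 1
u = rng.uniform(0.0, 1.0, size=n)  # independent randomizer

markov_bound = 1.0 / a             # E[X]/a = 0.2
p_plain = (x >= a).mean()          # classical Markov event
p_rand = (x >= a * u).mean()       # randomized event: larger probability,
                                   # yet bounded by the same E[X]/a

# The randomized event contains the classical one, so its probability
# is at least as large, while (up to Monte Carlo error) it still
# respects the same bound.
print(p_plain, p_rand, markov_bound)
```

Since {X >= a} is a subset of {X >= a*U}, the randomized inequality is a strictly stronger statement whenever U < 1 with positive probability; taking U = 1 recovers classical Markov as a corollary, consistent with the abstract's claim.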