We present and compare two approaches to the task of summarizing evaluative arguments. The first is a domain-independent approach based on sentence extraction, while the second is a weakly domain-dependent approach based on language generation. We evaluate these approaches in a user study and find that they perform equally well quantitatively. Qualitatively, however, they perform well for different but complementary reasons. We conclude that an effective method for summarizing evaluative arguments must synthesize the strengths of the two approaches.