Oh, I love the Repugnant Conclusion! I just had a discussion about this with my brother the other day; we cast the graph onto the TV and talked through the different comparisons. I'll give you what we eventually agreed on, and you can tell me what you think.

Utilitarianism, basically, says: "If there is a magic equation that determines the maximum happiness (or maximum average happiness, or whatever) for everyone, then sticking to that is morally optimal." And that makes a lot of sense, right? It starts from the precept that everyone's happiness matters, and it covers a lot of corner cases by giving clear answers to thorny questions like "Is it okay to cause one person to die in order to save five?" or "Even if torture is immoral, is it immoral to torture someone when doing so is absolutely guaranteed to save a million lives?" So utilitarianism is definitely a step forward from, for instance, the Golden Rule, which would trip over a lot of those questions. But utilitarianism trips over questions of its own, like the Repugnant Conclusion, or "Is it okay to brutally torture someone to death in order to prevent a sufficiently large number of people from having a speck of dust in their eye?"

So I won't presume to imagine an ideally moral society - I'm not sure I could improve on the idea of a philosopher-king or council anyway - but for myself, in my personal life, I practice what I call Utilitarianism But. All else equal, utilitarianism is optimally moral... BUT when its answer feels really wrong, the way the Repugnant Conclusion does, or torturing someone over dust specks does, I put the calculation on hold and go with what feels right, allowing myself and the rational people around me to override the magic equation. Before utilitarianism, the closest you could get to this might have been The Golden Rule But, and I think Utilitarianism But will serve until and unless we arrive at a more complete understanding of rational morality.
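If I had to sketch that decision procedure in code, it might look something like the toy Python below. Everything here is hypothetical: `utility` stands in for the magic equation, and `feels_really_wrong` stands in for the intuitive veto.

```python
# Toy sketch of "Utilitarianism But". Both helpers are hypothetical
# stand-ins, not real moral functions.

def utilitarianism_but(actions, utility, feels_really_wrong):
    # Set aside any option that trips the intuitive alarm...
    acceptable = [a for a in actions if not feels_really_wrong(a)]
    # ...and only if *everything* feels wrong, fall back to the raw math.
    candidates = acceptable or actions
    # Among what's left, do ordinary utilitarian maximization.
    return max(candidates, key=utility)
```

The interesting part is the ordering: intuition screens the options, but the equation still does the ranking among whatever survives the screen.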
You brought up a perfect instance of where utilitarianism falls short: the case where you "enslave a small population for the betterment of the masses" is morally permissible under pure utilitarianism. But imagine utilitarianism with a side constraint, one chosen so that adhering to it generally maximizes happiness. Every candidate action would then face two questions, in order: first, "does this violate the side constraint?", and second, among the actions that don't, "which one maximizes happiness?" The side constraint itself is up to your imagination, and I would love to hear what side constraints you would apply. What do you think?
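For concreteness, here's how that two-step procedure might look as a code sketch. The specific constraint (here, "no one gets enslaved") and the helper names are just illustrations I made up, not a claim about which side constraint is right.

```python
# Toy sketch of side-constrained utilitarianism: filter first, maximize second.
# happiness(action) and the constraint predicate are illustrative stand-ins.

def choose(actions, violates_constraint, happiness):
    # Step 1: rule out anything that violates the side constraint.
    permissible = [a for a in actions if not violates_constraint(a)]
    if not permissible:
        raise ValueError("no action satisfies the side constraint")
    # Step 2: among what's permissible, maximize happiness as usual.
    return max(permissible, key=happiness)

# Example with a made-up constraint: no action may enslave anyone.
actions = [
    {"name": "enslave the few", "enslaves": True,  "happiness": 100},
    {"name": "tax the many",    "enslaves": False, "happiness": 60},
]
best = choose(actions,
              violates_constraint=lambda a: a["enslaves"],
              happiness=lambda a: a["happiness"])
print(best["name"])  # -> "tax the many", despite the lower happiness score
```

Note the difference from the intuition-based veto in the comment above: here the constraint is fixed in advance and applied mechanically, rather than felt out case by case.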