Morality

Discussion for all topics (until the forum becomes large enough to justify splitting things up, this will be where all topics go)
Raininginsanity
Posts: 51
Joined: Tue May 23, 2017 4:50 am

Morality

Post by Raininginsanity » Wed May 31, 2017 12:15 am

Scott expressed some doubts about utilitarianism because of some of the conclusions it comes to, such as wireheading. What does the community think? Is Scott wrong to think wireheading is not a desirable state of being? Is utilitarianism based on hedonism the best moral system?

dylygs
Posts: 7
Joined: Thu May 25, 2017 5:18 pm

Re: Morality

Post by dylygs » Wed May 31, 2017 1:23 am

I'm not sure if this is "utilitarianism based on hedonism" or not (my guess is probably not), but I definitely agree with Scott that pure utilitarianism will lead to (for most people) undesirable situations. For example, I would not take the brain-in-a-vat-with-unlimited-heroin deal, since my utility includes whether or not I'm experiencing reality - similarly, I wouldn't take the VR-exactly-like-real-life deal. Obviously, that might change under edge cases or circumstances explicitly designed to challenge that idea, but that's what I value generally.

Interestingly, what comes to mind as a good analogy for what I'm thinking is Eliezer's coherent extrapolated volition, about which I've only read Nick Bostrom's summary near the end of Superintelligence. True morality seems more like a hazy, democratically-defined thing that we can derive from saying that it's what we would all want for ourselves and each other if we were smarter, wiser, more empathetic, etc., as those qualities get arbitrarily high. This gives a more "go with your gut" sort of attitude, which simultaneously fits my personality pretty well and clearly isn't as immediately useful as a solid theory like utilitarianism or something.

Overall, it seems like utilitarianism and consequentialism are closest to what I think is the best system out there, which involves things like trying to save the most lives with the resources available and not just choosing something because it looks like the least mean at first glance. But they have their own failure modes, like wireheading.

I dunno, that wasn't so much of a position as a dump of thoughts on the subject. I'd like to hear what some other people think first and see how I respond to what they say.

ASol
Posts: 3
Joined: Mon May 15, 2017 3:31 am

Re: Morality

Post by ASol » Wed May 31, 2017 2:15 am

I recently read about a strategy in AI development where the utility function is never explicitly defined. The working theory is that if the system is unsure of the ultimate goal, the chance of perverse instantiation is reduced (i.e., infinite paperclip factories are less likely if the AI is unsure of the full utility of more paperclips). This requires the AI to repeatedly "check in" to make sure the utility of a course of action is still positive.

Essentially, the opposite of coherent extrapolated volition. Unfortunately, my google-fu isn't strong enough to find a link to what I'm talking about, though I think Scott may have posted about it in a link dump.

The point I'm getting to is this: Maybe following Utilitarianism, or any philosophical system, to an ultimate conclusion is a mistake. If you're looking for a consistent philosophy that doesn't have any weird failure modes, you're gonna have to wait around for a long time. Maybe the most that can be done with a philosophy is to use it for discrete bits of reasoning, then check back in using some different-if-not-higher-level process to see if the conclusions are still valid.
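
Something like this toy loop is roughly what I have in mind. Every function name, bound, and number below is an invented placeholder to show the shape of the idea, not the actual scheme I read about:

```python
# Rough sketch of the "check in" idea: act only while the estimated utility of the
# next step is clearly positive, and stop to re-confirm the goal whenever the
# estimate is ambiguous. All names and numbers here are invented placeholders.

def estimate_utility(action):
    """Return a (low, high) bound on the utility of an action.
    A system that is deliberately unsure of its ultimate goal would
    produce wide bounds like these instead of a single number."""
    return action["utility_bounds"]

def run(plan):
    for action in plan:
        low, high = estimate_utility(action)
        if low > 0:
            print(f"doing: {action['name']}")        # clearly positive: proceed
        elif high <= 0:
            print(f"abandoning: {action['name']}")   # clearly negative: stop
            break
        else:
            # Ambiguous sign: this is the "check in" step. Defer to a human
            # (or some higher-level process) instead of barreling ahead.
            print(f"checking in before: {action['name']}")
            break

run([
    {"name": "build one paperclip", "utility_bounds": (0.1, 1.0)},
    {"name": "build a paperclip factory", "utility_bounds": (-5.0, 50.0)},
])
```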

archon
Posts: 59
Joined: Thu May 25, 2017 11:02 am

Re: Morality

Post by archon » Wed May 31, 2017 2:59 am

My general understanding of utilitarianism is that if following your utility to its logical extremes leads to an outcome you didn't want, you really need to think more about your utility function.

Thus, the main issue with utilitarianism is that it doesn't say much about what you should put in your utility function, and all of the examples end up being wireheading-inducing pure hedonism, because that is easy to think about.
"Don't be silly -- if we were meant to evolve naturally, why would God have given us subdermal implants?"

hoghoghoghoghog
Posts: 5
Joined: Sun May 14, 2017 8:15 pm

Re: Morality

Post by hoghoghoghoghog » Wed May 31, 2017 3:51 am

Hedonic utilitarianism is indirectly self-defeating. But I think the flaw is the "hedonic" part.

One virtue of utilitarianism that has not been mentioned yet: it lets you handle practical moral questions using expected value, rather than waiting until you've sussed out Ultimate Truth. Take vegetarianism. Anti-vegetarians can distract from the question of whether we should eat meat by arguing about whether animals are morally considerable, which is a hard problem. But utilitarians can just make a conservative guess at the expected harm of killing an animal for food (weighted by how likely it is that the animal is morally considerable), versus the expected gain, which makes the conclusion pretty obvious imho.
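
To make that concrete, here's the back-of-the-envelope version. Every number is a placeholder I'm making up to show the shape of the calculation, not a claim about actual harms or probabilities:

```python
# Toy expected-value comparison for the vegetarianism example above.
# All quantities are invented placeholders in arbitrary "moral units".

p_morally_considerable = 0.25   # conservative guess that the animal counts morally
harm_if_considerable   = 100.0  # harm of killing it, if it does count
gain_from_eating_meat  = 1.0    # benefit of the meal over the vegetarian alternative

expected_harm = p_morally_considerable * harm_if_considerable  # 25.0
expected_gain = gain_from_eating_meat                          # 1.0

if expected_harm > expected_gain:
    print("skip the meat")  # the "pretty obvious" conclusion, under these guesses
```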

Raininginsanity
Posts: 51
Joined: Tue May 23, 2017 4:50 am

Re: Morality

Post by Raininginsanity » Wed May 31, 2017 5:34 pm

Good points all around.
dylygs wrote:
True morality seems more like a hazy, democratically-defined thing that we can derive from saying that it's what we would all want for ourselves and each other if we were smarter, wiser, more empathetic, etc., as those qualities get arbitrarily high. This gives a more "go with your gut" sort of attitude, which simultaneously fits my personality pretty well and clearly isn't as immediately useful as a solid theory like utilitarianism or something.
Sounds like you're treading close to virtue theory. Who's to say empathy is desirable in a rational community or would lead to good outcomes? Once we get into emotion, things become very arbitrary, and I'm wondering how your average rationalist would feel about basing a moral system off of any gut feeling (and whose gut feeling, anyway? I guess that's where "democracy, loosely defined" comes in). I'd probably be more OK with it than most here, but my baptism into the community isn't complete yet. I think virtues, as classically defined and as understood by gut feelings, have great utility. I think the democratic aspect is also very important. Self-determination at the community level seems about as important as liberty.
dylygs wrote:
Overall, it seems like utilitarianism and consequentialism are closest to what I think is the best system out there, which involves things like trying to save the most lives with the resources available and not just choosing something because it looks like the least mean at first glance. But they have their own failure modes, like wireheading.
I've been looking into teleological consequentialism (well, I read the wiki explanation under consequentialism), and I think I prefer that model. It's the hedonism aspect that's always thrown me off of utilitarianism, and as ASol points out, following any form of utilitarianism to its logical conclusion will probably lead to something undesirable. So why not start at the end? We should play the game with the end in mind, or else we are behavior executors instead of utility maximizers. Archon says we need to define our utility function appropriately, but I'm wondering if utils are too abstract for this; maybe that's why we're as confused about morality as we are. If utils were easy to define, we would all be utilitarians. I mean, how do you quantify 'having meaning in your life'? And is it possible to have meaning if everyone is already in utopia? What can any individual contribute at that point? If God exists, perhaps he/she made life miserable just to have meaning in his/her own life? 🤔

Hoghoghoghoghog also brings up a good point, though I feel like such decision-making logic could lead to immoral behavior as often as good behavior.

(Written on my phone. Forgive any errors)

liskantope
Posts: 9
Joined: Tue May 02, 2017 11:33 am

Re: Morality

Post by liskantope » Wed May 31, 2017 8:47 pm

dylygs wrote:
Overall, it seems like utilitarianism and consequentialism are closest to what I think is the best system out there, which involves things like trying to save the most lives with the resources available and not just choosing something because it looks like the least mean at first glance. But they have their own failure modes, like wireheading.
I basically agree. I've always held that the problem of utilitarianism implying some weird things when taken all the way to its logical conclusions is not really a reason to reject utilitarianism. Instead, there are at least two possibilities:
(1) utilitarianism, like the theory of Newtonian mechanics, provides a good model for everyday situations but it sort of breaks down at the extremes and we need to replace it with some sort of refined, enhanced version that gives better answers when taken to logical extremes; or
(2) utilitarianism, like the theory of quantum mechanics, provides a model which is somewhat counterintuitive but only when viewed at a scale we're not used to in our everyday lives (could it conceivably come up that torturing one person might save 3^^^3 people from getting dust specks in their eyes?).

archon
Posts: 59
Joined: Thu May 25, 2017 11:02 am

Re: Morality

Post by archon » Thu Jun 01, 2017 4:38 am

liskantope wrote:
Wed May 31, 2017 8:47 pm
I basically agree. I've always held that the problem of utilitarianism implying some weird things when taken all the way to its logical conclusions is not really a reason to reject utilitarianism. Instead, there are at least two possibilities:
(1) utilitarianism, like the theory of Newtonian mechanics, provides a good model for everyday situations but it sort of breaks down at the extremes and we need to replace it with some sort of refined, enhanced version that gives better answers when taken to logical extremes; or
(2) utilitarianism, like the theory of quantum mechanics, provides a model which is somewhat counterintuitive but only when viewed at a scale we're not used to in our everyday lives (could it conceivably come up that torturing one person might save 3^^^3 people from getting dust specks in their eyes?).
See, I would probably vote in favour of the second case. But that is more going with my gut, and the fact that people's intuitions almost always break down in extreme cases (being not designed for such circumstances). It's like what Scott said about Bayesian reasoning: it is almost always better to make up numbers and then put them through some kind of model than it is to just use your intuition. But then, it could just be wrong. I don't have a ton of evidence.

Also, in both cases, and their associated analogies, there is the point that they are too complicated to use in day-to-day life.

Thus: might it be more efficient to use utilitarianism or consequentialism to generate some effective heuristics (which look much more like virtue ethics or something like that), which you can then use in real life in a much simpler manner? (I.e., having determined in a typical case that vegetarianism is more ethical, not eating meat unless you can see a clear reason otherwise, rather than re-calculating based on the particular circumstances, i.e. the nature of the alternative and the meat in question.)

This seems to be what I see people doing, both in arguments and in real life, but I haven't seen it discussed anywhere.
"Don't be silly -- if we were meant to evolve naturally, why would God have given us subdermal implants?"

Raininginsanity
Posts: 51
Joined: Tue May 23, 2017 4:50 am

Re: Morality

Post by Raininginsanity » Thu Jun 01, 2017 2:38 pm

It seems pretty easy to justify Machiavellian policies based on utilitarianism. I don't think it's only at the fringes that the philosophy leads to strange results. But even the fact that it ever leads to strange results means we should probably define our utility function better.

rlms

Re: Morality

Post by rlms » Thu Jun 01, 2017 4:13 pm

Greedy preference utilitarianism. At a given moment, try to satisfy the preferences that exist at that moment, weighted by how strongly they are held and how complex the agents that hold them are (measured by their preferences). It has many of the benefits of utilitarianism, but avoids various problems.

No forced wireheading: even if someone will be really happy (/have their preferences really satisfied) once we hook them up to a morphine drip, their current preference against it is the only thing we count (voluntary wireheading is fine).

If a "preference monster" comes along with really complex preferences that involve eating people, we are still obliged to feed it. I'm willing to take that hit, though; I think it's plausible that an entity orders of magnitude more complex than humans might have preferences that matter more than our relatively puny desires about not being eaten. However, unlike regular utilitarianism, if we get a "preference monster egg" we don't have any obligation to incubate it and let it eat us: this has potential real implications regarding whether we have a moral imperative to create superintelligent AI even if it is malevolent.
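
To make the weighting concrete, here's a toy sketch of the scoring rule I have in mind. The agents, the numbers, and the crude "count the preferences" complexity measure are all placeholders, not a worked-out proposal:

```python
# Toy scoring for "greedy preference utilitarianism": only preferences that exist
# right now count, weighted by strength and by the holder's complexity.
# All values below are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Preference:
    description: str
    strength: float             # how strongly the preference is held, in [0, 1]
    satisfied_by_action: float  # +1 satisfied by the action, -1 frustrated, 0 unaffected

@dataclass
class Agent:
    name: str
    preferences: list

    def complexity(self) -> float:
        # Agent complexity is "measured by their preferences"; here we just
        # count them, which is the crudest possible proxy.
        return float(len(self.preferences))

def action_score(agents: list) -> float:
    """Sum each current preference's contribution, weighted by strength
    and by the complexity of the agent holding it."""
    total = 0.0
    for agent in agents:
        weight = agent.complexity()
        for pref in agent.preferences:
            total += weight * pref.strength * pref.satisfied_by_action
    return total

# Forced wireheading scores negatively even if the person would be happy
# afterwards, because only their current preference counts.
person = Agent("person", [Preference("stay un-wireheaded", 0.9, -1.0)])
print(action_score([person]))  # negative, so don't do it
```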
