Often in discussions about how to make better decisions, people will object to attempts to quantify something, to set a value or threshold for some action. They will say something like "you can't quantify that" or "you can't use math to solve this problem". Not only can you quantify it and use math to solve the problem, but most people already do those things unconsciously.

Consider that I might want to provide you with a meal that is spicy, but not too spicy for your tastes. It is easy to say "You can't quantify how spicy it is" or "There are too many variables", but neither of those is true. You might not even know what the variables are, but what you personally know doesn't actually affect whether or not the solution to this problem can be quantified. You might be thinking that simply measuring the amount of capsaicin in the dish won't give an effective answer to the question, and you're probably right about that. However, if you give me samples of ten dishes that you enjoy and ten that you think are too spicy, it is entirely possible to start narrowing down the other variables. We will determine that you accept more capsaicin if there is more dairy in the dish, and less capsaicin if there is more citrus in the dish. We will discover that concentrated ginger or garlic or cinnamon also triggers your "too spicy" threshold, but not in the presence of particular oils. These are things we can quantify, increasingly accurately as we get more samples or information. I can use those results to decide what to cook for you, better matching your spiciness preferences the more information I have.
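
If you want to see what that narrowing-down could look like mechanically, here is a minimal sketch in Python. Every ingredient name and number below is invented for illustration, and logistic regression is just one convenient way to fit a boundary between "enjoyable" and "too spicy", not a claim about the right model.

# A toy fit: predict "too spicy" from a few invented dish features.
# Real preferences would need better features and more samples; the point
# is only that the boundary can be estimated at all.
from sklearn.linear_model import LogisticRegression

# Each row: [capsaicin_mg, dairy_g, citrus_ml] -- made-up measurements.
dishes = [
    [5, 50, 0], [8, 80, 0], [12, 100, 5], [3, 0, 0], [6, 30, 10],
    [4, 20, 0], [10, 90, 0], [7, 60, 5], [2, 0, 20], [9, 70, 0],       # enjoyed
    [15, 0, 10], [20, 10, 30], [12, 0, 20], [18, 30, 15], [25, 40, 0],
    [14, 5, 25], [22, 20, 10], [16, 0, 0], [19, 10, 5], [30, 50, 20],  # too spicy
]
too_spicy = [0] * 10 + [1] * 10

model = LogisticRegression(max_iter=1000).fit(dishes, too_spicy)

# Inspecting the weights shows which direction each ingredient pushes the
# "too spicy" call: for data like this, capsaicin pushes toward it while
# dairy buys tolerance for more heat.
print(model.coef_)

# And the model can say whether a planned dish is likely to cross the line.
print(model.predict_proba([[11, 60, 0]]))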

The above is a good method if we can easily experience or remember or even [re]produce many examples across a spectrum. Another valid approach would be to take a single situation that exists on one side of a line, and ask you to imagine alternatives with small changes that would cross the line. When I ask "what makes an automobile look good to you?", you might say that can't be quantified, but you would probably be at least mostly wrong. We can take a single automobile that looks good to you, and then ask you to make a list of changes to the car that would still look good, and a list of changes that would make it look bad. From those lists we can quantify the properties of a car that make it look good to you. The more interdependent those properties are, the more examples it will take, but we can start approximating an answer with just a few. With that approximation I might become more confident in choosing a car as a gift for you.
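
The same kind of sketch works for the one-example-plus-changes approach; all the properties and numbers here are again invented for illustration.

# Each candidate change to the base car is a small delta on invented
# properties, labeled by whether the changed car would still look good.
# A linear fit on those deltas gives a first approximation of which
# properties drive the judgment.
import numpy as np

# Deltas: [ride_height_cm, chrome_trim, body_curvature] -- hypothetical axes.
changes = np.array([
    [-2.0, 0.0, 0.1],   # lower it slightly     -> still looks good
    [0.0, 1.0, 0.0],    # add a little chrome   -> still looks good
    [0.0, 0.0, 0.2],    # rounder fenders       -> still looks good
    [8.0, 0.0, 0.0],    # jack it way up        -> looks bad
    [0.0, 5.0, 0.0],    # chrome everything     -> looks bad
    [0.0, 0.0, -0.5],   # boxy, flat panels     -> looks bad
])
still_good = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

# Least-squares fit of the (centered) label on the deltas: the sign and rough
# size of each weight approximates how that property affects "looks good".
weights, *_ = np.linalg.lstsq(changes, still_good - still_good.mean(), rcond=None)
print(dict(zip(["ride_height", "chrome", "curvature"], weights.round(2))))

A purely linear fit will of course blur cases like "a little chrome is good, a lot is bad"; that is exactly the kind of interdependence mentioned above that takes more examples to pin down.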

Any time you make a decision, you are quantifying something, whether you realize it or not, whether it's intentional or not. While standing on the side of the street and waiting to cross, you are repeatedly classifying the current scenario into one of two buckets, "cross the street now" or "wait", and that binary classification is itself a quantification, albeit a simplistic one. It's actually more nuanced than that, again whether you realize it or not. You actually have some internal risk threshold about how safe the crossing needs to be in the context of your current situation, and you're evaluating scenarios to see whether they cross that threshold, so the buckets are really "this situation is above X% safe" and "this situation is below X% safe", even if you don't know what X is. We could keep track of when you do and don't cross the street, and look up statistics about how safe those scenarios actually were, and come up with an approximation of the X that you don't even know you're using. You're also [probably-]unconsciously applying some function of how certain you are in your assessment; a blind turn nearby on the road will make you more uncertain about how safe it is to cross, adding wider error bars to the quantification, and you'll be choosing based on the extent of those error bars.
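
To make the street-crossing version concrete, here is a toy sketch with made-up safety numbers. The implicit X is recovered from logged decisions, and the error bars become a comparison against the low edge of the estimate rather than the point estimate.

# Approximate the implicit threshold X from logged decisions. The "safety"
# values are invented after-the-fact estimates (say, from traffic statistics);
# X is taken as the midpoint between the least-safe crossing you accepted and
# the safest situation you waited out.
crossed = [0.97, 0.99, 0.95, 0.98, 0.96]   # estimated safety when you crossed
waited = [0.90, 0.85, 0.93, 0.88, 0.92]    # estimated safety when you waited

x_threshold = (min(crossed) + max(waited)) / 2
print(f"approximate X: {x_threshold:.2f}")  # 0.94

# The uncertainty part: near a blind turn the estimate gets wide error bars,
# so a cautious rule compares the lower edge of the estimate to X.
estimate, margin = 0.96, 0.04               # invented numbers
print("cross now" if estimate - margin > x_threshold else "wait")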

The examples above are casual, friendly, and relatively simple, but all of this applies to harder and more controversial topics as well. Everything above applies to deciding whether or not to tap someone on the shoulder to get them to stop blocking the door on the train. It applies to deciding if you should lie when someone asks you about a secret. It applies to deciding when to ask someone out on a date and when to initiate sex. In every one of these scenarios, every variable that affects your decision can be quantified to some degree; most of them are things that you effectively quantify by the simple act of repeatedly making the decision one way or the other in different scenarios. Discussing the numbers and thresholds and ratios involved in these decisions doesn't make the decisions themselves more or less acceptable. Knowing what those values are is a tool for introspection. They can help you match your actions to your beliefs, or to another person's actions or a social norm, more effectively. There is nothing wrong with wanting to accomplish those things, or with using numbers in an attempt to do so.

I follow a variation of consequentialism, filtered through the opposite of paternalism, a stance which doesn't have a more specific name.

My value system is how I decide which outcomes are better than which others. It is important to note that the philosophical concepts below are not dependent on that value system. Everything in the next few paragraphs holds regardless of what value system we are considering. Wherever you see "good", "bad", "positive", "negative", "better", "worse", etc. below, those can mean whatever you want them to mean, especially if your value system is internally consistent and universalizable. I sometimes even prefer to operate in your value system, if we are discussing a situation where the positive and negative outcomes affect mostly or only you.

I apply a maximax criterion regarding the choices of other actors with agency. That's someone like you, in most cases. When I take an action that allows you to choose between two actions of your own, I am responsible for the most good outcome you could choose, and you are responsible for any less good or more bad in the outcome that you do choose. If I opt not to give you that choice because I expect you would choose the less good outcome, I am denying you agency in the situation, and that would be paternalistic. When I tell you that your dog is trapped in a burning building, you might decide to run inside; if the outcome of your choice is worse than if I had not told you, then you are responsible for that outcome, not me. When the villain drops two people off a bridge and you can only save one, someone is responsible for the death of the person that you do not save, and it is mostly or entirely not you.
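
One way to restate that rule mechanically, with arbitrary made-up "goodness" numbers and a helper function of my own invention:

# Maximax responsibility split: when my action hands you a choice, I am
# accountable for the best outcome you could have picked, and the gap between
# that and what you actually pick is on you. Values are arbitrary.
def split_responsibility(your_options, your_choice):
    best = max(your_options.values())
    chosen = your_options[your_choice]
    return {"mine": best, "yours": chosen - best}  # "yours" is never positive

# Burning-building example: telling you about the dog gives you two options.
options = {"stay outside": 5, "run inside": -20}
print(split_responsibility(options, "run inside"))
# -> {'mine': 5, 'yours': -25}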

I apply an expected value criterion regarding actions with random outcomes. When I play a game of Russian Roulette, the death of the loser is as much my responsibility as that of the person who made the unlucky trigger pull.
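
A toy version of that accounting, with an invented value for the bad outcome:

# Expected-value criterion: responsibility is assessed from the expected value
# of the choice when it is made, not from which random outcome lands. Every
# willing participant faces the same 1-in-6 gamble, so each bears the same
# expected responsibility as the person whose pull happens to fire.
p_death = 1 / 6
value_of_death = -1_000_000                  # arbitrary stand-in for "very bad"
expected_responsibility = p_death * value_of_death
print(round(expected_responsibility))        # -166667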

Finally, I do not recognize a fundamental distinction between action and inaction. If I tell you that pressing the button will do something and you press it, you're responsible for the outcome. If I tell you that not pressing the button will do that same something and you don't press it, you're equally responsible. Not pressing the button is just as much a choice as pressing it. This concern is most often illustrated with variations of the trolley problem where the two tracks are switched, which I don't consider to actually change the problem at all.

That's all I've got for now. This is my first real attempt to put this all together in a reference document. It will certainly be revised in the future, as I get a better grasp on the concepts that drive my decisions and become better at describing them.
