<p><i>Philosophy, et cetera</i>: Providing the questions for all of life's answers. By Richard Y Chappell.</p><h2>2022 in review (2022-12-30)</h2><div style="text-align: left;"><span style="background-color: white; color: #333333; text-align: justify;">While new substantive posts go </span><a href="https://rychappell.substack.com/" style="text-align: justify;">on the substack</a><span style="background-color: white; color: #333333; text-align: justify;">, I plan to keep (cross-)posting annual review posts here, for ease of archiving.</span></div><div style="text-align: left;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333;"><span style="text-align: justify;">[Past annual reviews: <a href="https://www.philosophyetc.net/2021/12/2021-in-review.html">2021</a>, <a href="https://www.philosophyetc.net/2020/12/2020-in-review.html" style="color: #336699;">2020</a>, </span><a href="https://www.philosophyetc.net/2019/12/2019-and-18-in-review.html" style="color: #336699; text-align: justify;">2019 & '18</a><span style="text-align: justify;">, </span><a href="http://www.philosophyetc.net/2017/12/2017-in-review.html" style="color: #336699; text-align: justify;">2017</a><span style="text-align: justify;">, </span><a href="http://www.philosophyetc.net/2017/01/2016-in-review.html" style="color: #336699; text-align: justify;">2016</a><span style="text-align: justify;">, </span><a href="http://www.philosophyetc.net/2015/12/2015-in-review.html" style="color: #336699; text-align: justify;">2015</a><span style="text-align: justify;">, </span><a href="http://www.philosophyetc.net/2014/12/2014-in-review.html" 
style="color: #336699; text-align: justify;">2014</a><span style="text-align: justify;">, </span><a href="http://www.philosophyetc.net/2013/12/2013-in-review.html" style="color: #336699; text-align: justify;">2013</a><span style="text-align: justify;">, </span><a href="http://www.philosophyetc.net/2012/12/2012-in-review.html" style="color: #336699; text-align: justify;">2012</a><span style="text-align: justify;">, </span><a href="http://www.philosophyetc.net/2011/12/2011-my-web-of-beliefs.html" style="color: #336699; text-align: justify;">2011</a><span style="text-align: justify;">, </span><a href="http://www.philosophyetc.net/2010/12/2010-my-web-of-beliefs.html" style="color: #336699; text-align: justify;">2010</a><span style="text-align: justify;">, </span></span><a href="http://www.philosophyetc.net/2009/12/2009-my-web-of-beliefs.html" style="background-color: white; color: #336699; text-align: justify;">2009</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2008/12/2008-my-web-of-beliefs.html" style="background-color: white; color: #336699; text-align: justify;">2008</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2007/12/2007-my-web-of-beliefs.html" style="background-color: white; color: #336699; text-align: justify;">2007</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2007/01/2006-my-web-of-beliefs.html" style="background-color: white; color: #336699; text-align: justify;">2006</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2006/01/2005-my-web-of-beliefs.html" style="background-color: white; color: #336699; text-align: justify;">2005</a><span style="background-color: white; color: #333333; text-align: justify;">, and </span><a 
href="http://www.philosophyetc.net/2005/01/2004-my-web-of-beliefs.html" style="background-color: white; color: #336699; text-align: justify;">2004</a><span style="background-color: white; color: #333333; text-align: justify;">.]</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><h3 style="text-align: left;">Posts from before the move (that weren't cross-posted)</h3><div><ul style="text-align: left;"><li><a href="https://www.philosophyetc.net/2022/01/longtermism-contra-schwitzgebel.html">Longtermism contra Schwitzgebel</a></li><li><a href="https://www.philosophyetc.net/2022/01/emergence-and-incremental-probability.html">Emergence and Incremental Probability</a></li><li><a href="https://www.philosophyetc.net/2022/02/guest-post-animal-population-ethics.html">Guest post: Animal Population Ethics</a></li><li><a href="https://www.philosophyetc.net/2022/02/objections-to-rule-consequentialism.html">Objections to Rule Consequentialism</a></li><li><a href="https://www.philosophyetc.net/2022/03/rescuing-maligned-views-in-phil-mind-hyc.html">Rescuing Maligned Views in Philosophy of Mind [HYC]</a></li><li><a href="https://www.philosophyetc.net/2022/05/writing-papers-with-pandoc.html">Writing Papers with Pandoc</a></li></ul></div><h2 style="text-align: left;">Cross-posted annual review from <i><a href="https://rychappell.substack.com/">Good Thoughts</a></i></h2><div style="text-align: justify;"><p data-pm-slice="0 0 []">I started the new substack in May 2022, after 18 years of blogging at <a href="https://www.philosophyetc.net/" rel="noopener noreferrer nofollow" target="_blank">philosophyetc.net</a>, with the hope that the new platform would boost my reach and prompt more reader engagement (e.g. comments). So far, it seems to be working! 
I’ve enjoyed many interesting discussions (thank you, commenters!), and was delighted to surpass 2000 subscribers in early November, after the welcome surprise of being featured on the Substack main page for a couple of days.</p><p>In this post, I’ll flag some of the year’s highlights, and <strong>bold</strong> a handful of posts that I especially recommend (for anyone who missed them the first time around).</p><h3>Off the blog</h3><p>Last spring/summer I was awarded tenure (and promoted to Associate Professor) at the University of Miami. I received a grant from <a href="https://www.longview.org/" rel="noopener noreferrer nofollow" target="_blank">Longview Philanthropy</a>, allowing me to take this academic year off from my faculty position, work full time on a mix of research and outreach projects (including <a href="https://www.utilitarianism.net/" rel="noopener noreferrer nofollow" target="_blank">utilitarianism.net</a> and this blog), and visit Oxford’s outstanding <a href="https://globalprioritiesinstitute.org/" rel="noopener noreferrer nofollow" target="_blank">Global Priorities Institute</a> for this past autumn term. My paper ‘<a href="https://doi.org/10.1093/phe/phab031" rel="noopener noreferrer nofollow" target="_blank">Pandemic Ethics and Status Quo Risk</a>’ (summarized <a href="https://rychappell.substack.com/p/beware-status-quo-risks" rel="noopener noreferrer nofollow" target="_blank">here</a>) was published in <em>Public Health Ethics</em>. 
And, a few days before Christmas, I was <a href="https://www.wbur.org/onpoint/2022/12/21/what-is-the-effective-altruism-movement" rel="noopener noreferrer nofollow" target="_blank">interviewed live on NPR</a> about the ideas behind effective altruism.</p><p>New pages I wrote this year for utilitarianism.net include:</p><ul><li><p>The chapter on ‘<a href="https://www.utilitarianism.net/near-utilitarian-alternatives" rel="noopener noreferrer nofollow" target="_blank">Near-Utilitarian Alternatives</a>’</p></li><li><p>A <a href="https://www.utilitarianism.net/peter-singer-famine-affluence-and-morality" rel="noopener noreferrer nofollow" target="_blank">study guide to Singer’s ‘Famine, Affluence, and Morality’</a></p></li><li><p>New objections pages on <a href="https://www.utilitarianism.net/objections-to-utilitarianism/mere-means" rel="noopener noreferrer nofollow" target="_blank">the mere means objection</a>, the <a href="https://www.utilitarianism.net/objections-to-utilitarianism/separateness-of-persons" rel="noopener noreferrer nofollow" target="_blank">separateness of persons</a>, <a href="https://www.utilitarianism.net/objections-to-utilitarianism/alienation" rel="noopener noreferrer nofollow" target="_blank">alienation</a>, and <a href="https://www.utilitarianism.net/objections-to-utilitarianism/special-obligations" rel="noopener noreferrer nofollow" target="_blank">special obligations</a></p></li></ul><p>There are always more things I want to work on than I’m actually able to get around to. But once you add these 50-odd substack posts into the mix, and new academic papers currently under review and in draft, I’m overall pretty happy with my productivity. I’m also excited about my plans for next year—and will be happy if I manage to complete at least half of what I have in mind.</p><h3>Posts on Effective Altruism & Applied Ethics</h3><p>A major theme of <i>Good Thoughts</i> is that it’s good to do good things (and even better to do better)! 
Some relevant posts include:</p><ul><li><p><a href="https://rychappell.substack.com/p/effective-altruism-faq" rel="noopener noreferrer nofollow" target="_blank">Effective Altruism FAQ</a> - what I wish everyone knew about EA</p></li><li><p><a href="https://rychappell.substack.com/p/beneficentrism" rel="noopener noreferrer nofollow" target="_blank"><strong>Beneficentrism</strong></a> - how the moral foundations of EA are much broader (and less controversial/disputable) than full-blown utilitarianism</p></li><li><p><a href="https://rychappell.substack.com/p/the-nietzschean-challenge-to-effective" rel="noopener noreferrer nofollow" target="_blank"><strong>The Nietzschean Challenge to Effective Altruism</strong></a> - here’s a foundational challenge one doesn’t often hear: <em>maybe well-being is overrated? </em>At least, it may be worth giving weight to things like <em>achievement</em> and not just things like <em>comfort</em>.</p></li><li><p><a href="https://rychappell.substack.com/p/ethics-as-solutions-vs-constraints" rel="noopener noreferrer nofollow" target="_blank">Ethics as Solutions vs Constraints</a> - contrasting beneficence-first vs purity-first ways of thinking about ethics</p></li><li><p><a href="https://rychappell.substack.com/p/pick-some-low-hanging-fruit" rel="noopener noreferrer nofollow" target="_blank">Pick some low-hanging fruit</a> - while not quite as vivid as <a href="https://www.utilitarianism.net/peter-singer-famine-affluence-and-morality" rel="noopener noreferrer nofollow" target="_blank">Singer’s pond</a>, I quite like this alternative metaphor for (moderate) effective altruism in the face of seemingly limitless demands.</p></li><li><p><a href="https://rychappell.substack.com/p/the-strange-shortage-of-moral-optimizers" rel="noopener noreferrer nofollow" target="_blank">The Strange Shortage of Moral Optimizers</a> - Why doesn’t EA have more competition? 
It’s weird that more people aren’t even <em>trying</em> to “promote the general good in a serious, scope-sensitive, goal-directed kind of way.”</p></li><li><p><a href="https://rychappell.substack.com/p/billionaire-philanthropy" rel="noopener noreferrer nofollow" target="_blank">Billionaire Philanthropy</a> - would you prefer they spend it on luxury consumption? Or donate to the US treasury? Seriously?</p></li><li><p><a href="https://rychappell.substack.com/p/review-of-what-we-owe-the-future" rel="noopener noreferrer nofollow" target="_blank">Review of <em>What We Owe the Future</em></a> - an important book, well-targeted at introducing longtermism to a general audience, but in many respects too uncontroversial for philosophical audiences. Expect academic critics to exaggerate the core thesis (or even conflate it with total utilitarianism) to give them more of a target.</p></li><li><p><a href="https://rychappell.substack.com/p/utilitarianism-and-abortion" rel="noopener noreferrer nofollow" target="_blank">Utilitarianism and Abortion</a> - there’s no particular reason for longtermist pro-natalists to focus specifically on abortion (rather than other non-procreative choices), and there’s no utilitarian excuse to <em>force</em> people to do good things (like procreate) when you could instead incentivize them. (Cf. kidney donations.)</p></li></ul><h3>On Utilitarianism and Ethical Theory</h3><p>I think most people—including most academic philosophers—have a pretty terrible understanding of utilitarian ethics, relying on misleading and oversimplified caricatures. Some of the below posts try to correct those misunderstandings. Others more positively explore what we should think about tricky issues in ethical theory.</p><ul><li><p><a href="https://rychappell.substack.com/p/introducing-utilitarianismnet" rel="noopener noreferrer nofollow" target="_blank"><strong>Introducing utilitarianism.net</strong></a> - an overview of the new website and its main features. (N.B. 
more updates coming soon!)</p></li><li><p><a href="https://rychappell.substack.com/p/utilitarianism-and-reflective-equilibrium" rel="noopener noreferrer nofollow" target="_blank">Utilitarianism and Reflective Equilibrium</a> - why utilitarianism is (contrary to common perception) actually the <em>most intuitive</em> moral theory: its conflicts with intuitive verdicts are shallow and easy to accommodate, whereas deontology’s conflicts with intuitive principles are deep and utterly irresolvable.</p></li><li><p><a href="https://rychappell.substack.com/p/utilitarianism-debate-with-michael" rel="noopener noreferrer nofollow" target="_blank"><strong>Utilitarianism debate with Michael Huemer</strong></a> - expanding on the above point, and on the inferential role of <em>wrongness</em></p></li><li><p><a href="https://rychappell.substack.com/p/impermissibility-is-overrated" rel="noopener noreferrer nofollow" target="_blank">(Im)permissibility is Overrated</a> - distinguishing right and wrong is less important than settling <em>what’s worth caring about</em>.</p></li><li><p><a href="https://rychappell.substack.com/p/theses-on-mattering" rel="noopener noreferrer nofollow" target="_blank"><strong>Theses on Mattering</strong></a> - addressing common misconceptions about what it takes to truly value people equally</p></li><li><p><a href="https://rychappell.substack.com/p/a-new-paradox-of-deontology" rel="noopener noreferrer nofollow" target="_blank"><strong>A New Paradox of Deontology</strong></a> - how only consequentialism combines normative authority, guidance, and adequate concern for rescuable victims</p></li><li><p><a href="https://rychappell.substack.com/p/constraints-and-candy" rel="noopener noreferrer nofollow" target="_blank">Constraints and Candy</a> - both appeal to our lizard-brains, but neglect less salient interests</p></li><li><p><a href="https://rychappell.substack.com/p/deontic-pluralism" rel="noopener noreferrer nofollow" target="_blank"><strong>Deontic 
Pluralism</strong></a> - How to reconcile Maximizing, Satisficing, and Scalar Consequentialisms</p></li><li><p><a href="https://rychappell.substack.com/p/consequentialism-beyond-action" rel="noopener noreferrer nofollow" target="_blank"><strong>Consequentialism Beyond Action</strong></a> - and why we need two dimensions of moral evaluation: the <em>fitting</em> and the <em>fortunate</em>. (Too many consequentialists neglect the former!)</p></li><li><p><a href="https://rychappell.substack.com/p/caplans-conscience-objection-to-utilitarianism" rel="noopener noreferrer nofollow" target="_blank">Caplan’s Conscience Objection to Utilitarianism</a> - why the demandingness objection is confused, and utilitarianism does not in fact imply that we're all bad people</p></li><li><p><a href="https://rychappell.substack.com/p/emergency-ethics" rel="noopener noreferrer nofollow" target="_blank">Emergency ethics</a> - and why I think there’s no <em>special</em> duty of easy rescue, just general reasons of beneficence</p></li><li><p><a href="https://rychappell.substack.com/p/level-up-impartiality" rel="noopener noreferrer nofollow" target="_blank">Level-Up Impartiality</a> - non-utilitarians sometimes imagine that impartiality means treating everyone as badly as they treat strangers, rather than as well as they treat their friends and loved ones. 
But I think there’s independent reason to think we’re more likely right about the latter.</p></li><li><p><a href="https://rychappell.substack.com/p/ethically-alien-thought-experiments" rel="noopener noreferrer nofollow" target="_blank">Ethically Alien Thought Experiments</a> - don’t let alien cases masquerade as real-world ones (transparently alien thought experiments are fine, though!)</p></li><li><p><a href="https://rychappell.substack.com/p/consequentialism-and-cluelessness" rel="noopener noreferrer nofollow" target="_blank">Consequentialism and Cluelessness</a> - why I'm skeptical of Lenman's Epistemic Objection</p></li><li><p><a href="https://rychappell.substack.com/p/a-multiplicative-model-of-value-pluralism" rel="noopener noreferrer nofollow" target="_blank">A Multiplicative Model of Value Pluralism</a> - how do distinct kinds of value combine?</p></li><li><p><a href="https://rychappell.substack.com/p/double-or-nothing-existence-gambles" rel="noopener noreferrer nofollow" target="_blank">Double or Nothing Existence Gambles</a> - seem like a bad deal! But what’s the best theoretical explanation of this?</p></li><li><p><a href="https://rychappell.substack.com/p/killing-vs-failing-to-create" rel="noopener noreferrer nofollow" target="_blank">Killing vs Failing to Create</a> - addressing the replaceability objection by allowing both impersonal <em>and</em> person-directed reasons</p></li><li><p><a href="https://rychappell.substack.com/p/puzzles-for-everyone" rel="noopener noreferrer nofollow" target="_blank"><strong>Puzzles for Everyone</strong></a><strong> - </strong>Some of the deepest puzzles in ethics concern how to coherently extend ordinary <em>beneficence</em> and <em>decision theory </em>to extreme cases. Too often, people mistakenly believe that these are only puzzles for utilitarians, as though other theories needn’t care at all about beneficence or decision-making under conditions of uncertainty. 
I explain why this is a mistake, and especially explain why appealing to “neutrality” about adding happy lives is not an adequate solution to the problems of population ethics.</p></li></ul><h3>On the link between Theory and Practice</h3><ul><li><p><a href="https://rychappell.substack.com/p/theory-driven-applied-ethics" rel="noopener noreferrer nofollow" target="_blank">Theory-Driven Applied Ethics</a> - how utilitarianism may inspire mid-level “beneficentric” principles that can command wider assent, and still suffice for all practical purposes.</p></li><li><p><a href="https://rychappell.substack.com/p/is-non-consequentialism-self-effacing" rel="noopener noreferrer nofollow" target="_blank">Is Non-Consequentialism Self-effacing?</a> - turning Bernard Williams on his head: even non-consequentialists should probably want others to be more beneficent, which is a goal that may be better served by promoting utilitarian ethics.</p></li><li><p><a href="https://rychappell.substack.com/p/how-useful-is-utilitarianism" rel="noopener noreferrer nofollow" target="_blank">How Useful is Utilitarianism?</a> - some early thinking about what a ‘Beneficence Project’ for utilitarian-leaning academics might look like (with an invitation for potential collaborators to get in touch).</p></li><li><p><a href="https://rychappell.substack.com/p/naive-vs-prudent-utilitarianism" rel="noopener noreferrer nofollow" target="_blank">Naïve vs Prudent Utilitarianism</a> - careless pursuit of the good is bad in expectation (but of course nothing in utilitarianism justifies such carelessness).</p></li><li><p><a href="https://rychappell.substack.com/p/ethical-theory-and-practice" rel="noopener noreferrer nofollow" target="_blank"><strong>Ethical Theory and Practice</strong></a> - stipulated thought experiments are not a good guide to how to behave in real life, with its ineliminable uncertainties. 
As a result, it turns out that utilitarianism and moderate deontology are surprisingly difficult to differentiate in terms of their real-world implications.</p></li></ul><h3>Other Posts</h3><ul><li><p><a href="https://rychappell.substack.com/p/agency-and-epistemic-cheems-mindset" rel="noopener noreferrer nofollow" target="_blank">Agency and Epistemic Cheems Mindset</a> - <em>use</em> your best judgment, don’t <em>suspend</em> it! (Winner of a <a href="https://effectiveideas.org/blog-prize-digest-june/" rel="noopener noreferrer nofollow" target="_blank">Blog Post Prize</a>.)</p></li><li><p><a href="https://rychappell.substack.com/p/the-fine-tuning-god-problem" rel="noopener noreferrer nofollow" target="_blank">The Fine-Tuning God Problem</a> - without an explanation of why (moderately) life-friendly creator gods are a priori more likely than others, deism doesn’t seem to give us an explanation of fine-tuning after all.</p></li><li><p><a href="https://rychappell.substack.com/p/when-metaethics-matters" rel="noopener noreferrer nofollow" target="_blank">When Metaethics Matters</a> - and how it might affect our practical commitments</p></li><li><p><a href="https://rychappell.substack.com/p/metaethics-and-unconditional-mattering" rel="noopener noreferrer nofollow" target="_blank">Metaethics and Unconditional Mattering</a> - we should oppose gratuitous suffering, no matter what's true</p></li><li><p><a href="https://rychappell.substack.com/p/parfit-in-seven-parts" rel="noopener noreferrer nofollow" target="_blank"><strong>Parfit in Seven Parts</strong></a> - “In <a href="https://philpapers.org/rec/CHAPE-5" rel="noopener noreferrer nofollow" target="_blank"><em>Parfit’s Ethics</em></a>, I critically introduce Parfit’s central insights and arguments... But even this very short book is still, you know… a <em>book</em>… and so unlikely to be as widely read as random blog posts on the internet. 
Solution: turn the book into a series of blog posts!”</p><ol><li><p><a href="https://rychappell.substack.com/p/against-egoism-and-subjectivism" rel="noopener noreferrer nofollow" target="_blank">Against Egoism and Subjectivism</a></p></li><li><p><a href="https://rychappell.substack.com/p/priority-and-aggregation" rel="noopener noreferrer nofollow" target="_blank">Priority and Aggregation</a></p></li><li><p><a href="https://rychappell.substack.com/p/rational-irrationality-and-blameless" rel="noopener noreferrer nofollow" target="_blank">Rational Irrationality and Blameless Wrongdoing</a></p></li><li><p><a href="https://rychappell.substack.com/p/parfits-triple-theory" rel="noopener noreferrer nofollow" target="_blank">Parfit’s Triple Theory</a></p></li><li><p><a href="https://rychappell.substack.com/p/do-you-really-exist-over-time" rel="noopener noreferrer nofollow" target="_blank">Do you really exist over time?</a></p></li><li><p><a href="https://rychappell.substack.com/p/the-birth-of-population-ethics" rel="noopener noreferrer nofollow" target="_blank">The Birth of Population Ethics</a></p></li><li><p><a href="https://rychappell.substack.com/p/moral-truth-without-substance" rel="noopener noreferrer nofollow" target="_blank">Moral Truth without Substance</a></p></li></ol></li></ul><h3>My Top Three</h3><p>For any new readers, I’d especially encourage you to check out my “top three” most-liked posts:</p><div class="embedded-post-wrap"><div class="embedded-post-header"><ol><li><a class="embedded-post" href="https://rychappell.substack.com/p/puzzles-for-everyone" native="true">Puzzles for Everyone</a></li><li><a class="embedded-post" href="https://rychappell.substack.com/p/beneficentrism" native="true">Beneficentrism</a></li><li><a class="embedded-post" 
href="https://rychappell.substack.com/p/theses-on-mattering" native="true">Theses on Mattering</a></li></ol><div>Happy New Year!</div></div></div></div><h2>Moving to Substack (2022-05-06)</h2><p>I started this blog 18 years ago, as a second-year undergraduate philosophy major. The first couple years were... very undergrad-y... but I think by <a href="https://www.philosophyetc.net/2007/01/2006-my-web-of-beliefs.html">2006</a> or so I was doing some pretty interesting philosophy on here, much (but not all) of which I would still stand by. The next few years (heading into early grad school) were probably the peak years for the blog in terms of audience engagement, commonly getting dozens of comments per post.</p><p>After the old blogosphere largely died off, and engagement moved to Facebook and Twitter, I've kept the blog going as a kind of "extended mind" for organizing my thoughts. Sharing the posts on FB sometimes leads to some good discussion there. And I get loads of random google hits through all the page-rank built up over the years. I've written over 2000 posts, received over 14K (non-spam) comments, and since 2010 have received over 5 million page views. Still, the old Blogger software is no longer well-supported, so I'm curious to see if I can do better on a new platform.</p><p>I've heard good things about Substack, so have set up a new blog/newsletter there, called <i><a href="https://rychappell.substack.com/">Good Thoughts</a></i>. (The 'goodthoughts' substack url was already taken, alas, so I've gone with rychappell.substack.com instead.) 
Existing email subscribers should be carried over automatically. Others can click through or use the following form to subscribe:</p>
<iframe src="https://rychappell.substack.com/embed" width="480" height="320" style="border:1px solid #EEE; background:white;" frameborder="0" scrolling="no"></iframe>
<p>Whereas <i>Philosophy, et cetera</i> was always primarily a self-indulgent project, my aim with <i>Good Thoughts</i> is to be a little more thoughtful about writing for an audience, e.g. writing more self-contained posts, with less reliance on back-linking to previous posts to fill in essential background, etc. We'll see how it goes, but I hope it proves worthwhile.</p>
<p>Hope to see you there!</p>
<p>P.S. For a synoptic view of my past blogging, check out the annual review posts under my '<a href="https://www.philosophyetc.net/search/label/compendia">compendia</a>' category.</p><h2>Utilitarianism Debate with Michael Huemer (2022-05-01)</h2><p>Matthew Adelstein kindly invited me & Michael Huemer to hash out <a href="https://www.philosophyetc.net/2022/01/utilitarianism-and-reflective.html">our disagreements about utilitarianism</a> over on his YouTube channel. The <a href="https://www.youtube.com/watch?v=RP1jEFMLsv8">resulting discussion</a> was fun and wide-ranging. In this post, I just want to highlight a couple of major themes that seemed fairly central to our dispute: (1) which intuitions we place the most weight on, and (2) the inferential role of <i>wrongness</i>.</p><p><b>(1) Which intuitions?</b></p><p>Huemer, like many philosophers, regards <a href="https://www.utilitarianism.net/">utilitarianism</a> as extremely "counter-intuitive", because there are many hypothetical cases in which it recommends actions that intuitively seem wrong. 
Against this, I argue that (something close to) <a href="https://www.philosophyetc.net/2022/01/utilitarianism-and-reflective.html">utilitarianism is actually the <i>most intuitive</i> moral theory</a>, because (i) its conflicts with intuition are <i>shallow</i>, and can generally be accommodated at least reasonably well by appeal to related moral considerations such as character evaluations; whereas (ii) non-consequentialism conflicts with our intuitions about <i>what matters</i> in ways that are <i>deep</i> and <i>irresolvable.</i></p><p>The dispute arises because our intuitions about cases, if taken at face value, are simply impossible to mesh with coherent and plausible principles about what's actually important. They're an unprincipled mess. So you can either accept those verdicts at face value (as Huemer does) and give up all hope of having "right" and "wrong" track anything that's independently understandable as <i>worth caring about</i>, or we can seek to "charitably reinterpret" those verdicts, occasionally rejecting their face-value claims while seeking to accommodate their underlying spirit as well as possible.</p><p>Which route is better? Well, I guess it depends which intuitions you're more confident of. I'm much more confident of the deeper intuitions about what matters. I'm not particularly attached to my intuition that it's "wrong" to kill one to save five in the trolley bridge case, for example. I think there are obvious psychological confounders here (e.g. involving disparities of salience between the one and the five) that could be expected to distort my immediate intuitive judgment. And I'm not even confident that the "don't push" intuition speaks to the strengths of my reasons for action at all; it seems at least as plausible to me that I'm instead reacting negatively to the decision procedure or dispositions of character that would lead someone to be cavalier about killing innocent people. As R.M. 
Hare pointed out long ago, utilitarians can fully endorse the rejection of decision procedures that would lead one to engage in instrumental harm, for those seem unlikely to be the best decision procedures around. As a result, it's hard to see how our intuitive rejection of those decision procedures is supposed to count against the view. </p><p>(Huemer wants to claim that it's not <i>just</i> that the person is vicious or their decision procedure is intuitively bad, but further, the act is <i>intuitively wrong</i>. But I dispute that it's at all clear that there is any such <i>further</i> intuition in this case. And as we'll see in the second section, below, I think it's especially unclear what Huemer's intuitions of "wrongness" are really <i>about</i>.)</p><p>By contrast, I think it is <i>much clearer</i> that (e.g.) innocent people's lives are more important than deathbed promises. That is, we should care more about the former, and prioritize them when they come into conflict with the latter. I think it's also pretty clear that there <a href="https://www.philosophyetc.net/2021/08/preferring-to-act-wrongly.html">shouldn't be a total disconnect</a> between <i>what we should care about</i> and <i>what we should do</i>. Insofar as there is such a disconnect on Huemer's view, I think it really undermines the normative authority of what he calls "wrongness". Like archaic honor codes, when deontic constraints conflict with what's <i>actually worth caring about</i>, they lose their force.</p><p>And I don't just mean this as a general intuition for a free-floating, untested principle. This also seems intuitively right (to me, at least) when reflecting more deeply about the specific cases. 
When you start to think more about which possible outcome seems morally preferable, all things considered (even taking into account whatever moral significance killing may have), it seems intuitively clear that (i) you should prefer that any one person be killed so that five comparable others may be saved; and (ii) you should, if you can, choose to bring about this morally preferable outcome. This verdict can be further reinforced by thinking about what all the affected parties would have preferred from behind a veil of ignorance, or about what a benevolent observer/God would wish for you to do, or about how deontic constraints enshrine status quo bias, etc.</p><p>So: Deontology may capture superficial verdicts, but it just falls apart completely when you dig a little deeper into the cases. And I don't see any hope for the deontologist to "accommodate the underlying spirit" of these deeper intuitions. They're simply lost. So I think that <i>even our case-based intuitions</i>, upon deeper reflection, favour utilitarianism on net -- and overwhelmingly so.</p><p>This is important, since Huemer's main response to the clash of intuitions was to appeal to the general epistemic principle that (he thinks) our intuitions about cases are on much firmer footing than our intuitions about general principles. That doesn't seem universally correct to me, and indeed it seems self-defeating for Huemer to appeal to this <i>general principle</i> when faced with a <i>specific dialectic</i> in which the conflict of intuitions <i>intuitively favours</i> utilitarianism. (By his own principles, he should instead assess this particular dialectic on its merits, or so I would imagine.) But regardless, I would now suggest that even our intuitions about cases favour utilitarianism, so long as we're thinking about the cases deeply and not just superficially. 
When thinking about these cases, we need to bring to bear <b>the full range of our intuitions about what's worth caring about in the scenario, what it makes sense to do in light of the appropriate concerns, and how it all fits together</b>. These are really important intuitions!</p><p><br /></p><p><b>(2) What is <i>wrongness</i>?</b></p><p>I'm interested in the strengths of our reasons for action: which acts are really <i>worth </i>doing, and <i>how important</i> it is that we do them. In my '<a href="https://philpapers.org/rec/CHADPA-8">Deontic Pluralism</a>' paper, I point out that there are multiple ways to reconstruct notions of "right" and "wrong" out of scalar reasons. We might talk (with maximizers) about what we have <i>most reason</i> to do. Or we might talk (with satisficers) about what we'd be <i>blameworthy</i> for failing to do. But I don't believe in any completely independent, indefinable sense of "wrongness" (or what Parfit called <i>mustn't-be-done</i>), and I don't know of any utilitarians who do.</p><p>If Huemer believes in indefinable wrongness (not reducible to reasons, etc.), then we risk talking past each other. I got the impression that his methodology was to <i>first</i> form an intuition about an act's (indefinable) wrongness, and then <i>infer </i>from this that we have decisive reason not to do it.</p><p>One simple revision that might reconcile our views would be if he gave up on making that subsequent inference. (How confident is he in the <i>general principle</i> that this inference is warranted? How does it compare to all the rational principles that conflict with deontic constraints?) Perhaps there is some indefinable property described by Huemer's common-sense morality, but it simply <i>lacks normative authority, </i>and we don't have any good reason to act upon it when it conflicts with things -- like people's lives -- that we ought to care about more. 
I'd be curious whether he's at all open to this possibility.</p><p>Given the divergence between the <a href="https://www.philosophyetc.net/2021/12/consequentialisms-central-concept.html">normative concepts central to consequentialism</a> and the ordinary (vague and undefinable) sense of "wrongness", it's an interesting question whether consequentialism is best understood as offering an <i>internal revision</i> of how we should understand "wrong", etc., or as offering an <i>external rival</i> to morality. I guess the difference is ultimately just terminological, so shouldn't matter too much as long as we're clear. But either way, I think it helps to further defang the supposed "counterintuitiveness" of utilitarianism's verdicts if defenders of the view can respond, "No, I don't mean right/wrong in <i>that</i> sense. I <i>just</i> mean that we have more reason to bring about the better outcome!"</p><p>Most of the intuitions evoked by critics of utilitarianism arguably don't directly address this question of what we have most reason to do (or what we <i>really</i> ought -- as opposed to merely "morally" ought, in some restricted or "inverted-commas" sense -- to do). When we formulate the normative question in terms of practical reasons, my intuitions about cases <i>overwhelmingly</i> favour utilitarianism over deontology. Maybe I'm idiosyncratic. But I think that at least <i>many</i> who are initially inclined to think that utilitarianism is "counterintuitive" only believe this because they've misconceptualized what normative ethics is really <i>about</i>. So I'm optimistic that significant progress could be made on this front (which is not to say universal agreement, of course) by getting clearer on this. 
That's one of my major goals for the next few years...</p>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com3tag:blogger.com,1999:blog-6642011.post-46290260649200243962022-05-01T00:29:00.000-04:002022-05-01T00:29:45.162-04:00Writing Papers with Pandoc<p>At Daily Nous, there's a discussion of <a href="https://dailynous.com/2022/04/29/tech-advice-for-a-new-philosophy-grad-student/">Tech Advice for a New Philosophy Grad Student</a>. There's some dispute about whether or not it's worth learning LaTeX. I recommend <a href="https://pandoc.org/getting-started.html">pandoc</a> instead for those who are on the fence. You write in <a href="https://pandoc.org/MANUAL.html#pandocs-markdown">markdown</a>, a simpler and more readable plain text syntax (compare *markdown italics* to \emph{LaTeX italics}, for example!). But it subsequently uses LaTeX to produce good-looking PDFs. Or it can just as easily convert into other document formats, such as Word .docx, if needed. It's very flexible.</p><p><span></span></p><a name='more'></a>You'll need to install:<p></p><p>(1) <a href="https://pandoc.org/installing.html">Pandoc</a></p><p>(2) A LaTeX distribution, e.g. <a href="https://miktex.org/">MikTeX</a>, if this isn't already installed with pandoc.</p><p>(3) A BibTeX bibliography manager. <a href="https://www.jabref.org/">JabRef</a> works for me. Create a single master bibliography with everything you ever want to cite (you can, of course, add to it later), and pandoc will pull in entries as you need them.</p><p>(4) A text editor. I use <a href="https://atom.io/">Atom</a>, with the extra packages (added under settings): 'autocomplete-bibtex', 'language-pfm', 'wordcount', 'markdown-writer', and 'markdown-preview-plus'. Set your OS to open .md files with Atom by default.</p><p>(Autocomplete for citations is great. 
I start writing "@Par" and the text editor shows a list of all the Parfit entries in my master bibliography, so I can easily pick the right one without needing to have memorized the full reference label. And, as per standard pandoc citations, "[@ParfitRP]" in my text file becomes "(Parfit 1984)" in the compiled PDF, with the full bibliographic details automatically added to the References section at the end of the document. When writing the paper, I don't have to look up a thing.)</p><p>Once that's all set up, you'll want to make a pandoc template with the standard header code that you'll use to start off all your papers. I offer an example below, which includes some fake text to demonstrate some common markdown syntax, plus a bit of extra LaTeX code at the end to format the references nicely.</p><p>It may look intimidating, but aside from updating the text in ALL CAPS, you only need to learn the pandoc-markdown syntax, i.e. the stuff between the introduction and conclusion. The full user guide is <a href="https://pandoc.org/MANUAL.html#pandocs-markdown">here</a>, but my sample text demonstrates the main things you'll need to know, e.g. sections, citations, and footnotes. Other than that, you can mostly just write, using asterisks for *emphasis*.</p><p>To actually produce a PDF (or whatever output) from your markdown text, you'll need to <a href="https://pandoc.org/getting-started.html">run pandoc from the command line</a>. In Windows, use File Explorer to browse to the folder containing your .md text file, hold SHIFT and click the right mouse button to display a menu from which you can select 'open powershell window here'. Then copy and paste the running code (included in a comment at the bottom of the template, below -- the line starting "pandoc" and ending with "TITLE.pdf") into the powershell window and hit ENTER. 
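<p>(For reference, here is that running command on its own, exactly as it appears in the template's comment; the ALL CAPS bits and the bibliography path are the placeholders you'd replace. Keep it on one line, since PowerShell doesn't use backslash line continuations:)</p>

```shell
# -s produces a standalone document; -N numbers the sections.
# The pandoc-crossref filter resolves [@sec:...] cross-references;
# the pandoc-citeproc filter expands [@CiteKey] citations from the
# .bib file and appends the formatted References list.
pandoc -s -N --filter pandoc-crossref --bibliography 'C:\PATH\TO\YOUR BIBTEX BIBLIOGRAPHY.bib' --filter pandoc-citeproc TITLE.md -o TITLE.pdf
```

<p>(Swapping the output name for, say, TITLE.docx makes pandoc emit a Word document from the same source instead.)</p>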
It should then create your PDF.</p><p>Feel free to borrow the following pandoc-template.md:</p><blockquote><p>---</p>title: PAPER TITLE<br />author: YOUR NAME<br />thanks: Thanks to X, Y, and Z for helpful comments.<br />date: \today<br />abstract: \noindent YOUR ABSTRACT.<br />output: pdf-document<br />papersize: letter<br />fontsize: 11pt<br />documentclass: article<br />linestretch: 2<br />fontfamily: librebaskerville<br />link-citations: true<br />linkReferences: true<br />indent: true<br />...<br /><br /># Introduction {-}<br /><br />Test citation [@ParfitRP]---em-dashes---and soon a footnote.^[This is an inline footnote.] I discuss this further in [@sec:LABEL].<br /><br /># SECTION TITLE {#sec:LABEL}<br /><br />As @ParfitRP[p. 131] explains:<br />> Here is a blockquote. *This sentence of it is italicized.*<br /><br /> ## SUBSECTION TITLE<br /><br /># Conclusion {-}<br /><br />\newpage<br /><br /># References {-}<br /><br />\setlength{\parindent}{-0.2in}<br />\setlength{\leftskip}{0.2in}<br />\setlength{\parskip}{8pt}<br />\vspace*{-0.2in}<br />\noindent<br /><br /> <!-- this is a comment containing the running code to paste into the command line:<br />pandoc -s -N --filter pandoc-crossref --bibliography 'C:\PATH\TO\YOUR BIBTEX BIBLIOGRAPHY.bib' --filter pandoc-citeproc TITLE.md -o TITLE.pdf<br />--></blockquote><p>But again, don't be put off by all the code. You can play around with it if you enjoy that, but the whole point of the template is to take care of the background code for you, so you can focus on writing your paper. And aside from the citations, the rest of your paper will mostly look like straightforwardly comprehensible plain text.</p><p>Like LaTeX, Pandoc isn't for everyone. But I hope this info is helpful for some. For additional background, see Thomas Hodgson's blog post, '<a href="https://doctoralwriting.wordpress.com/2015/10/06/try-pandoc-instead-of-word-for-your-research-writing/">Try Pandoc instead of Word for your research writing</a>'. 
For a more detailed tutorial, see <a href="https://programminghistorian.org/en/lessons/sustainable-authorship-in-plain-text-using-pandoc-and-markdown">here</a> (via <a href="https://jadamcarter.wixsite.com/workflow">J. Adam Carter</a>).</p><blockquote><p></p></blockquote><p></p><p></p>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com0tag:blogger.com,1999:blog-6642011.post-77009308838278661072022-03-09T15:12:00.001-05:002022-03-09T15:29:42.697-05:00Rescuing Maligned Views in Phil Mind [HYC]Epiphenomenalism and Idealism are two of the most maligned views in philosophy of mind. So it's kind of funny that <a href="http://yetterchappell.net/Helen/">Helen</a> defends both. Something I really like about her papers is that they really bring out why these views are much more defensible -- or even appealing -- than others usually realize. This comes through especially strongly in her two latest papers:<div><br /></div><div>(1) '<a href="https://philpapers.org/rec/YETGAW">Get Acquainted With Naïve Idealism</a>' (forthcoming in <i>The Roles of Representations in Visual Perception</i>) argues that only idealists can truly secure the putative epistemic benefits of direct realism about perception, as the only well-developed conception of <i>direct</i> <i>acquaintance</i> on offer in phil mind involves the objects of direct acquaintance (i.e., phenomenal experiences) being literal <i>constituents</i> of our thoughts. Helen shows how idealists can extend this account to make sense of direct acquaintance with "physical" objects (that are themselves ultimately made of phenomenology, and hence apt to <i>enter our minds </i>in the relevant way), while traditional materialist accounts of physical reality can't make sense of this. The resulting theory of perception -- <i>naive idealism</i> -- is completely wild, but a lot of fun to think about! 
</div><div><br /></div><div>(2) '<a href="https://philpapers.org/rec/YETDAT">Dualism All the Way Down: Why There is No Paradox of Phenomenal Judgment</a>' (forthcoming in <i>Synthese</i>) should instantly become required reading for any class that covers epiphenomenalism. In this paper, Helen expands upon Chalmers' classic defense of epiphenomenalism against the paradox of phenomenal judgment ("how can you know you're conscious, if qualia can't cause this belief?"), emphasizing that the paradox -- including Kirk's post-Chalmers development of it -- loses its force when one takes care to adopt a <i>systematically</i> dualistic conception of the mind, such that <i>you</i> are not <i>your brain</i>. This putative "paradox" is usually taken to be <i>the</i> objection to epiphenomenalism, and this paper basically offers a knock-down refutation of it (and a half-dozen closely related variants of the objection).</div><div><br /></div><div>Enjoy!</div>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com0tag:blogger.com,1999:blog-6642011.post-1165918331081193652022-02-16T12:01:00.003-05:002022-02-16T12:06:20.050-05:00Objections to Rule Consequentialism<p>Those put-off by the <a href="https://www.philosophyetc.net/2012/08/counterexamples-to-consequentialism.html">putative counterexamples to Act Consequentialism</a> may consider Rule Consequentialism a more appealing alternative. Michael Huemer goes so far as to suggest that it is "<a href="https://fakenous.net/?p=2789">not a crazy view</a>." In this post, I'll explain why I think Rule Consequentialism is not well-supported -- and, at least as standardly formulated, may even be crazy.</p><p>There are three main motivations for Rule Consequentialism (RC). 
One -- most common amongst non-specialists -- stems from the sense that it would be <i>better</i> (in practice) for people to be guided by generally-reliable rules than to attempt to explicitly calculate expected utilities on a case-by-case basis. But of course this is no reason to prefer RC as a <i>criterion</i> of right; this consideration instead pulls one towards <a href="https://www.utilitarianism.net/types-of-utilitarianism#the-difference-between-multi-level-utilitarianism-and-rule-utilitarianism">multi-level act utilitarianism</a> (on which the right <i>decision procedure</i> is something other than constant calculation).</p><p>A better argument for RC (and the one that seems to motivate Huemer) is that it better <i>systematizes our moral intuitions</i> about cases. But I think this is <a href="https://www.philosophyetc.net/2022/01/utilitarianism-and-reflective.html">bad moral methodology</a> -- matching superficial intuitions about cases is much less important than conforming to our deeper understanding of <i>what really matters</i>. And RC is notoriously difficult to reconcile with the idea that promoting well-being (rather than blindly following rules) is <i>what matters</i>.</p><p><span></span></p><a name='more'></a>Perhaps the most principled argument for RC stems from the contractualist ideal of acting on principles that are systematically <i>justifiable to others</i>. Parfit's project in <i>On What Matters</i> was to argue that such contractualist foundations should lead one to Rule Consequentialism. But as I argue in chapter 5 of <i><a href="https://philpapers.org/rec/CHAPE-5">Parfit's Ethics</a></i>, it's obscure why we should want the <i>rules</i> we act upon, rather than simply our <i>acts</i> themselves, to be justifiable to others:<p></p><p></p><blockquote>[T]he
mere fact that the best <i>uniform </i>(or universal) principles recommend an act does not mean that this <i>specific</i> act is any good—the
principles’ benefits may stem from other cases. This prompts a
couple of deep challenges to Parfit’s rule-based approach: (i) When
an optimal act is ruled out by optimal principles, why prioritize
the principles—why should acting optimally ever be considered
“unjustifiable”? (ii) Different people might do better to be guided
by different principles—so, even on a rule- or principle-based
approach, why require uniformity?</blockquote><p></p><p>So I'm dubious of the putative reasons to favour RC in the first place. Moreover, it seems to me that RC is subject to powerful objections.</p><p>(1) It's subject to <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#the-poverty-of-the-alternatives">all the standard objections</a> to views that aren't <i>fundamentally</i> consequentialist: (i) it gives bad (rule-fetishizing) answers to the question of what fundamentally matters; (ii) it implies that benevolent spectators should often <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#the-hope-objection">hope that (fully-informed) agents act wrongly</a>; (iii) it's subject to the paradoxes of deontology, both <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#the-paradox-of-deontology">old</a> and <a href="https://www.philosophyetc.net/2021/07/the-cost-of-constraints.html">new</a>.</p><p>(2) More distinctively, RC (at least as standardly formulated) has absurd implications in any scenario where the optimific rules were good <i>to accept</i> but not good <i>to act upon</i>.<br /><br />For example, an evil demon could threaten to torture us all unless we come to accept & approve of torturing puppies. (Crucially, the actual act of torturing puppies does not achieve any good whatsoever in this scenario; the belief is enough.) Obviously, one should not torture puppies in this case -- there isn't even the slightest reason to do so.</p><p>This is very different from putative counterexamples to act consequentialism, where one might feel that the act "seems wrong", but you can at least see how there are weighty reasons counting in its favour (e.g. saving more lives!). In this case, what we're able to show is that the Rule Consequentialist's assumed link between reasons for accepting a moral code and reasons for acting upon it is fallacious. There's just <i>no essential connection </i>there. 
But that's the basis for the whole theory.</p><p>Could RC be saved by reformulating it in terms of rules that are good <i>just in virtue of the value of the acts that they lead to</i>? I don't recall seeing anyone else formulate the view this way, but it does seem an essential move in order to address this (otherwise decisive) objection. The resulting view starts to look increasingly ad hoc, however -- once you've gone this far, why not simply accept the multi-level act utilitarian view that the rules are mere rules of thumb, rather than in-principle determinants of rightness or normative reasons for actions?</p><p>(3) As <a href="https://philpapers.org/rec/PODWIB-2">Podgorski</a> argues, RC is subject to the "distant world" objection, as it "determines what we ought to do by evaluating worlds that differ from ours in more than what is up to us." It seems that this will inevitably lead to clearly bad recommendations in special cases (such as Podgorski's "duds").</p><p>(<a href="https://philpapers.org/rec/PERSTI-4">Caleb Perl claims to "solve" this</a> by jettisoning counterfactual evaluation in favour of the "consilience" principle that "the moral value of
a rule R is everything actual that’s agent-neutrally
good or bad to the extent it’s caused by actions that
R classifies as morally right." But such a blinkered form of evaluation will surely be subject to even more egregious counterexamples. E.g. suppose that R permits both good and extremely bad acts, but we're in a world where people have only performed the good acts. We shouldn't conclude from this that R is a good rule, or that its non-actual (extremely bad!) instances are permissible.)</p><p>(4) RC is <a href="https://www.philosophyetc.net/2009/09/reasons-and-rule-consequentialism.html">a structural mess</a>. As I explain in my (2012) '<a href="https://philpapers.org/rec/CHAFAF">Fittingness</a>' paper:</p><p></p><blockquote>Rule consequentialists first identify the rules that are best in terms of
impartial welfare (or what’s antecedently desirable), and then specify that
we have decisive reasons to act in accordance with these rules. Finally, they
might add, we have overriding reasons to desire that we so act. This way,
a prohibited act may be ‘best’ according to the antecedent (agent-neutral
welfarist) reasons for desire, and yet be bad (undesirable) all things considered. This avoids the incoherence [of preferring to act wrongly]. But it also brings
out how convoluted the view really is. It is recognizably consequentialist in
the sense that it takes (<i>some</i>) reasons for desire as fundamental, and subsequently derives an account of reasons for action. But then it goes back and
“fills in” further reasons for desire — trumping the original axiology — to
make sure that they fit the account of right action. In this sense it exhibits
a deontological streak: reasons for action are at least <i>partly </i>prior to reasons
for desire. In other words, the initial axiology includes only some values
(the ‘non-moral’, agent-neutral welfarist ones), and what’s <i>right </i>serves to
determine the remaining (‘post-moral’, all things considered) good.</blockquote><p></p><p>I don't have a further argument against accepting a moral theory with this structure. It's not strictly incoherent or anything. I just think it's unappealing once brought to light, especially when the view lacks significant compensating advantages. (I think this also brings out why we might reasonably regard RC as <i>not really consequentialist</i>, despite its name.)</p>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com10tag:blogger.com,1999:blog-6642011.post-17016343773107935112022-02-10T18:56:00.006-05:002022-02-10T19:03:01.049-05:00Guest Post: Animal Population EthicsEvan Dawson-Baglien wrote to me with some interesting thoughts on the challenge of incorporating non-persons into (non-total views of) population ethics. I asked him if he'd be willing to compose and share his thoughts as a guest post, and he generously agreed. Here's the result. Enjoy!<span><a name='more'></a></span><div><p class="MsoNormal">* * *</p><p class="MsoNormal">Most people have a very strong moral intuition that people are
not replaceable. This intuition is usually grounded in our unique personal
identities and future-oriented desires. Animals, which likely lack these features, are often regarded as more
replaceable than human beings. Richard succinctly states this idea in his post
“<a href="https://www.philosophyetc.net/2006/03/when-death-doesnt-harm-you.html">When
Death Doesn’t Harm You</a>”:</p><blockquote><p class="MsoNormal">It is good to have
chicken-pleasure in the world. But it doesn't much matter which chickens have
it. If some die and are replaced by others which go on to have just as pleasant
an experience, this change makes no moral difference.</p><p class="MsoNormal">People are not replaceable in this
sense, due to their persisting identities and future-regarding desires… So when
one person dies and is "replaced" with another, something is lost
that has no analogue in cases of chicken-replacement.</p></blockquote>
<p class="MsoNormal">This argument is consequentialist, not deontological. <span style="mso-spacerun: yes;"> </span>Someone’s “replacement” is bad whether it
happens naturally or through human agency. <span style="mso-spacerun: yes;"> </span>It is not infinitely bad, it may be outweighed
by other factors. It seems wrong, for example, to sterilize humanity to extend
one person’s life by one day. Rather, the argument is that something important
has been lost in the replacement, and <a href="https://www.philosophyetc.net/2008/05/question-of-conservatism-is-value.html">appropriate
disvalue must be assigned to that</a> loss.</p>
<p class="MsoNormal">This argument is persuasive, but is difficult to convert into
population ethics. I will explore ways to modify theories of population ethics
to account for the central moral intuition of nonreplaceability. I will use terms
from population ethics; <a href="https://www.utilitarianism.net/population-ethics">Richard’s introduction
to the subject</a> defines them in detail.</p>
<p class="MsoListParagraphCxSpFirst" style="mso-list: l0 level1 lfo1; text-indent: -0.25in;"><!--[if !supportLists]--><span style="mso-bidi-font-family: Calibri; mso-bidi-theme-font: minor-latin;"><span style="mso-list: Ignore;">1.<span style="font: 7pt "Times New Roman";">
</span></span></span><!--[endif]--><a href="https://www.utilitarianism.net/population-ethics#critical-level-and-critical-range-theories"><b style="mso-bidi-font-weight: normal;">Standard Critical Range Utilitarianism</b></a>.
Of the existing theories, this one accommodates intuitions about replaceability
the best. In order for the “mere
addition” of a new person to be good, their level of welfare must significantly exceed
the “barely worth living” level. This means that someone with welfare level X
is not morally equivalent to two people with welfare level 0.5X.</p>
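<p class="MsoListParagraphCxSpMiddle">(A simplified way to spell out that arithmetic, using a single critical level <i>c</i> &gt; 0 in place of the full range of “meh” values:)</p>

```latex
% Contributive value of adding one person at welfare level w,
% given a single critical level c > 0 (critical-range views
% replace c with an interval of "meh" values):
V(w) = w - c
% One person at welfare X versus two people at welfare X/2:
V(X) = X - c, \qquad 2\,V\!\left(\tfrac{X}{2}\right) = X - 2c
% Since X - 2c < X - c whenever c > 0, the single person at
% welfare X contributes strictly more value than the pair.
```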
<p class="MsoListParagraphCxSpMiddle">However, this theory would likely treat most
animal life on Earth as “meh.” If lower animals are disconnected “moments” of
consciousness, it is unlikely that any “moments” will surpass the critical
range. If the lower bound of the range is set near zero, which should be done
anyway to avoid the Sadistic Conclusion, the existence of animals at least doesn’t
generate a counterintuitively large amount of disvalue. However, this theory implies
that a world of happy lower animals is “on a par” with an empty one. Can we do
better?</p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo1; text-indent: -0.25in;"><!--[if !supportLists]--><span style="mso-bidi-font-family: Calibri; mso-bidi-theme-font: minor-latin;"><span style="mso-list: Ignore;">2.<span style="font: 7pt "Times New Roman";"> </span></span></span><b style="mso-bidi-font-weight: normal;">Critical
Range Utilitarianism with Differing Critical Ranges</b>: When <a href="https://www.nybooks.com/articles/1980/08/14/right-to-life/">discussing
replaceability</a>, Peter Singer considered treating preferences as “debts”
that are “paid” when they are satisfied. This framework makes people nonreplaceable,
but implausibly suggests that a person’s existence can never be better than
neutral. However, it can be improved.
Instead of treating the creation of people/preferences as “debts,” we could
treat them as “investments.”</p>
<p class="MsoListParagraphCxSpLast">Creating a new person/preference creates an
initial “debt,” but that debt can “pay off” later by generating greater value
when the person has a flourishing life/the preference is satisfied. Creatures with persisting
identities/future-oriented preferences such as humans produce the largest
initial “debt,” but also the largest “payoff” later. This means that replacement of people generates
twice as much “debt” for the same “payoff.”
Creating simpler creatures with simpler desires generates much less
“debt,” making them more “replaceable.” But it also produces a much smaller
“payoff” of value, so humans cannot easily be replaced by animals either.</p><p class="MsoListParagraphCxSpLast">This framework easily translates
into <a href="https://www.utilitarianism.net/population-ethics#critical-level-and-critical-range-theories">critical
level/range utilitarianism</a>. The
“debt” is the critical level or the upper bound of the critical range. The “value
blur” in the critical range could be modeled as uncertainty/pluralism about the
size of the “debt,” or tension <a href="https://www.philosophyetc.net/2009/08/acquired-non-instrumental-value.html">between
respecting value versus promoting it</a>. This reflects moral intuitions about
replaceability without declaring all of animal life to be “meh,” but has a few
problems. It may suggest that the Repugnant Conclusion holds true for some
animals. It gives initially
counterintuitive results when comparing populations with different critical
levels. Most seriously, it might overvalue creating animals over creating people.
Likely, people generate sufficiently more value than animals once they surpass
the “critical level” that this isn’t an issue. But perhaps we can do even better:</p>
<p class="MsoListParagraphCxSpFirst" style="mso-list: l0 level1 lfo1; text-indent: -0.25in;"><!--[if !supportLists]--><span style="mso-bidi-font-family: Calibri; mso-bidi-theme-font: minor-latin;"><span style="mso-list: Ignore;">3.<span style="font: 7pt "Times New Roman";">
</span></span></span><!--[endif]--><b style="mso-bidi-font-weight: normal;">Critical
Range Utilitarianism with Fractional Critical Ranges:</b> <span style="mso-spacerun: yes;"> </span>This theory refines the previous one. If
animals have no individual identity, we could assign critical levels to
portions of their psyche. For example, we could assign a critical level to “satisfaction
of animal desires” or some other identity-less measure of animal flourishing.<span style="mso-spacerun: yes;"> </span>While each “unit” of animal welfare would
have a lower critical range than a person, a lifetime’s worth of them might
have a similar aggregate range. If calibrated correctly this could alleviate
both the Animal Repugnant Conclusion and the problem of undervaluing humans.</p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo1; text-indent: -0.25in;"><!--[if !supportLists]--><span style="mso-bidi-font-family: Calibri; mso-bidi-theme-font: minor-latin;"><span style="mso-list: Ignore;">4.<span style="font: 7pt "Times New Roman";">
</span></span></span><!--[endif]--><b style="mso-bidi-font-weight: normal;">Average
Utilitarianism with Fractional Denominators: </b>Average utilitarianism could
count creatures without personal identities as fractions of lives. When adding
them to the denominator to determine the average, it could count them as “0.1,”
“0.0001,” or some other number instead of “1.”<span style="mso-spacerun: yes;"> </span></p>
<p class="MsoListParagraphCxSpMiddle" style="mso-list: l0 level1 lfo1; text-indent: -0.25in;"><!--[if !supportLists]--><span style="mso-bidi-font-family: Calibri; mso-bidi-theme-font: minor-latin;"><span style="mso-list: Ignore;">5.<span style="font: 7pt "Times New Roman";">
</span></span></span><!--[endif]--><b style="mso-bidi-font-weight: normal;">Fractional
Variable Value Utilitarianism:<span style="mso-spacerun: yes;"> </span></b>Variable
value utilitarianism acts like total utilitarianism for small populations and
like average utilitarianism for large ones.<span style="mso-spacerun: yes;">
</span>It could be modified to act like one of the critical range theories
discussed earlier when the population is small, and like average utilitarianism
with fractional denominators when it is large.</p>
<p class="MsoListParagraphCxSpLast" style="mso-list: l0 level1 lfo1; text-indent: -0.25in;"><!--[if !supportLists]--><span style="mso-bidi-font-family: Calibri; mso-bidi-theme-font: minor-latin;"><span style="mso-list: Ignore;">6.<span style="font: 7pt "Times New Roman";">
</span></span></span><!--[endif]--><b style="mso-bidi-font-weight: normal;">Standard Critical
Range Utilitarianism plus a nonwelfarist value of “Humans Living with Nature:” </b>This
theory assigns some nonwelfarist value to preserving the natural world that, under
some circumstances, would overcome the “value blur.” If this value includes people
and animals happily coexisting, we could assign positive value to animal lives
without concluding that a huge world of only happy animals is better than one
that contains both animals and people.<span style="mso-spacerun: yes;">
</span>This fits common moral intuitions about the value of nature. Most people
want to preserve biodiversity, not just the species most capable of happiness.</p>
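<p class="MsoNormal">To make the bookkeeping in options 2 and 4 concrete, here is a toy calculation. This is purely an illustrative sketch: the welfare figures, the critical levels, and the 0.1 animal weight are invented assumptions for the sake of the example, not numbers proposed in the post.</p>

```python
# Toy aggregation rules for the theories sketched above.
# All numbers (welfare levels, critical levels, the 0.1 animal weight)
# are illustrative assumptions, not figures from the post.

def total_value(welfares):
    # Total utilitarianism: value is the simple sum of welfare.
    return sum(welfares)

def critical_level_value(welfares, level):
    # Option 2: each life starts with a "debt" equal to its critical level.
    return sum(w - level for w in welfares)

def fractional_average(person_welfares, animal_welfares, animal_weight=0.1):
    # Option 4: animals count as a fraction of a life in the denominator.
    total = sum(person_welfares) + sum(animal_welfares)
    lives = len(person_welfares) + animal_weight * len(animal_welfares)
    return total / lives

people = [10.0] * 5     # five people at welfare 10 (hypothetical)
animals = [1.0] * 100   # a hundred animals at welfare 1 (hypothetical)

print(total_value(people + animals))               # 150.0
print(critical_level_value(people, level=5.0)
      + critical_level_value(animals, level=0.2))  # ≈ 105 (25 + 80)
print(fractional_average(people, animals))         # ≈ 10 (150 / 15)
```

<p class="MsoNormal">On these made-up numbers, the critical-level rule docks each life its “debt” before summing, while the fractional denominator keeps the hundred animals from counting as a hundred full lives in the average.</p>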
<p class="MsoNormal">Each of these ideas could be expanded on or modified
further.<span style="mso-spacerun: yes;"> </span>However, they demonstrate that
it is possible to incorporate moral intuitions about replaceability into the
field of population ethics. <span style="mso-spacerun: yes;"> </span><o:p></o:p></p><p class="MsoNormal"><i>- Evan Dawson-Baglien</i></p></div>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com0tag:blogger.com,1999:blog-6642011.post-79290138199875372752022-01-22T15:16:00.002-05:002022-01-22T15:16:53.568-05:00Utilitarianism and Reflective Equilibrium<p>In '<a href="http://fakenous.net/?p=2757">Why I Am Not a Utilitarian</a>', Michael Huemer objects that "there are so many counter-examples, and the intuitions about these examples are strong and widespread, it’s hard to see how <a href="https://www.utilitarianism.net/">utilitarianism</a> could be justified overall." But I think it's actually much easier to bring utilitarianism (or <a href="https://www.philosophyetc.net/2021/03/three-dogmas-of-utilitarianism.html">something close to it</a>) into reflective equilibrium with common sense intuitions than it would be for any competing deontological view. That's because I think the clash between utilitarianism and intuition is <i>shallow</i>, whereas the intuitive problems with non-consequentialism are <i>deep</i> and irresolvable.</p><p>To fully make this case would probably require a book or three. But let's see how far I can get sketching the rough case in a mere blog post.<span></span></p><a name='more'></a><p></p><p>Firstly, and most importantly, the <a href="https://www.philosophyetc.net/2012/08/counterexamples-to-consequentialism.html">standard counterexamples</a> to utilitarianism only work if you think our intuitive responses exclusively concern 'wrongness' and not closely related moral properties like <i>viciousness</i> or <i>moral recklessness</i>:</p><blockquote>They generally start by describing a harmful act, done for purpose of some greater immediate benefit, but that we would normally expect to have further bad effects in the long term (esp. the erosion of trust in vital social institutions). 
The case then stipulates that the immediate goal is indeed obtained, with none of the long-run consequences that we would expect. In other words, this typically disastrous act type happened, in this particular instance, to work out for the best. So, the argument goes, Consequentialism must endorse it, but doesn't that typically-disastrous act type just seem clearly <i>wrong</i>? (The organ harvesting case is perhaps the paradigm in this style.)<br /><br />To that objection, the appropriate response seems to me to be something like this: (1) You've described a morally reckless agent, who was almost certainly <i>not warranted</i> in thinking that their particular performance of a typically-disastrous act would avoid being disastrous. Consequentialists can certainly criticize that. (2) If we imagine that somehow the voice of God reassured the agent that no-one would ever find out, so no long-run harm would be done, then that changes matters. There's a big difference between your typical case of "harvesting organs from the innocent" and the particular case of "harvesting organs from the innocent when you have 100% reliable testimony that this will save the most innocent lives on net, and have no unintended long-run consequences." The <a href="https://www.philosophyetc.net/2012/04/singers-pond-and-quality-of-will.html">salience</a> of the harm done to the first innocent still makes it a bitter pill to swallow. 
But when one carefully reflects on the whole situation, vividly imagining the lives of the five innocents who would otherwise die, and cautioning oneself against any <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#status-quo-bias">unjustifiable status-quo bias</a>, then I ultimately find I have no trouble at all endorsing this particular action, in this very unusual situation.</blockquote><div><br /></div><div>Utilitarianism clearly endorses our being strongly reluctant to murder innocent people (and <a href="https://www.utilitarianism.net/utilitarianism-and-practical-ethics#respecting-commonsense-moral-norms">respecting commonsense moral norms</a> more generally). While it's possible to imagine hypothetical cases in which an agent ought (by utilitarian lights) to override this general disposition, it's an open question what lesson we should draw from our intuitive resistance to such overriding. If someone insists that they not only endorse the utilitarian-compatible claims in this vicinity, but <i>additionally</i> judge that the act itself "clearly" ought not to be done (even in the "100% reliable" version of the case), then I'll grant that <i>they</i> find utilitarianism counterintuitive in this respect. But then the question still remains whether they might find further implications of deontology to be even <i>more</i> counterintuitive.</div><div><br /></div><div>Consider <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#the-poverty-of-the-alternatives">the poverty of the alternatives</a>:</div><div><br /></div><div>* <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#status-quo-bias">Deontology prioritizes those who are privileged by default</a>; but this violates the strong theoretical intuition that status quo privilege is morally arbitrary. 
(Why should the five have to die rather than the one, just because organ failure happened to occur in their bodies rather than his?)</div><div><br /></div><div>* It rests on a distinction between doing and allowing that <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#skepticism-about-the-doing-vs-allowing-distinction">doesn't seem capable of carrying the weight</a> that deontologists place upon it. </div><div><br /></div><div>* <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#the-hope-objection">It implies that we should often hope/prefer that others act wrongly</a>: since, after all, impartial observers should want and hope for the best outcome.</div><div><br /></div><div>* Worse, according to my <a href="https://www.dropbox.com/s/dxv8vusnf6228aw/Chappell-NewParadoxDeontology.pdf?dl=0">new paradox of deontology</a>, deontic constraints are self-undermining in the strong sense of being incompatible with taking their violations (e.g. the killing of an innocent person) to be <a href="https://www.philosophyetc.net/2021/07/the-cost-of-constraints.html">particularly <i>important</i></a>.</div><div><br /></div><div>* Most importantly, deontology makes incredible claims about <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#what-fundamentally-matters">what fundamentally matters</a>. It seems completely wild to claim that keeping a deathbed promise (to borrow one of Huemer's examples) is seriously <i>more important</i>, in principle, than the <i>entire lives</i> of many innocent people. 
So either deontologists are stuck making completely wild claims of this sort, or their normative prescriptions (concerning what we allegedly ought to do) bear no relation to <i>what really matters</i>.</div><div><br /></div><div>Now, I think our deepest intuitions about <i><a href="https://www.philosophyetc.net/2021/12/consequentialisms-central-concept.html">what really matters</a></i> are much more methodologically significant, and should play a greater role in determining our ethical theory, than superficial verdicts about the extension of the word 'wrong' in various highly-specified cases. So that's why I think (something close to) utilitarianism is actually the <i>most</i> intuitive moral theory.</div>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com10tag:blogger.com,1999:blog-6642011.post-80261975600336044042022-01-19T14:34:00.003-05:002022-01-19T14:34:10.390-05:00Emergence and Incremental Probability<p>In '<a href="Emergence and Incremental Impact">Emergence and Incremental Impact</a>', I argued (contra <a href="https://philpapers.org/rec/KINWWW">Kingston and Sinnott-Armstrong</a>) that emergent properties do nothing to undermine the basic case for individual impact: they're just another kind of threshold case, and thresholds are compatible with difference-making increments.</p><p>In that old post, I assumed counterfactual determinacy to make the case for there being some precise increment(s) that make a difference whenever a collection of increments together does. But while revising <a href="https://www.dropbox.com/s/lh0fn7qj4kuaxid/Chappell-CollectiveHarm.pdf?dl=0">my paper on collective harm</a>, it occurred to me that the case becomes much more clear-cut when made in terms of probabilities.<span></span></p><a name='more'></a><p></p><p>Consider. 
Kingston & Sinnott-Armstrong object (p.179):</p><p></p><blockquote>[T]he expected disvalue approach requires that the probability of dangerous events can themselves be increased (minutely) by the addition of relatively tiny emissions. But why should we assume this? ... Emergence affects probability as it does other properties. While adding oil to an engine reduces the probability of a moving part failing, it is implausible that adding a molecule of oil reduces that probability of failure by 1/10^25.</blockquote><p></p><p style="margin: 0px;">Why is this implausible? Suppose that adding a large drop of oil containing 10^23 molecules would reduce the probability of engine failure by at least 1/100. Now consider the sequence of possible futures M[<i>n</i>] that consist in adding precisely <i>n</i> molecules of oil to the engine. By our initial supposition, the probability of engine failure in M[10^23] is at least 1/100 less than in M[0]. But then it's <b>logically impossible</b> to assign probabilities of engine failure to each intermediate state in the sequence without some of those values in adjacent states differing by at least 1/10^25. ...</p><p style="margin: 0px;"><br /></p><p style="margin: 0px;">Of course, it may well be that adding <i>only the first</i> molecule of oil would indeed have a much lower than average chance of making a difference. But even if so, this merely ensures that some <i>other</i> increments--namely, those in the threshold vicinity--have a correspondingly higher chance of making a difference. This is the familiar structure of expected-value reasoning in threshold cases. As previously argued [in my paper], if we've no idea where the thresholds lie, or no special reason to expect ourselves to be disproportionately likely to be distant from them, then the mere existence of such thresholds makes no difference to the expected value of our contribution: it remains equal to the average value of many such contributions. 
Nothing about emergent properties changes this basic reasoning. But it does help to emphasize a crucial dialectical point, that the important question is not whether a single increment <i>in isolation</i> makes a difference (it need not), but rather whether some increment <i>in context</i> does so (that is, given how many previous increments have already been made).</p>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com0tag:blogger.com,1999:blog-6642011.post-71373765483712692062022-01-06T11:25:00.002-05:002022-01-06T19:00:49.550-05:00Longtermism Contra Schwitzgebel<p>In '<a href="https://schwitzsplinters.blogspot.com/2022/01/against-longtermism.html">Against Longtermism</a>', Eric Schwitzgebel writes: "I accept much of Ord's practical advice. I object only to justifying this caution by appeal to expectations about events a million years from now." He offers four objections, which are interesting and well worth considering, but I think ultimately unpersuasive. Let's consider them in turn.<span></span></p><a name='more'></a><p></p><p><b>(1) There's no chance humanity will survive long-term:</b></p><blockquote>All or most or at least many future generations with technological capabilities matching or exceeding our own will face substantial existential risk -- perhaps 1/100 per century or more. If so, that risk will eventually catch up with us. Humanity can't survive existential risks of 1/100 per century for a million years.<br />If this reasoning is correct, it's very unlikely that there will be a million-plus year future for humanity that is worth worrying about and sacrificing for.</blockquote><div><br /></div><div>This seems excessively pessimistic. Granted, there's certainly <i>some</i> risk that we will never acquire resilience against x-risk. But it's hardly <i>certain</i>. Two possible routes to resilience include: (i) fragmentation, e.g. 
via interstellar diaspora, so that different pockets of humanity could be expected to escape any given threat; or (ii) universal surveillance and control, e.g. via a "friendly AI" with effectively god-like powers relative to humans, to prevent us from doing grave harm.</div><div><br /></div><div>Maybe there are other possibilities. At any rate, I think it's clear that we should not be too quick to dismiss the possibility of long-term survival for our species. (And note that <i>any</i> non-trivial probability is enough to get the astronomical expected-value arguments off the ground.)</div><div><br /></div><div><b>(2) "The future is hard to see."</b> This is certainly true, but doesn't undermine expected value reasoning.</div><div><br /></div><div>Schwitzgebel writes:</div><div><blockquote>It could be that the single best thing we could do to reduce the risk of completely destroying humanity in the next two hundred years is to <i>almost </i>destroy humanity right now... that might postpone our ability to develop even more destructive technologies in the next century. It might also teach us a fearsome lesson about existential risk....</blockquote><blockquote>What we do know is that nuclear war would be terrible for us, for our children, and for our grandchildren. That's reason enough to avoid it. Tossing speculations about the million-year future into the decision-theoretic mix risks messing up that straightforward reasoning. </blockquote><p>But that <i>isn't</i> really "reason enough to avoid it", because if Schwitzgebel were right that immediate nuclear war was the only way to save humanity, that would obviously change its moral valence. It would be horribly immoral to let humanity go extinct just because saving it would be "terrible for us". When interests conflict, you can't just ignore the overwhelming bulk of them for the sake of maintaining "straightforward reasoning". 
(I'm sure confederate slaveowners regarded the abolition of slavery as "terrible for us, for our children, and for our grandchildren," but it was morally imperative all the same!)</p><p>Of course, I don't really think it's remotely credible that nuclear war has positive expected value in the way that Schwitzgebel speculates. The hope that it "might" teach us a lesson seems far-fetched compared to the more obvious risks of permanently thwarting advanced civilization. (We're not even investing seriously in future pandemic prevention! If we can't learn from the past two years, I'm not confident that a rebuilt civilization centuries or millennia hence would learn anything from tragedies in its distant history. And again, there are serious risks that civilization would never fully rebuild.)</p><p>So I think longtermism remains practically significant for raising the moral stakes of existential risk reduction. However important you think it is to avoid nuclear war, it's much <i>more</i> important once you take the long term into account (assuming you share my empirical beliefs about its expected harmfulness). It also suggests that there's immense expected value to <i>research</i> that would allow us to form better-grounded beliefs about such matters. We shouldn't just pre-emptively ignore them, as Schwitzgebel seemingly recommends. If it's remotely possible that we might find a way to reliably shape the far-future trajectory in a positive direction, it's obviously important to find this out!</p><p><b>(3) "Third, it's reasonable to care much more about the near future than the distant future." </b>Schwitzgebel stresses that this concern can be relational in form (tied to particular individuals or societies and their descendants), which avoids the problems with pure time discounting. That's an important point. 
But I don't think any reasonable degree of partiality can be so extreme as to swamp the value of the long-term future.</p><p>To see why, just imagine a Parfitian "depletion" scenario, where we imagine that the harms of global warming are delayed by two centuries. Imagine that everyone currently alive (and a couple of generations hence) could reap a bonanza by burning all the planet's fossil fuels, condemning all distant future people to difficult lives in a severely damaged world. Or they could severely limit consumption while investing significantly in renewables, lowering quality of life over these two centuries while protecting the planet for all who come in the further future. Should they choose depletion or preservation? <i>Obviously</i> preservation, right? It's clearly immoral to drastically discount future generations when the trade-offs are made this explicit.</p><b>(4) "Fourth, there's a risk that fantasizing about extremely remote consequences becomes an excuse to look past the needs and interests of the people living among us, here and now."</b></div><div><b><br /></b></div><div>It's always possible that a moral view is self-effacing, but that's <a href="https://www.philosophyetc.net/2008/11/whats-wrong-with-self-effacing-theories.html">no objection to the truth of the view</a>. Empirically speaking, the people I know to be most concerned about the far-future (i.e., effective altruists) are <i>also</i> the people who seem to do the most to help the global poor, factory-farmed animals, etc. So this fear doesn't seem empirically well-grounded.</div><div><br /></div><div>By contrast, I think there's a much more credible risk that defenders of conventional morality may use dismissive rhetoric about "grandiose fantasies" (etc.) 
to discourage other conventional thinkers from taking longtermism and existential risks as seriously as they ought, <a href="https://www.philosophyetc.net/2021/02/the-most-important-thing-in-world.html">on the merits</a>, to take them. (I don't accuse Schwitzgebel, in particular, of this. He grants that most people unduly neglect the importance of existential risk reduction. But I do find that this kind of rhetoric is troublingly common amongst critics of longtermism, and I don't think it's warranted or helpful in any way.)</div><div><br /></div><div>Of course, it's possible that enthusiasts might end up drawn towards bad bets if they exaggerate their likely efficacy on influencing the far future. But that's just more reason to think that it's really important to investigate these questions carefully, and get the empirical estimates right. It's not a reason to reject longtermism wholesale.</div>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com10tag:blogger.com,1999:blog-6642011.post-18323237830197442742021-12-31T09:50:00.008-05:002021-12-31T09:55:07.663-05:002021 in review<div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;">[Past annual reviews: <a href="https://www.philosophyetc.net/2020/12/2020-in-review.html">2020</a>, </span><a href="https://www.philosophyetc.net/2019/12/2019-and-18-in-review.html" style="background-color: white; color: #336699; text-align: justify;">2019 & '18</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2017/12/2017-in-review.html" style="background-color: white; color: #336699; text-align: justify;">2017</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2017/01/2016-in-review.html" style="background-color: white; color: #336699; text-align: 
justify;">2016</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2015/12/2015-in-review.html" style="background-color: white; color: #336699; text-align: justify;">2015</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2014/12/2014-in-review.html" style="background-color: white; color: #336699; text-align: justify;">2014</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2013/12/2013-in-review.html" style="background-color: white; color: #336699; text-align: justify;">2013</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2012/12/2012-in-review.html" style="background-color: white; color: #336699; text-align: justify;">2012</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2011/12/2011-my-web-of-beliefs.html" style="background-color: white; color: #336699; text-align: justify;">2011</a><span style="background-color: white; color: #333333; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2010/12/2010-my-web-of-beliefs.html" style="background-color: white; color: #336699; text-align: justify;">2010</a><span style="background-color: white; color: #333333; text-align: justify;">, </span></span><a href="http://www.philosophyetc.net/2009/12/2009-my-web-of-beliefs.html" style="background-color: white; color: #336699; font-family: inherit; text-align: justify;">2009</a><span style="background-color: white; color: #333333; font-family: inherit; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2008/12/2008-my-web-of-beliefs.html" style="background-color: white; color: #336699; font-family: inherit; text-align: justify;">2008</a><span style="background-color: 
white; color: #333333; font-family: inherit; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2007/12/2007-my-web-of-beliefs.html" style="background-color: white; color: #336699; font-family: inherit; text-align: justify;">2007</a><span style="background-color: white; color: #333333; font-family: inherit; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2007/01/2006-my-web-of-beliefs.html" style="background-color: white; color: #336699; font-family: inherit; text-align: justify;">2006</a><span style="background-color: white; color: #333333; font-family: inherit; text-align: justify;">, </span><a href="http://www.philosophyetc.net/2006/01/2005-my-web-of-beliefs.html" style="background-color: white; color: #336699; font-family: inherit; text-align: justify;">2005</a><span style="background-color: white; color: #333333; font-family: inherit; text-align: justify;">, and </span><a href="http://www.philosophyetc.net/2005/01/2004-my-web-of-beliefs.html" style="background-color: white; color: #336699; font-family: inherit; text-align: justify;">2004</a><span style="background-color: white; color: #333333; font-family: inherit; text-align: justify;">.]</span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><b>Off the blog:</b></span></span></div><div style="text-align: left;"><br /></div><div style="text-align: left;">The biggest development for me was joining <a href="http://utilitarianism.net">utilitarianism.net</a> as lead editor. 
I then completed their chapters on <a href="https://www.utilitarianism.net/population-ethics">population ethics</a> and <a href="https://www.utilitarianism.net/theories-of-wellbeing">theories of well-being</a>, and wrote a new chapter outlining some basic <a href="https://www.utilitarianism.net/arguments-for-utilitarianism">arguments for utilitarianism</a>. More to come soon!</div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><b><br /></b></span></span></div><div style="text-align: left;">For more traditional academic publications:</div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;">* <i><a href="https://philpapers.org/rec/CHAPE-5">Parfit's Ethics</a></i> appeared in print with Cambridge University Press. (Summary <a href="https://www.philosophyetc.net/2020/08/synopsis-of-parfits-ethics.html">here</a>.)</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;">* '<a href="https://philpapers.org/rec/CHAPEA-10">Pandemic Ethics and Status Quo Risk</a>' (summarized <a href="https://www.philosophyetc.net/2021/12/pandemic-ethics-and-status-quo-risk.html">here</a>) was accepted by <i>Public Health Ethics</i>.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;">* '<a href="https://philpapers.org/rec/CHANUM">Negative Utility Monsters</a>' was published in <i>Utilitas</i>.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; 
text-align: justify;">I'm also pretty excited about various works-in-progress that are currently under review, especially my <a href="https://www.dropbox.com/s/dxv8vusnf6228aw/Chappell-NewParadoxDeontology.pdf?dl=0">new paradox of deontology</a>...</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><b>Blog posts:</b></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><b><br /></b></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><div style="color: black; text-align: left;"><span style="font-family: inherit;"><span style="color: #333333; text-align: justify;"><i>Normative Ethics</i></span></span></div><div style="color: black; text-align: left;"><span style="font-family: inherit;"><span style="color: #333333; text-align: justify;"><i><br /></i></span></span></div><div style="color: black; text-align: left;"><span style="font-family: inherit;"><span style="color: #333333; text-align: justify;"><i>* </i><a href="https://www.philosophyetc.net/2021/07/the-cost-of-constraints.html">The Cost of Constraints</a> -- sets out the core of my "new paradox of deontology". 
Further developed in <a href="https://www.philosophyetc.net/2021/08/preferring-to-act-wrongly.html">Preferring to Act Wrongly</a>, <a href="https://www.philosophyetc.net/2021/08/why-constraints-are-agent-neutral.html">Why Constraints are Agent Neutral</a>, and <a href="https://www.philosophyetc.net/2021/09/discounting-illicit-benefits.html">Discounting Illicit Benefits</a>.</span></span></div><div style="color: black; text-align: left;"><span style="font-family: inherit;"><span style="color: #333333; text-align: justify;"><i><br /></i></span></span></div><div style="color: black; text-align: left;"><span style="font-family: inherit;"><span style="color: #333333; text-align: justify;"><i>* </i><a href="https://www.philosophyetc.net/2021/02/the-most-important-thing-in-world.html">The Most Important Thing in the World</a> -- is plausibly the trajectory of the long-term future.</span></span></div><div style="color: black; text-align: left;"><span style="font-family: inherit;"><span style="color: #333333; text-align: justify;"><br /></span></span></div><div style="color: black; text-align: left;"><span style="font-family: inherit;"><span style="color: #333333; text-align: justify;">* <a href="https://www.philosophyetc.net/2021/06/the-paralysis-of-deontology.html">The Paralysis of Deontology</a></span></span></div><div style="color: black; text-align: left;"><span style="font-family: inherit;"><span style="color: #333333; text-align: justify;"><br /></span></span></div><div style="color: black; text-align: left;"><span style="font-family: inherit;"><span style="color: #333333; text-align: justify;">* <a href="https://www.philosophyetc.net/2021/03/three-dogmas-of-utilitarianism.html">Three Dogmas of Utilitarianism</a> -- (i)</span></span> Confusing <i>value </i>with <i>what's valuable; </i>(ii) Neglecting fittingness; and (iii) Treating all interests as innocent.</div><div style="color: black; text-align: left;"><br /></div><div style="color: black; text-align: left;">* 
<a href="https://www.philosophyetc.net/2021/09/agency-as-force-for-good.html">Agency as a Force for Good</a> -- and the appeal of consequentialism.</div><div style="color: black; text-align: left;"><br /></div><div style="color: black; text-align: left;">* <a href="https://www.philosophyetc.net/2021/03/learning-from-lucifer.html">Learning from Lucifer</a> -- If Satan would be a consequentialist, should the good guys be likewise (just, you know, with better goals)? Or is there a deeper asymmetry between right and wrong?</div><div style="color: black; text-align: left;"><br /></div><div style="color: black; text-align: left;">* <a href="https://www.philosophyetc.net/2021/09/tendentious-terminology-in-ethics.html">Tendentious Terminology in Ethics</a> -- against common uses of "mere means" and "separateness of persons" talk.</div><div style="color: black; text-align: left;"><br /></div><div style="color: black; text-align: left;">* <a href="https://www.philosophyetc.net/2021/03/is-effective-altruism-inherently.html">Is Effective Altruism Inherently Utilitarian?</a> I suggest not. There's a weaker normative principle in the vicinity, potentially shareable by any other sensible view, which should be difficult to deny. 
In a later post, I call this: <a href="https://www.philosophyetc.net/2021/12/beneficentrism.html">Beneficentrism</a>: The view that <i>promoting the general welfare is deeply important</i>.</div><div style="color: black; text-align: left;"><br /></div><div style="color: black; text-align: left;">* <a href="https://www.philosophyetc.net/2021/12/consequentialisms-central-concept.html">Consequentialism's Central Concept</a> may be <i>importance</i> rather than <i>rightness</i>.</div></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i><br /></i></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i>* </i><a href="https://www.philosophyetc.net/2021/04/whats-at-stake-in-objectivesubjective.html">What's at Stake in the Objective/Subjective Wrongness Debate?</a> Seems terminological. 
Appeal to "what a morally conscientious agent would be concerned about" doesn't help, because (as my <i>Moral Stunting Objection</i> shows) a morally conscientious agent wouldn't be concerned about right or wrong <i>per se</i>.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><i>Welfare and Population Ethics</i></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: justify;"><span style="color: #333333;"><span style="background-color: white;">* Is <a href="https://www.philosophyetc.net/2021/06/conscientious-sadism.html">Conscientious Sadism</a> still bad?</span></span></div><div style="text-align: justify;"><span style="color: #333333;"><span style="background-color: white;"><br /></span></span></div><div style="text-align: justify;"><span style="color: #333333;"><span style="background-color: white;">* <a href="https://www.philosophyetc.net/2021/07/is-objective-list-theory-spooky.html">Is Objective List Theory "Spooky"?</a></span></span></div><div style="text-align: justify;"><span style="color: #333333;"><br /></span></div><div style="text-align: justify;"><span style="color: #333333;">* <a href="https://www.philosophyetc.net/2021/08/parsimony-in-theories-of-welfare.html">Parsimony in Theories of Welfare</a> -- is it really a relevant consideration at all?</span></div><div style="text-align: justify;"><span style="color: #333333;"><span style="background-color: white;"><br /></span></span></div><div style="text-align: justify;"><span style="color: #333333;"><span style="background-color: 
white;">* <a href="https://www.philosophyetc.net/2021/07/the-limits-of-defective-character.html">The Limits of Defective Character Solutions</a> -- and why they don't help with the non-identity problem.</span></span></div><div style="text-align: justify;"><span style="color: #333333;"><span style="background-color: white;"><br /></span></span></div><div style="text-align: justify;"><span style="background-color: white; text-align: left;">*</span><span style="background-color: white; text-align: left;"> </span><a href="https://www.philosophyetc.net/2021/03/stable-actualism-and-asymmetries-of.html" style="background-color: white; text-align: left;">Stable Actualism and Asymmetries of Regret</a><span style="background-color: white; text-align: left;"> -- actualist partiality is defensible once you subtract the possibility of elusive permissions.</span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i>Pandemic Ethics</i></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i><br /></i></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i>* </i><a href="https://www.philosophyetc.net/2021/01/lessons-from-pandemic.html">Lessons from the Pandemic</a>: blocking innovation is bad.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; 
text-align: justify;"><i><br /></i></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i>* </i><a href="https://www.philosophyetc.net/2021/01/the-risk-of-excessive-conservatism.html">The Risk of Excessive Conservatism</a>. See also <a href="https://www.philosophyetc.net/2021/08/pandemic-paralysis.html">Pandemic Paralysis</a> and <a href="https://www.philosophyetc.net/2021/09/jcvi-endorses-status-quo-bias.html">JCVI endorses Status Quo Bias</a>.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><br /></span></div><div style="text-align: left;"><span style="font-family: inherit;">* <a href="https://www.philosophyetc.net/2021/01/epistemic-calibration-bias-and-blame.html">Epistemic Calibration Bias and Blame Aversion</a> -- we're often too scared of being wrong, and not sufficiently attuned to the risks of <i>failing to be right </i>(e.g. by instead remaining non-committal) when it matters.</span></div><div style="text-align: left;"><span style="font-family: inherit;"><br /></span></div><div style="text-align: left;"><span style="font-family: inherit;">* <a href="https://www.philosophyetc.net/2021/01/theres-no-such-thing-as-following.html">There's No Such Thing as "Following the Science"</a> -- normative principles are needed to bridge the is/ought gap. 
Better slogan: <a href="https://www.philosophyetc.net/2021/04/follow-decision-theory.html">Follow Decision Theory</a>!</span></div><div style="text-align: left;"><span style="font-family: inherit;"><br /></span></div><div style="text-align: left;"><span style="font-family: inherit;">* <a href="https://www.philosophyetc.net/2021/03/appeasing-anti-vaxxers.html">Appeasing Anti-Vaxxers</a> -- and why it's wrong.</span></div><div style="text-align: left;"><br /></div><div style="text-align: left;"><span style="font-family: inherit;">* <a href="https://www.philosophyetc.net/2021/08/the-ethics-of-off-label-vaccinations.html">The Ethics of Off-Label Vaccinations for Kids</a></span></div><div style="text-align: left;"><span style="font-family: inherit;"><br /></span></div><div style="text-align: left;"><span style="font-family: inherit;">* <a href="https://www.philosophyetc.net/2021/04/imagining-alternative-pandemic-response.html">Imagining an Alternative Pandemic Response</a> -- with vaccine challenge trials, targeted immunity via variolation, and immunity passports to spare many (e.g. 
healthy young people) from lockdowns.</span></div><div style="text-align: left;"><br /></div><div style="text-align: left;">* <a href="https://www.philosophyetc.net/2021/11/the-indefensibility-of-post-vaccine.html">The Indefensibility of Post-Vaccine Lockdowns</a></div><div style="text-align: left;"><br /></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i><br /></i></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i>Applied Ethics</i></span></span></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">* <a href="https://www.philosophyetc.net/2021/09/companies-cities-and-carbon.html">Companies, Cities, and Carbon</a> -- blaming large corporations for proportionately large carbon emissions makes no more sense than blaming large cities. </div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><span style="background-color: white; color: #333333;">* </span><a href="https://www.philosophyetc.net/2021/05/five-fallacies-of-collective-harm.html">Five Fallacies of Collective Harm</a><span style="background-color: white; color: #333333;"> -- Critiquing the five main reasons why people falsely believe that collective difference-making doesn't require individual difference-making.</span></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">* <a href="https://www.philosophyetc.net/2021/02/the-absurdity-of-undue-inducement.html">The Absurdity of "Undue Inducement"</a> argues that there's no in-principle basis for objecting to monetary incentives to (e.g.) research participants. 
If concerned that an offer might be exploitative, the solution is to pay <i>more</i>, not less.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">* <a href="https://www.philosophyetc.net/2021/02/against-anti-beneficent-paternalism.html">Against Anti-Beneficent Paternalism</a> - as a general rule, we shouldn't prevent people from doing good (even if we aren't entirely certain of the quality of their understanding or consent).</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">* <a href="https://www.philosophyetc.net/2021/10/puzzling-conditional-obligations.html">Puzzling Conditional Obligations</a> -- if positively good to comply with, then you ought to have <i>unconditional reason</i> to get yourself into position to meet the putative obligation.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><br /></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i>Metaethics</i></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i><br /></i></span></span></div><div><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i>* </i><a href="https://www.philosophyetc.net/2021/02/the-parochialism-of-metaethical.html">The Parochialism of Metaethical Naturalism</a> -</span></span> the basic moral facts should not differ depending on our location in modal space (i.e. which world is actual). But synthetic metaethical naturalism, with its 2-D semantic asymmetry, violates this principle.</div><div><br /></div><div>* <a href="https://www.philosophyetc.net/2021/10/ruling-out-helium-maximizing.html">Ruling out Helium-Maximizing</a> -- without giving up robust realism. 
</div><div><br /></div><div>* <a href="https://www.philosophyetc.net/2021/05/why-belief-is-no-game.html">Why Belief is No Game</a> - pragmatists (like Maguire & Woods) are wrong about what people are rationally criticizable for, and hence wrong about what reasons there are.</div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i>Other</i></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i><br /></i></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i>* </i><a href="https://www.philosophyetc.net/2021/06/philosophical-pluralism-and-modest.html">Philosophical Pluralism and Modest Dogmatism</a> - On why we should welcome philosophical dissensus.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i><br /></i></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;">* <a href="https://www.philosophyetc.net/2021/03/constructive-vs-dismissive-objections.html">Querying vs Dismissive Objections</a> - are you aiming to create a dialectical <i>opening</i> (to which you'd like to hear a response), or simply shutting things down? 
When is the latter appropriate?</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;">* <a href="https://www.philosophyetc.net/2021/08/commonsense-epiphenomenalism.html">Commonsense Epiphenomenalism</a> - could the view be less weird than everyone tends to assume?</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;">* <a href="https://www.philosophyetc.net/2021/09/helen-interviewed-on-idealism.html">Helen interviewed on Idealism</a> -- including why Idealism might warrant up to 30% credence.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;">* <a href="https://www.philosophyetc.net/2021/10/best-new-blogs.html">New Blogs of Note</a> -- three recommendations.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;">* Zach Barnett's guest post on '<a href="https://www.philosophyetc.net/2021/04/guest-post-save-five-meeting-taureks.html">Meeting Taurek's 
Challenge</a>'.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;">* Philosophy Spotlight posts from <a href="https://www.philosophyetc.net/2021/01/philosopher-spotlight-eden-lin.html">Eden Lin</a>, <a href="https://www.philosophyetc.net/2021/07/philosopher-spotlight-jess-flanigan.html">Jess Flanigan</a>, and <a href="https://www.philosophyetc.net/2021/07/philosopher-spotlight-hrishikesh-joshi.html">Hrishikesh Joshi</a>. I'm still waiting for other blogs to <a href="https://www.philosophyetc.net/2020/12/philosopher-spotlight-series.html">join in</a>!</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><b><br /></b></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="background-color: white; color: #333333; text-align: justify;"><i><b>Happy New Year!</b></i></span></span></div>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com3tag:blogger.com,1999:blog-6642011.post-70664327905996959782021-12-12T20:04:00.002-05:002021-12-12T20:07:01.787-05:00Pandemic Ethics and Status Quo Risk (forthcoming in PHE)<p>My latest paper, '<a href="https://philpapers.org/rec/CHAPEA-10">Pandemic Ethics and Status Quo Risk</a>', has just been accepted for publication in <i>Public Health Ethics</i>. Here's the abstract:</p><p></p><blockquote>Conservative assumptions in medical ethics risk immense harms during a pandemic. 
Public health institutions and public discourse alike have repeatedly privileged inaction over aggressive medical interventions to address the pandemic, perversely increasing population-wide risks while claiming to be guided by “caution”. This puzzling disconnect between rhetoric and reality is suggestive of an underlying philosophical confusion. In this paper, I argue that we have been misled by status quo bias—exaggerating the moral significance of the risks inherent in medical interventions, while systematically neglecting the (objectively greater) risks inherent in the status quo prospect of an out-of-control pandemic. By coming to appreciate the possibility and significance of status quo risk, we will be better prepared to respond appropriately when the next pandemic strikes.</blockquote><p></p><p>The central idea is that heuristics of ambiguity-aversion and favouring inaction over (potentially risky) action can be expected to backfire terribly in circumstances -- such as a pandemic -- in which "business as usual" is leading us towards disaster. Instead, I suggest that our policy and institutional responses to such emergency circumstances need to be rebalanced towards (i) liberalizing access to experimental treatments and vaccines, and (ii) requiring an explicit cost-benefit analysis to justify any sort of vaccine obstructionism (e.g. failure to <i>immediately</i> grant Emergency Use Authorization to any credible candidate vaccine early in the pandemic, and of course any post-authorization suspensions).</p><p>Other key points of the paper:<span></span></p><a name='more'></a><p></p><p>(1) "Governments and their agencies are not generally entitled to describe vaccine suspensions as reflecting “an abundance of caution”, unless they can show that the policy actually reduces <i>overall </i>risk. 
If it instead increases overall risk, it would seem more objectively accurate to describe such suspensions as “reckless”—as they would then reveal a reckless disregard of the objectively greater threat posed by the unchecked spread of the virus."</p><p>(2) Non-consequentialists should be even <i>more</i> appalled by vaccine obstructionism, as it constitutes <i>harmful coercion resulting in death</i> (which is to say: <i>killing</i>) by the government -- no less than if the FDA sent out agents to steal a cure from the hands of those who will die without it.</p><p>(3) Vaccine challenge trials were a no-brainer, and opposing them on ethical grounds constitutes <a href="https://www.philosophyetc.net/2021/02/against-anti-beneficent-paternalism.html">anti-beneficent paternalism</a> -- a kind of moral insanity. The basic argument also carries over to "any research that has a feasible chance of reducing the population-wide toll of the pandemic," including research into variolation, and challenge trials for candidate preventative measures (such as antiseptic nasal sprays). </p><p>(4) Early targeted immunity via variolation (ideally <a href="https://www.philosophyetc.net/2020/12/combining-experimental-vaccines.html">preceded with experimental vaccination</a>) could have done a lot of good, slowing the spread of the virus and freeing many healthy young people from unnecessary lockdowns.</p><p>(5) Fear of "vaccine hesitancy" provides but weak reasons to oppose liberalization. I offer several reasons for this in the paper, but I think the strongest is that however much you'd like to reduce vaccine hesitancy, it isn't ethical to pursue this goal via <i>killing innocent people</i>, but (as per #2 above) that's precisely what obstructionism amounts to.</p><p>(6) If you're on board with my conclusion that pandemic policy was rife with status quo bias, the next step is to design institutional reforms to change the incentives that lead to this result. 
Right now, "policy-makers are more likely to be blamed if an intervention goes wrong (resulting in highly salient identifiable victims), whereas they tend to escape blame for inaction that results in grave preventable harms (many of which may be less salient, or only linkable to the policy decision on a statistical basis—we cannot identify which <i>particular </i>deaths would have been prevented by earlier access to vaccines, for example)." My paper doesn't address this problem, but perhaps it could help to shift EUA-granting authority to a new institution that's authorized to make such decisions on the basis of explicit cost-benefit analysis, and very explicitly does not <i>recommend</i> that anyone take the experimental treatments that it authorizes (i.e. makes legally <i>available</i>) for personal use.</p>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com3tag:blogger.com,1999:blog-6642011.post-50993183642552033202021-12-07T20:30:00.001-05:002021-12-07T20:30:59.546-05:00Consequentialism's Central Concept<p>Ethical theories are typically formulated as centrally concerning the concept of <i>right</i> action. Introductory ethics classes may define competing theories as offering different completions of the sentence: "An act is right iff...". And that probably works well enough for deontological theories, which are centrally concerned with delineating the boundaries of permissibility and obligation. But I think it's very misleading to treat consequentialist theories as seeking to answer this question. (And I expect virtue ethicists would have similar complaints.)</p><p>If forced into a deontic mould, it's natural to default to maximizing consequentialism: <i>An act is right iff it maximizes value</i> (or, more precisely, produces <i>no less</i> value than any alternative option). 
But the concept of <i>rightness</i> has connotations that fit poorly with consequentialism, as many (from <a href="https://www.philosophyetc.net/2007/09/imperfectly-right.html">Railton</a> to <a href="https://philpapers.org/rec/CHANAM-4">Norcross</a>) have pointed out. For example:</p><p><span></span></p><a name='more'></a>(1) It's natural to assume that knowing failures to act rightly (absent some special excuse) are <i>blameworthy</i>. But no maximizing consequentialist that I'm aware of has ever held that people are blameworthy for doing slightly less than absolute moral perfection (as we all do literally all the time). As I suggest in my paper '<a href="https://philpapers.org/rec/CHADPA-8">Deontic Pluralism and the Right Amount of Good</a>', maximizers are best understood as invoking what we might call "the <i>ought</i> of most reason", or what <i>ideally ought</i> to be done, not what is <i>obligatory</i> in any ordinary sense. (This helps to bring out that the view cannot possibly be "too demanding", for it makes no demands at all.)<p></p><p>(2) It's natural to assume that the boundary between 'right' and 'wrong' must mark some significant moral discontinuity. But at least for pure consequentialists, there would seem no basis for this, as the difference in value between a permissible act and a marginally impermissible one might be utterly trivial. (But see my '<a href="https://philpapers.org/rec/CHADPA-8">Deontic Pluralism</a>' paper for ways that hybrid consequentialists might be able to construct a significant boundary, e.g. by appeal to independent theories of blameworthiness.)</p><p>(3) It's natural to assume that it's especially <i>important</i> to avoid acting wrongly. 
But, even supposing that we're obliged to donate 10% of our income to charity, no utilitarian would think that it's inherently more important to increase your giving from 9% to 10% than from 11% to 12%.</p><p>So I think Consequentialism is best formulated, at least in its "core" or central form, without invoking <i>deontic threshold </i>concepts such as right (/wrong) or permissible (/impermissible).</p><p>What's the alternative? Rather than asking about the criteria for rightness, I think a more neutral starting point would ask: <i>What does the moral theory hold to be most important? </i>On this approach, <b><i>importance</i> </b>becomes the central concept of normative ethics, about which different theories may disagree.<i> </i>The standard answers then follow: Deontologists assign primary importance to acting in accordance with duty (with specific theories offering competing accounts of how those obligations are to be specified), virtue ethicists maybe something about acquiring and exemplifying virtues (?), and <b>consequentialists hold that what matters is promoting value</b>, or the realization of better outcomes.</p><p>I think it's important to start moral inquiry with the right question, because what you ask may have a biasing effect on what answers you reach. If you work with the ordinary concept of <i>rightness</i>, I expect this would have at least some tendency to push you towards deontological accounts. (We've already seen that consequentialism fits poorly with the ordinary understanding of rightness, and I think both consequentialist and virtue-ethical criteria for rightness tend to look pretty off-track by ordinary standards.) 
But if<i> </i>rightness isn't actually the central concept of normative ethics (properly understood), then this apparent failure needn't be any sort of count against deontology's competitors.</p><p>By contrast, if we begin with the question of <i>importance</i> front and center, then I think consequentialism (and especially <a href="https://www.utilitarianism.net/">utilitarianism</a>) starts to look a lot more compelling. Indeed, I've suggested elsewhere that reflecting on <i>what fundamentally matters</i> constitutes <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#what-fundamentally-matters">the best positive argument for utilitarianism</a>.</p><p>Reflecting on what's important may also help to illuminate the flaws of consequentialism's competitors. For example, the Parfitian possibility of "<a href="https://www.philosophyetc.net/2018/11/consequentialism-moral-worth-and.html">virtuous[ly acquired] viciousness</a>" seems to undermine any suggestion that virtue is the most important thing in the world. And the importance of conformity to duty may be undermined by both my <a href="https://www.philosophyetc.net/2021/04/whats-at-stake-in-objectivesubjective.html">moral stunting objection</a> and my <a href="https://www.philosophyetc.net/2021/07/the-cost-of-constraints.html">new paradox of deontology</a>.</p><p>So it may well be that other theorists wouldn't be so thrilled about focusing on this question of importance. 
But it at least seems advisable for consequentialists!</p>Richard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.com14tag:blogger.com,1999:blog-6642011.post-50748447640726980722021-12-03T09:59:00.002-05:002022-12-30T10:37:15.132-05:00Beneficentrism<p><b><i>An updated version of this post is <a href="https://rychappell.substack.com/p/beneficentrism">now available at Good Thoughts</a>.</i></b></p><p>Philosophical discussion of <a href="https://www.utilitarianism.net/">utilitarianism</a> understandably focuses on its most controversial features: its <a href="https://www.utilitarianism.net/objections-to-utilitarianism/rights">rejection of deontic constraints</a> and the "<a href="https://www.utilitarianism.net/objections-to-utilitarianism/demandingness">demandingness</a>" of impartial maximizing. But in fact almost all of the important practical implications of utilitarianism stem from a much weaker feature, one that I think probably ought to be shared by <i>every</i> sensible moral view. It's just the claim that <i><b>it's really important to help others</b></i>. As <a href="https://www.philosophyetc.net/2012/04/singers-pond-and-quality-of-will.html">Peter Singer</a> and other effective altruists have long argued, we're able to do <a href="https://www.philosophyetc.net/2014/11/kidney-equivalent-donations.html">extraordinary amounts of good for others</a> very easily (e.g. just by donating 10% of our income to the most effective charities), and this is <i>very much worth doing</i>.</p><p>It'd be helpful to have a snappy name for this view, which assigns (non-exclusive) central moral importance to <i>beneficence</i>. So let's coin the following:</p><p><b>Beneficentrism:</b> The view that promoting the general welfare is deeply important.</p><p><span></span></p><a name='more'></a>Clearly, you <a href="https://www.philosophyetc.net/2021/03/is-effective-altruism-inherently.html">don't have to be a utilitarian</a> to accept beneficentrism. 
You could accept deontic constraints. You could accept any number of supplemental non-welfarist values (as long as they don't implausibly swamp the importance of welfare). You could accept any number of views about partiality and/or priority. You can reject 'maximizing' accounts of obligation in favour of views that leave room for supererogation. You just need to appreciate that the numbers count, such that immensely helping others is immensely <i>important</i>.<p></p><p>Once you accept this very basic claim, it seems that you should probably be pretty enthusiastic about <a href="https://www.effectivealtruism.org/">effective altruism</a>. Not making any claims about "obligation" here, but just in terms of fittingness: we should care about what's important, and effective altruism basically just <i>is</i> the attempt to put beneficentrism into practice, i.e. to act upon what we've just agreed is deeply important. (Of course, you might have any number of empirical disagreements with other effective altruists about how best to <i>achieve</i> this goal. Nothing here commits you to agreeing with them about such details. I just mean that you ought to be enthusiastic about the basic project.)</p><p>Beneficentrism strikes me as impossible to deny while retaining basic moral decency. (Cf. Stalin's "a single death is a tragedy, a million deaths are a statistic.") Does anyone disagree? Devil's advocates are welcome to comment.</p><p>Even if theoretically very tame, beneficentrism strikes me as an immensely important claim in practice, just because most people don't really seem to treat promoting the general welfare as an especially important goal. Utilitarians do, of course, and are massively over-represented in the effective altruism movement as a result. But why don't more non-utilitarians give more weight to the importance of impartial beneficence? I don't understand it. 
(Comments welcome on this point, too.)</p><p>I guess one possibility is that the standard ideology of "obligations", "permissions", etc., encourages people to focus on meeting the bare baseline of moral adequacy. (Didn't murder anyone today, hooray!) But I think that's a bad ideology. We shouldn't just care about avoiding wrongdoing (indeed, I don't think we should precisely <a href="https://www.philosophyetc.net/2021/04/whats-at-stake-in-objectivesubjective.html">care about that at all</a>). We should care about <i>what's important</i>.</p><p>So I'd like to invite <i>everyone</i>, whatever your moral-theoretical persuasion, to explicitly consider what you think is truly important, and whether beneficentrism might be a part of the answer.</p><p>And if you're then enthusiastic (as I hope you might be) about making beneficence a more central aspect of your life, maybe consider the <a href="https://www.givingwhatwecan.org/pledge/">Giving What We Can</a> pledge?</p><p><b>The Indefensibility of Post-Vaccine Lockdowns</b> <i>(2021-11-22)</i></p><p>Reasonable people may disagree about the justifiability of early-pandemic lockdowns (while awaiting the availability of vaccines), but <a href="https://www.nytimes.com/2021/11/21/world/europe/austria-covid-lockdown-vaccine-mandates.html">this</a> is just nuts:</p><blockquote>Austrian officials’ decision to impose a lockdown that will last at least 10 days and as many as 20 came after months of struggling attempts to halt the contagion through widespread testing and partial restrictions. Starting Monday, public life in the country is to come to a halt, with people allowed to leave their homes only to go to work or to procure groceries or medicines.</blockquote><p>What's the justification for this? 
When vaccines are freely available to all, Covid isn't a serious threat except to those who refuse the vaccine, and thereby accept <a href="https://www.philosophyetc.net/2021/03/appeasing-anti-vaxxers.html">personal responsibility for the consequences</a>. If policymakers are worried about hospital over-crowding, unvaccinated adults suffering complications from Covid should go to the back of the line. If the unvaccinated are not willing to accept the risk of death due to a lack of hospital beds, they can either (i) get vaccinated, or (ii) stay home or take other precautions while local case rates are high. But if they insist on risking their health, and get seriously ill as a result, they've no-one to blame but themselves. It's simply not reasonable to infringe upon everyone's liberties for fear of harms that individuals have it within their own power to mitigate or avoid.</p><p><b>Updates to utilitarianism.net</b> <i>(2021-10-27)</i></p><p>Back in July, I <a href="https://www.philosophyetc.net/2021/07/new-introduction-to-population-ethics.html">mentioned</a> our new <a href="https://www.utilitarianism.net/population-ethics">introduction to population ethics</a>. Since then, I've also added a chapter on <a href="https://www.utilitarianism.net/theories-of-wellbeing">Theories of Well-being</a>, and -- brand new as of today -- <a href="https://www.utilitarianism.net/arguments-for-utilitarianism">Arguments for Utilitarianism</a>.</p><p>I'm inclined to think the best case for utilitarianism stems from simply <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#what-fundamentally-matters">reflecting on what fundamentally matters</a> (and one who doesn't find the utilitarian answer here intuitively compelling is unlikely to be much moved by any other argument in support of the view). 
But I'm also pretty moved by the charge <i>against</i> non-consequentialist views that they are <a href="https://www.utilitarianism.net/arguments-for-utilitarianism#status-quo-bias">steeped in status quo bias</a>, so I was pleased to be able to make that case here. (I don't recall seeing the point discussed so much elsewhere -- it strikes me as unduly neglected.)</p><p>The other big news today is that we're kicking off a new series of <a href="https://www.utilitarianism.net/guest-essays/">Guest Essays</a> with an excellent article by Jeff Sebo on '<a href="https://www.utilitarianism.net/guest-essays/utilitarianism-and-nonhuman-animals">Utilitarianism and Nonhuman Animals</a>':</p><blockquote>This essay advances three broad claims about utilitarianism and nonhuman animals. First, utilitarianism plausibly implies that all vertebrates and many invertebrates morally matter, but that some of these animals might matter more than others. Second, utilitarianism plausibly implies that we should attempt to both promote animal welfare and respect animal rights in practice. Third, utilitarianism plausibly implies that we should prioritize farmed and wild animals at present, and that we should work to support them in a variety of ways.</blockquote><p>Enjoy! (And maybe consider adding the relevant articles to your syllabi if you teach on any of these topics...)</p><p><b>Puzzling Conditional Obligations</b> <i>(2021-10-13)</i></p>If you make a promise (and haven't been released from it), then you're obliged to keep your promise. The obligation is, in a sense, <i>conditional.</i> Note that you've no moral reason to go around making extra promises just so that you can keep them. Keeping promises isn't a good to be promoted in this way. 
(We might instead think that keeping a promise is neutral, while breaking one is bad.)<div><br /></div><div>It's natural to think that obligations that are in this way "conditional" should mimic this axiological structure: being bad to violate, but neutral between complying and cancelling. For if they were positively good to comply with, that reason would seem to transmit up the conditional and yield us an <i>unconditional</i> reason to <i>get ourselves into a position</i> where the obligation (applies and) can be met.</div><div><br /></div><div>With this in mind, the following putatively conditional obligations begin to look puzzling:</div><span><a name='more'></a></span><div><br /></div><div>(1) The obligation <i>of the rich</i> to donate significant amounts of money to charity.</div><div><br /></div><div>Giving to charity is straightforwardly good. So there's just as much reason to <i>become</i> rich in order to give more to charity, as there is to give to charity <i>once</i> already rich. (I think Peter Unger was the first to make this point?) For a concrete illustration, suppose a talented young person is choosing between two life paths: (i) a struggling artist earning $40k and donating 10% of it, or (ii) a financial trader earning $500k per year and donating just 1% of it. People in general will be more likely to condemn the person for "selfishness" if they choose the second path, when in fact it's the more generous of the two. 
(Suppose that, even as a struggling artist, they could at any time switch to trading and earning vastly more, but simply prefer not to.)</div><div><br /></div><div>The upshot: we focus overly much on <i>actual</i> income, and not enough on <i>potential</i> income, when it's really the latter that's morally significant.</div><div><br /></div><div>(2) The supposed obligation <i>of (well-off) parents</i> to send their kids to public school (so as to incentivize themselves to better support public education).</div><div><br /></div><div>Again, if there's really moral reason to do this, it's to achieve a good, not to avoid a bad. So there would equally seem to be moral reason to <i>become</i> a parent (so you can send them to public school, and thereby incentivize yourself to better support public education). Parents who home-school or send their kids to private school are not doing any worse by public schools than are other adults who remain childless by choice (and so are similarly uninvested, on a personal level, in public education). 
But it doesn't seem remotely plausible to suggest that well-off people are obliged to have kids for this reason, so I think we should be similarly skeptical of claims that well-off parents are obliged to choose (what they believe to be) a worse education for their kids for this reason.</div><div><br /></div><div>In general, I think, when people focus on <i>those in a position to achieve some good</i>, we should re-focus the moral question more broadly on those who <i>could get into</i> a position to achieve that same good.</div><p><b>Ruling out Helium-Maximizing</b> <i>(2021-10-03)</i></p><p>Joe Carlsmith asks: <a href="https://handsandcities.com/2020/12/20/alienation-and-meta-ethics-or-is-it-possible-you-should-maximize-helium/">is it possible you should maximize helium?</a> Robust realism <i>per se</i> places no constraints on what the normative truths might end up being. So, in particular, there's no guarantee that what we objectively ought to do would hold <i>any appeal whatsoever</i> to us, even on ideal reflection -- the objective requirements could be <i>anything</i>! (Or so you might assume.)</p><p>But I think that's not quite right. Metaphysically, of course, the fundamental normative truths are non-contingent, so they could not <i>really</i> be anything other than what they in fact are. Epistemically, the fundamental normative truths are <i>a priori</i> (if knowable at all), so it's not clear that erroneous views are "possible" in <i>any</i> deep sense. 
A somewhat wider range of views may be "possible" in the superficial sense that <i>we don't currently know them to be false</i>, but unless you're a normative skeptic, we <i>can</i> currently know that pain is bad and that maximizing helium is <i>not</i> the ultimate good.</p><p><span></span></p><a name='more'></a>It's an interesting question <i>how</i> we can have any normative knowledge at all. (I offer my answer <a href="https://philpapers.org/rec/CHAKWM">here</a>.) But given that we can, it's important not to lose sight of this fact when thinking about the implications of non-naturalism. For while the "non-natural" status of normative properties does not constrain their application, it doesn't follow that they really could apply to just anything (either metaphysically or epistemically).<p></p><p>Compare two very different bases for the confident rejection of helium-maximization:</p><p>(1) Normative <i>internalism </i>rules out the possibility of a mismatch between normative truth and the attitudes we'd hold on procedurally ideal reflection. So on purely <i>formal </i>grounds, we can be confident that what we objectively ought to do cannot be something (like maximizing helium) that would never appeal to us.</p><p>(2) Normative <i>externalists</i> must instead appeal to <i>substantive</i> normative claims, such as the datum that well-being matters (non-instrumentally) and helium does not.</p><p>I think the substantive explanation is the better<i> </i>one. After all, it seems an open possibility that some fool might actually want nothing more than to maximize helium (even on ideal reflection), so to maintain that they would be <i>mistaken</i> we need to leave room for possible mismatches between subjective appeal and objective normativity. 
Furthermore, in addressing the question <i>why</i> helium-maximizing would be so misguided, I think the answer, "<i>because people are what really matter!</i>" is better than "<i>because there's no way I would ever care about helium so much!</i>" The real <i>problem</i> with helium-maximizing is substantive, not merely formal, so it's entirely appropriate that our response to it should lie on this (first-order rather than metaethical) level.</p><p>So, while (externalist) non-naturalists view deep alienation as a live possibility in general, they need not regard it as a possibility that's compatible with their current attitudes, if they're able to know that their current attitudes are actually (at least roughly) right. We may thus be confident that normative reality will not completely baffle us (while allowing that it might baffle others).</p><p>But, importantly, it may still surprise us in a weaker sense. Consider: I may give some credence to a view (e.g. prioritarianism) that strikes me as somewhat reasonable, even while I am near-certain that I would not myself believe the view even upon ideal reflection. If prioritarianism turned out to be the objectively correct view, this would be <i>surprising</i> (even to my idealized self), but it's the kind of surprise I think we should be open to. 
It seems a problem for internalist views that they cannot leave room for normative reality to slightly surprise our idealized selves in such a way.</p><p>In sum, when reflecting on these issues, I think we should ideally want our metaethical theories to accommodate the following three desiderata:</p><p></p><ul style="text-align: left;"><li>Allow us to rule out helium-maximization (and other "baffling" views that are at odds with views that we are <i>rightly</i> confident of).</li><li>Allow that wrong-headed agents can be wrong, and so suffer an "alienating" mismatch between their (procedurally idealized) attitudes and normative reality.</li><li>Allow that, even given our broadly reasonable starting points, our idealized selves may be <i>surprised</i> by some aspects of normative reality, as we may be robustly disposed towards a subtly-mistaken view (that is close to the correct view without being exactly right).</li></ul><div>Externalist non-naturalism can accommodate all three (whereas internalist views secure only the first, and that arguably for the wrong reason). So, far from posing a problem for the view, I think that reflection on alienation and related issues should bolster our confidence in normative externalism.</div><p></p><p><b>New Blogs of Note</b> <i>(2021-10-01)</i></p><p>Three new-ish blogs (from the past year or so) that I figure are worth highlighting:</p><p>(1) <a href="https://www.cold-takes.com/">Cold Takes</a> - Holden Karnofsky (of GiveWell and Open Philanthropy fame) writing on themes relating to "avant-garde effective altruism". 
See especially his "<a href="https://www.cold-takes.com/most-important-century/">Most Important Century</a>" series, on why humanity needs to prepare for some wild changes.</p><p>(2) <a href="https://handsandcities.com/">Hands and Cities</a> - by Oxford philosophy grad (and Open Philanthropy research analyst) Joe Carlsmith. I just discovered this blog a week or so ago, but have been digging through the archives a bit and really enjoying it. I especially recommend '<a href="https://handsandcities.com/2021/03/22/on-future-people-looking-back-at-21st-century-longtermism/">On future people, looking back at 21st century longtermism</a>', '<a href="https://handsandcities.com/2021/06/21/on-the-limits-of-idealized-values/">On the limits of idealized values</a>' (exploring puzzles for subjectivists about how to select the appropriate idealization procedure), and '<a href="https://handsandcities.com/2021/08/27/can-you-control-the-past/">Can you control the past?</a>' (on decision theory). He's clearly influenced by <a href="https://www.philosophyetc.net/2008/03/arguing-with-eliezer-part-ii.html">Eliezer Yudkowsky</a>, but is actually <i>good at philosophy</i>, which makes for an interesting combination.</p><p>(3) <a href="https://astralcodexten.substack.com/">Astral Codex Ten</a> - Scott Alexander's new blog. Probably everyone already knows this? But I mention it here in case there are any deprived souls out there who could still benefit from the pointer. 
See, e.g., '<a href="https://astralcodexten.substack.com/p/moral-costs-of-chicken-vs-beef">The Moral Costs of Chicken vs Beef</a>', '<a href="https://astralcodexten.substack.com/p/the-rise-and-fall-of-online-culture">The Rise and Fall of Online Culture Wars</a>', stuff on <a href="https://astralcodexten.substack.com/p/prospectus-on-prospera">charter cities</a>, <a href="https://astralcodexten.substack.com/p/book-review-the-cult-of-smart">schooling</a>, and <a href="https://astralcodexten.substack.com/p/adumbrations-of-aducanumab">the FDA</a>.</p><p>Are there any other new blogs of note that you've been enjoying recently? Share a link in the comments, if so...</p><p><b>Agency as a Force for Good</b> <i>(2021-09-24)</i></p><p>One fundamental reason for <a href="https://www.philosophyetc.net/2011/11/why-consequentialism.html">favouring consequentialism</a> is the basic teleological intuition that the primary <i>purpose</i> of agency is to realize preferable outcomes. If you have a choice between a better state of affairs and a worse one, it's very natural to think that the better state of affairs would be the better option to choose.</p><p>A slightly different way to put it is that if it would be good for something to happen, then it would be good to <i>choose</i> for it to happen. Our agency is itself part of the natural world, after all, and while it is distinctive in being subject to moral evaluation -- misdirected exercises of agency may be <i>wicked </i>in a way that unfortunately directed lightning strikes are not -- it's far from clear why this should transform an otherwise desirable outcome into an undesirable one. 
There's nothing obviously misdirected (let alone "wicked") about straightforwardly <i>aiming at the good</i>, after all.</p><p>Consequentialism thus fits with an appealing conception of agency as <i>a force for good</i> in the world. Left to its own devices, the world might just as easily drift into bad outcomes as good ones, but through our choices, we moral agents may deliberately steer it along better paths.</p><span><a name='more'></a></span><p>This suggests to me a (possibly new?) argument for consequentialism. For it seems a real <i>cost</i> to non-consequentialist views that they must give up this view of agency as a force for good. Instead, on non-consequentialist views, it could well be a <i>bad</i> thing for outcomes to fall under the control of -- even fully-informed and morally perfect -- agents.</p><p>For example, consider a "lifeboat" case (with a choice between saving one or saving five others) where the non-consequentialist insists on flipping a coin rather than simply saving the many. Imagine a variant of the case where, if the captain of the lifeboat hadn't been steering it, it would have naturally drifted towards the five -- resulting in the best outcome. It's natural to think that putting an ideal (i.e., in no way ignorant or vicious) agent in control of the situation shouldn't make things worse. But for the non-consequentialist, it can, for it introduces extra moral reasons (e.g. to treat people "fairly") that could outweigh the welfarist ones, such that the captain might end up deliberately choosing to bring about the worse outcome instead. And that seems messed up!</p><p>Of course, not every non-consequentialist view embraces coin-flipping over saving the many. Different examples may be generated to apply to different non-consequentialist views. 
But this simple example serves to illustrate the appeal of the consequentialist conception of agency as a force for good.</p><p><b>Helen interviewed on Idealism</b> <i>(2021-09-22)</i></p><p>In a rare online appearance, <a href="http://yetterchappell.net/Helen/">Helen</a> is <a href="https://www.youtube.com/watch?v=aL09IZ1D7HE">interviewed on <i>Mind Chat</i></a> by Philip Goff and Keith Frankish about her book-in-progress, <i>The View From Everywhere: Realist Idealism Without God</i>.<br /><br />For highlights, see especially:</p><p>36:00 - Helen explains the basics of her novel form of idealism (and how it differs from Berkeley's).</p><p>53:45 - Why idealism is more plausible than you might have thought.</p><p>58:20 - How idealism enables a direct realist account of perception like no other.</p><p>1:56:42 - Why philosophy monographs should be followed up with a "for kids" version.</p><p>There's also a bunch of interesting meta-philosophical discussion throughout, reacting to Helen's explanation that she only has about 30% credence in idealism, and correspondingly aims not to convince others that it's <i>true</i>, but just that they should take it <i>more seriously</i> than they had previously.</p><p>Check out the full interview <a href="https://www.youtube.com/watch?v=aL09IZ1D7HE">on YouTube</a>.</p><p><b>Companies, Cities, and Carbon</b> <i>(2021-09-22)</i></p><a href="https://www.vox.com/recode/2021/9/21/22686233/jeff-bezos-conservation-climate-amazon" rel="nofollow">This</a> is terrible journalism:<div><blockquote>While [donating $1 billion to protect forests] is certainly notable, Bezos’s commitment to protecting the 
environment serves as a stark reminder that much of his legacy and largely untaxed fortune was built by companies that have staggering carbon footprints. Amazon’s carbon emissions have grown every year since 2018, and last year alone, when global carbon emissions fell roughly 7 percent, Amazon’s carbon emissions grew 19 percent.</blockquote><div><br /></div>Economic activity is (for the time being) carbon-intensive. Amazon constitutes a huge and (especially during the pandemic) growing portion of the US economy. There's nothing said here to suggest that Amazon is unusually inefficient (from an environmental perspective); the author is really just complaining that Amazon is a large and growing part of the economy. (Horrors! They even had the gall to keep the economy going during the pandemic, when other companies did the green thing and shut down, bless their empty coffers...)</div><span><a name='more'></a></span><div><br />Obviously there are all kinds of climate policies that should've been passed long ago that would help to reduce the carbon intensity of the economy (carbon taxes, more investment in green energy & research, etc.). Our lack of those needed policies is the fault of politicians, voters, and the companies that lobbied against them. Blaming other companies that are simply involved in <i>ordinary economic activity</i>, by contrast, makes little sense.</div><div><br />I think we all realize it'd be silly to blame, say, <i>New York City</i> for having a large carbon footprint. Sure, it contains a lot of people, and so inevitably has a large carbon footprint in absolute terms. But if NYC didn't exist those people would just live somewhere else -- and possibly somewhere much less carbon-efficient than a dense city can be. But isn't blaming ordinary large companies for their carbon footprints misguided in much the same way? 
Rarely is any evidence offered to suggest that they're any worse <i>proportionally</i> than their smaller competitors, so it really seems like they're just being blamed for being large and successful (something that we could also say of NYC).</div><p><b>Discounting Illicit Benefits</b> <i>(2021-09-16)</i></p><p>In '<a href="https://philpapers.org/rec/OLITMA-2">The Means and the Good</a>' (<i>Analysis</i>, forthcoming) Matthew Oliver argues that pluralist consequentialists can accommodate intuitions against using others as a means, on the model of how they can accommodate intuitions about desert:</p><p></p><blockquote>Just as it is bad for Emily to benefit from a stolen manuscript, it is bad for anyone to benefit from the use of another’s body or resources as a means. We can call this impersonal badness an impersonal-use-cost. As with a stolen manuscript, good results that are produced by using another person’s body or resources are heavily offset by an accompanying impersonal-use-cost.</blockquote><p></p><p></p><p>By, in effect, discounting illicit benefits, we get the result that killing one to save five does more harm than good. But we also get the result that killing one to <i>prevent five others from each killing one to save five </i>likewise does more harm than good. (I think the most natural way to understand this is not to regard the second-order killing as in itself <a href="https://www.philosophyetc.net/2017/10/iterating-badness-in-paradox-of.html">extra bad</a>; the killing is just as intrinsically bad as any other death. The problem is instead that <i>any good that would follow from it</i> -- including the prevention of other wrongful killings -- gets massively discounted.)</p><p>It's a clever and interesting view! 
But it seems really vulnerable to <a href="https://www.philosophyetc.net/2021/07/the-cost-of-constraints.html">my argument against constraints</a>, namely, that it unacceptably devalues the lives of the innocent victims who might be rescued. Once an innocent person has been killed in an (even wrongful) attempt to save five, it <b>really matters</b> whether those five are ultimately saved or not! So we shouldn't discount the value of their lives, no matter the illicit nature of the agent's act (however bad it may have been, <i>that</i> harm has already been done). Otherwise, we would violate the moral datum that <b>One Killing to Prevent Five >> Six Killings (Failed Prevention)</b>.</p><p>My reframing of the view in terms of "discounting illicit benefits" brings out the problem most starkly. But I <i>think</i> it's just a verbal difference, and Oliver's original formulation in terms of an offsetting "use cost" (proportional to the illicit benefits) has the same implications. Does that sound right? Do correct me if I'm wrong...</p><p><b>JCVI endorses Status Quo Bias</b> <i>(2021-09-05)</i></p><p>The UK's Joint Committee on Vaccination and Immunisation recently <a href="https://www.gov.uk/government/publications/jcvi-statement-september-2021-covid-19-vaccination-of-children-aged-12-to-15-years/jcvi-statement-on-covid-19-vaccination-of-children-aged-12-to-15-years-3-september-2021">recommended <i>against</i> vaccinating children under 16</a> against Covid, despite granting that "the benefits from vaccination are marginally greater than the potential known harms." (Of course, aggregated over a subpopulation of millions, even "marginal" improvements in risk profile can result in several saved lives and scores or hundreds fewer hospitalizations. 
And, as <a href="https://twitter.com/dgurdasani1/status/1434369063896113154">Deepti Gurdasani makes clear in this thread</a>,* all the evidence should lead us to expect the "unknown" risks from Covid to outweigh those from the vaccine, so <b>taking uncertainty into account should lead us to regard vaccination as all the <i>more</i> important</b>.)</p><p>So what's behind the JCVI's verdict? They are at least admirably transparent:</p><blockquote>In providing its advice, JCVI also recognises that in relation to childhood immunisation programmes, the UK public places a higher relative value on safety compared to benefits.</blockquote><p>It's important to be clear on what this really means. Note that this is not invoking any kind of philosophically defensible harm/benefit asymmetry. (Many people think it's more important to reduce suffering than to promote happiness, but that's not what this is about.) Vaccines aren't to make you happy. The "benefits" they provide are specifically <i>safety benefits</i>, i.e. against <i>other</i> health risks. So what the JCVI is really saying is that they place higher value on <i>protecting people from potential harms from vaccines</i> than on <i>protecting people from potential harms from COVID</i>.</p><p>That is deeply messed up.</p><p>I just hope that greater philosophical clarity here will help people to see how messed up it is (and so change these institutions' values in future). Every time some dopey bureaucrat claims they're prioritizing "vaccine safety" over "benefits", they need to be met with the response: <i><b>No, you're prioritizing safety from vaccines over safety from COVID</b></i>.</p><p>That's clearly indefensible. We just need to make it clear that this is in fact what they are doing. Don't let them obscure the reality of <a href="https://www.philosophyetc.net/2021/08/pandemic-paralysis.html">status quo risks</a> behind a weasel-word like "benefits". 
The choice isn't between "safety vs benefits", it's "safety [against lesser vaccine risks] vs safety [against greater Covid risks]".</p><p>* = Thanks to Dan Fogal for the pointer.</p><p><b>Sauce for the Gander</b> <i>(2021-09-03)</i></p><p>The Texas anti-abortion law enshrines the idea that others' interests legally trump an individual's right to bodily integrity. Of course, many would question whether a six-week embryo really has morally significant interests yet, but put such worries aside for now. I'm interested in how broadly this principle should be applied. For there are many needy individuals out there whose moral status is much clearer than that of an embryo. Just consider any dialysis patient, for example. If bodily integrity is no longer sacrosanct, should we not pass laws mandating the removal of excess kidneys to help those in need? Better yet, since most of us (I think) still regard violations of bodily integrity as a serious moral cost, perhaps one could instead mandate just that <i>those who have mandated that others' bodily integrity be violated for another's sake </i>should themselves be subject to mandatory kidney donation. They've already implicitly consented to the principle at stake, after all.</p><p><span></span></p><a name='more'></a>As a bonus, we don't even need the State to get its hands dirty -- just further specify that the law empowers <i>any concerned citizen</i> to harvest a kidney from anyone responsible for the Texas law (including, e.g., those who offered legal, financial, or other support to the legislators in the crafting of their bill). I'm sure such a proposal would immediately be met with universal support, right?<p></p><p><b>In other news: </b>Trumpists finally proved that birthright citizenship is a mistake. 
Nice as it may sound to welcome newcomers into our country with open arms, there are those whose values are plainly incompatible with liberal democracy. If allowed into the country -- and eventually to vote -- they will threaten the very foundation upon which America's greatness rests. Political views -- including illiberal, anti-democratic values -- are all too often transmitted from one generation to the next within cloistered cultural communities who refuse to integrate with the rest of society, culminating in acts of terrorism and insurrectionist violence like we saw in January. The only solution is that children of Trumpists must be denied citizenship and <a href="https://www.philosophyetc.net/2015/11/gop-closes-doors-to-newborns.html">deported immediately</a>.</p>