Consciousness is in the patterns of the interactions between the physical parts of the brain. One consequence is that p-zombies cannot exist. I will not informally prove the first statement to those who disagree; countless other people have already done it for me. If you disagree and do not want to read those arguments, or are not convinced by them, assume for the rest of this post that it is hypothetically true. The same goes for morality being a human construct rather than an inherent property of the universe (otherwise, we could observe it based on physical properties), and for some relatively uncontroversial parts of utilitarianism.
So we deny that there is anything that in principle cannot be observed, at least with regard to consciousness. But we have not done away with the notion of consciousness. Which patterns count as conscious, for the purposes of morality? Do all conscious beings have moral weight, or only the known conscious biological species?
If we choose the second approach, we quickly run into all sorts of problems. Most obviously, the known clusters of beings with moral weight form a closed set. If we encounter a species of highly anthropomorphic aliens, tough luck for them: they have no moral weight. A little less obviously, clusters are clusters: each individual within a species has its own pattern of consciousness, and we would need to enumerate them all. Otherwise, we would effectively be defining a rule for consciousness, and that would mean departing from the second approach. Furthermore, the pattern of each “individual” (itself another tricky concept) changes over time, and rather quickly. We can decide to add new individual-time pairs as needed, but we certainly will not add all of them; otherwise I would be committing an immoral massacre right now by starting and stopping computer programs.
Thus, all conscious beings have moral weight. Note that this does not imply that all non-conscious beings have no moral weight. Now we only need to identify the criteria for consciousness. A possible informal definition is that a being is conscious if and only if it is self-aware. That, however, is very problematic: in which cases is a being self-aware? Clearly unintelligent computer programs can and do manipulate and introspect themselves as entities distinct from other entities in an environment (via their UNIX process id, for example), effectively acting as if aware (and so, to all practical effects, being aware) of the self.
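As a minimal sketch of this trivial kind of “self-awareness” (assuming a Linux system, since it reads /proc; the specifics are illustrative, not essential to the argument):

```python
import os

# A trivially "self-aware" program: it identifies itself as an entity
# distinct from the other entities (processes) in its environment,
# using nothing more than its UNIX process id.
my_pid = os.getpid()
parent_pid = os.getppid()
print(f"I am process {my_pid}; my parent is process {parent_pid}.")

# Enumerate the other processes and distinguish them from the self.
# (Listing /proc for this is Linux-specific.)
others = [int(p) for p in os.listdir("/proc") if p.isdigit() and int(p) != my_pid]
print(f"There are {len(others)} entities in my environment that are not me.")
```

Nothing in these few lines looks like a promising criterion for moral weight, which is exactly the problem with the definition.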
Let us venture another definition: we define a being as conscious if and only if it is intelligent, i.e. it has a certain amount of efficiency at optimizing world states towards its preferences, and it is able to act as if it modeled the world as consisting of entities, with itself as an entity within the world. I can think of at least two counter-examples to this definition: intelligent beings that arguably have no moral weight. One, of superhuman intelligence, is the typical paperclip maximizer that cares only about the number of paperclips in the universe. The other, of subhuman intelligence, is a chat bot displaying a basic theory of mind, one that correctly answers questions such as: “You know the glass is in the kitchen, I think the glass is in the living room, where will I look for the glass?”.
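To make concrete how little machinery such an answer requires, here is a hypothetical sketch (the names and the belief bookkeeping are illustrative, not a claim about how any real chat bot works):

```python
# A toy "theory of mind": track each agent's beliefs separately and
# answer false-belief questions from the asker's beliefs, not from
# the true state of the world.
beliefs = {
    "you": {"glass": "kitchen"},      # what the bot (correctly) knows
    "me": {"glass": "living room"},   # what the asker falsely believes
}

def where_will_they_look(agent: str, obj: str) -> str:
    # An agent searches where *they* believe the object is,
    # regardless of where it actually is.
    return beliefs[agent][obj]

print(where_will_they_look("me", "glass"))  # -> living room
```

A dozen lines of bookkeeping pass the test, which suggests that passing it cannot by itself be sufficient for moral weight.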
A case could also be made that these beings do have moral weight, but that it is too small. In the case of the chat bot, its moral weight is smaller than that of the entropy it consumes, which it thereby prevents other beings from using as they see fit. Otherwise, one could enormously increase global utility by instantiating a few thousand chat bots that eternally answer questions.
Maybe all conscious beings hold moral weight, even if only a little. I do not know the answer, but I hope this exposition helps someone make some progress.