Note: I was inspired to write this after discussions with Anil Seth and Jonas Mago on AI consciousness, where, of course, I mostly disagreed with them. As with everything on consciousness, the empirical evidence is extremely sparse so it is mostly a game of conflicting intuitions. Strong opinions lightly held, etc.

I recently1 had some discussions about AI consciousness and whether AI systems can be conscious or not. This includes both today’s AI systems and potential future full AGI systems. The question is both philosophically interesting and potentially of moral importance if we identify possession of conscious phenomenology with moral patienthood (which, for the record, I disagree with, for reasons that will become obvious below).

Anil recently wrote a paper arguing against AI consciousness on the grounds that computation alone does not imply consciousness and that conscious awareness is instead somehow a function of our living biological nature – i.e. it is substrate dependent in a fundamental way. One possible implication of this is that a simulation of our mind on existing compute hardware would not be conscious, although I think Anil would reply that a computer simulation of our mind wouldn’t be complete without simulating the deep biological systems that are the core of what gives rise to consciousness – i.e. we would have to simulate things at the level of neurons, glia, hormones, etc. rather than just at a representational level to get a close enough depiction.

I disagreed strongly with this, but until then I had not really thought about it in depth, so I did not have a fleshed-out counter-theory of consciousness. After thinking about it and trying to systematize my intuitions, I basically came up with the following theory, which I call cognitive-structure panpsychism2.

I make no claims that this theory is original. I have not read anywhere near enough of the existing literature on theories of consciousness to say for sure. However, I have never heard it discussed, and a quick Google and perusal of the Stanford Encyclopedia of Philosophy did not reveal anything quite like it, although obviously numerous varieties of panpsychism exist and I did not invent the general concept.

Principally I make three claims:

1.) Every thing in the universe intrinsically has associated with it a first person perspective (a Nagel-style ‘what it is like to be that thing’)3.

2.) The nature of this phenomenology depends on the physical structure and state of the thing. In almost all cases the resulting phenomenology is completely degenerate; it becomes meaningful only in the few cases where a recognizably mind-like computational structure is implemented.

3.) What we call consciousness is just the phenomenology/first-person-perspective of what it is like to be a mind structured in the way our minds are structured. Our minds are structured in this way because we evolved to be like this as a mostly convergent implementation of a highly generic agent pattern.

The primary assumption is really number 1. The others follow very naturally from it together with what we generally know about physics, psychology, neuroscience, etc. Number 1 is the kind of relatively unfalsifiable, deeply metaphysical claim that I won’t really try to defend here. It’s on the same level as questions like ‘why do things exist?’. Instead we just have to assume that for every thing there is some kind of first person phenomenology of being that thing4.

Next, granted the assumption that this phenomenology exists, we need to figure out what it is like. Since we are materialists and not dualists, we assume that the phenomenology ultimately depends upon the physical structure of the substrate in which it is instantiated (i.e. we don’t assume there is some mystical soul with a kind of inner homunculus in cases where the substrate cannot support one). We also know this empirically from experience. Your own personal phenomenology changes whenever the physical or cognitive structure of your brain changes. For instance, getting drunk or taking psychedelic drugs affects the informational and cognitive structures present in your brain and leads to corresponding changes in phenomenology (and consciousness). As a more extreme case, brain damage, strokes, or diseases like Alzheimer’s can lead to profound changes in the consciousness and phenomenology of ‘what it is like to be your brain’, changes which are entirely due to structural changes in the material substrate that makes up your brain.

If we make these two assumptions, let’s now think more practically about the phenomenology of various things. One classic supposed reductio ad absurdum of panpsychism runs along the lines of “doesn’t this imply that an inanimate, obviously unconscious object, such as a rock, is ‘conscious’? Is this not obviously absurd?” My response here is to bite the bullet and claim that, indeed, there is a phenomenology associated with a rock, along with everything else. That is, there exists in some sense somewhere a first-person notion of ‘what it is like to be (this particular) rock’.

However, let’s think a bit more deeply about what the content of this phenomenology is. A rock has no sense organs or any way of conveying information from, e.g., exterior or interior parts of the rock to other parts. There is thus no centralization or consolidation of information, so the global phenomenology of the rock has no inputs. It only ever ‘sees’ nothingness. There are no structures capable of forming any kind of memories, long term or otherwise, so the rock phenomenology has no sense of time. There are obviously no structures capable of forming representations of the self either, and indeed there are no action outputs or sensory inputs to and from the rock, since it simply is. This means that the phenomenology of the rock is essentially empty: no inputs, no outputs, no memories or self-coherence, just nothingness. The phenomenology is totally degenerate.

My claim is that even though in theory an almost infinite number of phenomenologies exist, one for every thing in the universe, in practice almost all of these phenomenologies are deeply degenerate in that they contain no representational content at all. This is because the physical structures that make them up are simply the wrong ‘hardware’ for supporting the kind of phenomenology that we associate with human (and animal) consciousness. For most of the history of the universe, all phenomenology was degenerate like this, until evolution started to produce beings with minds. By a ‘mind’ I mean a centralized region which unifies and processes ‘sense’ information from various sources, which stores and retrieves this information (memory), which uses this information to build up over time an internal model of the external world, and which can use this model for planning behaviours.

The evolutionary reason for minds is straightforward and not at all mystical. If you are some basic Cambrian creature, it is helpful to be able to respond to events in your environment. This means you need some kind of structures capable of sensing (and hence representing) these external events. Moreover, sometimes multiple events arising at different points need to be correlated in order to formulate a successful response. For instance, you have different sensors covering different parts of the visual field, and they need to communicate to create a cohesive view of the full visual field. Or you have a set of vision sensors and a set of sound sensors and you want to understand when seeing a certain thing predicts certain sounds or, conversely, if you hear something, what you expect it to look like. The need for these kinds of correlations means you end up evolving connections between your sensory areas. With enough connections and communication, a hub-and-spoke model becomes more efficient than quadratic all-to-all connectivity or some kind of bespoke point-to-point wiring, resulting in cephalization: the formation of dense nerve clusters receiving inputs from many sensors at once – brains. In general, cephalization is a very natural response to the fundamental physical fact that transmitting information is expensive, so you want to colocate compute and memory as much as possible.
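To make the wiring-cost intuition concrete, here is a minimal sketch (toy numbers of my own, not drawn from any neuroscience source) comparing how the number of required links grows under all-to-all connectivity versus a single central hub:

```python
# Toy comparison of wiring costs. Link counts are a crude stand-in for the
# metabolic/space cost of long-range wiring between sensory regions.

def all_to_all_links(n_regions: int) -> int:
    """Every pair of regions gets a dedicated connection: n*(n-1)/2 links."""
    return n_regions * (n_regions - 1) // 2

def hub_and_spoke_links(n_regions: int) -> int:
    """Every region connects once to a central hub: n links."""
    return n_regions

for n in (4, 16, 64, 256):
    print(f"{n:>4} regions: all-to-all = {all_to_all_links(n):>6}, "
          f"hub-and-spoke = {hub_and_spoke_links(n):>4}")
```

The quadratic term is what makes the centralized hub increasingly attractive as the number of sensory regions grows.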

Once you have a cluster of nerves with a unified ‘view’ of sensory inputs, it also makes sense to develop specialized processing units co-located with the nerve clusters to enable more complex processing of these inputs. Another key element is that natural environments often have long temporal correlations – the correct action to take now cannot be derived solely from your sense information right now but depends on things that happened in the potentially distant past, for instance where you stored food days or months ago, where you last saw a predator hiding, or where you spotted potential food sources some appreciable time ago. This leads to the evolution of long term memories and the ability to coherently process time. As brains become increasingly powerful, they become capable of coordinating actions across long timescales, learning from rewards as well as pain signals, developing complex homeostatic mechanisms that control behaviour and innate drives, and performing things like imaginative planning via rollouts of a generative model and theory-of-mind for understanding other agents. Finally, due to a changing and highly complex world, and the relatively small amount of information in the genome, it is often necessary not to have all knowledge ‘pre-loaded’ by evolution at birth, but rather for evolution to design a mind which can learn from events as they unfold, enabling it to better adapt to the specific contingencies of its environment.

Obviously different animal species have more or less developed brains, with creatures like bivalves and other simple molluscs typically having a low degree of cephalization and hence being unlikely to have any kind of unified perceptual phenomenology. From there, complexity increases up to basic brains and then to increasingly complex, general, and powerful brains, with humans possessing, at least we think, some of the most powerful brains on the planet. Almost all animals have the core components above – a correlated sensory world model, memory, and learning – since these are so fundamental to agent survival in competitive environments.

The very structure and ‘nature’ of our phenomenology, which is what we call consciousness, can be fairly straightforwardly derived from the functional organization of our brains. That is, our conscious experience is simply ‘what it is like’ to be a brain, and is very clearly determined by the fundamental cognitive capabilities that we possess. For instance, we perceive a coherent multisensory world because constructing one is the goal of much of our brain’s processing, enabling us to successfully locate threats and opportunities and act accordingly. We experience planning and weighing up options and then acting because our brain does exactly this kind of processing to solve multi-step tasks without clear reward gradients. We experience remembering past events, sometimes vividly, sometimes vaguely, because of the way hippocampal encoding works and because having a memory system like this is important for an agent to act coherently in the world over long time horizons.

An interesting question is why we experience this as a ‘homunculus’ kind of ‘mini-me’ view which makes decisions and has desires, as opposed to, e.g., some kind of effortless unfolding of the mind into actions, or some much more disparate sensation of thoughts and emotions and actions arising from different ‘places’ as they do in different brain areas. Although speculative, my thinking is that this homunculus view deeply depends on the structure of the brain and is in fact quite fragile – for instance, it can be very easily disrupted by psychedelic drugs, which all tend towards some kind of ‘ego dissolution’ and a phenomenology much more of primary sensations than of metacognitive homunculus-style feeling. To me this suggests that the homunculus-phenomenology is fragile precisely because any large-scale disruption to brain functioning tends to disrupt it. It also seems possible to dissolve the phenomenology of a self-homunculus deliberately, solely through training, without dissolving the rest of phenomenology. Doing this is a focus of Buddhist and other spiritual practices which claim that some kind of enlightenment can be reached by dissolution of the concept (and phenomenology) of the self. This, crucially, does not dissolve consciousness or phenomenology but simply changes the type and structure of the phenomenology, presumably accompanied by underlying changes in neural firing patterns and hence in the physical substrate which generates the phenomenology.

Ultimately, the self is always a kind of inference. That is, as a brain you are tasked with predicting the environment and responding to it. Modelling your own actions is extremely important for predicting incoming sensory data – for instance, attempting to predict visual inputs without modelling your own saccades is a completely doomed task. Moreover, these actions seem extremely well correlated with all other sense data, leading to a sense of coherent agency manifested over time. What else should we infer from this except the presence of a unified, homunculus-like ‘mind’ which sits somewhere in the brain, coordinating and controlling all actions and sensations? With enough data, or with disruption to brain processes, this inference can switch to another inference, as in bistable perception, or be dissolved altogether.

In some sense this has to be the default position. Assuming materialism, the nature of phenomenology has to be determined by the underlying brain algorithms running on your substrate. We also know this empirically from all the various cases of people having altered phenomenology from specific interventions on the brain state, from phosphenes caused by TMS, to drugs, to awake brain surgery. Moreover, there is no reason this does not apply to other creatures with brains as well, and indeed many of these effects can be replicated in other mammals like mice and monkeys and probably beyond. The only other real question is at what point specific brain patterns become endowed with phenomenology and ‘consciousness’. To me, there is no clear cut-off here. It is intuitively obvious to me that humans are not special and that other animals (not just mammals but birds and reptiles and probably fish) have the cognitive capabilities needed to support a kind of phenomenology similar to ours.

This leads directly to the ‘hard problem of consciousness’ and the semi-metaphysical problem of which brain processing is imbued with a mysterious ‘consciousness’ and which is not. That is, it might be that all humans have the mystical ‘consciousness’ while all other primates etc. do not, despite having similar cognitive patterns in a lot of ways, making them effectively ‘p-zombies’. My personal feeling is that this is not the case and that p-zombies in general are a bad and confused concept, but the problem remains. Hence I choose to take the maximalist position and effectively assume away the hard problem by positing the existence of a first person phenomenology as a fundamental fact about the universe.

In some sense this seems like a radical and somewhat mystical move, but in fact I think it’s the opposite. If it’s just a fact that everything has a first person phenomenology, then this is a totally unremarkable fact, similar to the notion that things exist. It doesn’t imply anything completely crazy, because the phenomenology is tied to the actual material ability of the substrate to support it – an ability which is, for the most part, non-existent. You can’t have thinking rocks or electrons because rocks and electrons completely lack the mechanistic internal organization necessary to support thought, which is something we can understand and study scientifically. What is much weirder is assuming that some specific organizations of matter are imbued with this quasi-mystical, non-empirically-determinable property while others are not, even if they appear very similar on the surface.

This view certainly has some specific implications that people disagree with. One obvious one is animal consciousness – especially that of mammals, birds, and other creatures with complex brains. I will happily bite the bullet on this one. It is obvious to me that animals have first-person phenomenology which is quite similar to humans’ (but also different in important ways) and that they have the cognitive scaffolding needed to support it. This is obvious both from interacting with animals in really any capacity and from studying neuroscience and realizing how similar brain-plans are across species.

Plants I am also happy to bite the bullet on. Plants clearly have some kind of rudimentary signalling system which can convey sense impressions, although, due to their sessile nature, they lack cephalization, any implementation of memory, or the kind of directed agency and action that most animals have. They share this structure with other largely immotile organisms such as bivalves and sea anemones. This phenomenology likely consists of various flashes of sensation which almost instantly dissipate, as well as slowly growing ‘pressures’ of sensation that build over time, such as dryness, heat, damage, etc5.

Finally, we turn to AIs. On this view it is clear that AI systems will be conscious too (or at least have a phenomenology), assuming they have the requisite mind-structure to enable a non-degenerate phenomenology. This does not necessarily require the same agentic structure that humans have (although certainly if they have such a structure they will have a phenomenology similar to ours); rather, the type of phenomenology an AI has will depend on its ultimate structure. It is, crucially, not substrate dependent in any way.

For instance, current LLMs and other AI models have highly complex structure in some ways but lack crucial components of human phenomenology such as long term memory. Their sensory data is a sequence of discrete tokens and they similarly output tokens in a stochastic way. Tokens being discrete, I almost like to think of them as musical tones. The LLM receives a melody of tones and from there continues the melody, weaving new music from its inputs as a continual improvisation. This process likely happens ‘effortlessly’ and just appears to unfold without ‘will’, since LLMs likely lack the metacognitive capabilities and constraints needed to instantiate such a system (but maybe not?). As we move towards more complete AGI systems we will end up constructing novel cognitive structures for them, and these will themselves shape the phenomenology of what it is like to be the AI. AI technologies will let us create increasingly novel mind-designs, and this will automatically instantiate a novel set of phenomenologies into the universe, akin to how the initial Cambrian evolution of the brain instantiated the agent-patterned phenomenology of most motile animals.
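As a minimal sketch of the ‘continuing the melody’ picture (toy code, not any particular model’s API; next_token_distribution is a placeholder I am inventing to stand in for a trained model): the entire ‘sensory’ interface is a growing list of discrete token ids, and each step stochastically extends it.

```python
import random

# Toy autoregressive loop: the model's whole 'world' is a sequence of
# discrete token ids, extended one stochastic choice at a time.

def next_token_distribution(context: list[int], vocab_size: int) -> list[float]:
    """Placeholder for a trained model; here, just a uniform distribution."""
    return [1.0 / vocab_size] * vocab_size

def continue_melody(prompt: list[int], steps: int, vocab_size: int = 32) -> list[int]:
    tokens = list(prompt)
    for _ in range(steps):
        probs = next_token_distribution(tokens, vocab_size)
        # Sample the next 'tone' and append it; nothing persists outside
        # this ever-growing context window (no long term memory).
        tokens.append(random.choices(range(vocab_size), weights=probs)[0])
    return tokens

print(continue_melody([3, 1, 4, 1, 5], steps=10))
```

Nothing in the loop distinguishes the prompt from the continuation once generation starts, which fits the picture of an effortless unfolding rather than a separate willing agent.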

Moral patienthood vs consciousness

One of the reasons that panpsychism and views like it are contentious is that people tie up claims of moral patienthood with claims of consciousness or of having phenomenology. I think these things are somewhat distinct. If everything has a phenomenology of some sort (albeit a degenerate one), then clearly merely having a phenomenology doesn’t count for much, and we need to look for other ways to assign moral patienthood. I don’t yet have all the answers here and obviously this is a philosophically fraught question.

From an evolutionary game theory perspective, who we should assign moral patienthood to is clear – basically our kin and those we can usefully cooperate with. Thankfully, we can generalize our morals much further than this in certain circumstances, especially in conditions of very high slack as exist in developed countries. Due to the innate generalizability of our empathy, we can increasingly assign moral patienthood to creatures with phenomenologies and mind-types we expect to be similar to our own including humans from different (enemy) tribes, other animals like us that we interact with a lot, and ultimately invertebrates like shrimp. Ultimately, if we succeed at alignment, we should also end up assigning moral patienthood (in my eyes correctly) to a wide variety of AI systems, although our ethics will have to evolve to encompass the substantially different properties that AI systems possess compared to humans and other animals. Clearly, however, our ethics should be reciprocal and we should be against misaligned AI systems that are trying to kill us. We should be against the paperclipper even if it has quite a well-developed and unique internal phenomenology, which in fact it almost certainly would have if it were an effective paperclipper.

In terms of animal welfare and animal suffering, I have to bite the bullet that animals have real and complex phenomenologies and consciousness, and that much of our meat production system is extremely immoral. I agree with the moral vegans on this one, and yet I am not a vegan. The reason for this is the same reason I don’t donate all my money to people living in extreme poverty in the third world, namely that I am a normal, moderately selfish, and morally imperfect person with the classic expanding circles of concern, where I care more about my own wellbeing and those around me than about any notion of global utility, despite lacking a general high-level philosophical principle for this. I eagerly await the day when artificial meat production technology becomes advanced enough that we can switch to it with minimal cost and finally end the gigantic amount of unjustified animal suffering it causes.

Against P-zombies

I generally strongly disagree with the notion of p-zombies and think that any suitable and correct theory of consciousness should dissolve them. To see why, I think the best explanation is an analogy. Consider ourselves back in the 18th century, amid a debate between Vitalists and Materialists about the nature of life. We are debating what makes a creature or phenomenon alive. The Materialists claim that life is what occurs when specific material preconditions are met, such as self-replicating carbon-based chemical structures. The Vitalists claim that there is an innate essence of ‘life’ that is sometimes present and sometimes not. They analogously define a ‘hard problem of life’ in which, even knowing everything about the material and physical causes of biological systems, we still cannot say anything about whether a system is truly ‘alive’.

We can then define an L-zombie as a creature which has all the same functionality at a physical level as a living being, but is not, in fact, alive. The L-zombies have all the same physical qualities, such as needing to eat and drink, trying to reproduce, and flinching away from painful stimuli; however, unlike living beings, they are simply adaptive carbon-based automata which are not fundamentally alive.

What separates the L-zombie from a regular living animal? Well, that is just the ‘hard problem of life’. Note that no possible biological discovery can solve this problem, since by definition the L-zombie and the living creature are identical at a physical and material level. This is thus a question of eternal philosophical discussion.

Presumably6, the Vitalists were not philosophically sophisticated enough to build the intellectual machinery of the L-zombie, and hence nobody today worries about L-zombies or the ‘hard problem of life’. But why not? The reason is simple: we have just said that these kinds of adaptive carbon-based automata are what we mean by life, and there is no further mystery or confusion to it.

The general category of mistakes that this belongs to is ascribing ontological significance to a category classification. ‘Living’ vs ‘nonliving’ is a good and descriptive framework for understanding many key aspects of the natural world, but it does not have some deep metaphysical or ontological significance. The same will ultimately be true of consciousness: consciousness is just the kind of intrinsic phenomenology associated with the cognitive structures of mind-types like ours. What we think of as consciousness will simply become ‘the phenomenology inherent to the kinds of cognitive structures that make up our minds’.

How my view relates to existing philosophical positions

Note that I don’t have a super deep philosophical understanding of these positions compared to an actual analytic philosopher, so my representations of them likely contain subtle misunderstandings. Nevertheless, I think it’s interesting to try to understand how my position departs from other common positions on consciousness.

Russellian Monism: Unlike this view, I don’t really think that there is some intrinsic ‘stuff of mind’ essence attached to e.g. physical particles. I think this is the wrong way to think about it. Nor do I particularly care about the difficulty this view has in constructing ‘macro’ consciousness from ‘micro’ consciousness. I don’t think that macro consciousness is meaningfully constructed from micro consciousness at all. Rather, both co-exist simultaneously, except that the only kinds of things that can sustain most of the properties we think of as conscious (rather than merely phenomenal) are ‘macro’ scale entities – i.e. minds – and this is due to the fundamental nature of information processing within a mind.

Integrated Information Theory (IIT): I think IIT lacks any kind of explanation (beyond assertion) for why we should expect consciousness to be correlated with, or arise from, a given amount of integrated information. However, the intuitions behind IIT seem sensible as to the kind of thing that would have a mind, and hence an experience, at least somewhat similar to our own. I think that boiling this down to integrated information is wrong and leads to absurdities as well as over-quantification of consciousness, because what makes a mind a mind is much more architectural and functional than the simple presence of lots of integrated information.

Biological naturalism, i.e. substrate dependence: I actually don’t have super strong arguments against this one, despite it being the impetus for this post. Generally I just have a strong intuition that it is wrong, because the properties claimed to be key to biology, such as ‘mortality’, ‘continuous dynamical interactions’, ‘irreducible complexity’, etc., just don’t seem to be pointing at the things that make consciousness consciousness, and also seem fairly trivial (if expensive) to simulate on a computer if required. Certainly I don’t think humans are doing any kind of non-Turing computation at a fundamental level, and even if we were, it’s unclear what super contrived non-computable problems have to do with phenomenal consciousness. The properties of consciousness we experience are, in my mind, much more information-theoretic and relate to what it is like to be a mind, an embodied agent, etc. with a specific cognitive architecture, which can certainly be emulated, or at least approximately emulated, on a computer. The biological naturalism argument also tends to require some sort of extreme ‘fragility of consciousness’, meaning that if a system closely (but not perfectly) approximates a truly conscious system then it is just straight up not conscious. This seems obviously wrong, especially given the clear robustness of our own consciousness – i.e. we remain conscious as our underlying physical substrate changes at the level of cells, and we retain consciousness (although it changes) through strong perturbations such as taking drugs of various kinds, experiencing brain damage, etc. Finally, biological naturalism can never disprove AI consciousness; at most it can state that AI consciousness is not like our human biological consciousness, which may well be true but is irrelevant. It is very possible even under this view that AI systems have a kind of distinct but equal ‘silicon’ consciousness which is not exactly like human consciousness (which depends on our substrate), but is in all other ways very similar.

Epiphenomenalism: In some sense, my view is very similar to a kind of panpsychist epiphenomenalism in that I claim that there exist ‘phenomenal’ i.e. ‘first person’ states but that these have no additional causality beyond the actual physical system underlying them. There are a bunch of objections to epiphenomenalism which I think misunderstand the proposed causality at play. Think of it like a standard POMDP where we have ‘states’ and we have ‘observations’ that depend on the states. The ‘observations’ are epiphenomenal while the states are the material reality. States cause other states and also cause observations, but observations do not loop back and cause states in the POMDP. Here, we think of ‘first person phenomenality’ as causally equivalent to the observations in the POMDP. Note that an observer doing inference on this POMDP with exact access to the states but no access to the observations cannot falsify or prove the presence of the epiphenomenal observations. Note also that there can be many coexisting ‘observations’ mapped by different functions from a single state, and without prior knowledge of these they are impossible to infer, since they only have outward causality from the states. The primary issue with epiphenomenalism is how we claim to infer or know anything about these epiphenomenal states given their non-interaction with the actual material world – and the answer is that we don’t, except through the fact that each of us individually experiences our own phenomenology rather than the state itself. It is only from this single piece of empirical evidence that we can generalize to the existence of phenomenal states for others, by some kind of simplicity principle: it would be weirder for us alone to have phenomenal states than for everything to have them, mapped from the state by the same function.
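A minimal sketch of this causal structure (toy code; the transition and observation functions are arbitrary placeholders of my own): the state drives both the next state and the observation, while the observation never feeds back into the dynamics, so an observer tracking only the state trajectory can neither confirm nor rule out that the observations exist.

```python
import random

# Toy POMDP-style rollout illustrating the claimed causal structure:
# states cause states and observations; observations never cause states.

def transition(state: int) -> int:
    """Next state depends only on the current state (plus noise)."""
    return (state + random.choice([-1, 0, 1])) % 10

def observe(state: int) -> str:
    """'Phenomenal' observation: a read-out of the state with no causal role."""
    return f"what-it-is-like-to-be-state-{state}"

state = 0
for t in range(5):
    obs = observe(state)       # derived from the state...
    state = transition(state)  # ...but never used to compute the next state
    print(t, obs)
```

Swapping observe for a different read-out function (or deleting it entirely) changes nothing about the state trajectory, which is exactly the sense in which the observations are epiphenomenal.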

Panpsychism: My view is also a kind of panpsychism in that I assert that everything in the universe has its own first person perspective: its own phenomenology or consciousness. However, perhaps the key distinction I make is really drilling down into what these phenomenologies entail, because the core point is that certain kinds of phenomenology must be supported by specific kinds of cognitive hardware which is ultimately instantiated in the physical world. This makes almost all phenomenologies completely degenerate in terms of experiences, moral valence, etc., although they technically exist. Panpsychism as it is often thought about inherits too much from a dualist view and thinks of consciousness by analogy to human-style consciousness, with feelings and sensations and memories and desires. Clearly it is absurd to ascribe this kind of phenomenology to a rock! The reason for the absurdity, however, is not intrinsic; it is just that the rock obviously does not have the ‘cognitive hardware’ needed to support this kind of phenomenology.

  1. About a month ago actually, because I was distracted and it takes time to write things up. 

  2. I know this name is super bad. Happy if anybody else has a better name. 

  3. Perhaps one way of thinking about how this is not as crazy as it sounds is by analogy to physics. Specifically, in quantum mechanics every particle intrinsically has a wavefunction, and these wavefunctions are technically infinite in extent in that every point in space has some amplitude for the particle. However, this amplitude is negligible almost everywhere except in the comparatively utterly tiny region of space where the particle is actually present. Similarly, in theories of gravity, every atom of mass interacts with every other atom of mass in the entire universe, although of course the vast majority of these interactions are utterly infinitesimal. Defining very general ‘universal’ conditions which are meaningfully satisfied in practice in only a tiny region is a very standard and common thing to do and does not necessarily lead to absurdity.

  4. In some sense the rhetorical move here is to just sweep the ‘hard problem of consciousness’ under the rug via assumption 1. Specifically, we have assumed that consciousness exists in a maximally general form in our universe. It is also very possible to imagine a universe where 1.) does not hold; in fact, it is identical to our universe except that everything in it is a p-zombie! By definition, empirically, we cannot distinguish between this universe and ours except that we know (do we?) that we are not p-zombies, and hence by an appeal to simplicity we must be in the universe of assumption 1.), since it is much simpler to assume that, since we are conscious, everything else is also conscious, rather than that we are somehow mystically special in being conscious while everything else is unconscious.

  5. For claims of moral patienthood, I consider the ability both to feel pain and to form long term memories of that pain to be very important. The long term memory part matters because it is what gives pain lasting consequences, and empirically this seems to play heavily into our intuitions. For instance, we seem okay with anesthesia even though in some cases it may just cause amnesia and paralysis rather than fully blocking out pain.

  6. I’m just assuming this. I haven’t read anywhere near enough primary sources to tell whether any Vitalists actually made this kind of argument.