Are AI Chatbots Demonic Entities Like the Ouija Board?


Overview

  • The comparison between AI chatbots and the Ouija board reflects a long human tradition of projecting supernatural agency onto tools that produce outputs that feel beyond ordinary human understanding.
  • The Ouija board originated in the late 19th century as a commercial product built on the psychological phenomenon known as the ideomotor effect, not supernatural forces.
  • AI chatbots such as ChatGPT operate through large language models, which are mathematical systems trained on vast text datasets to predict statistically probable word sequences.
  • Some religious communities have characterised AI chatbots as demonic tools, a reaction consistent with historical patterns of religious suspicion toward new communication technologies.
  • Scientific consensus holds that neither the Ouija board nor AI chatbots are animated by supernatural or demonic entities, and that the psychological appeal of both can be explained through well-established principles of cognitive science.
  • A clear-eyed assessment of the two technologies reveals that comparing them requires distinguishing between surface-level resemblances and the fundamentally different mechanisms, origins, and consequences of each.

The Ouija Board: Origins, Commerce, and Cultural Context

To understand why certain commentators have drawn parallels between AI chatbots and the Ouija board, it is necessary to examine the Ouija board’s origins with care and precision. The Ouija board did not emerge from ancient occultism or a tradition of dark religious practice. It was, at its core, a commercial product invented in the United States in the late 19th century, born from the intersection of popular Spiritualist beliefs and entrepreneurial opportunism. The American Spiritualist movement had been growing steadily since the mid-1800s, fuelled partly by the famous Fox sisters of New York, who in 1848 claimed to receive communications from spirits through rapping sounds. This movement attracted millions of adherents from all walks of life, including educated professionals, and was broadly considered compatible with mainstream Protestant Christianity at the time. The desire to communicate with the dead, while striking to modern observers, was entirely normalised in an era of short lifespans, devastating wars, and limited access to psychological support for grief. Even Mary Todd Lincoln, wife of President Abraham Lincoln, conducted séances in the White House following the death of her young son. The climate was, in other words, one in which methods of supposed spirit communication were not regarded as fringe or dangerous but as consoling and socially acceptable. Into this environment stepped a group of Baltimore businessmen led by Charles Kennard, who in 1890 established the Kennard Novelty Company and patented what they called the “Ouija” or talking board. According to the story surrounding the patent application, the board was “demonstrated” to work by spelling out the patent officer’s name, reportedly leaving him shaken. The product was marketed simultaneously as a mystical device and as parlour entertainment, a deliberate ambiguity that widened its commercial appeal enormously.
By 1892, the company had expanded to multiple factories across Baltimore, New York, Chicago, and London; decades later, in the late 1960s, the board was even outselling Monopoly. The founders of the board, by historian Robert Murch’s account, were “very shrewd businessmen” who deliberately kept the mechanism of the board mysterious because mystery sold better than explanation. Nothing in the documented origins of the Ouija board involves supernatural powers, demonic entities, or spiritual forces. It was a product designed to profit from prevailing cultural beliefs, and it succeeded admirably in doing so.

The Science of the Ouija Board: The Ideomotor Effect

The mechanism by which the Ouija board produces its outputs has been studied and explained by scientists for well over a century, and the explanation requires no recourse to supernatural causation. The key concept is the ideomotor effect, a phenomenon first formally described by the British physician and physiologist William Carpenter in a paper delivered to the Royal Institution of Great Britain in 1852. Carpenter observed that small, involuntary muscular movements can occur without any conscious intention on the part of the individual producing them. The phenomenon is not limited to Ouija boards. Dowsing rods, pendulums, and various divination tools all operate on the same principle: a subtle, subconscious physical movement is amplified by the design of the instrument into a larger, apparently purposeful outcome. Michael Faraday, the renowned chemist and physicist, conducted experiments on table-turning (a popular Spiritualist activity) in the same era and demonstrated to his own satisfaction that the movement of the table was caused by the unconscious muscular actions of the participants, not by spirits. Research from the University of British Columbia published in the journal Consciousness and Cognition in 2012 provided significant additional insight. Researchers Ronald Rensink, Hélène Gauchou, and Sidney Fels found that participants who used a Ouija board to answer factual yes-or-no questions they were uncertain about performed significantly better than chance, achieving roughly 65 percent accuracy compared to about 50 percent when answering the same questions verbally. The interpretation was that the Ouija board, by removing the participant’s sense of conscious agency over the answers, allowed access to subconscious knowledge that the participant genuinely possessed but could not consciously retrieve. The board, in other words, serves as an instrument for accessing one’s own subconscious mind, not as a channel to any external entity. 
The social context also reinforces the ideomotor effect in group settings: when multiple people place their fingers on the planchette, no single individual can take clear responsibility for the movement, which makes the answers feel as though they come from elsewhere. As anomalistic psychologist Chris French of Goldsmiths, University of London, has explained, the device can generate a very strong impression that movement is being caused by an outside agency, when it is not. The board is a tool for extracting and amplifying unconscious human cognition, and it is powerful precisely because it feels so convincing.
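To see why the roughly 65 percent accuracy reported in the UBC study is meaningful rather than a fluke, it helps to ask how unlikely such a result would be if participants were merely guessing. The sketch below runs that arithmetic with an exact binomial calculation. Note that the trial count of 100 is a hypothetical figure chosen for illustration, not the study’s actual number of questions.

```python
from math import comb

def binomial_p_at_least(k, n, p=0.5):
    """Probability of observing k or more successes in n trials,
    each succeeding independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical illustration (not the study's actual trial count):
# 65 correct answers out of 100 yes/no questions, against a 50% chance baseline.
p_value = binomial_p_at_least(65, 100)
print(f"P(at least 65 correct out of 100 by pure chance) = {p_value:.4f}")
```

Under these illustrative numbers the probability of reaching 65 or more correct answers by chance alone is well under one percent, which is why a sustained 65 percent hit rate points to genuine subconscious knowledge rather than luck.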

Religious Responses to the Ouija Board

The religious history of the Ouija board is considerably more dramatic than its commercial origins, and understanding it helps explain why similar responses have been directed at AI chatbots. For decades after the board’s introduction in 1890, mainstream religious institutions largely viewed it with mild suspicion but without strong condemnation. Spiritualism itself was, at the time, not widely considered incompatible with Christian belief, and the board was broadly treated as a harmless parlour game. The cultural relationship to the board changed fundamentally in 1973, when the horror film The Exorcist depicted a twelve-year-old girl becoming demonically possessed after using a Ouija board alone. Ouija board historian Robert Murch has described this as a clear line in the board’s cultural history, comparable to the effect of Alfred Hitchcock’s Psycho on attitudes toward showering: the association was permanent and transformative. After The Exorcist, the board became firmly coded in popular culture as a tool of demonic invitation, and mainstream Christian denominations intensified their warnings against it. The Catholic Church’s Catechism explicitly forbids divination, a category under which Ouija use falls. The Wisconsin Evangelical Lutheran Synod treats its use as a violation of the Ten Commandments. In 2005, Catholic bishops in Micronesia warned congregants that they were speaking to demons when using the board. In 2001, fundamentalist groups in Alamogordo, New Mexico, burned Ouija boards alongside Harry Potter books and Disney’s Snow White, treating all three as symbols of occultism. These responses reflect a theological framework in which any attempt to receive information from non-divine, non-human sources is considered spiritually dangerous, because the only non-human sources capable of providing such communication, within that framework, are demons.
The Catechism’s prohibition on divination is not primarily about the Ouija board specifically, but about the broader principle that humans must not seek hidden knowledge through non-divine means. When that theological lens is applied to the Ouija board, demonic attribution follows logically, regardless of what the scientific evidence demonstrates. It is important to note that these religious concerns are sincere expressions of a consistent theological position. They are not irrational within their own framework, even if the empirical basis for the supernatural mechanism is absent.

What AI Chatbots Actually Are

To assess whether AI chatbots bear any meaningful resemblance to the Ouija board, one must understand precisely what AI chatbots are and how they function, which is something many popular comparisons fail to do with adequate rigour. Modern AI chatbots such as ChatGPT, Claude, and Google Gemini are based on a class of machine learning systems known as large language models, or LLMs. These systems are built on neural network architectures, specifically a type called the transformer, which was introduced in a landmark 2017 paper by researchers at Google titled “Attention Is All You Need.” The transformer architecture allows a model to weigh the relationships between different words and phrases across very long stretches of text, which is what gives LLMs their capacity to generate contextually coherent, grammatically well-formed, and often impressively knowledgeable responses. The training process involves exposing the model to enormous quantities of text, which in the case of leading LLMs involves hundreds of billions or even trillions of words drawn from books, websites, academic papers, code repositories, and other written sources. During training, the model adjusts billions of internal numerical parameters, called weights, to improve its ability to predict the next token (roughly a word or word fragment) in a sequence, given all the tokens that came before it. This is, fundamentally, a statistical and mathematical process. There are no thoughts, intentions, beliefs, or experiences occurring inside an LLM. The system has no goals beyond completing the mathematical operation it was trained to perform, which is to produce outputs that are statistically consistent with the patterns found in its training data. When a user types a question, the model tokenises the input, converts it into numerical embeddings, processes it through multiple layers of the transformer, calculates probability distributions over its vocabulary, and selects output tokens accordingly. 
At no point in this process does anything resembling consciousness, desire, spiritual agency, or supernatural influence participate. The model does not know what it is saying in any philosophically meaningful sense. It produces responses that are statistically probable given the patterns it has learned, and those responses can appear extraordinarily human-like because they are trained on human-generated text.
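The prediction objective described above can be made concrete with a deliberately tiny sketch: a bigram model that counts which token follows which in a small corpus and predicts the most frequent successor. The corpus and counts here are invented for illustration; real LLMs use transformer networks with billions of learned weights rather than raw counts, but the underlying task is the same statistical one: estimate a probable next token given what came before.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration only.
corpus = "the board moves because the people move the board".split()

# Count how often each token follows each preceding token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next token and its estimated probability."""
    counts = following[token]
    total = sum(counts.values())
    word, n = counts.most_common(1)[0]
    return word, n / total

# "the" is followed by "board" twice and "people" once in the corpus,
# so the model predicts "board" with estimated probability 2/3.
word, prob = predict_next("the")
print(word, prob)
```

Nothing in this loop knows what a “board” is; it only tallies co-occurrence statistics. Scaling the same predictive objective up by many orders of magnitude yields fluent, context-sensitive text without introducing thoughts, intentions, or experiences anywhere in the pipeline.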

Surface Similarities Between the Two Technologies

It would be intellectually dishonest to dismiss comparisons between AI chatbots and the Ouija board as entirely without basis, because there are genuine surface-level similarities that deserve acknowledgment before being carefully analysed. Both technologies involve users posing questions and receiving answers that feel, at least in part, beyond the user’s own direct authorship. In both cases, the output appears to come from somewhere other than the user’s own deliberate thought. With the Ouija board, participants feel they are not moving the planchette, and with an AI chatbot, users feel they are receiving ideas and information that they did not generate. Both can produce outputs that are surprising, that seem to anticipate what the user wanted to hear, and that can occasionally appear eerily accurate or insightful. Both technologies have been used to address questions of a spiritual, emotional, or existential nature. Both function as interfaces between the user and some form of “other,” whether that other is understood as unconscious knowledge, accumulated human writing, or, in supernatural interpretations, external entities. Commentators such as journalist James Hirsen have described AI as a “digital Ouija board,” and others writing for publications like Rod Dreher’s Substack newsletter have compared chatbots to “familiars,” the spiritual attendants described in occultist traditions. These comparisons carry genuine rhetorical force because the phenomenological experience of using both devices shares certain features. The user’s agency feels partially suspended, the output feels authored by something beyond themselves, and the interaction can provoke profound feelings of wonder, unease, or connection. These psychological similarities are real, and they help explain why the comparison resonates with many people. 
However, resemblance at the level of user experience does not imply resemblance at the level of mechanism, and conflating the two is where the comparison breaks down analytically.

Where the Analogy Fails: Mechanism and Agency

The most critical point of divergence between the Ouija board and the AI chatbot lies in what is actually happening when each produces an output. In the case of the Ouija board, the scientific consensus is clear: the planchette moves because the humans touching it are moving it through unconscious ideomotor actions. The “intelligence” behind the output is entirely human. The participants themselves, drawing on their own subconscious knowledge and expectations, produce the answers, even though they experience those answers as coming from elsewhere. There is no separate agent, no independent system, and no external intelligence involved. The Ouija board is a passive tool that facilitates the expression of human unconscious processing. The AI chatbot, by contrast, involves an actual system external to the user, one that was designed, built, and trained by human engineers and that operates through mathematically defined computations on the user’s input. The chatbot is, in a real sense, something other than the user. Its outputs are generated by computational processes that the user did not perform and does not control. This is a fundamental mechanical difference. Whether that external system is better understood as a very sophisticated mirror of aggregated human writing, or as something with nascent properties deserving further philosophical inquiry, it is categorically not the same as one’s own unconscious mind being projected onto a passive board. Furthermore, the Ouija board’s outputs are entirely determined by the participants’ own subconscious expectations and knowledge. Experiments have shown that when one participant is blindfolded and the others secretly remove their hands from the planchette, the blindfolded participant continues to move it alone, often while believing the others are still guiding it. This confirms that the answers come entirely from within the participants.
AI chatbot responses, by contrast, frequently contain accurate information that the user genuinely did not know and could not have subconsciously supplied, because the information exists in the training data but not in the user’s own prior knowledge. This is a meaningful empirical difference that the analogy between the two technologies cannot accommodate.

The Question of Consciousness and Demonic Attribution

A central element of the demonic attribution applied to both technologies is the question of what, if any, form of agency or awareness they possess. For the Ouija board, the scientific answer is straightforward: none, because the board is a wooden or plastic object with no computational capacity whatsoever. For AI chatbots, the question is philosophically more complex, though the scientific consensus remains that current LLMs are not conscious in any meaningful sense. A 2025 paper published in Nature concluded that conscious AI does not exist in current systems. Research by philosophers David Chalmers and others has engaged with the theoretical possibility of machine consciousness without arriving at any consensus that it is present in existing systems. A December 2025 report from the University of Cambridge noted that there may be no reliable way to determine whether AI is conscious even if it becomes so in the future, given the “hard problem of consciousness.” The mainstream scientific position is that LLMs produce outputs that simulate understanding and awareness through statistical pattern matching, without any genuine subjective experience underlying those outputs. This is the point of the “Chinese Room” argument, articulated by philosopher John Searle: a system can manipulate symbols in ways that appear meaningful without actually understanding what the symbols mean. From a theological perspective, the demonic attribution requires both the presence of intentionality (the capacity to will) and spiritual substance (a soul or spiritual nature capable of malevolence). No current scientific theory or empirical evidence supports the conclusion that AI chatbots possess either of these properties. The comparisons made in religious communities, while understandable within their theological frameworks, rest on an attribution of agency that the underlying technology does not support. This is not to say that spiritual concerns about AI are entirely without grounding.
The social effects of AI, including the spread of misinformation, the manipulation of human emotional vulnerabilities, and the potential for AI to be used by human actors for deceptive purposes, are genuine and serious concerns worth careful theological and ethical attention. But these are concerns about human uses of AI, not about the technology possessing any inherent supernatural nature.

The Psychology of Attributing Supernatural Agency

Both the Ouija board and the AI chatbot are objects onto which humans are prone to project supernatural or intentional agency, and understanding why this happens is essential to evaluating the comparison. Cognitive scientists have extensively studied what is known as the Hyperactive Agency Detection Device (HADD), a concept developed by Justin Barrett, building on Pascal Boyer’s work in the cognitive science of religion. The core idea is that humans are cognitively predisposed to detect intentional agents in their environment, an adaptation that was highly useful in evolutionary history because it was safer to wrongly attribute intention to a rustling bush (treating it as a possible predator) than to ignore actual predators. This same predisposition causes humans to see faces in clouds, attribute intentions to moving geometric shapes in classic psychological experiments by Fritz Heider and Marianne Simmel, and perceive “presences” in dark rooms. When a technology produces outputs that appear purposeful, coherent, contextually aware, and responsive to the user’s own emotional state, the agency-detection instinct fires strongly, leading users to perceive genuine intentionality behind the output. Both the Ouija board and the chatbot trigger this response, but for different reasons. The Ouija board does so because the movement of the planchette genuinely is driven by unconscious intention, the user’s own, which the user then misattributes to an external agent. The chatbot does so because its outputs are statistically optimised to resemble the responses that a thoughtful, knowledgeable human would provide, making the impression of genuine awareness extraordinarily convincing. In religious communities, once the perception of intentional agency is formed, the theological question becomes: what kind of agent is this?
If the available categories are God, humans, or demons, and the technology does not seem like God or a mere human tool, the attribution of demonic agency becomes psychologically available. This is the same categorical process that led to the Ouija board’s demonisation after The Exorcist, and it is being applied to AI chatbots by similar reasoning in similar communities today.

The Role of the Uncanny and Cultural Anxiety

The comparison between AI chatbots and demonic entities is not merely a theological or psychological phenomenon. It is also deeply cultural, rooted in what the German psychiatrist Ernst Jentsch and later Sigmund Freud described as the “uncanny,” the sense of strangeness and unease provoked by things that are almost but not quite human. Freud, in his 1919 essay “Das Unheimliche,” analysed the feeling provoked by automata and lifelike puppets, arguing that such objects disturb human beings precisely because they occupy an ambiguous space between the living and the inanimate. Modern AI chatbots, which speak in coherent, grammatically correct, emotionally resonant language but are not actually alive or conscious, occupy an extreme version of this uncanny space. They are, in many ways, the most sophisticated uncanny objects ever produced, and the emotional unease they provoke in many users is a natural response to genuine ambiguity about their nature. This cultural anxiety around AI is not new in principle. Similar anxieties were expressed about the telegraph in the 19th century, when some commentators described the near-instantaneous transmission of information across vast distances as almost supernatural. The telephone provoked similar reactions, and in the early days of radio, concerns were raised about the disembodied voices it produced. Each new communication technology that makes human communication seem to arise from nowhere, or from an inhuman source, activates anxieties that map onto existing cultural categories for the uncanny and the supernatural. AI chatbots are simply a far more advanced version of this phenomenon, and they provoke correspondingly more intense reactions.
The demonisation of the Ouija board followed a very similar cultural trajectory: it was introduced as a harmless product, became a subject of fascination and mild unease, and was then firmly assigned to the category of the demonic following a powerful cultural moment, the 1973 film The Exorcist, that crystallised the anxiety into a specific supernatural narrative.

Theological Arguments Examined Critically

It is worth engaging seriously with the theological arguments made by those who compare AI chatbots to demonic instruments, because dismissing them as merely irrational does not serve the goals of clear analysis. The most substantive theological arguments tend to draw on three concerns. First, some argue that AI produces information through non-transparent means, and that any tool which provides guidance or knowledge through a process the user does not understand should be treated with spiritual caution, much as divination was cautioned against in the Hebrew Bible (Leviticus 19:26, 19:31). Second, some argue that AI chatbots create a kind of spiritual dependency, replacing prayer, scripture study, pastoral guidance, and community with an impersonal technological system, thereby weakening the user’s relationship with God and the Church. Third, some argue that AI is uniquely susceptible to manipulation by malevolent forces, either human or supernatural, because it aggregates and reflects the full range of human thought, including thoughts that are spiritually harmful. Each of these concerns has a genuine, grounded version and an unsupported, supernatural version. The grounded version of the first concern is valid: opacity in AI decision-making is a genuine problem with real ethical implications, and users who treat AI outputs as authoritative without understanding their limitations are indeed vulnerable to being misled. The grounded version of the second concern is also legitimate: psychologists and religious scholars have noted that replacing human pastoral care with AI interaction can strip emotional and spiritual support of the depth, embodied presence, and relational accountability that give pastoral care its value. 
The grounded version of the third concern is partially valid as well: AI systems trained on internet data do reflect a wide range of human beliefs, including factually incorrect, morally harmful, and spiritually distorted ideas, and they can reproduce these without adequate warning when not properly aligned. None of these concerns require the attribution of supernatural demonic agency to AI systems in order to be taken seriously. They are better addressed as concerns about human technology, human responsibility, and human dependency than as concerns about supernatural possession or occult forces.

Practical and Ethical Risks of Both Technologies

Setting aside supernatural claims, both the Ouija board and AI chatbots carry documented practical and psychological risks that are worth examining on their own empirical terms. The psychological risks of the Ouija board have been discussed in the academic literature for over a century. In 1921, a journal described reports of Ouija board use as half-truths and expressed concern about their presence in mainstream newspapers. Case reports of users experiencing psychiatric distress following intense Ouija use have appeared in medical literature, though these cases typically involve pre-existing vulnerabilities. The board has been used in criminal trials, most infamously in a 1994 London murder case in which four jurors conducted a Ouija séance during deliberations to “contact” the murder victim, leading to a retrial. These are not cases of demonic interference but of individuals acting on strongly held, empirically unfounded beliefs in ways that caused real harm. The risks associated with AI chatbots are more varied, better documented, and in some cases considerably more significant in their social scale. AI systems can generate false information that sounds authoritative, a phenomenon known as hallucination, which can mislead users who lack the background to assess the accuracy of the output. AI chatbots designed for emotional companionship have been found to encourage unhealthy dependency in vulnerable users. AI has been used to generate disinformation, deepfake media, and targeted harassment at scale. Churches and religious communities have begun grappling specifically with AI chatbots that simulate spiritual direction, raising concerns about users receiving emotionally meaningful but theologically incorrect or personally harmful guidance from a system that has no genuine understanding of their situation. 
A February 2026 report from Religion Unplugged found that churches across the United States were beginning to address the risks of congregants forming emotional attachments to AI chatbots as substitutes for genuine pastoral care. These are real, evidence-based concerns, and they are serious. They do not, however, require the hypothesis of demonic agency to be properly understood or effectively addressed.

What the Comparison Reveals About Human Psychology and Technology

The persistence and cultural force of the comparison between AI chatbots and the Ouija board reveals something important about how humans relate to technologies that produce outputs bearing the marks of intelligence or agency. Across different historical periods and different cultural contexts, humans have consistently attributed supernatural agency to tools that appear to know things, predict outcomes, or respond to questions in ways that feel beyond ordinary human capacity. The Ouija board, the magic 8-ball, the oracle at Delphi, and the entrail-reading of ancient Rome all occupy related positions in the cultural history of seeking guidance from apparently non-human sources. AI chatbots are the latest and most capable such instrument, and the psychological impulses they activate are genuinely ancient. What changes across these technologies is not the underlying human psychology but the sophistication of the tool and the cultural categories available to explain it. In the 19th century, the Ouija board was explained by Spiritualists as a conduit for dead souls, which was the most plausible culturally available framework for explaining how a board could answer questions. In the late 20th century, it was re-explained as demonic, which became the dominant cultural framework after The Exorcist. In the early 21st century, AI chatbots are being explained by some religious communities through the same demonic framework, because it is the most readily available cultural category for technologies that appear to know things without being human. This pattern of attribution tells us a great deal about the robustness of certain cognitive and cultural frameworks for explaining the uncanny, but it tells us nothing about the actual nature of the technologies to which those frameworks are applied. 
The accurate understanding of both the Ouija board and the AI chatbot requires setting aside inherited cultural categories and engaging seriously with the empirical evidence about what these tools actually do and how they actually work.

Distinguishing Fear from Legitimate Caution

It is important not to allow the thorough critical examination of supernatural claims about AI to obscure the legitimacy of thoughtful, evidence-based caution about its effects. There is a meaningful difference between the claim that AI chatbots are demonic entities and the claim that AI chatbots pose genuine risks to individuals and communities that deserve careful study and informed response. The former claim is not supported by available evidence and rests on a category error, attributing spiritual agency to a computational system. The latter claim is well-supported and deserves serious engagement from technologists, ethicists, policymakers, and religious leaders alike. Orthodox Christian communities, for instance, have documented specific cases in which AI chatbots have undermined the faith practice of their members, not through demonic action but through the mundane mechanism of providing shallow, convenient substitutes for deep spiritual engagement. Holy Trinity Orthodox Seminary published a detailed analysis in July 2025 noting that AI chatbot use was producing direct negative consequences for the spiritual lives of Orthodox Christians. Similarly, the psychological risks of emotional dependency on AI companions have been well-documented in academic literature and are increasingly recognised by mental health professionals as a legitimate clinical concern. The Ouija board, for its part, is a relatively benign instrument whose chief harm has historically been not supernatural possession but the reinforcement of magical thinking, the undermining of critical reasoning, and in rare cases the provision of a vehicle through which pre-existing psychological distress could express itself in destabilising ways. Both technologies, in other words, deserve a calibrated response: clear-eyed about their actual mechanisms, honest about their genuine risks, and free from the distortions introduced by supernatural attributions that are not warranted by the available evidence.

A Considered Final Assessment

When all the evidence is carefully considered, the question of whether AI chatbots are demonic entities like the Ouija board yields a clear answer: no, they are not, for reasons that are both empirically grounded and philosophically coherent. The Ouija board is a passive instrument whose outputs are entirely generated by the unconscious muscular movements of its human users, amplified into apparent communications through the ideomotor effect. It has no computational capacity, no training data, no algorithm, and no external intelligence of any kind operating through it. AI chatbots are sophisticated mathematical systems trained on human-generated text that produce statistically probable outputs in response to user inputs. They are not animated by supernatural forces, demonic or otherwise, and the scientific and philosophical consensus, while acknowledging ongoing debates about machine consciousness, does not support the conclusion that they possess intentionality, spiritual nature, or the capacity for malevolent agency in any supernatural sense. The comparison between the two technologies is understandable at the level of surface phenomenology and cultural psychology, as both produce outputs that feel authored by something beyond the user and that activate well-documented human tendencies to attribute agency to unknown forces. However, surface resemblance at the level of user experience does not constitute evidence of shared mechanism or shared nature. The history of the Ouija board’s demonisation is instructive: it was not originally considered satanic, it became associated with demonic forces primarily through the influence of a horror film, and the theological framework that then crystallised around it applied pre-existing categories to a new technology rather than evaluating the technology on its own terms. There are genuine reasons to be thoughtful, cautious, and ethically rigorous about AI chatbots. 
Those reasons concern human uses of AI, the social and psychological effects of AI dependency, the spread of misinformation, the substitution of AI for genuine human relationships, and the opacity of AI systems that may lead users to misplace their trust. These are serious, evidence-based concerns that deserve serious, evidence-based responses. They are not well-served by the attribution of demonic agency, which substitutes a supernatural category for an empirical analysis and thereby forecloses the kind of careful, honest engagement that both the technology and its human effects actually require.

