Think of your brain as not only a perception and aesthetic machine, but also as a game engine. In terms of games you could play, a déjà vu might be thought of as a hash collision with a previously encountered (or simulated) state of a game with characteristic and irreducibly-complex decision-points similar to the ones you are experiencing right now. The microaffect that was deemed appropriate for that given situation, within the context of the game that was being played, is triggered in an inappropriate situation. This is to be expected insofar as the hashing function is both quick and imperfect; it trades complete accuracy for speed, as the latter is critical in real-world scenarios. This explains why déjà vus nearly universally have a characteristic affect: it's not so much that you feel that "the situation is the same" as that the way it feels emotionally is reminiscent of a "game situation" you've encountered before. A déjà vu is a repeat of a game-state-specific feeling of behavioral restriction.
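To make the analogy concrete, here is a minimal sketch of a quick-but-imperfect hash over "situation features." Everything here is hypothetical and purely illustrative (the feature vectors, the bucket count, the quantization scheme are all made up); the point is only that a hash which buys speed by discarding detail will sometimes map two distinct situations to the same bucket, i.e. collide:

```python
def lossy_hash(features, buckets=8):
    """A fast, lossy hash over a situation's feature vector.

    Each feature is quantized coarsely before hashing, so fine
    detail is thrown away. Speed is bought with imprecision:
    distinct situations can land in the same bucket (a collision),
    triggering the affect cached for the earlier one.
    """
    coarse = tuple(round(f * buckets) for f in features)
    return hash(coarse) % buckets

# Two different situations whose coarse "feel" is close enough
# to collide (hypothetical feature values):
past_game_state = (0.52, 0.31, 0.90)
current_state = (0.53, 0.30, 0.91)

print(lossy_hash(past_game_state) == lossy_hash(current_state))  # collision
```

Note that a finer quantization (more buckets) would separate these two states; the collision is a direct consequence of trading resolution for lookup speed, which is the trade-off the paragraph above attributes to the brain's hashing function.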
Are we to be reduced to computers? Possibly. Presumably the hard problem of consciousness will be found somewhere in our code?
No, not at all 🙂 That said, there *are* important concepts from computer science that have meaningful translations into consciousness studies. There is, however, always a risk of extending the analogies too far. In the case of QRI, we think of consciousness as a physical state (cf. Dual Aspect Monism) rather than a computational state. See: https://qualiacomputing.com/2017/07/22/why-i-think-the-foundational-research-institute-should-rethink-its-approach/
In the paradigm of Marr's levels of analysis, people who think we are computers believe, strictly speaking, that consciousness is a set of algorithms (i.e. that it lives at the algorithmic level of analysis). What we believe is that consciousness is an implementation-level feature of the brain, which is then utilized to implement consciousness-specific algorithms.
Ah! Spinoza. Excellent stuff. Long one of my favourites.