| Aspect | Human Unconscious Mind | Generative AI |
| --- | --- | --- |
| Information Processing | Processes vast amounts of information rapidly and automatically, often without conscious awareness (From the first studies of the unconscious mind to consumer neuroscience: A systematic literature review, 2023) | Processes large datasets quickly, extracting patterns and generating outputs without explicit programming for each task (Deep Learning, 2015) |
| Pattern Recognition | Recognizes complex patterns in sensory input and past experiences, influencing behavior and decision-making (Analysis of Sources about the Unconscious Hypothesis of Freud, 2017) | Excels at identifying patterns in training data, forming the basis for generating new content or making predictions (A Survey on Deep Learning in Medical Image Analysis, 2017) |
| Creativity | Contributes to creative insights and problem-solving through unconscious incubation and associative processes (The Study of Cognitive Psychology in Conjunction with Artificial Intelligence, 2023) | Generates novel combinations and ideas by recombining elements from training data in unexpected ways, e.g., GANs in art generation (Generative Adversarial Networks, 2014) |
| Emotional Processing | Processes emotional information rapidly, influencing mood and behavior before conscious awareness (Unconscious Branding: How Neuroscience Can Empower (and Inspire) Marketing, 2012) | Can generate text or images with emotional content based on patterns in training data, but lacks genuine emotions (Language Models are Few-Shot Learners, 2020) |
| Memory Consolidation | Plays a crucial role in memory consolidation during sleep, strengthening neural connections (The Role of Sleep in Memory Consolidation, 2001) | Analogous processes appear in some AI systems; generative diffusion models, for example, can function as associative memory networks, consolidating stored patterns in ways that improve performance (In search of dispersed memories: Generative diffusion models are associative memory networks, 2024) |
| Implicit Learning | Acquires complex information without conscious awareness, as in procedural learning (Implicit Learning and Tacit Knowledge, 1994) | Learns complex patterns and rules from data without explicit programming, similar to implicit learning in humans (Deep Learning for Natural Language Processing, 2018) |
| Bias and Heuristics | Employs cognitive shortcuts and biases that can lead to systematic errors in judgment (Thinking, Fast and Slow, 2011) | Can amplify biases present in training data, leading to skewed outputs or decision-making (Mind vs. Mouth: On Measuring Re-judge Inconsistency of Social Bias in Large Language Models, 2023); see the bias-amplification sketch below the table |
| Associative Networks | Forms complex networks of associations between concepts, influencing thought and behavior (The associative basis of the creative process, 2010) | Creates dense networks of associations between elements in training data, enabling complex pattern completion and generation tasks (Attention Is All You Need, 2017); see the attention sketch below the table |
| Parallel Processing | Processes multiple streams of information simultaneously (Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 1986) | Utilizes parallel processing architectures (e.g., neural networks) to handle multiple inputs and generate outputs (Next Generation of Neural Networks, 2021) |
| Intuition | Generates rapid, automatic judgments based on unconscious processing of past experiences (Blink: The Power of Thinking Without Thinking, 2005) | Produces quick outputs based on learned patterns, which can appear intuitive but lack genuine understanding (BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019) |
| Priming Effects | Unconscious exposure to stimuli influences subsequent behavior and cognition (Attention and Implicit Memory: Priming-Induced Benefits and Costs, 2016) | Training on specific datasets can “prime” generative AI to produce biased or contextually influenced outputs (AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias, 2018) |
| Symbol Grounding | Grounds abstract symbols in sensorimotor experiences and emotions (The Symbol Grounding Problem, 1990) | Struggles with true symbol grounding, relying instead on statistical correlations in text or other data (Symbol Grounding Through Cumulative Learning, 2006) |
| Metaphorical Thinking | Uses embodied metaphors to understand and reason about abstract concepts (Metaphors We Live By, 1980) | Can generate and use metaphors based on learned patterns but lacks deep understanding of their embodied nature (Deep Learning-Based Knowledge Injection for Metaphor Detection, 2023) |
| Dream Generation | Produces vivid, often bizarre narratives and imagery during REM sleep (The Interpretation of Dreams, 1900) | Some generative models can produce dream-like, surreal content (Video generation models as world simulators, 2024) |
| Cognitive Dissonance | Automatically attempts to reduce inconsistencies between beliefs and behaviors (A Theory of Cognitive Dissonance, 1957) | Mixture-of-experts (MoE) architectures can handle a wider range of inputs without ballooning model size, suggesting a potential path to resolving conflicts between different AI components by synthesizing expert opinions into a coherent whole (Optimizing Generative AI Networking, 2024); see the MoE sketch below the table |
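
To make the bias-amplification row a bit more concrete, here is a minimal, hypothetical sketch: a trivial "model" that always emits the most frequent continuation seen in its training data turns a 60/40 skew in the data into a 100/0 skew in its outputs. The toy corpus, the `doctor`/`he`/`she` tokens, and the greedy decision rule are illustrative assumptions only, not a claim about how any particular system behaves.

```python
from collections import Counter

# Hypothetical, skewed training corpus: 60 "doctor -> he" pairs, 40 "doctor -> she" pairs.
corpus = [("doctor", "he")] * 60 + [("doctor", "she")] * 40
counts = Counter(next_word for _, next_word in corpus)

def greedy_model(prompt="doctor"):
    # Always emit the single most frequent continuation seen in training
    # (the prompt is ignored in this toy; there is only one context).
    return counts.most_common(1)[0][0]

data_share = counts["she"] / sum(counts.values())
model_share = sum(greedy_model() == "she" for _ in range(100)) / 100
print(f"'she' in training data: {data_share:.0%}, in model outputs: {model_share:.0%}")
```

A 40% minority pattern in the data shows up 0% of the time in the outputs; real generative models are far subtler, but greedy or low-temperature decoding can push skewed statistics toward the majority pattern in the same way.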
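
The "Associative Networks" row cites the Transformer's attention mechanism; the sketch below shows scaled dot-product self-attention on toy data, where the weight matrix can be read as pairwise association strengths between tokens. The random embeddings, tiny dimensions, and the omission of learned query/key/value projections are simplifying assumptions, not a trained model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # Queries, keys, and values are all the raw token embeddings here,
    # to keep the "association" idea front and center.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)        # pairwise association strengths
    weights = softmax(scores, axis=-1)   # each row is a distribution over tokens
    return weights @ X, weights          # each output is an association-weighted blend

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))         # 4 toy token embeddings, dimension 8
blended, assoc = self_attention(tokens)
print(np.round(assoc, 2))                # the "associative network" between tokens
```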
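
For the mixture-of-experts point in the last row, this sketch shows the basic idea under simplifying assumptions: a gating network scores the input, and the experts' outputs are blended according to those scores. The linear experts, the softmax gate, and the dense (non-top-k) routing are illustrative choices, not the architecture of any specific system.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_experts = 8, 4
experts = [rng.normal(size=(d, d)) * 0.1 for _ in range(n_experts)]  # one linear "expert" each
gate = rng.normal(size=(d, n_experts)) * 0.1                         # gating-network weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(x):
    g = softmax(x @ gate)                          # how much weight each expert gets
    opinions = np.stack([x @ W for W in experts])  # every expert's output, shape (n_experts, d)
    return g @ opinions, g                         # blend the "opinions" into one output

x = rng.normal(size=d)
y, g = moe_layer(x)
print(np.round(g, 2))   # the per-expert mixing weights for this input
```

Whether this kind of gating amounts to "resolving dissonance" between components is the speculative part; the mechanics of routing and blending are what the sketch is meant to show.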
 
Very few lemmy users have enough background to appreciate your work here.
¯\\_(ツ)_/¯ mostly looking to spark discussion.