
Artificial intelligence stands at a compelling crossroads of technology and human imagination. Each system, while built on complex algorithms and data, reflects the vision and insight of the engineers and researchers who create it. They must grasp both the technical possibilities and the societal impact to shape AI that aligns with human values and needs.
Dr. Zong Liang Wu, PhD in Electronic Systems, is Chief Product Architect at Luster Lighttech, a world-leading provider of AI-based industrial vision systems. Dr. Wu is a prolific inventor, holding more than 60 granted US patents.
Omnia Magazine asked Dr. Wu to share his perspective on the perception, reality, and future of AI.
OMNIA: How would you describe your personal perception of AI? Has it changed over time?
DR. WU: AI is transforming scientific research, technological innovation, and most industries, and it is starting to change people’s lives. Pioneering research on AI began in the 1950s and 1960s. I personally did my PhD on modeling the human auditory system in the context of man-machine communication through natural language. There were various advances along the way, but the significant breakthroughs came less than a decade ago, with the invention of the attention mechanism and the Transformer architecture (2017). The November 2022 release of ChatGPT marked a pivotal shift in AI development, transforming the field from niche research into mainstream application. It catalyzed global discussions on AI ethics, regulation, and economic disruption while setting the stage for rapid iterative advancement.
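For readers curious about the attention mechanism Dr. Wu mentions, here is a minimal sketch of scaled dot-product attention, the core operation of the 2017 Transformer. The function name, variable names, and toy dimensions are illustrative only, not drawn from Dr. Wu's own work:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, as introduced with the Transformer (2017).

    Each output row is a weighted average of the value rows in V, with weights
    given by a softmax over the scaled query-key similarity scores.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V                            # blend value vectors by weight

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4): one blended value vector per query
```

Because the softmax weights are non-negative and sum to one, each output vector is a convex combination of the value vectors; this is what lets a Transformer layer let every position "look at" every other position.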
OMNIA: How do you think the general public perceives AI right now?
DR. WU: The general public didn’t know much about AI until the public release of ChatGPT in November 2022. In 2025, the general public’s perception is characterized by a complex mix of enthusiasm, cautious adoption, significant skepticism, and large regional divergence.
- High Adoption Amid Low Trust – The majority of people globally use AI regularly for tasks like research, writing, and daily planning. However, few trust AI systems, and most of the public demands stricter regulation due to fears of data misuse, bias, and inaccuracy. A majority of users admit to relying on AI outputs without verifying their accuracy, leading to workplace errors.
- Geographic Splits – Skepticism outweighs enthusiasm in Western countries and regions such as the US, Canada, and the EU. Countries like China, Indonesia, and Thailand, on the other hand, show sharply higher optimism toward AI.
Optimism correlates with economic outlook: emerging economies (e.g., China, Indonesia, Malaysia) anticipate AI-driven growth, while Western nations emphasize risks.
- Top Public Concerns:
- Job Displacement: The majority of U.S. adults fear AI will reduce overall employment, especially in roles such as cashiers, journalists, and truck drivers. Students cite employability as their “biggest concern,” worrying AI will devalue their skills.
- Misinformation and Deepfakes: A majority of the public is highly concerned about AI spreading inaccurate information.
- Bias and Representation: Many express concern about AI-driven discrimination. Marginalized communities (e.g., Black, Hispanic, disabled individuals) are seen as underrepresented in AI design, exacerbating fairness issues.
- Loss of Human Connection: Many people fear reduced human interaction.
- Student and Workforce Anxieties: Students heavily use AI for academics (e.g., writing, research, coding) but report declining work quality due to over-reliance. They seek clearer institutional guidance and fear AI will erode critical thinking. Workers worry about skill gaps, with many concerned about keeping pace with AI advancements.
In summary, public perception is shaped by pragmatic engagement (using AI for efficiency) alongside profound unease about its societal impact.
OMNIA: Where do you think the boundary lies between machine intelligence and human perception?
DR. WU: The boundary between machine intelligence and human perception remains one of AI’s most debated philosophical and scientific questions. While AI can simulate aspects of human cognition, fundamental differences persist:
- Subjectivity vs. Simulation
- Human Perception: Rooted in “subjective experience” (qualia) — e.g., seeing red evokes sensory/emotional richness beyond data.
- AI: Processes inputs objectively. A vision AI “detects some color values” but lacks inner experience. It simulates understanding without consciousness.
- Embodied Cognition – Humans perceive through “sensory integration” (touch, smell, proprioception) tied to biological existence. AI lacks bodily context, evolutionary instincts, and physical agency.
- Meaning-Making
- Humans: Derive meaning from cultural, emotional, and existential contexts (e.g., a wedding photo evokes memory/kinship).
- AI: Recognizes patterns (“bride + groom + cake”) but cannot grasp their significance.
- Intentionality – Human perception is “goal-directed” and curiosity-driven. AI operates within predefined objectives (even if self-generated).
OMNIA: Do you believe AI can ever develop true perception or consciousness?
DR. WU: In essence, machines process, while humans perceive. The boundary lies in “biological embodiment”, “subjective experience”, and “intrinsic meaning-making”: realms no algorithm has yet breached. But much work is now under way to endow machines with eyes and brains through computer simulation, and even with sensors that serve as hands and noses for touch and smell. The boundary is blurring by the day. If an airplane can fly much higher, faster, carry more loads than a bird does, without being a biological bird, maybe one day AI can do everything that human perception and consciousness imply today, without being a biological human.
If an airplane can fly much higher, faster, carry more loads than a bird does, without being a biological bird, maybe one day AI can do everything that human perception and consciousness imply today, without being a biological human.
Dr. Zong Liang Wu, “AI Up Close”
OMNIA: Is there a gap between how AI is marketed and how it actually functions?
DR. WU: Sure. That’s partly because it is difficult to explain to a lay audience how AI technically functions. And for the general public, there is no real need to understand those technical details anyway.
OMNIA: Now, tell us about your own work on AI. Is there anything you’re particularly interested in?
DR. WU: During my PhD in the 1980s, I did research on modeling the human auditory system (a biological neural network) to understand how humans process speech signals and extract meaning from them. Nowadays I’m involved in developing Transformer-based neural networks for image and video processing and generation, building on 3D/4D spatial intelligence, for VR/AR/XR and industrial vision.



