Responsible Intelligence reclassifies regulatory compliance and stakeholder communication from ‘soft skills’ to essential technical competencies. Real progress comes from solutions that are practical, reliable, and equal to everyday challenges. The most meaningful innovations deliver strong results not only in theory but in real-life use, maintaining their effectiveness long after the initial excitement fades.
Victor Chang, a Professor of Business Analytics and AI Expert, has focused his career on building systems that work well in everyday environments. Unlike many who prioritize speed, benchmarks, and showy outcomes, Chang concentrates on developing systems that remain dependable under pressure, respect real-world limits, and continue to perform long after the initial research support ends.
With over 26 years of experience in higher education and industry, more than 300 peer-reviewed publications, around 30,000 citations, and over £3 million in research funding as Principal Investigator, Chang ranks among the top 0.2 percent of scientists worldwide by citation impact. Yet his influence extends beyond these figures. His guiding principle is that solutions must demonstrate their usefulness by succeeding in practical, everyday settings.
From work in NHS hospitals and financial institutions to frameworks for collaborative learning and scalable mentorship programs, Chang’s efforts consistently tackle a major challenge in education today: bridging the gap between what can be created through research and what is truly useful in practice.
From Cloud Systems to Human Systems
Professor Victor Chang’s journey into Business Analytics, AI, and Data Science did not begin with algorithms. It began with discomfort.
Early in his career, his focus was on cloud computing and data systems, areas already gaining academic momentum. But around 2010, while working closely with research students and NHS hospital trusts, Chang noticed a troubling pattern. Technically brilliant models, grounded in strong theory, repeatedly failed once deployed in clinical environments.
Doctors had seconds, not minutes, to act. The data was incomplete. Organisational politics shaped adoption. Privacy rules restricted access. What looked elegant in a journal paper collapsed in practice.
That was the moment he realised universities were teaching the wrong thing. Students were learning how to optimise models, not how to make decisions survive real-world constraints.
This insight fundamentally altered his trajectory. AI, he concluded, is not merely a technical discipline. It is an ecosystem involving trust, governance, ethics, organisational behaviour, and human judgment. His research focus expanded accordingly, leading him toward privacy-preserving AI, federated learning, secure analytics, and systems-level thinking.
Choosing the Harder Path
One of the defining moments in Victor Chang’s career came between 2018 and 2019, when he faced a choice familiar to many senior academics. He could pursue cleaner, more theoretical work that guaranteed high-impact publications, or commit to applied research that was messier, slower, and riskier.
The academic system subtly discourages the latter. Applied projects involve unpredictable stakeholders, incomplete data, and outcomes that cannot always be neatly framed for elite journals. Victor Chang chose the harder path.
Collaborating with NHS trusts and financial institutions, he immersed himself and his students in live environments where AI systems had real consequences. The payoff was transformative.

These collaborations surfaced research questions that no textbook addressed. How do you design models clinicians trust? How do you preserve privacy across institutions? How do you deploy AI within regulatory and operational limits?

The applied work did not dilute the research. It changed it. That decision now defines his reputation as a professor whose work does not end at publication, but continues through deployment, maintenance, and long-term use.

Teaching Without Silos
Perhaps the most radical aspect of Professor Chang’s approach is his refusal to separate teaching, research, and practice.
In conventional programs, students learn theory first, complete isolated projects later, and only encounter reality after graduation. Victor Chang inverts this model entirely.
His students work on active research problems tied to real organisations. A master’s student may help design a federated learning pipeline for an NHS trust. A PhD candidate may deploy a privacy-preserving model where clinicians will use the outputs the same week. This changes everything.
Data quality becomes tangible. Ethics becomes immediate. Communication becomes survival-critical. A false alert is no longer an abstract metric. It could waste clinical time or erode trust.

Victor Chang also teaches what many programs ignore. Procurement processes, regulatory compliance, stakeholder communication, ethical trade-offs, and organisational constraints are treated as core material. These are not soft skills. They are core competencies for anyone who wants their AI work to matter.

**From Traditional AI Education to Responsible Intelligence**

| Traditional AI Education | Chang’s Responsible Intelligence Approach |
|---|---|
| Optimising model accuracy | Designing systems that survive real-world constraints |
| Clean, curated datasets | Messy, incomplete, regulated data |
| Ethics as discussion | Ethics as implementation |
| Research ends at publication | Research continues through deployment and maintenance |
| Individual brilliance | Systems, teams, and long-term use |
A System That Delivered and Endured
One NHS collaboration exemplifies Chang’s philosophy in action.
Rather than following the traditional grant-to-publication pipeline, Victor Chang structured the project as a learning laboratory. Two PhD students, three master’s students, six early-career researchers, and a visiting professor worked alongside clinicians and administrators.
Students attended governance meetings. They explained design decisions to surgeons. They adapted systems under real-world pressure.
The result was a deployed AI system achieving over 90 percent accuracy in predicting post-operative complications, rolled out in NHS trusts with expanding partnerships in international healthcare systems. The system improved resource allocation and contributed to an estimated 15 to 20 percent reduction in preventable readmission costs.
The human outcomes were equally strong. PhD researchers published in high-impact journals and secured prestigious fellowships. Master’s students entered health-tech roles. Several alumni now work as AI consultants for the NHS and other healthcare organisations.
Most tellingly, years later, clinicians are still using the system and requesting expansions.
That, Victor Chang says, is what success looks like.
Scaling Mentorship Without Burning Out
As Chang’s research footprint expanded, another challenge emerged: mentorship at scale. By 2020, he was supervising over 40 researchers directly and supporting more than 200 across networks. Traditional one-to-one supervision was unsustainable. Rather than scale back his involvement, Chang designed systems.
His Scalable Mentorship Framework uses tiered responsibility across career stages, peer-learning networks, and extensive documentation. Knowledge is shared, not hoarded. Junior researchers learn from near peers. Chang remains deeply involved, but no longer a bottleneck. Systems beat heroics. You cannot mentor at scale through charisma alone.
The result is a global research network spanning over 100 institutions and more than 250 mentored researchers across 20-plus countries. It is an influence that compounds far beyond any single paper.
Closing the Academic and Industry Gap
According to Chang, the biggest failure in AI education is the belief that technical excellence is enough.
In reality, deployment depends on regulatory compliance, ethical governance, organisational alignment, and long-term maintenance. Privacy and security, in particular, are no longer optional.
Victor Chang addresses these gaps deliberately. His students work with messy, incomplete data. They navigate GDPR constraints. They design systems using federated learning and differential privacy, not as theoretical concepts, but as operational necessities.
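Differential privacy, one of the techniques mentioned above, is concrete enough to sketch. The following toy example (an illustration of the general technique, not Chang’s actual system) releases a count from a sensitive dataset using the Laplace mechanism: calibrated noise masks any single individual’s contribution.

```python
import math
import random

def dp_count(values, threshold, epsilon, sensitivity=1.0):
    """Release a count of values above `threshold` with epsilon-differential
    privacy via the Laplace mechanism. Adding or removing one record changes
    the true count by at most `sensitivity`, so noise drawn with scale
    sensitivity/epsilon hides any individual's contribution."""
    true_count = sum(1 for v in values if v > threshold)
    # Sample Laplace(0, sensitivity/epsilon) noise by inverse transform.
    u = random.uniform(-0.5, 0.5)
    noise = -(sensitivity / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return true_count + noise

# Smaller epsilon means stronger privacy but a noisier, less accurate answer.
estimate = dp_count([1, 5, 9, 12, 3], threshold=4, epsilon=1.0)
```

The key design point is the trade-off the text describes: epsilon is a privacy budget, and the analyst pays for accuracy with disclosure risk.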
He also teaches the invisible curriculum. Proposal writing, stakeholder communication, professional networking, and strategic career management are treated as essential skills.
These capabilities determine whether work survives beyond the lab.

Frameworks That Redefine Learning
Over the years, Victor Chang has developed original models that now shape his programs.
The Research and Practice Integration Model structures learning around active, real-world research problems with stakeholders involved from day one.
The Scalable Mentorship System enables high-quality supervision across hundreds of researchers.
The Cross-Sector Learning Laboratory allows students to work simultaneously across healthcare, finance, and government, learning transferable patterns rather than narrow specialisations.
Together, these frameworks redefine Business Analytics education as a living system rather than a static curriculum.
The Future of AI Education
Looking ahead, Victor Chang sees clear shifts shaping the next decade.
Privacy and security will become baseline requirements. AI roles that lack these competencies will disappear. Education must pivot from model-building to AI systems engineering, including MLOps, governance, monitoring, and lifecycle management.
Ethics must become technical. Students should graduate able to implement fairness audits, explainability tools, and accountability mechanisms, not just discuss them.
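A fairness audit of the kind described above can start very simply. As a minimal sketch (one possible metric among many, not a prescribed curriculum), the demographic parity gap measures how unevenly a model distributes positive predictions across groups:

```python
def demographic_parity_gap(preds, groups):
    """Fairness audit metric: the difference between the highest and lowest
    positive-prediction rates across groups. A gap of 0.0 means every group
    receives positive predictions at the same rate."""
    rates = {}
    for pred, group in zip(preds, groups):
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + pred, total + 1)
    group_rates = [hits / total for hits, total in rates.values()]
    return max(group_rates) - min(group_rates)

# Group "a" is approved 2/3 of the time, group "b" only 1/3: gap ≈ 0.33.
gap = demographic_parity_gap([1, 1, 0, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
```

Implementing even a metric this small forces the trade-off discussions the text calls for, because closing the gap usually costs accuracy somewhere.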
Perhaps most importantly, adaptability will eclipse any single skill.
The field will change faster than any curriculum. The most important capability is learning how to learn.
Recognition Grounded in Practice
Professor Chang’s work has earned wide recognition, including Data Leader of the Year 2025, UK Inspirational Individual of the Year 2024, and Cybersecurity Initiative of the Year 2025, alongside fellowships across leading professional bodies.
Yet he measures success differently.
What matters most is that systems are still being used, institutions ask for more, and former students return as collaborators.
Ethics as Design, Not Decoration
For Chang, ethics and privacy are not constraints. They are enablers.
Federated learning allows hospitals to collaborate without sharing patient data. Differential privacy enables analytics where centralised access would be impossible. Explainability builds trust where black boxes would fail.
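The federated learning idea in the paragraph above can be sketched in a few lines. This is a toy version of one aggregation round in the standard FedAvg scheme (an illustration of the general approach, not the deployed NHS system): each hospital trains locally and shares only model weights, and a coordinator averages them, so patient records never leave their institution.

```python
def federated_average(site_weights, site_sizes):
    """One round of federated averaging (FedAvg): combine per-site model
    weight vectors into a global model, weighting each site by the size
    of its local dataset. Only weights cross institutional boundaries."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(weights[i] * size / total
            for weights, size in zip(site_weights, site_sizes))
        for i in range(n_params)
    ]

# Two hospitals: the second holds three times as much data, so its
# weights dominate the averaged global model.
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
```

In practice this round repeats: the global model is sent back to each site, trained further on local data, and re-averaged, which is what lets institutions collaborate without a shared data store.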
Students confront ethical trade-offs directly. Accuracy versus privacy. Fairness versus performance. Transparency versus complexity.
Failures are studied intentionally. That is where judgment is built.

An Open Letter to the Next Generation
You are entering this field at a defining moment. AI is reshaping healthcare, finance, education, and governance, and the systems you build will shape real decisions and real lives.
Reject the false choice between impact and integrity. You will be pressured to move fast and cut corners. Resist it. The most meaningful work is done by those who insist on doing things right.
Embrace messiness. The most important AI problems come with incomplete data, conflicting priorities, and uncertainty. This is not a barrier to innovation. It is where innovation lives.
Build systems, not just models. Performance alone is not enough. AI must work under pressure, adapt to reality, and earn trust over time.
Treat privacy and security as capabilities, not constraints. Responsible design enables AI in places where centralised data access is impossible.
Specialise deeply, learn broadly, and invest in people. Your network is your learning community, and progress compounds through relationships.
Think in decades, not months. Build capabilities and impact that last. Remember why this work matters. AI can save lives, expand access, and improve systems at scale.
Build trustworthy AI. Deploy responsible systems. Choose problems that matter.
Victor Chang, Professor of Business Analytics and AI Expert
Aston University
A Legacy Still in Motion
Today, at Aston Business School in Birmingham, Professor Victor Chang continues to expand international collaborations, from UK–Japan 6G security research to SecureAI4Public frameworks and next-generation financial AI systems.
His legacy is not a single theory or tool, but a disciplined way of thinking. AI is designed for the world as it is, not as we wish it to be.
In a field obsessed with speed, Chang has chosen endurance. In doing so, he is shaping a generation equipped not just to build intelligence, but to make it worthy of trust.
Five Key Takeaways
- AI Only Matters When It Works in the Real World
- Teaching, Research, and Practice Should Not Be Separated
- Privacy and Ethics Are Technical Capabilities, Not Limitations
- Systems Outperform Individual Brilliance
- The Future of AI Belongs to Those Who Can Adapt