Research

We build practical, theory-grounded, interaction-based explanation methods to understand what deep neural networks learn and how they learn it. Our work treats feature interactions as a unifying language for interpretability, connecting attribution, learning dynamics, generalization, and robustness. By combining rigorous interaction theory with scalable monitoring and real-world validation, we help teams explain, audit, and debug models.

Attribution

We develop principled attribution methods that explain predictions by allocating credit to inputs and their combinations, unifying the majority of post-hoc explainers and improving the efficiency and faithfulness of Shapley-style attributions.
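As a toy illustration of the Shapley-style credit allocation this line of work builds on (a minimal exact-enumeration sketch, not our production method; the function names are our own), a feature's Shapley value is its marginal contribution averaged over all coalitions of the other features:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, players):
    """Exact Shapley values by enumerating all coalitions.

    value_fn: maps a frozenset of feature indices to a model score.
    players:  list of feature indices.
    Exponential in len(players); practical methods approximate this.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                S = frozenset(coalition)
                # Weight of coalition S in the Shapley average.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (value_fn(S | {p}) - value_fn(S))
    return phi
```

For a purely additive score (no interactions), each feature's Shapley value recovers exactly its own contribution; deviations from that are what interaction-aware attribution is designed to expose.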


Interaction Theory

We formalize and extract interaction primitives, which are sparse, reusable building blocks that capture compositional concepts, providing a compact, interpretable view of how networks represent knowledge.
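One standard way to formalize such interaction effects (an illustrative sketch using the Harsanyi dividend, a common choice in the interaction literature; not a statement of our exact formulation) assigns each subset of features the part of the output explained by that subset acting jointly, beyond all of its sub-parts:

```python
from itertools import combinations

def harsanyi_interactions(value_fn, players):
    """Harsanyi dividend I(S) = sum over T subset of S of (-1)^{|S|-|T|} v(T).

    Returns a dict mapping each frozenset S to its interaction effect.
    The effects sum to v(all players), so they decompose the output.
    """
    interactions = {}
    for k in range(len(players) + 1):
        for subset in combinations(players, k):
            S = frozenset(subset)
            total = 0.0
            # Inclusion-exclusion over all sub-coalitions T of S.
            for j in range(len(S) + 1):
                for sub in combinations(sorted(S), j):
                    total += (-1) ** (len(S) - j) * value_fn(frozenset(sub))
            interactions[S] = total
    return interactions
```

When only a few of these effects are non-negligible, the model admits a sparse description in terms of interaction primitives, which is the compact view of network knowledge described above.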


Robustness

We analyze why models fail under adversarial pressure, distribution shift, or fine-tuning, and build tools to assess and improve the stability of learned concepts and model fingerprints.


Applications

We translate these ideas into practice across vision and 3D understanding, privacy/attribute obfuscation, and LLM behavior auditing (e.g., legal reasoning), demonstrating impact in real deployment scenarios.


Publications


Demystifying AI, Defining Trust.


Contact

Business: contact@symtrustai.com

Product: product@symtrustai.com

Support: support@symtrustai.com

Address: Room 3309, Building 3, NeoBay, No. 951 Jianchuan Rd, Minhang District, Shanghai

©SymtrustAI Co., Ltd. 2026 All Rights Reserved

ICP No. 2026002871-1

Public Security No. 31011202022067
