
GOSIM 2025: Charting the Path for Explainable AI Commercialization

Zhang Quanshi shared his entrepreneurship practice, rejecting the end-to-end black-box paradigm to move AI from trial-and-error to science-driven development.

View the full video

During the GOSIM HANGZHOU 2025 conference, Professor Zhang Quanshi, founder of Fullpro Translations and a professor at Shanghai Jiao Tong University, was interviewed by Tang Xiaoyin, executive editor-in-chief of CSDN and New Programmer. He shared in depth his entrepreneurial practice and his vision for the interpretability of neural networks.

Having worked in this field for over a decade, Professor Zhang has consistently refused to follow the mainstream trend of end-to-end black-box training. In this conversation he laid out the company's core paradigm of "rigorous mechanistic explanation + open-source collaboration": releasing mathematically rigorous explanation methods as open source so that industry developers can verify and iterate on them together, avoiding inefficient, homogeneous exploration across the industry.

Building on these research breakthroughs, the company has now entered vertical fields such as autonomous driving, legal technology, and quantitative investment. It aims to rebuild AI evaluation and training logic on interpretability technology, systematically addressing the reliability challenges of large models and moving artificial intelligence from "engineering parameter tuning" to "science-driven development."

🎬 Highlights of the Dialogue:
1. Academic Beginnings: Rejecting the "End-to-End" Rat Race and Taking the "Lonely Path" of Mechanistic Explanation

"If you are going to pursue a research direction for 10 or 20 years, you must understand its essence clearly." After returning to China in 2014 from a postdoctoral position at UCLA, Zhang Quanshi declined to follow the then-hot end-to-end deep learning trend and focused instead on the interpretability of neural networks. He was candid about the mainstream's pain points: "The research dimensions of deep learning have become increasingly narrow; everyone is collecting data and tuning parameters, losing distinctiveness, and in the long run that has no value." His core goal is clear and firm: "Explain all of a black-box model's decision logic at the mechanistic level, and make the explanation mathematically rigorous. This is not rough approximation: if the contribution is 3.4, the explanation has to say precisely 3.4."

2. Core Breakthrough: Quantifying Knowledge Points and Breaking Through the Scaling Law, Making Black-Box Models "Conversable"

"The Scaling Law of today's large models exposes a fatal flaw: exponential growth in data and parameters yields only linear performance gains." Zhang Quanshi's team's research goes to the heart of the problem: they proved mathematically that, under specific conditions, the intrinsic mechanisms of a black-box model can be interpreted as "sparse symbolic logic," much as the human brain communicates efficiently with a small number of knowledge points. Their method can quantify the knowledge points a model encodes.
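The interview does not spell out the formalism, so the sketch below is only an illustrative assumption, not the company's actual method: one common way to make "sparse symbolic logic" precise is a game-theoretic interaction decomposition. Treating the model output v(S) as a function of each subset S of input variables, the Harsanyi dividend I(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T) splits the output exactly into per-subset effects, and on a toy "model" below only a couple of subsets carry non-zero effects, which is the sparse, symbol-like view described above.

```python
from itertools import combinations

def harsanyi_dividends(v, n):
    """Compute the Harsanyi dividend I(S) for every subset S of n input
    variables: I(S) = sum over T ⊆ S of (-1)^{|S|-|T|} * v(T).
    The dividends exactly decompose the output: v(N) = sum of all I(S)."""
    players = list(range(n))
    subsets = [frozenset(c) for k in range(n + 1)
               for c in combinations(players, k)]
    I = {}
    for S in subsets:
        total = 0.0
        for k in range(len(S) + 1):
            for T in combinations(sorted(S), k):
                total += (-1) ** (len(S) - len(T)) * v(frozenset(T))
        I[S] = total
    return I

# Toy "model" (hypothetical, for illustration only): the output is an AND
# interaction between inputs 0 and 1, plus a standalone effect of input 2.
def v(S):
    return 2.0 * (0 in S and 1 in S) + 1.5 * (2 in S)

I = harsanyi_dividends(v, 3)
full = frozenset({0, 1, 2})
# Efficiency check: the dividends sum exactly to the full-input output.
assert abs(sum(I.values()) - v(full)) < 1e-9
# Sparsity: out of 8 subsets, only {0,1} and {2} carry non-zero effects.
sparse = {tuple(sorted(S)): val for S, val in I.items() if abs(val) > 1e-9}
print(sparse)
```

The point of the toy run is that the decomposition is exact (the "3.4 has to be precisely 3.4" standard in the quote above) while almost all subset effects vanish, so the model's behavior can be summarized by a handful of interaction "knowledge points."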

BLOGS

Discover More

Interested in our latest developments? Click to explore more AI industry insights and company updates.



Demystifying AI, Defining Trust.

Official WeChat

Contact

Business: contact@symtrustai.com

Product: product@symtrustai.com

Support: support@symtrustai.com

Address: Room 3309, Building 3, NeoBay, No. 951 Jianchuan Rd, Minhang District, Shanghai

©SymtrustAI Co., Ltd. 2026 All Rights Reserved

ICP No. 2026002871-1

Public Security No. 31011202022067
