Can We Trust LLMs on Memristors? Diving into Reasoning Ability under Non-Ideality
arXiv:2603.13725v1 Announce Type: new Abstract: Memristor-based analog compute-in-memory (CIM) architectures provide a promising substrate for the efficient deployment of Large Language Models (LLMs), owing to …
Taiqiang Wu, Yuxin Cheng, Chenchen Ding, Runming Yang, Xincheng Feng, Wenyong Zhou, Zhengwu Liu, Ngai Wong