Research

My research spans several areas of statistics and optimization. Below is a list of my publications. Feel free to contact me if you are interested in any of my papers; I am happy to discuss!

Preprints and papers in progress

[J9: In submission] Multi-physics Simulation Guided Generative Diffusion Models with Applications in Fluid and Heat Dynamics Naichen Shi, Hao Yan, Shenghan Guo, Raed Kontar. Link, Code.

Highlights: Diffusion generative models can generate photorealistic images and videos, but they often struggle to capture physical interactions. When practitioners in science and engineering have access to physics simulators, can they use simulations to improve the quality of samples generated by diffusion models?


We explore two strategies for incorporating physics simulations into diffusion models. Our results show that the resulting models integrate physics knowledge of heat and fluid dynamics with patterns learned from real observations.
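As a toy illustration of simulation-guided sampling (illustrative only, not the paper's actual algorithms), the sketch below steers Langevin-style sampling with the gradient of a simulation-consistency penalty. The "simulator" here is a hypothetical linear constraint `A @ x = b` standing in for a real heat or fluid solver, and the score of a standard-normal prior stands in for a trained denoiser.

```python
import numpy as np

# Hypothetical toy "simulator": a linear constraint A @ x = b stands in
# for a real physics solver; everything here is illustrative only.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

def guidance_grad(x):
    # Gradient of the simulation-consistency penalty 0.5 * ||A @ x - b||^2.
    return A.T @ (A @ x - b)

def guided_sampling(steps=500, lr=0.05, w=10.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(2)
    for t in range(steps):
        anneal = 1.0 - t / steps   # injected noise decays to zero over time
        score = -x                 # score of a standard-normal prior (toy stand-in for a denoiser)
        x = x + lr * (score - w * guidance_grad(x))
        x = x + 0.1 * anneal * np.sqrt(2 * lr) * rng.standard_normal(2)
    return x

x = guided_sampling()
print("sample:", x, "simulation residual:", A @ x - b)
```

The guidance weight `w` trades off fidelity to the prior against consistency with the simulator: larger `w` drives the simulation residual closer to zero.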


[J8: In submission] Heterogeneous Matrix Factorization: When Features Differ by Dataset Naichen Shi, Salar Fattahi, Raed Kontar. Link

Journal papers

[J7: JMLR] Triple Component Matrix Factorization: Untangling Global, Local, and Noisy Components Naichen Shi, Salar Fattahi, Raed Al Kontar. Journal of Machine Learning Research (JMLR), 2024. Link

[J6: Technometrics] Personalized Tucker Decomposition: Modeling Commonality and Peculiarity on Tensor Data Jiuyun Hu, Naichen Shi, Raed Kontar, Hao Yan. Technometrics, 2024. Link

[J5: JMLR] Personalized PCA: Decoupling Shared and Unique Features Naichen Shi, Raed Al Kontar. Journal of Machine Learning Research (JMLR), 2024. Link, Video, Code.

[J4: JMS] Personalized feature extraction for manufacturing process signature characterization and anomaly detection Naichen Shi, Shenghan Guo, Raed Al Kontar. Journal of Manufacturing Systems, 2024. Link.

[J3: Technometrics] Personalized Federated Learning via Domain Adaptation with an Application to Distributed 3D Printing Naichen Shi, Raed Al Kontar. Technometrics, 2023. Link, Video, Code.

[J2: TASE] Fed-ensemble: Ensemble Models in Federated Learning for Improved Generalization and Uncertainty Quantification Naichen Shi, Raed Al Kontar. IEEE Transactions on Automation Science and Engineering, 2022. Link, Code.

[J1: IEEE Access] The Internet of Federated Things Raed Kontar, Naichen Shi, Xubo Yue, Seokhyun Chung, Eunshin Byon, Mosharaf Chowdhury, Judy Jin, Wissam Kontar, Neda Masoud, Maher Noueihed, Chinedum E. Okwudire, Garvesh Raskutti, Romesh Saigal, Karandeep Singh, and Zhisheng Ye, IEEE Access, 2021. Link.

Conference papers

[C4: NeurIPS Spotlight] Personalized Dictionary Learning for Heterogeneous Datasets Geyu Liang, Naichen Shi, Raed Al Kontar, Salar Fattahi. Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023. Link, Code.

[C3: MSEC] Process Signature Characterization and Anomaly Detection with Personalized PCA in Laser-Based Metal Additive Manufacturing Naichen Shi, Raed Kontar, Shenghan Guo. Proceedings of the ASME 2023 18th International Manufacturing Science and Engineering Conference, 2023. Link.

[C2: NeurIPS Spotlight] Adam Can Converge Without Any Modification On Update Rules Yushun Zhang, Congliang Chen, Naichen Shi, Ruoyu Sun, Zhi-Quan Luo. Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS), 2022. Link.

[C1: ICLR Spotlight] RMSprop converges with proper hyper-parameter Naichen Shi, Dawei Li, Mingyi Hong, and Ruoyu Sun. International Conference on Learning Representations (ICLR), 2021. Link, Video, Code.

Highlights: Almost every ML/AI practitioner uses adaptive-stepsize optimization algorithms (e.g., Adam). Surprisingly, an important theoretical question was largely unexplored: under what conditions do they converge? We show, both theoretically and numerically, that the good performance of RMSprop and Adam hinges on an appropriate choice of the exponential averaging parameter $\beta_2$. Only when $\beta_2$ is close enough to 1 can (stochastic versions of) Adam and RMSprop generate stable update directions that gradually lead the iterates toward optimality.

(Figure: Adam updates)
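A minimal numerical sketch of why $\beta_2$ matters (illustrative only, not the paper's construction): RMSprop tracks a second-moment estimate $v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$, and the effective stepsize $\mathrm{lr}/\sqrt{v_t}$ inherits its stability. With $\beta_2$ near 1 the estimate changes slowly from step to step; with small $\beta_2$ it swings with every noisy gradient.

```python
import numpy as np

def second_moment_trace(grads, beta2):
    # RMSprop's exponential moving average: v_t = beta2 * v_{t-1} + (1 - beta2) * g_t^2.
    v, trace = 0.0, []
    for g in grads:
        v = beta2 * v + (1 - beta2) * g ** 2
        trace.append(v)
    return np.array(trace)

# Alternating gradient magnitudes mimic stochastic noise.
grads = np.array([3.0, 1.0] * 50)

v_small = second_moment_trace(grads, beta2=0.5)    # jumpy estimate
v_large = second_moment_trace(grads, beta2=0.999)  # smooth, stable estimate

# With beta2 close to 1, v_t (hence the effective stepsize lr / sqrt(v_t))
# is nearly constant from step to step; with small beta2 it oscillates.
print("std of last 20 v_t:", np.std(v_small[-20:]), "vs", np.std(v_large[-20:]))
```

The jumpy estimate makes the effective stepsize oscillate between large and small values, which is the kind of instability the paper shows can prevent convergence.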


You can also check my Google Scholar profile.