纪卓然
Research Associate
Personal Information
  • Name (Pinyin):
    Ji Zhuoran
  • E-Mail:
  • Date of Employment:
    2023-02-09
  • School/Department:
    School of Cyber Science and Technology (Research Institute)
  • Education Level:
    Doctoral graduate (with certificate of graduation)
  • Business Address:
    山东省青岛市即墨滨海路72号山东大学青岛校区淦昌苑D座
  • Gender:
    Male
  • Degree:
    Doctor of Philosophy (PhD)
  • Alma Mater:
    The University of Hong Kong
Biography

Ji Zhuoran, PhD, is a Research Associate at the School of Cyber Science and Technology (Research Institute), Shandong University. He received his Bachelor of Engineering degree (First Class Honours) in 2018 and his Doctor of Philosophy degree in 2022, both from The University of Hong Kong. His research interests include system security, high-performance privacy-preserving computation, GPU parallel computing, and GPU compilers. In recent years, he has published several papers at CCF-recommended conferences and in CCF-recommended journals, including IPDPS, PACT, ICS, ICPP, and JPDC. For details, see: https://jizhuoran.github.io/


Education
  • 2018-09 — 2022-11
    The University of Hong Kong
    Computer Science and Technology
    Doctor of Philosophy
  • 2014-09 — 2018-06
    The University of Hong Kong
    Computer Science and Technology
    Bachelor of Engineering
Publication
Paper Publications

1. Ji Zhuoran. Accelerating Number Theoretic Transform with Multi-GPU Systems for Efficient Zero Knowledge Proof. ASPLOS. 2025, 1(3): 34

2. Zhang Zhaorui. FedCSpc: A Cross-Silo Federated Learning System with Error-Bounded Lossy Parameter Compression. IEEE Transactions on Parallel and Distributed Systems. 2025

3. Zhang Zhaorui. FedEFsz: Fair Cross-Silo Federated Learning System with Error-Bounded Lossy Compression. IEEE Transactions on Parallel and Distributed Systems. 2025

4. Zhao Haosong. VESTA: A Secure and Efficient FHE-based Three-Party Vectorized Evaluation System for Tree Aggregation Models. Proceedings of the ACM on Measurement and Analysis of Computing Systems. 2025, 9(1)

5. Tang Yifeng. Cube-fx: Mapping Taylor Expansion Onto Matrix Multiplier-Accumulators of Huawei Ascend AI Processors. IEEE Transactions on Parallel and Distributed Systems. 2025, 36(6): 1115-1129

6. Ji Zhuoran. POSTER: Accelerating High-Precision Integer Multiplication used in Cryptosystems with GPUs. ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming. 2024

7. Ji Zhuoran. A Compiler-Like Framework for Optimizing Cryptographic Big Integer Multiplication on GPUs. 2024

8. Ji Zhuoran. Accelerating Multi-Scalar Multiplication for Efficient Zero Knowledge Proofs with Multi-GPU Systems. Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems. 2024(3)

9. Zhang Zhaorui. Momentum-Driven Adaptive Synchronization Model for Distributed DNN Training on HPC Clusters. Journal of Parallel and Distributed Computing. 2022, Vol. 159

10. Ji Zhuoran. Compiler-Directed Incremental Checkpointing for Low Latency GPU Preemption. 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS). 2022

11. Ji Zhuoran. Optimizing Aggregate Computation of Graph Neural Networks with on-GPU Interpreter-Style Programming. Proceedings of the International Conference on Parallel Architectures and Compilation Techniques. 2022

12. Ji Zhuoran. CTXBack: Enabling Low Latency GPU Context Switching via Context Flashback. 2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). 2021

13. Ji Zhuoran. Efficient Exact K-Nearest Neighbor Graph Construction for Billion-Scale Datasets Using GPUs with Tensor Cores. Proceedings of the 36th ACM International Conference on Supercomputing. 2022

14. Ji Zhuoran. Collaborative GPU Preemption via Spatial Multitasking for Efficient GPU Sharing. European Conference on Parallel Processing. 2021

15. Wu Xueyu. Embedding Communication for Federated Graph Neural Networks with Privacy Guarantees. International Conference on Distributed Computing Systems. 2023

16. Ji Zhuoran. Accelerating DBSCAN Algorithm with AI Chips for Large Datasets. Proceedings of the 50th International Conference on Parallel Processing. 2021
