I am a PhD student in Computer Science at the University of Texas at El Paso, specializing in Large Language Models (LLMs), multimodal learning, efficient deep-learning architectures, and scalable model-training pipelines. My research spans transformer optimization, robustness, alignment, self-supervised learning, active learning, and real-world AI deployment, with publications at venues including CIKM, PoPETs, AAAI ICWSM, HyperText, and ASONAM. I currently work as a Research Assistant in the SUPREME Lab, where I build and evaluate LLMs, develop distributed training workflows (FSDP), design multimodal architectures, and explore techniques such as PEFT/LoRA, RAG, generative modeling, and behavior modeling. My work focuses on creating reliable, efficient, and trustworthy AI systems.