Welcome to ARKAI Research Lab (Accountable, Resilient, and Kind AI).
Our research lab is on a mission to enhance the robustness, safety, and transparency of AI-related technologies in various sociotechnical contexts, ensuring that society and netizens can harness their power with confidence and clarity.
We associate AI with kindness not to humanize AI, but to emphasize the much-needed responsible and ethical development and use of AI, and to encourage AI applications that nurture humans and society.
09/2024 - Adapters Mixup is accepted to EMNLP 2024!
We use mixup with adapters to fine-tune pre-trained language models, enhancing their adversarial robustness against unknown future attacks.
06/2024 - Beyond Individual Facts for GPT Models is available.
We investigate the knowledge locality of GPT models for not a single fact but a group of conceptually related facts.
06/2024 - PlagBench is available.
We collect data and benchmark LLMs' dual behaviors: (1) plagiarism generation via summarization and (2) plagiarism detection.
06/2024 - XAI Similarity Metrics is available.
We investigate a variety of similarity measures designed for text-based adversarial attacks on Explainable AI.
12/2023 - Rishabh successfully defended his Master's thesis. Congratulations!
12/2023 - Our paper ``ALISON: Fast and Effective Stylometric Authorship Obfuscation'' is accepted to AAAI'24. The paper proposes a simple, fast, and effective algorithm to hide or mask the true authorship of texts, even ChatGPT-generated ones. Congratulations to Eric (PhD at UMD) and Saranya (PhD at PSU)!
11/2023 - Our paper ``Enhancing Brand Affinity in the Face of Political Controversy: the Role of Disclosing AI Moderator on Social Media Platforms'' is accepted to AAA'24. This is a collaborative effort with Maria (Michigan State University) and Marie (Cornell University). We designed a user interface to evaluate the effects of AI moderation disclosure on brand affinity.
10/2023 - Our papers are accepted to EMNLP'23. Congratulations to KInIT and Pike Lab (multilingual neural text detection), Jason (UMD, preventing privacy leakage in texts on social media), Nafis (Penn State, written vs. spoken neural texts), and Chris (Ole Miss, fooling XAI explanations)!
10/2023 - Our tutorial on deepfake text detection and obfuscation is accepted at NAACL'24.
09/2023 - Dr. Le participated in the LEVEL UP workshop organized by CRA in Atlanta.