Philip Harvey


2024

Towards Comprehensive Language Analysis for Clinically Enriched Spontaneous Dialogue
Baris Karacan | Ankit Aich | Avery Quynh | Amy Pinkham | Philip Harvey | Colin Depp | Natalie Parde
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Contemporary NLP has rapidly progressed from feature-based classification to fine-tuning and prompt-based techniques leveraging large language models, yet many of these techniques remain understudied in the context of real-world, clinically enriched spontaneous dialogue. We fill this gap by systematically testing the efficacy and overall performance of a wide variety of NLP techniques, ranging from feature-based methods to in-context learning, on speech transcribed from patients with bipolar disorder or schizophrenia and from healthy controls completing a focused, clinically validated language test. We find that feature-based and language modeling techniques alike perform well, surfacing linguistic information consistent with established clinical findings about these populations. Building on these results, we outline directions for future research on the automated detection and understanding of psychiatric conditions.
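
As a rough illustration of the methodological span named in this abstract (not the authors' actual pipeline), the sketch below contrasts a feature-based classifier with a prompt assembled for in-context learning; the transcripts, group labels, feature choices, and prompt wording are all placeholder assumptions.

    # Hypothetical sketch only: a feature-based classifier next to an
    # in-context-learning prompt. All data, labels, and settings are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    transcripts = [
        "i described the picture and then we talked for a while",
        "the man is standing near a window looking outside",
        "i kept losing track of what i was trying to say",
        "she asked me questions and i answered as best i could",
    ]
    groups = ["control", "control", "schizophrenia", "schizophrenia"]

    # Feature-based route: lexical n-gram features feeding a linear classifier.
    feature_clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    feature_clf.fit(transcripts, groups)
    print(feature_clf.predict(["i am not sure what the picture was about"]))

    # In-context-learning route: a few labeled transcripts embedded in a prompt
    # for a large language model (the model call itself is omitted here).
    prompt = (
        "Classify the speaker's group as bipolar, schizophrenia, or control.\n\n"
        "Transcript: <labeled example transcript>\nGroup: control\n\n"
        "Transcript: <new transcript>\nGroup:"
    )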

2022

Towards Intelligent Clinically-Informed Language Analyses of People with Bipolar Disorder and Schizophrenia
Ankit Aich | Avery Quynh | Varsha Badal | Amy Pinkham | Philip Harvey | Colin Depp | Natalie Parde
Findings of the Association for Computational Linguistics: EMNLP 2022

NLP offers myriad opportunities to support mental health research, but prior work has focused almost exclusively on social media data, for which diagnoses are difficult or impossible to validate. We present a first-of-its-kind dataset of manually transcribed interactions with people clinically diagnosed with bipolar disorder or schizophrenia, as well as healthy controls. The data were collected through validated clinical tasks and paired with diagnostic measures. We extract more than 100 temporal, sentiment, psycholinguistic, emotion, and lexical features and use a variety of models to establish classification validity and to study language differences between diagnostic groups. Our models achieve strong classification performance (maximum F1 = 0.93-0.96) and reveal associations between linguistic features and diagnostic class. We hope this dataset will prove highly valuable to clinical and NLP researchers, with potential for broad impact.
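
To make the feature-then-classify setup concrete, here is a minimal, hypothetical sketch in Python; the three features, the toy transcripts, the arbitrary group labels, and the random-forest model are illustrative assumptions, not the paper's 100+ features, clinical data, or model suite.

    # Hypothetical sketch only: a handful of hand-crafted features and a single
    # classifier, evaluated with macro F1 on toy placeholder data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import cross_val_predict

    def extract_features(transcript):
        tokens = transcript.lower().split()
        n = max(len(tokens), 1)
        return [
            len(tokens),                                      # verbosity: token count
            len(set(tokens)) / n,                             # lexical diversity: type-token ratio
            sum(t in {"i", "me", "my"} for t in tokens) / n,  # first-person pronoun rate
        ]

    # Placeholder transcripts with arbitrary labels, three per diagnostic group.
    transcripts = [
        "there is a family sitting around a table",
        "i talked about the picture for a while",
        "the man is looking out of the window",
        "she asked me to describe what i could see",
        "two people are standing next to an open door",
        "i went through the story from start to finish",
        "a woman is pointing at something outside",
        "the children are playing on the floor",
        "i tried to describe everything in the scene",
    ]
    groups = ["bipolar", "control", "schizophrenia"] * 3

    X = np.array([extract_features(t) for t in transcripts])
    y = np.array(groups)

    # Cross-validated predictions from one example model, scored with macro F1.
    preds = cross_val_predict(RandomForestClassifier(random_state=0), X, y, cv=3)
    print("macro F1:", f1_score(y, preds, average="macro"))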