PROJECTS
Recognizing Brain Regions in 2D Images from Brain Tissue
Often, the first step in neuroimaging research is understanding which anatomical structures are present in an image. Structural MRI (sMRI) provides clear, high-resolution visualization of brain anatomy, capturing physical characteristics such as the size and shape of different brain regions or the presence of abnormalities such as tumors. Whereas sMRI is more commonly performed in vivo, the neuropathology of many neurodegenerative disorders, like Alzheimer’s disease, requires post-mortem analysis of the brain through techniques like brain dissection, necessitating the use of other imaging modalities.
What is “fair”? Reevaluating Copyright in the World of Generative AI
Following the launch of OpenAI’s ChatGPT and StabilityAI’s Stable Diffusion models in 2022, a number of lawsuits have been filed against generative AI companies accusing them of copyright infringement. This paper establishes the scope of the fair use doctrine of the Copyright Act through a series of legal decisions and explores copyright implications at various stages of generative AI model development. It argues that past legal precedent on copyright law is not sufficient to address the new issues presented by AI technology, and presents possible policy approaches, particularly with regard to data licensing, that could address legal and ethical issues surrounding data usage during the development of these technologies.
Unraveling the Mystery: Evaluating Language Models’ Grasp of Presuppositions
Presuppositions are pieces of implicit background information conveyed in a sentence that are taken for granted as true through a process called presupposition accommodation. Presuppositions are crucial to understanding meaning in natural language, as they provide context, influence interpretation, and enrich communication; the ability to recognize and accommodate them is essential for effective and precise communication. In this paper, we evaluate whether language models are able to accommodate presuppositions using a probing task. We explore how model complexity (i.e., BERT vs. GPT-2) and stage of training (i.e., random initialization vs. pre-training vs. fine-tuning) affect a model's ability to accommodate presuppositions, and how presuppositions are encoded within sentence embeddings. We find that pre-training and fine-tuning can help a language model learn to accommodate presuppositions, and that presupposition accommodation ability is captured within the later layers of the BERT model. Of all presupposition types, language models accommodate question presuppositions best.
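To make the probing setup concrete, below is a minimal sketch in Python of a layer-wise probe over BERT sentence embeddings, assuming the HuggingFace transformers and scikit-learn libraries. The example sentences, labels, and mean-pooling choice are hypothetical stand-ins for illustration, not the paper's actual dataset or probe design.

# A minimal sketch of layer-wise probing for presupposition information,
# assuming toy labeled data; not the paper's actual dataset or probe.
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

# Hypothetical examples: 1 = sentence triggers a presupposition
# ("stopped" presupposes Mary used to smoke), 0 = no trigger.
sentences = ["Mary stopped smoking.", "Mary smokes."]
labels = [1, 0]

def layer_embedding(sentence, layer):
    """Mean-pool the hidden states of one BERT layer as a sentence embedding."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple of (num_layers + 1) tensors of shape
    # [1, seq_len, hidden_size]; index 0 is the embedding layer.
    return outputs.hidden_states[layer].mean(dim=1).squeeze(0).numpy()

# Train one probe per layer; if accuracy rises at later layers, that would
# suggest presupposition information is encoded there.
for layer in range(1, 13):
    X = [layer_embedding(s, layer) for s in sentences]
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer:2d}: train accuracy = {probe.score(X, labels):.2f}")

In practice such a probe would be trained and evaluated on held-out examples for each presupposition type; the train-accuracy printout here is only to keep the sketch self-contained.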
Wh-Questions in Hindi
Growing up speaking both Hindi and English, I have observed that these languages have many different syntactic properties. For instance, Hindi is a head-final language and exhibits scrambling, neither of which is true of English. The aim of this paper is to examine the syntactic properties of wh-questions in Hindi. The data presented in this paper show that wh-words in Hindi questions can engage in the same kind of scrambling as their corresponding constituents in declarative sentences. These properties extend to embedded-clause questions as well. In addition, I show the presence of overt successive-cyclic wh-movement in Hindi, which provides evidence for the hypothesis that even Hindi questions that seem to remain in situ may undergo covert movement.