As CAIS continues to grow, so does our team! We are proud to introduce our three new associate directors, all of whom bring new perspectives and expertise to our leadership team and are dedicated to creating a more just, healthy, and sustainable world: Ajitesh Srivastava, Research Assistant Professor of Electrical and Computer Engineering; Lindsay Young, Assistant Professor of Communication; and Swabha Swayamdipta, Gabilan Assistant Professor of Computer Science.
From left to right: Ajitesh Srivastava, Swabha Swayamdipta, & Lindsay Young
Dr. Srivastava completed his Ph.D. in computer science at USC in 2018. His research interests include network science, modeling, and machine learning (ML) applied to epidemics, social good, and social networks. He collaborates with teams around the world, including the CDC, on infectious disease forecasting and scenario projections. Reflecting on his past research, he states, “My research includes modeling and solving optimization problems on information diffusion on networks, machine learning applied to smart grids and computer systems, algorithms to optimize Field Programmable Gate Array (FPGA) designs, and Graph Neural Networks.”
Dr. Young has formal training in communication studies with a methodological focus on social network methods. During her Ph.D., she focused on understanding why non-profit organizations collaborate with one another and how those relationships influence donor institutions’ funding decisions. After her Ph.D., she wanted to focus more on applied social problems, taking a postdoc with the Chicago Center for HIV Elimination (CCHE) at the University of Chicago. “To say that my time at CCHE was valuable would be an understatement – CCHE made me into the researcher that I am today.” Currently, she uses communication and social network perspectives to characterize and interrogate the social contexts that contribute to health disparities, access to critical health resources, and health behavior change in marginalized, resource-restricted communities. “Most of my work focuses on the health and well-being of sexual and gender minorities, with a particular interest in how we can use social media communication and network data to monitor health behavior and outcomes and to design interventions to improve their health and well-being,” explains Young.
Dr. Swayamdipta’s research focuses on natural language processing (NLP) and ML. Within these, she focuses on three main aspects: 1) estimating the quality of data for both training and evaluation, 2) understanding the behavior of generative models of language and designing evaluation metrics for them, and 3) understanding society through language technologies. In her own words, “My focus on data-centric NLP and ML intersects squarely with the goals of CAIS: as our AI models get more and more ubiquitous, the data they are trained on plays a major role in downstream impacts on different user populations, and the society at large. My lab (Data, Interpretability, Language and Learning, or DILL in short) grapples with data interpretability, with an eye on societal impacts of language technologies.”
Continue reading to learn more about their backgrounds, their research interests, their views on the impact of AI in society, and a bit about their plans for the future of CAIS. We hope you enjoy!
1. How did you get into interdisciplinary research? What was your journey to CAIS like?
Ajitesh Srivastava: I have always enjoyed solving abstract problems. But the first time I worked on solving a real-world problem was the 2014–2015 DARPA challenge on predicting the spread of the Chikungunya virus. I ended up being one of the 10 winners of the challenge. The year-long DARPA Challenge exposed me to the data issues that appear in the real world. After that, in the final years of my Ph.D., I worked on problems coming from other domains and realized the gaps between computing and the target domain. These gaps take the form of a lack of understanding of each other’s domains, a lack of trust in each other’s methods, and the absence of a common language. In computational fields, we are often tempted to make unrealistic assumptions that yield solutions applicable only to “spherical chickens in a vacuum.” As a Ph.D. student at USC, I had the opportunity to take the first “AI for Social Good” class, which planted the seed for CAIS. Through projects with social work students and faculty, I learned about the challenges faced in peer intervention research and in data collection. That collaboration has had a deep impact on my approach to real-world problem solving.
Lindsay Young: I think my answer to the previous question [as described above regarding my Ph.D. and postdoc experience] speaks to this. However, to be clear, interdisciplinarity comes naturally to me. I can’t say that I ever “got into it” or “learned it” at any point during my trajectory. It’s just how I naturally think. To me, if you want to solve a social problem, you have to bring all relevant tools to the table (it’s an “all hands on deck” mentality). No social problem is uni-disciplinary, which means you have to be willing to countenance a variety of perspectives and approaches when trying to solve one. I think folks who fail to do this are, perhaps, more allegiant to their discipline than to the task of addressing the social problem.
Swabha Swayamdipta: Most AI technologies, especially language technologies, are meaningless in the absence of a human user interacting with them. A lot of my Ph.D. research was siloed in the design of inductive biases for better processing of linguistic data. Towards the end of my Ph.D., I started looking at the question of other kinds of biases, and experienced a new rush of motivation when I encountered a significant presence of spurious and societal biases in the models I had been working with. During my postdoc, I started exploring the societal consequences of language technologies more closely. When I started at USC in the fall of 2022, it was immediately obvious to me that CAIS was the best platform on campus to continue my research on language and society, based both on its stated goals and on the leadership’s commitment to those goals.
2. What are you most looking forward to as a CAIS associate director?
Srivastava: I believe we have an excellent team at CAIS. My main goal is to find ways to accelerate collaborations within CAIS that can solve interesting problems with high real-world impact.
Young: First, somewhat selfishly, I am excited about having the opportunity to contribute to and reap benefits from membership in an interdisciplinary network of researchers who collectively embody the “research for social good” ethos that motivates my own work. So, joining the leadership of CAIS is a gift. It will grant me access to a network of similarly motivated researchers and will allow me to contribute to that network’s growth.
Second, as one of only two social scientists on CAIS’s leadership team, I am looking forward to being able to bring my more human-centered (versus data-centered) training and perspectives to the table as we develop new projects. For example, I think it’s important to include the voices of the community members who we are trying to help in the research process itself. This community-centered approach to computational research has the benefit of ensuring that we’re pursuing research questions that are relevant to the communities we aim to help, that we’re interpreting our findings appropriately, and that we pursue our research ethically. I know the other CAIS leaders share this perspective. I just see myself as someone who will advocate for this approach as much as possible and who has experience integrating computational/data driven research approaches with community-centered ones.
And third, as a network scientist, I am eager to help expand CAIS’s research portfolio by pursuing more network-oriented projects. Although not every problem is a network problem, I see networks implicated in many of the critical real-world problems that CAIS has the potential to address — health disparities, homelessness, crisis response, and misinformation, to name a few examples. So, to me, there is a tremendous opportunity to design studies that integrate social network measures and methods with more familiar computational methods like machine learning classifiers, predictive models, and large language models.
Swayamdipta: I’m very excited about the potential to collaborate with faculty and students across disciplines who find a home in CAIS. I’m already learning a great deal and growing my research agenda through an ongoing collaboration with folks from the School of Social Work. I hope I can continue to widen my net and contribute to technologies that make a tangible societal impact through CAIS.
3. It is no secret that AI is a very powerful and useful tool in addressing problems in society, be it homelessness or wildfires, and in making processes and systems more efficient. However, there is a lot of speculation surrounding the future of AI and how it will affect us as a society. How do you envision the progression and development of AI in the next few years? How do you evaluate the balance between the possible negative impacts and the positive impacts?
Srivastava: In the next 5 years: Many creative processes will require minimal human creativity and skill — photo and video editing, including CGI and animation, will become much easier.
In the next 50 years: Pockets of the world, and many highways, will be dedicated to autonomous vehicles, and AI will play a central role (with humans in the loop) in diagnosis and drug discovery.
Unfortunately, it is not clear to me whether progress can be made at the same rate in sustainability and in improving the lives of the less fortunate. While the fields of vision, text, and language will see great progress, many real-world problems of societal impact will not progress at the same rate unless we focus our efforts on them.
As with any new technology, I think there are three types of negative outcomes: (i) drastic shifts in the economy, (ii) intentional misuse of the technology, and (iii) unintentional misuse of the technology. Of the three, the first two rely on our faith in humanity. The third is where we have control as researchers. We should study and communicate what our algorithms are designed for, how they can be made fairer, and what the drawbacks are of using them in certain ways. So, if you have faith in humanity and in our ability to address algorithmic biases and to be better communicators, then the positive impacts can heavily outweigh the negative ones.
Young: I am not an expert on the technological side of AI’s development, nor on the market or national security considerations that ought to be weighed, so my perspective may come across as pollyannaish. However, I would like to see the United States form an AI commission — composed of academics, technologists and data scientists, social scientists, humanists, and, most importantly, ethicists — to articulate an ethics-based policy vision for how AI ought to be used in society, with the idea that funding priorities would be centered around those purposes. In terms of evaluating AI’s impact, my eyes are set on whether AI is acting in the interest of the most marginalized and on the degree to which the “black box” of AI is made more transparent. The common denominator between these two criteria is that AI’s legitimacy hinges on making AI accessible, both in terms of who benefits from it and who can understand it.
Swayamdipta: Given the pace at which AI technologies are proliferating, it is not trivial to answer this question. I am optimistic that alongside the progression of AI, there will also be progress in the technologies that make AI safe and trustworthy — there is an appetite for this at multiple levels, all the way up to the US and EU governments. At the individual researcher level, it is increasingly important to contextualize our research products in terms of both their strengths and limitations, and to think about their potential impact on society — something we are rigorous about in my own lab and at CAIS.