Third-year PhD student Sina Aghaei is in the Industrial & Systems Engineering department at the USC Viterbi School of Engineering and part of USC CAIS.
What does “fairness” mean to you? Tough question, right? There are so many ways to define fairness, as it differs across cultures, communities, and even over time. Here at USC CAIS, we care a lot about fairness and equity in the context of artificial intelligence. When building AI systems, our researchers seek to reduce biases that, once those systems are deployed in the real world, can produce unequal outcomes for individuals and cause them real harm.
We recently talked to PhD student Sina Aghaei and learned about his fascinating work on fairness in AI here at the USC Center for Artificial Intelligence in Society.
Tell us a little about yourself.
Hi there, my name is Sina, and I’m in the third year of my PhD program in Industrial & Systems Engineering in the USC Viterbi School of Engineering. I was born and raised in a small Kurdish town named Sardasht, in northwest Iran. When I was 18 years old, I moved to Tehran to start my undergraduate studies at Sharif University of Technology, which is one of the best engineering schools in Iran. Tehran is a huge city compared to my hometown, so it was a whole new experience for me, but I had a really good time there. After five years, I graduated with a double major in Computer Science and Industrial Engineering. Applying to graduate schools is very common for students at Sharif University, so from the beginning of my college career I knew I wanted to get a master’s degree followed by a PhD, and here I am!
What sparked your initial interest in AI?
When I was a kid, I was fascinated with the movie “The Matrix,” which touches on the topic of artificial intelligence. So the idea of working in this area was very exciting and appealing to me. Also, during my undergraduate studies, I was very interested in math courses like algebra and calculus, but most of all optimization, which is a foundation of artificial intelligence and machine learning.
You double-majored in Computer Science and Industrial Engineering. Why did you choose to pursue a PhD in Industrial & Systems Engineering specifically?
Nowadays, there are a lot of different fields working with AI and machine learning. Although I had a background in both Computer Science and Industrial Engineering, the Industrial & Systems Engineering department is more focused on optimization, which is the area I’m most interested in.
What factors influenced your decision to pursue your PhD at USC?
When I was applying for grad school, I was looking for professors who focused on the same areas of research I was interested in. When I found my advisor, Professor Phebe Vayanos (Associate Director at USC CAIS), I was very excited about her background as well as her research interests, which include optimization and machine learning. Since most of Professor Phebe’s projects are part of USC CAIS, I soon learned about the center and was fascinated by it!
In addition, USC Viterbi School of Engineering is one of the top engineering schools in the United States with high-quality research facilities, amazing professors, and a great reputation worldwide. So, in the end, USC was the right choice for me.
What is the focus of your research here at CAIS?
Well, I have been working on a couple of different projects related to fairness and machine learning. One of them is the Housing Allocation for Homeless Persons project, which uses machine learning tools to create a fairer housing allocation system in Los Angeles County.
I chose to work on this project specifically because, when I got to LA, I was shocked to see so many homeless people on the streets without any resources or support. I understand that the government and private agencies are trying to do something about it, but there is a lack of resources to address this problem. For this reason, I thought it would be a valuable project for me to work on, as it is very rare for PhD students to see their research have an impact in the real world.
Fairness in AI has become a popular topic in the tech world. Can you explain what “fair” means when it comes to AI bias?
As humans, we are affected by biases: the way we were raised, the people we interact with, and so on, and we make decisions based on them. Yet some biases are not acceptable, such as biases based on gender, race, or sexual orientation.
In machine learning, we collect data from the real world and build models that learn the patterns in that data, so the model can make predictions when it encounters new situations. But if the data is biased, the model will be biased as well, which can have serious, even dangerous, consequences when it is deployed in the real world. This is why fairness is such a popular and important topic in AI for social good!
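To make this concrete, here is a minimal, purely illustrative Python sketch (not from Sina’s projects) of how bias in training data carries over into a model’s predictions. The synthetic dataset, the group labels, and the 0.6 penalty factor for one group are all made-up assumptions; the example simply measures the gap in positive-prediction rates between groups, one common way researchers quantify unfairness (often called the demographic parity gap).

```python
# Hypothetical illustration: a model trained on historically biased labels
# reproduces that bias in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute: group 0 or group 1 (assumed for illustration).
group = rng.integers(0, 2, size=n)

# A legitimate feature (e.g., a score) drawn identically for both groups.
score = rng.normal(loc=0.0, scale=1.0, size=n)

# Historically biased outcomes: at the same score, group 1 was approved
# less often (factor 0.6 is arbitrary), so the recorded labels encode bias.
p_approve = 1 / (1 + np.exp(-score)) * np.where(group == 0, 1.0, 0.6)
label = rng.binomial(1, p_approve)

# Train a simple classifier on the biased labels.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Demographic parity gap: difference in positive-prediction rates per group.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")
```

Even though the score feature is distributed the same way for both groups, the model learns to approve group 1 less often because the historical labels it was trained on already treated that group unfairly.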
Tell us about your plans for the future.
I’m interested in getting some industry experience, so hopefully I’ll continue doing research on fairness, but this time at a top company in industry.