During my master’s studies, I worked on a project in Prof. Krcmar's group at the Technical University of Munich analyzing developers’ reactions to changes in application programming interfaces (APIs). The project aimed to bridge the gap between API providers and consumers, to the benefit of both, by deriving a list of criteria an API provider should follow to gain popularity among developers. It employed a rule-based method and a support vector machine (SVM) on data from developers’ conversations.
As a member of PD Dr. Georg Groh's Social Computing group at the Technical University of Munich, I evaluated unsupervised and pre-trained neural models, such as an attention-based clustering model and Google's XLING, and studied the extent to which they are beneficial for extracting meaningful topics from textual data.
For my master's thesis on multilingual aspect extraction, I explored domain adaptation of pre-trained word embeddings for low-resource languages and analyzed the significance of attention-based neural models for extracting aspects/topics from multilingual data. This was followed by qualitative and quantitative analyses to determine whether a neural topic model trained on individual monolingual data works better on multilingual data without being jointly trained on it.