Software Engineering in AI: Enhancing Development, Maintenance and Scalability
Talk, 1ra Jornada de Inteligencia Artificial, UPR Río Piedras, San Juan, PR
20-minute presentation.
Conference Talk, 38th IEEE/ACM International Conference on Automated Software Engineering (ASE 2023), Kirchberg, Luxembourg
Towards safe automated refactoring of imperative Deep Learning programs to graph execution. September 2023. 15-minute presentation.
Invited Talk, University of Puerto Rico, Río Piedras Campus, San Juan, PR
(1 hr 20 min presentation): To improve the quality and maintainability of software systems, significant research is underway in program analysis, transformation, and automatic refactoring. Combining these techniques can help programmers create software that is easier to maintain and adapt over time while reducing the risk of bugs and errors. In particular, program analysis, transformation, and automatic refactoring hold significant potential for developing large industrial Deep Learning (DL) software systems that use imperative-style programming, facilitating such systems’ robustness and automated evolution and maintenance.
Conference Talk, 19th International Conference on Mining Software Repositories (MSR), Pittsburgh, PA, US
Invited Talk, University of Puerto Rico, Río Piedras Campus, San Juan, PR
(1 hr 20 min presentation): Efficiency is essential to support responsiveness with respect to ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code that supports symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce DL code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance. While hybrid approaches aim for the “best of both worlds,” the challenges in applying them in the real world are largely unknown. We conduct a data-driven analysis of challenges—and resultant bugs—involved in writing reliable yet performant imperative DL code by studying 250 open-source projects, consisting of 19.7 MLOC, along with 470 and 446 manually examined code patches and bug reports, respectively. The results indicate that hybridization: (i) is prone to API misuse, (ii) can result in performance degradation—the opposite of its intention, and (iii) has limited application due to execution mode incompatibility. We put forth several recommendations, best practices, and anti-patterns for effectively hybridizing imperative DL code, potentially benefiting DL practitioners, API designers, tool developers, and educators.
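The hybridization described above is typified by decorators such as TensorFlow’s tf.function, which traces an imperative Python function into a reusable graph. As a minimal, dependency-free sketch of the tracing idea (plain Python standing in for the framework; `hybridize` is a hypothetical illustrative decorator, not a real API), tracing can be pictured as caching one compiled artifact per input signature:

```python
def hybridize(fn):
    """Illustrative stand-in for a hybridization decorator (in the spirit of
    TensorFlow's tf.function): trace the imperative function once per input
    "signature" and reuse the traced result on later calls."""
    traces = {}

    def wrapper(*args):
        # A real framework keys traces on tensor shapes and dtypes; here we
        # approximate a signature with the argument types.
        signature = tuple(type(a).__name__ for a in args)
        if signature not in traces:
            # In a real framework, this step would build a dataflow graph.
            # Retracing on every new signature is one way hybridization can
            # degrade performance rather than improve it.
            traces[signature] = fn
        return traces[signature](*args)

    # Expose how many distinct traces were built, for illustration.
    wrapper.trace_count = lambda: len(traces)
    return wrapper

@hybridize
def add(a, b):
    return a + b
```

Under this sketch, calling `add(1, 2)` and then `add(1.0, 2.0)` builds two separate traces, while a repeated `add(3, 4)` reuses the first; avoiding unnecessary retracing of this kind is one example of the best practices the talk puts forth.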
Workshop Poster Session, 2022 CRA-WP Grad Cohort for Women Workshop, New Orleans, LA, US
Challenges in migrating imperative Deep Learning programs to graph execution: An empirical study. April 2022. Poster Presentation.