Fall 2019
Thursdays: 3:30 pm – 4:00 pm pre-reception, 4:00 pm – 5:00 pm talk
Default Location: Duncan Hall 1070
September 5th
Speaker: Sriram Sankaranarayanan, University of Colorado Boulder
Host: Kostas
Title: Formal Synthesis of Controllers From Demonstrations
Abstract: In this talk, we will examine approaches that allow us to learn control algorithms for physical systems automatically from demonstrations while providing formal guarantees on the result of the learning, if successful. Our approach begins with the dynamics of the physical system modeled as Ordinary Differential Equations (ODEs) and a specification of the desired behavior in the form of formulas in temporal logic. The goal is to learn/synthesize control feedback laws for the physical system, along with a proof of correctness in the form of Lyapunov-like functions. Our learning approach is based on querying a demonstrator that provides control strategies at specific queried states, and a verifier that searches for a proof or a counterexample test case for a given candidate controller. In this talk, we show that model predictive controllers (MPCs) with carefully designed cost functions can serve the role of a demonstrator to effectively learn a control law and Lyapunov-like function for properties such as safety, reachability, and path following. Theoretically, the proposed scheme can be made to converge in polynomial time using ideas from convex analysis. If successful, the resulting controller comes with a proof of correctness over the given dynamical model. Finally, we demonstrate some of the advantages of our approach using a series of simulation and laboratory experiments carried out on a 1/8-th scale model car. Joint work with Hadi Ravanbakhsh, Sina Aghli, Christoffer Heckman, and Sanjit Seshia.
Bio: Sriram Sankaranarayanan is an Associate Professor of Computer Science at the University of Colorado, Boulder. He obtained his PhD in 2005 from Stanford University, where he was advised by Zohar Manna and Henny Sipma. Subsequently, he worked at NEC research labs in Princeton, NJ, before joining the University of Colorado in 2009. His research interests lie in the area of formal verification of Cyber-Physical Systems (CPS), with applications to autonomous systems and medical devices. He has been the recipient of awards including the NSF CAREER Award (2010), the Dean’s award for outstanding teaching at CU Boulder (2015), the CU Boulder Provost’s award for faculty excellence (2015), the Roubos Engineering Endowed Fellowship (2015-2019), and the S.J. Archuleta Endowed Professorship (2019).
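For context, here is a minimal Python sketch (not from the talk; the dynamics, the demonstrator, and the function names `mpc_demonstrator`, `fit_candidate`, and `verify` are all hypothetical stand-ins) of the learner–demonstrator–verifier loop the abstract describes: demonstrations label queried states with control actions, a candidate feedback law is fit to them, and a verifier either accepts the candidate or returns a counterexample state that becomes the next query.

```python
import numpy as np

def mpc_demonstrator(x):
    """Stand-in for an MPC solver that returns a control action for state x."""
    return -1.5 * x  # hypothetical stabilizing feedback, for illustration only

def fit_candidate(states, actions):
    """Fit a linear feedback law u = x @ K to the demonstrated (state, action) pairs."""
    X, U = np.vstack(states), np.vstack(actions)
    K, *_ = np.linalg.lstsq(X, U, rcond=None)
    return K

def verify(K, dim=2, n_tests=1000):
    """Stand-in verifier: sample states and look for one where the closed loop fails
    to decrease the quadratic Lyapunov candidate V(x) = ||x||^2. (A real verifier
    would search for a proof or a counterexample symbolically.)"""
    for _ in range(n_tests):
        x = np.random.uniform(-5, 5, size=dim)
        x_next = x + 0.1 * (x @ K)           # hypothetical discretized dynamics x' = u
        if x_next @ x_next >= x @ x:
            return x                         # counterexample found
    return None                              # accept the candidate

# Counterexample-guided learning loop: query the demonstrator, fit, verify, repeat.
states = [np.random.uniform(-5, 5, size=2) for _ in range(5)]
actions = [mpc_demonstrator(x) for x in states]
for _ in range(20):
    K = fit_candidate(states, actions)
    cex = verify(K)
    if cex is None:
        break                                # no counterexample found: done
    states.append(cex)                       # new query at the counterexample state
    actions.append(mpc_demonstrator(cex))
print(K)
```

In the talk’s setting, the demonstrator is an MPC with a carefully designed cost, the learned object includes a Lyapunov-like certificate rather than only a feedback gain, and the verifier produces a formal proof instead of relying on sampling.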
September 12th | NO COLLOQUIUM
September 19th
Speaker: Scott Schaefer, TAMU
The reception will be held in DH 3076 instead of 3092.
Host: Joe
Title: Digital Taxidermy: Bijective Parameterization with Free Boundaries
Abstract: This talk describes the motivation and geometry of parameterization in Computer Graphics. In particular, we focus on the difficulty of computing low-distortion bijective maps between triangulated surfaces and the two-dimensional plane. To do so, we describe an isometric distortion metric and describe how to specialize nonlinear optimization procedures by directly computing all singularities of the function explicitly. We guarantee bijectivity through the use of a barrier function and show how to obtain fast optimization times through the use of a spatial hash. The result is an efficient method for computing a bijective map that obtains low distortion without constraining the boundary.
Bio: Scott Schaefer is a Professor and Department Head of the Computer Science & Engineering Department at Texas A&M University. He received a bachelor’s degree in Computer Science/Mathematics from Trinity University in 2000 and an M.S. and Ph.D. in Computer Science from Rice University in 2003 and 2006, respectively. His research interests include graphics, geometry processing, curve and surface representations, and barycentric coordinates. Dr. Schaefer received the Günter Enderle Award in 2011 and an NSF CAREER Award in 2012. His research has been used by various companies including Pixar, Nvidia, Microsoft, and Adobe.
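As a rough illustration of the barrier idea mentioned in the abstract (a toy example, not the talk’s actual formulation), a per-triangle distortion energy can be augmented with a term that diverges as a triangle degenerates or flips, so an optimizer that keeps the energy finite keeps the map locally injective; the talk’s method additionally prevents global overlaps of the free boundary, which is where the spatial hash helps.

```python
import numpy as np

def jacobian(rest_tri, mapped_tri):
    """2x2 Jacobian of the affine map taking a rest-space triangle to its mapped image."""
    R = np.column_stack([rest_tri[1] - rest_tri[0], rest_tri[2] - rest_tri[0]])
    M = np.column_stack([mapped_tri[1] - mapped_tri[0], mapped_tri[2] - mapped_tri[0]])
    return M @ np.linalg.inv(R)

def triangle_energy(rest_tri, mapped_tri, barrier_weight=1e-3):
    """Isometric-style distortion (symmetric Dirichlet, ||J||_F^2 + ||J^-1||_F^2)
    plus a log barrier on det(J) that diverges as the triangle degenerates,
    so any finite-energy map keeps every triangle un-flipped."""
    J = jacobian(rest_tri, mapped_tri)
    det = np.linalg.det(J)
    if det <= 0.0:
        return np.inf                          # flipped or degenerate: infeasible
    distortion = np.sum(J * J) + np.sum(np.linalg.inv(J) ** 2)
    return distortion - barrier_weight * np.log(det)

rest = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
flat = np.array([[0.0, 0.0], [0.9, 0.1], [0.1, 1.1]])   # hypothetical UV coordinates
print(triangle_energy(rest, flat))
```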
September 26th
Speaker: Molham Aref, RelationalAI
Host: Moshe/Chris
Title: Relational Artificial Intelligence
Abstract: In this talk, Molham Aref will make the case for a first-principles approach to machine learning over relational databases that exploits recent developments in database systems and theory. The input to learning classification and regression models is defined by feature extraction queries over relational databases. He casts the machine learning problem as a database problem by decomposing the learning task into a batch of aggregates over the feature extraction query and by computing this batch over the input database. The performance of this approach benefits tremendously from structural properties of the relational data and of the feature extraction query; such properties may be algebraic (semi-ring), combinatorial (hypertree width), or statistical (sampling). This translates to a speed-up of several orders of magnitude over state-of-the-art systems. This work is based on collaboration with Hung Q. Ngo (RelationalAI), Mahmoud Abo-Khamis (RelationalAI), Ryan Curtin (RelationalAI), Dan Olteanu (Oxford), Maximilian Schleich (Oxford), Ben Moseley (CMU), and XuanLong Nguyen (Michigan), as well as other members of the RelationalAI team and faculty network.
Bio: Molham Aref is the Chief Executive Officer of RelationalAI. He has more than 28 years of experience leading organisations that develop and implement high-value machine learning and artificial intelligence solutions across various industries. Prior to RelationalAI, he was CEO of LogicBlox and Predictix (now Infor) and of Optimi (now Ericsson), and a co-founder of Brickstream (now FLIR). Molham has also held senior leadership positions at HNC Software (now FICO) and Retek (now Oracle).
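A small illustration of the decomposition idea (an example sketch, not RelationalAI’s system; the schema and data below are made up): for least-squares regression, the model only needs the sufficient statistics X^T X and X^T y, and every entry of those is a COUNT/SUM aggregate over the feature extraction query, so training can be pushed into the database instead of materializing the joined feature matrix.

```python
import sqlite3
import numpy as np

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales(store INTEGER, item INTEGER, units REAL);
    CREATE TABLE stores(store INTEGER, sqft REAL);
    INSERT INTO sales VALUES (1, 10, 5.0), (1, 11, 7.0), (2, 10, 3.0);
    INSERT INTO stores VALUES (1, 1200.0), (2, 800.0);
""")

# Feature extraction query: features x = (1, sqft), target y = units.
# Rather than materializing the join, compute the aggregates that make up
# X^T X and X^T y directly inside the database.
n, s1, s2, sy, sxy = con.execute("""
    SELECT COUNT(*), SUM(sqft), SUM(sqft * sqft), SUM(units), SUM(sqft * units)
    FROM sales JOIN stores USING(store)
""").fetchone()

XtX = np.array([[n, s1], [s1, s2]])
Xty = np.array([sy, sxy])
theta = np.linalg.solve(XtX, Xty)   # least-squares coefficients (intercept, slope)
print(theta)
```

The systems described in the talk push this much further, sharing computation across the batch of aggregates and exploiting the algebraic and combinatorial structure of the join rather than issuing one SQL query per statistic.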
October 3rd
Speaker: Guofei Gu, TAMU
Host: Nathan
Title: When Web and Mobile Meet: Understanding and Uncovering New Security Threats in Web and Mobile Integration
Abstract: Over the years, Mobile and Web have become the two major pillars of any individual’s digital life. In the new digital age, there are a vast number of innovative Mobile Apps serving functionality in every facet of society. Apps often communicate via the HTTP protocol with one or more web servers for computation or data storage and retrieval. Conveniently, the web platform serves as an excellent conduit for mobile app data exchange, and thus both technologies have become tightly integrated. We now live in a mobile-first economy where most web applications are built to specifically conform to mobile client devices. However, new security issues have arisen in this new era of integrated Web and Mobile ecosystems. In this talk, I will discuss some of the new security threats appearing in this new direction and identify opportunities for mutual learning where each platform can benefit from the others’ successes and failures. I will illustrate these points with our recent research results. For example, I will introduce new security issues caused by web-mobile bridges in Android hybrid apps, as well as new security inconsistencies in server-side input validation that can compromise mobile app security in a new class of Web API Hijacking attacks.
Bio: Dr. Guofei Gu is a professor in the Department of Computer Science & Engineering at Texas A&M University (TAMU). Before coming to Texas A&M, he received his Ph.D. degree in Computer Science from the College of Computing, Georgia Institute of Technology. His research interests are in network and systems security. Dr. Gu is a recipient of the 2010 NSF CAREER Award, the 2013 AFOSR Young Investigator Award, the 2010 IEEE S&P Best Student Paper Award, the 2015 ICDCS Best Paper Award, the Texas A&M Dean of Engineering Excellence Award, the Charles H. Barclay Jr. ’45 Faculty Fellowship, and a Google Faculty Research Award. He is an active member of the security research community and has pioneered several new research directions such as botnet detection/defense and SDN security. Dr. Gu has served on the program committees of top-tier security conferences such as IEEE S&P, ACM CCS, USENIX Security, and NDSS. He is an Associate Editor for IEEE Transactions on Information Forensics and Security (T-IFS) and the Steering Committee co-chair for SecureComm. He recently co-chaired the 2018 NSF Workshop on Programmable System Security in a Software Defined World. He is currently directing the SUCCESS Lab at TAMU.
October 10th
Speaker: Chen Ding, University of Rochester
CS 35 Celebration Colloquium
Host: John
Title: Computer Memory Science: Theory and Applications
Abstract: The memory system on modern computers is complex, distributed, and heterogeneous, with changes accelerating both in organization, e.g., stacking, and in materials, e.g., Intel/Micron Optane memory. Compartmentalized and heuristic-based solutions, although successful in past decades, are increasingly untenable, and their achievable levels of performance are far from optimal. This talk will present locality as the primary objective in the organization of both software and hardware. For software, locality is a quality of its dynamic data usage. It can be quantified scientifically and related mathematically to the measures of the memory system, e.g., cache misses. This talk will introduce the theory of locality and its uses in memory management and optimization.
Bio: Chen Ding’s research seeks to understand the composite and emergent behavior in computer systems, in particular in dynamic parallelism and data usage, and to develop programming techniques for automatic and suggestion-based locality optimization, memory management, and program parallelization. He received the inaugural DOE Young Investigator award and the NSF CAREER award, has held visiting/faculty positions at Microsoft Research, MIT, and the IBM Center for Advanced Studies, and recently served as co-chair of the ACM SIGARCH International Conference on Supercomputing in 2019 and the ACM SIGPLAN Symposium on Memory Management in 2020. He studied at Rice University from 1996 to 2000 in Duncan Hall 3507.
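One standard way locality is quantified (shown here as a back-of-the-envelope sketch, not necessarily the talk’s formulation) is reuse distance: the number of distinct addresses touched since the previous access to the same address. A fully associative LRU cache of size C misses exactly on accesses that are cold or whose reuse distance is at least C, which is the kind of mathematical link between a program’s data usage and cache misses the abstract refers to.

```python
def reuse_distances(trace):
    """Reuse distance of each access: the number of distinct addresses touched since
    the previous access to the same address (None for cold, first-time accesses).
    Quadratic for clarity; real tools use tree-based algorithms."""
    last_seen = {}
    dists = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            dists.append(len(set(trace[last_seen[addr] + 1:i])))
        else:
            dists.append(None)                # cold access
        last_seen[addr] = i
    return dists

def lru_miss_ratio(trace, cache_size):
    """A fully associative LRU cache of this size misses exactly on the accesses
    whose reuse distance is None (cold) or >= cache_size."""
    misses = sum(1 for d in reuse_distances(trace)
                 if d is None or d >= cache_size)
    return misses / len(trace)

trace = ["a", "b", "c", "a", "b", "d", "a"]
print(reuse_distances(trace))     # [None, None, None, 2, 2, None, 2]
print(lru_miss_ratio(trace, 3))   # 4 cold misses, 3 hits -> 4/7
```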
October 17th
Speaker: Deepak Agarwal, LinkedIn
Host: Anshu
Title: AI that creates professional opportunities at scale
Abstract: Professional opportunities can manifest themselves in several ways: finding a new job, enhancing or learning a new skill through an online course, connecting with someone who can help with new professional opportunities in the future, finding insights about a lead to close a deal, sourcing the best candidate for a job opening, consuming the best professional news to stay informed, and many others. LinkedIn is the largest online professional social network that connects talent with opportunity at scale by leveraging and developing novel AI methods. In this talk, I will provide an overview of how AI is used across LinkedIn and the challenges thereof. The talk will mostly emphasize the principles required to bridge the gap between the theory and practice of AI, with copious illustrations from the real world.
Bio: Deepak Agarwal is a vice president of engineering at LinkedIn, where he is responsible for all AI efforts across the company. He is well known for his work on recommender systems and has published a book on the topic. He has published extensively in top-tier computer science conferences and has coauthored several patents. He is a Fellow of the American Statistical Association and has served on the Executive Committee of Knowledge Discovery and Data Mining (KDD). Deepak regularly serves on the program committees of various conferences in the field of AI and computer science. He is also an associate editor of two flagship statistics journals.
October 24th
Speaker: Kathleen Fisher, Tufts University
Host: Kostas
Title: Using Formal Methods to Eliminate Exploitable Bugs
Abstract: For decades, formal methods have offered the promise of software that doesn’t have exploitable bugs. Until recently, however, it hasn’t been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. seL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. It has been proven to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and to enforce integrity and confidentiality properties. The CompCert verified C compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution in the formal methods community, including increased processor speed, better infrastructure such as the Isabelle/HOL and Coq theorem provers, specialized logics for reasoning about low-level code, increasing levels of automation afforded by tactic languages and SAT/SMT solvers, and the decision to move away from trying to verify existing artifacts and instead focus on co-developing the code and the correctness proof. In this talk, I will explore the promise and limitations of current formal methods techniques for producing useful software that provably does not contain exploitable bugs. I will discuss these issues in the context of DARPA’s HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quad-copters, helicopters, and automobiles.
Bio: Professor Kathleen Fisher is Chair of the Computer Science Department at Tufts University. Previously, she was a Program Manager at DARPA and a Principal Member of the Technical Staff at AT&T Labs Research. She received her PhD in Computer Science from Stanford University. Kathleen is a Fellow of the Association for Computing Machinery (ACM) and a Hertz Foundation Fellow. Kathleen’s research focuses on advancing the theory and practice of programming languages. All of her work is collaborative and much is interdisciplinary. Three main thrusts of her work are (1) designing domain-specific languages to make it easier to write programs in particular domains, (2) developing program synthesis techniques, which use search to generate programs from high-level specifications, and (3) applying principled techniques to produce software that is provably functionally correct, with the goal of making hacking much harder. Service to the community has been a hallmark of Kathleen’s career. She served as Chair of the ACM Special Interest Group on Programming Languages (SIGPLAN) and as Program Chair for three of SIGPLAN’s marquee conferences. She is Chair of DARPA’s ISAT Study Group and a member of the Board of Trustees of Harvey Mudd College. Kathleen has long been a leader in the effort to increase diversity and inclusion in Computer Science: she was Co-Chair of the Computing Research Association’s Committee on the Status of Women (CRA-W) for three years, and she co-founded SIGPLAN’s Programming Language Mentoring Workshop (PLMW) series. Kathleen is a recipient of the SIGPLAN Distinguished Service Award.
October 31st
Speaker: Amal Ahmed, Northeastern University
Host: Kostas
Title: The Next 700 Verified Compilers
Abstract: Compiler verification is an old problem, with results stretching back beyond the last half century. In the last decade, the field has witnessed remarkable progress following the pioneering work on the verified CompCert C compiler. But despite impressive advances, most verification efforts only prove compiler correctness for whole-program compilation, i.e., assuming that compiler output is never linked with any other code. That’s an unrealistic simplification that does not reflect the reality of how we use these compilers. In this talk, I’ll discuss the next generation of compiler-verification challenges that arise when we acknowledge that compilers are used to compile program fragments, not whole programs, which are then linked with code produced by different compilers, from different languages, often by potentially malicious actors. Despite recent results on “compositional” compiler correctness (or: correct compilation of components), the variety of theorems, stated in remarkably different ways, raises questions about what researchers even mean when they say a compiler is “correct.” I’ll discuss the pros and cons of recent compiler-correctness statements and present a new framework, CCC, with which to understand compiler correctness theorems in the presence of linking. The framework offers insight into what we as a community should want from compiler correctness theorems of the future. I’ll also discuss the formidable verification challenges we face when compiling, for instance, programs with FFIs, DSLs that offer security guarantees, or languages that rely on cryptographically enforced security, and how to structure future research efforts so we can build compilers that are correct, secure, and performant.
Bio: Amal Ahmed is the Sy and Laurie Sternberg Interdisciplinary Associate Professor at Northeastern University. She received her Bachelor’s from Brown University, her Master’s from Stanford, and her PhD from Princeton, all in Computer Science. Her interests include type systems and semantics, compiler verification, language interoperability, gradual typing, and dependent types. She is known for showing how to scale the logical relations proof method, via step-indexing, to realistic languages with features such as higher-order mutable state and concurrency, leading to widespread use of the technique. Her recent work has focused on developing an architecture for building correct and secure compilers that support safe inter-language linking of compiled code. Her awards include an NSF CAREER Award and a Google Faculty Research Award.
November 7th
Speaker: Moshe Vardi, Rice University
Location Change: McMurtry Auditorium. The talk runs as usual from 4 pm to 5 pm, with a reception to follow at 5 pm in Martel Hall.
Title: Humans, Machines, and Work: The Future is Now
Abstract: Automation, driven by technological progress, has been increasing inexorably for the past several decades. Its progress has recently accelerated due to dramatic advances in artificial intelligence. Two schools of economic thinking have recently been engaged in a debate about the potential effects of automation on jobs: will new technology spawn mass unemployment, as robots take jobs away from humans? Or will the jobs robots take over create demand for new human jobs? I will present data that demonstrate that the concerns about automation are valid. In fact, technology has been hurting working-class people for the past 40 years. The discussion about humans, machines, and work tends to be a discussion about some undetermined point in the far future. But it is time to face reality. The future is now.
Bio: Moshe Y. Vardi is University Professor and the George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University. He is the recipient of several awards, including the ACM SIGACT Goedel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, and the EATCS Distinguished Achievements Award. He is the author and co-author of over 600 papers, as well as two books. He is a fellow of several societies and a member of several academies, including the US National Academy of Engineering and the National Academy of Sciences. He holds honorary doctorates from six universities. He is a Senior Editor of the Communications of the ACM, the premier publication in computing.
November 14th
Speaker: Jorge Nocedal, Northwestern University
Joint Colloquium with the CAAM Department.
Host: Tasos
Title: Zero-Order Optimization Methods with Applications to Reinforcement Learning
Abstract: The problem of finding the minimum of a noisy function when derivative information is not available occurs in many engineering applications involving black-box models. It also arises in the adversarial training of neural networks and in the design of reinforcement learning systems. In this talk, I describe the main challenges of optimizing a high-dimensional noisy function and provide an analysis of state-of-the-art techniques. I present a new method capable of dealing with large-dimensional problems, describe its theoretical foundations, and illustrate its performance in practice.
Bio: Jorge Nocedal is the Walter P. Murphy Professor in the Department of Industrial Engineering and Management Sciences at Northwestern University. His research is in optimization, both deterministic and stochastic, with an emphasis on very large-scale problems. He is a SIAM Fellow and was awarded the 2012 George B. Dantzig Prize and the 2017 Von Neumann Theory Prize for contributions to the theory and algorithms of nonlinear optimization.
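A minimal sketch of the basic zero-order building block (an illustration only, not the speaker’s new method): estimate a gradient from function values alone, e.g., via Gaussian smoothing, and use it in a plain descent loop on a noisy black-box objective.

```python
import numpy as np

def zo_gradient(f, x, sigma=0.1, n_samples=20):
    """Gaussian-smoothing gradient estimate built from function values only:
    averages (f(x + sigma*u) - f(x)) * u / sigma over random directions u."""
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(n_samples):
        u = np.random.randn(x.size)
        g += (f(x + sigma * u) - fx) * u
    return g / (n_samples * sigma)

def zo_minimize(f, x0, step=0.05, iters=200):
    """Plain gradient descent driven by the zero-order gradient estimate."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * zo_gradient(f, x)
    return x

# A noisy quadratic standing in for a black-box objective (e.g., a simulator return).
def noisy_objective(x):
    return np.sum((x - 1.0) ** 2) + 0.01 * np.random.randn()

print(zo_minimize(noisy_objective, np.zeros(5)))   # should end up near the all-ones vector
```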
November 21st
Speaker: Jian Ma, CMU
Host: Todd
Title:
Abstract:
Bio:
November 28th | THANKSGIVING
December 5th
Speaker: Michael Franz, UCI
Host: Nathan
Title: Cyber Attacks and Defenses: Trends, Challenges, and Outlook
Abstract: A cyber attacker needs to find only one way in, while defenders need to guard a lot of ground. Adversaries can fully debug and perfect their attacks on their own computers, exactly replicating the environment that they will later be targeting. This is the situation today, which has been exacerbated by an increasing trend towards a software “monoculture” (in which there are only two major desktop operating systems and two major phone operating systems, one major office software suite, and so on). One possible defense is software diversity, which raises the bar to attackers. A lot of academic and industrial research has been investigating such software diversity, from simple ASLR (address space layout randomization) to more complex whole-program randomization. In the latter, a diversification engine automatically generates a large number of different versions of the same program, potentially one unique version for every computer. These all behave in exactly the same way from the perspective of the end-user, but they implement their functionality in subtly different ways. As a result, a specific attack will succeed on only a small fraction of targets, and a large number of different attack vectors would be needed to take over a significant percentage of them. Because an attacker has no way of knowing a priori which specific attack will succeed on which specific target, this method also very significantly increases the cost of attacks directed at specific targets. Unfortunately, attackers have now started assembling their attacks on the target itself, circumventing diversity. Hence, in the arms race between attackers and defenders, we are already at the point where yet another set of defenses is needed, before the previous one is even fully deployed across the software industry. My talk will present a timeline of attacks and defenses, clearly illustrating a “cat and mouse game” in which defenses are almost always reactive to attacks that have already happened. I will discuss my vision of how to get ahead of the attackers, and close by stating why, in spite of the bleak situation today, I am confident that we will eventually be able to stop most kinds of cyber attacks completely.
Bio: Michael Franz is a Chancellor’s Professor at the University of California, Irvine (UCI) and the director of its Secure Systems and Software Laboratory. He is a Professor of Computer Science in UCI’s Donald Bren School of Information and Computer Sciences and a Professor of Electrical Engineering and Computer Science (by courtesy) in UCI’s Henry Samueli School of Engineering. He is a Fellow of the ACM, a Fellow of the IEEE, and a winner of the Humboldt Research Award. Prof. Franz was an early pioneer in the areas of mobile code and dynamic compilation. He created an early just-in-time compilation system, contributed to the theory and practice of continuous compilation and optimization, and co-invented the trace compilation technology that eventually became the JavaScript engine in Mozilla’s Firefox browser. He has graduated 28 Ph.D. students as their primary advisor. Franz received a Dr. sc. techn. degree in Computer Science and a Dipl. Informatik-Ing. ETH degree, both from the Swiss Federal Institute of Technology, ETH Zurich.