====== Graph-Based Machine Learning and Neural Architecture Search ======
Thursday November 29, 2018\\
Location: Scaife Hall 214\\
Time: 3:00PM-4:00PM\\
This talk contains two parts. The first part covers semi-supervised learning via graphs: we introduce the concept of semi-supervised learning and then present two graph-based methods from Google: Expander (label propagation via similarity graphs) and Neural Graph Machine (graph regularization for neural networks). The second part covers device-aware Neural Architecture Search (NAS). NAS is known for its effectiveness in finding models that achieve state-of-the-art performance in a wide spectrum of applications, such as image classification and language modeling. We provide an overview of commonly used NAS techniques and then propose a framework to search for neural architectures optimized for both device-imposed (e.g., power and inference time) and device-agnostic (e.g., accuracy) objectives.
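To give a flavor of label propagation (the idea behind Expander), here is a minimal sketch: known labels are spread across a similarity graph until unlabeled nodes settle. The toy graph and variable names are our own illustration, not details from the talk or from Expander itself.

```python
import numpy as np

# Toy similarity graph: a chain of 5 nodes with unit edge weights.
# Node 0 is labeled class 0, node 4 is labeled class 1; nodes 1-3 are unlabeled.
W = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

labels = {0: 0, 4: 1}            # node -> known class
n, k = W.shape[0], 2
Y = np.zeros((n, k))             # per-node label distribution
for node, cls in labels.items():
    Y[node, cls] = 1.0

# Row-normalize W so each propagation step averages neighbors' labels.
P = W / W.sum(axis=1, keepdims=True)

for _ in range(100):
    Y = P @ Y                    # propagate labels one step
    for node, cls in labels.items():
        Y[node] = 0.0            # clamp the labeled nodes
        Y[node, cls] = 1.0

pred = Y.argmax(axis=1)
print(pred.tolist())             # unlabeled nodes inherit the nearest label
```

Nodes closer to node 0 end up in class 0 and nodes closer to node 4 in class 1; the middle node is an exact tie, which `argmax` breaks toward class 0.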
===== Bio =====
Machine learner, software developer, and researcher: Da-Cheng Juan is a senior engineer at Google Research, exploring graph-based machine learning, deep learning, and their real-world applications. Da-Cheng also holds the position of adjunct faculty in the Department of Computer Science, National Tsing Hua University. Previously, he received his Ph.D. from the Department of Electrical and Computer Engineering and his Master's from the Machine Learning Department, both at Carnegie Mellon University. Da-Cheng has published more than 30 research papers in related fields; in addition to research, he also enjoys algorithmic programming and has won several awards in major programming contests. Da-Cheng was the recipient of the 2012 Intel PhD Fellowship. His current research interests span semi-supervised learning, convex optimization, deep learning, and energy-efficient computing.