Historically, the performance and efficiency of computers have scaled favorably (according to “Moore’s Law”), driven by steady improvements at the transistor level (so-called “Dennard scaling”). Things got faster! Computer architectures improved by leaps and bounds, enjoying this “free lunch” of compute power! Unfortunately, devices have now scaled to a point where these trends no longer hold. Further performance and power improvements are limited by physical device properties. As a result, the field of computer architecture is now soul searching. We aspire to create architecture and system designs that continue to make computation efficient, programmable, capable, and fast in this post-Dennard era.
This course covers several important trends in computer architecture research that aim to achieve these goals. The first trend is the wholesale turn to parallel computation by researchers and industry alike. The second trend is the introduction of specialized, heterogeneous components (custom ASICs, FPGAs, GPUs, and programmable functional units) into architectures alongside CPUs. The third trend is the transition to new system capabilities and new models of computation. These novel ideas include approximate computing, neuromorphic computing, ambiently powered devices, and exploiting analog circuit characteristics in new ways. The course will address these topics across the system stack: from the architecture to the application.
Grading:
- Project: 50%
- Presentation + In-Class Discussion: 30%
- Paper Critiques: 20%