Computers and computation are pervasive in our society. They play enormously important roles in areas as diverse as education, business, industry, and the arts. The Computer Science Department seeks to provide students with an understanding of the nature of computation and the ability to explore the great potential of computers. The Department recognizes that students’ interests in computer science vary widely, and attempts to meet these varying interests through (1) its major program; (2) a selection of courses intended for those who are interested primarily in an introduction to computer science; and (3) recommended course sequences for non-majors who want a more extensive introduction to computer science in general or who seek to develop specific expertise in computing for application in another discipline.

The computer science major equips students to pursue a wide variety of career opportunities. It can serve as preparation for a career in computing, for graduate school, or as important background for the student whose future career will extend outside of computer science. The first course for majors and others intending to take more than a single computer science course is Introduction to Computer Science (CSCI 134). Upper-level courses include computer organization, algorithm design and analysis, principles of programming languages, computer networks, digital design, distributed systems, advanced algorithms, theory of computation, computer graphics, artificial intelligence, machine learning, operating systems, and compiler design.

For students interested in learning more about important new ideas and developments in computer science, but who are not necessarily interested in developing extensive programming skills, the department offers three courses: Creating Games (CSCI 107) introduces important concepts in computer science through the design and analysis of games; Artificial Intelligence: Image and Reality (CSCI 108) provides an introduction to the field of artificial intelligence; and The Art and Science of Computer Graphics (CSCI 109) introduces students to the techniques of computer graphics.
CSCI 11 Mixology
This course examines the history, science, and culture of mixed drinks. Students will learn about the origin of modern cocktails, the properties and mixing qualities of bitters, syrups, and fortified wines, the role of proportion, and the classification and appreciation of alcohol. Other topics include fermentation, modern mixology, glassware, and responsible consumption. The class will meet with local mixologists and tour a local distillery. Evaluation is based on class participation, a short in-class presentation, a 10-page paper, and a final project involving the research and creation of a new cocktail.
COMPUTER SCIENCE DEPARTMENT
The Computer Science Department continues to be an exciting and vibrant academic community. We are particularly pleased to report that Assistant Professors Brent Heeringa and Morgan McGuire were both granted tenure this past year and will be promoted to the rank of Associate Professor as of July 1, 2012. After six years serving as Provost of the College, Prof. Bill Lenhart spent the last year on sabbatical in France. Although still on sabbatical from teaching for one more academic year, he will be rejoining us on campus later this summer. We all look forward to his return. Our students kept our computing lab lively around the clock this past year, and many continue to participate with the faculty on a variety of interesting research projects. Antal Spector-Zabusky ’12 was awarded honorable mention in the annual undergraduate research awards competition sponsored by the Computing Research Association for research performed under the supervision of Professor Freund. Professor Freund also began his term as Department Chair this past year. In addition, Lorraine Robinson, our long-time administrative assistant, retired in June. We wish her all the best for the future.
Each year the department participates in a special lectureship program funded through the generosity of the Class of 1960. This spring, we welcomed Dr. David Ferrucci from IBM as our distinguished guest for this program. Dr. Ferrucci leads the Semantic Analysis and Integration Department at IBM’s T.J. Watson Research Center, where he focuses on technologies for automatically discovering knowledge in natural language content and using it to enable better decision making. Among his accomplishments are the design and implementation of the Watson computer system that can rival human champions at the game of Jeopardy!. Dr. Ferrucci gave several talks and met with faculty and students during his visit. As part of the Class of 1960 scholars program, several students were invited to participate in special events. This year’s scholars were:
Class of 1960 Scholars in Computer Science
T. Andrew Lorenzen
Prof. Jeannie Albrecht spent the fall semester on sabbatical at the University of Massachusetts, Amherst. There she became involved in a research project on green computing. Specifically, Albrecht and her colleagues at UMass are investigating how to use computing to decrease the energy impact of society. The problem is highly relevant to today’s technological trends. The first part of the project involved instrumenting Albrecht’s house with power meters. Power data about the entire house is now gathered every second, and a weather station is used to correlate the energy usage with the outdoor temperature, humidity, and wind conditions. The next step in the project was to explore ways to flatten the peak energy usage of the house by scheduling “background” appliances (such as dehumidifiers, air conditioners, refrigerators, etc.) during non-peak times of the day. Albrecht and her colleagues have recently published three papers that describe their vision and document their initial results. They are currently looking at how to use batteries and renewable energy sources most effectively given the data that has been collected. The project is joint work with Prof. Prashant Shenoy, Prof. David Irwin, Aditya Mishra, and Sean Barker ’09 at UMass.
During the past year, Albrecht also continued to develop Gush, which is a framework for configuring and controlling distributed applications (http://gush.cs.williams.edu). Gush is part of a National Science Foundation (NSF) funded project for creating GENI — a Global Environment for Network Innovations (http://www.geni.net). A journal article on Gush (and the project’s predecessor, Plush) recently appeared in the Association for Computing Machinery’s Transactions on Internet Technology (TOIT) journal. Pamela Mishkin ’15 worked with Albrecht this spring on enhancements to Gush that provide a better user experience.
Related to her involvement with GENI, Albrecht recently traveled to RIT with Karlan Eberhardt ’13 to teach other researchers how to use Gush as part of the GENI Summer Camp. She also received a grant from the National Science Foundation to organize a workshop in July, in conjunction with the next GENI Conference, that will focus on redefining distributed systems education. Approximately 20-30 participants from a variety of academic institutions will discuss how to incorporate new networking technologies into undergraduate courses.
Prof. Duane Bailey worked with Juan Mena ’15 on developing a database of sliding piece puzzles, along with their rankings and solutions. The work is based on the old but standard catalog of Hordern. The hope is to develop a better understanding of the structure of the state spaces of these human-oriented search problems to identify what makes search easy or difficult, fun or tedious. Bailey and Mena worked with Tracy Baker-White on developing a decoding device that aids in the understanding of the mysterious messages that appear in the work of Charles Dellschau, a German-American folk artist who drew pictures of fanciful airships designed by the Sonora Aero Club in the early 1900s.
Bailey was invited to the 10th Gathering for Gardner. There, he and his former student, Feng Zhu ’02 (now teaching at USC’s Marshall School of Business), presented work on the development of a porous structure that tiles the plane non-periodically.
Last summer Bailey gave a Williams summer science research talk, “In Search of a Programmable Tattoo”. The hope was to understand the technologies that might have to be developed to support robust computation in irregularly structured biological media. Bailey’s work with Kathryn Lindsey ’07 (now in math at Cornell) demonstrates that traditional automata can perform robust computations in aperiodic environments.
Bailey, with students Donny Huang ’13, Peter Mertz ’12, Nehemiah Paramore ’14, Cody Skinner ’13, and James Wilcox ’13, constructed a 40th anniversary CMOS version of the first microprocessor, the Intel 4004. The original device was one of a set of chips that made up the Busicom 141-PF. The Williams processor, if fabricated, would take up less than a square millimeter of silicon.
Prof. Andrea Danyluk was very happy to teach Artificial Intelligence (CSCI 373) this year, for the first time since 2007. She revised the course substantially, introducing new topics and all new lectures, as well as a series of assignments developed at UC Berkeley.
Danyluk continues her research in the area of machine learning. She supervised a year-long student research project this year. Chansoo Lee ’12 worked with her on the problem of object identification, a specialized type of image recognition in which the category is known and the goal is to recognize an object’s exact identity. This task is related to, though clearly different from, the task of object classification, in which the goal is to identify the category to which an object belongs. Both tasks typically involve the application of machine learning algorithms, which are presented with input images along with their correct labels (i.e., either the exact identity or class). While there exist several algorithms for object identification that have met with reasonable success, there is room for improvement. Active learning is an approach in which the learning algorithm selects the training examples that it determines will be most helpful for improving its performance. Most typically, active learning is useful when there exists an abundance of input examples that have not yet been labeled. By identifying those examples that will be most useful, the learning algorithm focuses the attention of the human expert who must provide the labels for training purposes. Much less frequently, the active learning system can be used to identify examples for which it would be useful to have the human expert identify the most salient features. This has previously been done in the context of classification problems — not object identification problems. Chansoo worked toward developing an active learning algorithm that could work successfully within the object identification framework. This would have clear benefit for Prof. Danyluk’s research project on the identification of individual spotted salamanders.
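The selection step at the heart of active learning can be sketched in a few lines. The following is a generic uncertainty-sampling sketch, not the algorithm developed in this project; the toy model and image names are invented for illustration.

```python
# Sketch of uncertainty sampling, the selection step at the heart of
# active learning: from a pool of unlabeled examples, pick the one the
# current model is least confident about and ask the expert to label it.
# The probability table below is an illustrative stand-in for a model.

def most_uncertain(pool, predict_proba):
    """Return the example whose top predicted class probability is lowest."""
    def confidence(x):
        return max(predict_proba(x))   # confidence in the model's best guess
    return min(pool, key=confidence)

# Toy model: fixed class-probability estimates for three images.
probs = {
    "img_a": [0.95, 0.05],   # confidently class 0
    "img_b": [0.55, 0.45],   # nearly a coin flip: most informative to label
    "img_c": [0.80, 0.20],
}
query = most_uncertain(probs, lambda x: probs[x])
print(query)  # img_b
```

Labeling the queried example and retraining, then repeating, focuses the human expert's effort on the examples the learner finds most ambiguous.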
In addition, Danyluk co-authored a paper with Nick Arnosti ’11, based on one half of his honors thesis work last year. The paper will be presented at the International Conference on Machine Learning in Edinburgh.
Danyluk continued her work as a board member of the CRA-W, the Computing Research Association’s Committee on the Status of Women in Computing Research, where her primary role is to administer an undergraduate research grant program. Her commitment to increasing the number of women and underrepresented minorities in computing research is also at work at Williams. This year she helped to bring Gioia DeCari to campus to perform “Truth Values: One Girl’s Romp Through M.I.T.’s Male Math Maze” and organized a panel that followed the show.
She continues her work as a member of the steering committee for the ACM/IEEE Computer Science Curriculum 2013, whose goal is to develop international curriculum guidelines for undergraduate programs in computer science. They have published their “Strawman Report” and are now working on the next phase. Danyluk also continues her work with the Liberal Arts Computer Science Consortium.
Prof. Stephen Freund is currently serving as Chair of the Computer Science Department. He has also continued his work on tools to help programmers find defects in software. His current focus is on how to make it easier for programmers to write error-free programs that take advantage of multicore processors. He presented work on this topic at the International Conference on Run-time Verification and at the Workshop on Foundations of Object-Oriented Languages, as well as at several academic institutions, including University of Washington and University of Massachusetts, Amherst. Freund also received an NSF grant to fund his research on this topic for the next three years.
Last summer, Freund worked with Antal Spector-Zabusky ’12 on a visualization tool for illustrating the behavior of concurrent programs under relaxed memory models. The intended use of the tool is to enable programmers (or instructors in the classroom) to visually and interactively explore how a program’s run-time behavior may be affected by the scheduler and the memory model.
Freund also organized the Workshop on Formal Techniques for Java-like Languages in Lancaster, UK last summer. In addition, he served on the program committee for several workshops and conferences this past year, including the ACM Conference on Programming Language Design and Implementation. He is also currently serving on the Programming Languages Education Board, the national committee in his research community that focuses on issues surrounding undergraduate education.
Prof. Brent Heeringa completed his sixth year as Assistant Professor in the Computer Science Department at Williams College. Heeringa taught a tutorial in Advanced Algorithms in the fall, a course on Mixology during winter study, and Algorithm Design and Analysis in the spring.
Together with Louis Theran (Temple, FU Berlin), Justin Malestein (Temple), and Matthew Berardi (Temple), Heeringa studied computational problems in rigidity theory. Periodic frameworks, in which the lattice can flex, arise in the study of zeolites, a class of microporous crystals with a wide variety of industrial applications, most notably in petroleum refining. Heeringa helped develop algorithms for determining rigidity in generic periodic frameworks. This work appeared at the 2011 Canadian Conference on Computational Geometry (CCCG).
Heeringa also continued his collaboration with Glencora Borradaile (Oregon State University). In collaboration with Alan Fern (OSU), Xiaoli Fern (OSU), and Javad Azimi (OSU), Heeringa and Borradaile worked on problems in active learning, an area of machine learning that specializes in minimizing the labeling effort required to learn accurate classifiers. Most work in active learning focuses on sequentially selecting one unlabeled example at a time. Heeringa and his coauthors developed a new algorithm for batch active learning. This work was accepted at the 2012 International Conference on Machine Learning (ICML).
Heeringa worked with Donny Huang ’13 on fundamental problems in data structures relating to the efficiency and optimality of binary search trees. Heeringa reviewed papers for Information Processing Letters, the Conference on Implementation and Application of Automata, and the SIAM Journal on Computing. In addition, he traveled to Toronto for the Canadian Conference on Computational Geometry and to Brooklyn for the Symposium on Algorithms and Data Structures, and gave an invited talk at Carleton College in Northfield, Minnesota. All of Heeringa’s research is supported by an award from the National Science Foundation.
Associate Prof. Morgan McGuire collaborated with NVIDIA Research and the Vicarious Visions game studio on massively-parallel Monte Carlo methods for computer graphics throughout the year. Their work directly affects consumer products such as film and video games today and has significant implications for how financial, medical, and scientific modeling will be done in the future.
This was the first year of an increased cap allowing 24 students per year to enroll in Creating Games (CSCI 107), an interdisciplinary course in the CS and Art departments. To scale the hands-on course up to this size, McGuire worked closely with teaching assistant Lily Riopelle ’14; Coordinator of Mellon Academic Programs Elizabeth Gallerani and interim director Katy Kline in the museum; and Michael Taylor in the science shop. The students each produced three significant computer or video games from scratch and studied topics as diverse as gender roles in narrative and computability in mathematics. Many reported this to be one of the most challenging and rewarding experiences of their Williams careers. To support the course, McGuire developed the codeheart.js open source framework for learning to program web and mobile apps, which is now available to the general public at http://codeheartjs.com.
McGuire published The Graphics Codex, a new electronic book on computer graphics for iOS devices (iPhone and iPad), available at http://graphicscodex.com. Next semester, OIT will issue tablets to students registered for Computer Graphics (CSCI 371) so that they can use it in the classroom. The book leverages the digital format in new ways for interactive examples, continuous updates, and densely hyperlinked references. Its primary goal is accessibility: the form factor means that it is always with students in the lab or classroom; it brings the resources of the ACM Digital Library to students by directly linking topics to primary sources and seminal papers; and online distribution means that the e-book costs students only 5% as much as a typical computer science textbook.
McGuire served as editor in chief of the Journal of Graphics Tools and as section editor on computational photography for ACM Queue and published several peer-reviewed papers on new special effects in computer graphics. The most recent of these is “Scalable Ambient Obscurance”, coauthored with thesis student Michael Mara ’12. Mara’s thesis is on new algorithms and system infrastructure to enable cinema-quality 3D images for mobile applications. The key to this work is computing scene illumination on a server in the cloud and streaming it as video to a 3D-enabled client. Mara and McGuire will be collaborating on this project at NVIDIA Research, where Mara will complete a post-baccalaureate research internship.
Prof. Tom Murtagh began a new project investigating techniques for more effectively utilizing flash memory in file systems. Flash memories in use today typically include controllers that enable them to emulate hard disk drives. This has hastened deployment, but prevents operating system software from optimizing file system organization for the underlying characteristics of flash devices. Together with Abigail Zimmerman-Niefield ’15, Murtagh is working on designing a file system interface to an underlying storage device that will support data layout optimizations on both flash memories and disk drives. Murtagh attended the 2012 USENIX Annual Technical Conference and the Workshop on Hot Topics in Storage and File Systems in June, 2012.
Prof. Jeffrey S. Chase, Duke University
“Reflections on Trusting Trust, Part 2: The Cloud”
Computer Science Faculty
“Everything you need to know about graduate schools including: Deadlines, personal statements, finding an Advisor, Research, Application Process, Choosing the Right School”
Prof. John W. Byers, Boston University
“Daily Deals: Prediction, Social Diffusion, and Reputational Ramifications”
Mark Terrano, Hidden Path Entertainment
“Games and Immersion”
Christopher Cyll ‘04 and Michael Gnozzio ‘07, Adverplex, Inc.
“Building a Profitable Website: Algorithmic Marketing, Quality Modeling, and Other Things You Need to Run an Internet Startup”
Prof. David Liben-Nowell, Carleton College
“You Can Pick Your (Best) Friends”
Michael Hay, PhD, Cornell University
“Analyzing Private Network Data”
Josh Ain ’03, Google
“Finding Structure in Webpages”
Prof. Daniel Scharstein, Middlebury College
“Benchmarking Stereo Vision and Optical Flow Algorithms”
Matthew Ginsberg, Co-Founder and CEO, On Time Systems Inc.
“Dr. Fill: Crosswords and An Implemented Solver for Singly Weighted CSPs”
Prof. Samuel Z. Guyer, Tufts University
“Tools for Improving Software Right Now”
Ben Ransford, Ph.D. Candidate, SPQR Lab, University of Massachusetts, Amherst
“Transiently Powered Computers”
Brent Yorgey ’04, Ph.D. Candidate, University of Pennsylvania
“Embedded, Functional, Compositional Drawing”
Dr. David Ferrucci, IBM Fellow and Watson Principal Investigator, IBM Research
“Beyond Jeopardy! The Future of Watson”
Prof. Emery Berger, University of Massachusetts, Amherst
“Programming with People: Integrating Human-Based and Digital Computation”
“In Search of a Programmable Tattoo”
Summer Science Talk, Williams College, August, 2011
“How to Host a Profitable Fundraiser (with Slightly Fussy Participants)”
Science Lunch, Williams College, October, 2011
Department Student Colloquia
Thomas Gaidus ’13, “An Introduction to Android Application Development Through Google Code University”
Donny Huang ’13, “Parallel Spatial Partitioning Algorithms”
Chansoo Lee ’12, “Active Learning with Feature Feedback for Object Identification”
T. Andrew Lorenzen ’12, “Researching for Fiksu”
Michael Mara ’12, “CloudLight: A Distributed Global Illumination System for Real-Time Rendering”
Antal Spector-Zabusky ’12, “Dynamic Race Condition Detection via Synchronization Specification Automata”
James Wilcox ’13, “Securing Internet Meta-Architectures”
“Selected Project Highlights: An Overview of Gush”
GENI Engineering Conference 11, July 2011
“Tutorial Session: Experiment Control Using Gush,”
GENI Engineering Conference 11, July 2011
“Experiment Control Using Gush”
GENI Summer Camp at RIT, May 2012
“Identification of Individual Spotted Salamanders”
Middlebury College, Middlebury, VT, September 2011
“Undergraduate Research Experience Internships”
Grace Hopper Celebration of Women in Computing, Portland, OR, November 2011
“Cooperative Concurrency for a Multicore World”
University of Washington, Seattle, WA, November 2011
“Cooperative Concurrency for a Multicore World”
University of Massachusetts, Amherst, MA, February 2012
“The 1-neighbour Knapsack Problem”
International Workshop on Combinatorial Algorithms, June, 2011
“The Line-leaf Tree: A Dynamic Data Structure for Tree-like Partial Orders”
Carleton College, April, 2012
GPU Technology Conference, San Jose, CA, May 15, 2012
Postgraduate Plans of Computer Science Majors
Zina H. Cigolle: Amazon.
Chansoo Lee: Graduate school at University of Michigan, MI.
T. Andrew Lorenzen: Google.
Michael T. Mara: Graphics research at NVIDIA.
Peter S. Mertz: Geophysical consulting in Connecticut.
Diogenes A. Nunez: Graduate school at Tufts University, MA.
Eric D. Outterson: Undecided.
Jonathan R. Schmeling: Undecided.
Gregory S. Sherrid: Undecided.
Antal B. Spector-Zabusky: Graduate school at University of Pennsylvania, PA.
Dynamic Race Condition Detection via Synchronization Specification Automata
Antal B. Spector-Zabusky
Multithreaded programming, while powerful, opens up the possibility for new classes of bugs that cannot occur in single-threaded programs. One such class of bugs comprises race conditions: two concurrent accesses to the same variable, at least one of which is a write. Because race conditions are difficult to detect manually, there has been a great deal of work on automated tools for race condition detection. This thesis presents a source-level annotation language for ensuring race-freedom. The annotations detail how synchronization operations change which threads can read or write to each variable. We implemented an interpreter that incorporates this annotation language, and present a proof that a correctly-annotated program that executes successfully is guaranteed to be race-free.
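The bug class the thesis targets can be seen in a few lines of ordinary threaded code. This sketch is independent of the thesis’s annotation language; it simply shows the read-modify-write pattern that produces a race, here written in Python with a lock supplying the synchronization that makes the outcome deterministic.

```python
# Two threads both read-modify-write a shared counter. Without the
# lock, the read/increment/write sequences of the two threads can
# interleave and updates can be lost; with the lock, exactly one
# thread performs the sequence at a time and the result is deterministic.
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # synchronization guards the shared write
            counter += 1    # read, increment, write back

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000; remove the lock and the total can come up short
```

An annotation-based checker of the kind the thesis describes would document which lock guards `counter` and verify that every access occurs under it.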
CloudLight: A Distributed Global Illumination System for Real-Time Rendering
Michael T. Mara
Using the cloud, with its vastly superior computational capabilities compared to local devices, should increase image quality, not decrease it. Current commercial uses of the cloud for remote rendering (e.g., OnLive, Gaikai) increase convenience but decrease quality and responsiveness, because a simple streaming approach cannot leverage local computation; only brute-force increases in network bandwidth improve quality. However, by shifting from a video-streaming approach to a hybrid rendering model, in which client and server each perform part of the rendering task, we can construct a system that leverages the computation of the cloud to increase image quality without affecting responsiveness. In this paper, we describe a 3-phase plan to bring a highly efficient remote rendering model to practical use and beyond, and document the culmination of the first phase, a complete remote rendering system that we call CloudLight. CloudLight distributes computing between client and server in an intelligent way: direct illumination and an approximation of medium-spatial-frequency, temporally sensitive indirect illumination are computed efficiently on a low-power client, while the computationally intensive task of low-spatial-frequency indirect illumination is computed on the powerful server. The cut of the rendering process between client and server was chosen to minimize the data flow between server and client, and to minimize the perceived error due to latency in sending the rendering information from server to client. The next two phases, in which we replace the individual components of our system to enable us to drop our unrealistic assumptions on network bandwidth while simultaneously speeding up parts of our system by several orders of magnitude, and eventually inject the network stack into the graphics pipeline itself, are laid out here as motivation and plans for future work. One of the major components of our system is the client-side approximation of high-frequency, temporally sensitive indirect illumination. We present a new Screen-Space Ambient Occlusion (SSAO) algorithm variant called Architecture-Aware Alchemy Ambient Occlusion, an optimization of an algorithm presented in 2011 called Alchemy AO.
The optimizations operate at three levels: reducing total bandwidth by carefully reconstructing positions and normals at high precision from a depth buffer; pre-filtering the depth buffer to maximize memory-hierarchy efficiency; and applying low-level intra- and inter-thread techniques appropriate for a parallel, floating-point architecture. The new algorithm produces a high-quality approximation in 0.81 ms at standard 720p resolution and 1.59 ms at standard 1080p resolution, and scales much better than previous algorithms, including an asymptotic cost reduction from O(r) to O(log r) in the sampling radius r.
Exploiting Home Automation Protocols for Load Monitoring in Smart Buildings
David Irwin, Anthony Wu, Sean Barker, Aditya Mishra, Prashant Shenoy, and Jeannie Albrecht
Proceedings of the Third ACM Workshop On Embedded Sensing Systems For Energy-Efficiency In Buildings (BuildSys), November 2011
Monitoring and controlling electrical loads is crucial for demand-side energy management in smart grids. Home automation (HA) protocols, such as X10 and Insteon, have provided programmatic load control for many years, and are being widely deployed in early smart grid field trials. While HA protocols include basic monitoring functions, extreme bandwidth limitations (<180 bps) have prevented their use in load monitoring. In this paper, we highlight challenges in designing AutoMeter, a system for exploiting HA for accurate load monitoring at scale. We quantify Insteon’s limitations to query device status — once every 10 seconds to achieve less than 5% loss rate — and then evaluate techniques to disaggregate coarse HA data from fine-grained building-wide power data. In particular, our techniques learn switched load power using on-off-dim events, and tag fine-grained building-wide power data using readings from plug meters every 5 minutes.
Distributed Application Configuration, Management, and Visualization with Plush
Jeannie Albrecht, Christopher Tuttle, Ryan Braud, Darren Dao, Nikolay Topilski, Alex C. Snoeren, and Amin Vahdat
ACM Transactions on Internet Technology (TOIT), December 2011
Support for distributed application management in large-scale networked environments remains in its early stages. Although a number of solutions exist for subtasks of application deployment, monitoring, and maintenance in distributed environments, few tools provide a unified framework for application management. Many of the existing tools address the management needs of a single type of application or service that runs in a specific environment, and these tools are not adaptable enough to be used for other applications or platforms. To this end, we present the design and implementation of Plush, a fully configurable application management infrastructure designed to meet the general requirements of several different classes of distributed applications. Plush allows developers to specifically define the flow of control needed by their computations using application building blocks. Through an extensible resource management interface, Plush supports execution in a variety of environments, including both live deployment platforms and emulated clusters. Plush also uses relaxed synchronization primitives for improving fault tolerance and liveness in failure-prone environments. To gain an understanding of how Plush manages different classes of distributed applications, we take a closer look at specific applications and evaluate how Plush provides support for each.
SmartCap: Flattening Peak Electricity Demand in Smart Homes
Sean Barker, Aditya Mishra, David Irwin, Prashant Shenoy, and Jeannie Albrecht
Proceedings of the Tenth IEEE Conference On Pervasive Computing and Communications (PerCom), March 2012. Best Paper Session.
Flattening household electricity demand reduces generation costs, since costs are disproportionately affected by peak demands. While the vast majority of household electrical loads are interactive and have little scheduling flexibility (TVs, microwaves, etc.), a substantial fraction of home energy use derives from background loads with some, albeit limited, flexibility. Examples of such devices include A/Cs, refrigerators, and dehumidifiers. In this paper, we study the extent to which a home is able to transparently flatten its electricity demand by scheduling only background loads with such flexibility. We propose a Least Slack First (LSF) scheduling algorithm for household loads, inspired by the well-known Earliest Deadline First algorithm. We then integrate the algorithm into SmartCap, a system we have built for monitoring and controlling electric loads in homes. To evaluate LSF, we collected power data at outlets, panels, and switches from a real home for 82 days. We use this data to drive simulations, as well as experiment with a real testbed implementation that uses similar background loads as our home. Our results indicate that LSF is most useful during peak usage periods that exhibit “peaky” behavior, where power deviates frequently and significantly from the average. For example, LSF decreases the average deviation from the mean power by over 20% across all 4-hour periods where the deviation is at least 400 watts.
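The core selection rule of least-slack scheduling can be sketched directly from the description above: among the flexible background loads, run the one whose slack (time remaining until its deadline, minus its remaining run time) is smallest. The loads and numbers below are illustrative, not drawn from the paper.

```python
# A sketch of the Least Slack First (LSF) selection rule for household
# background loads. Slack = (deadline - now) - remaining run time; the
# load with the least slack is the most urgent and is scheduled next.
# Times are in minutes; the example loads are invented for illustration.

def least_slack_first(loads, now):
    """Pick the background load with the least slack at time `now`."""
    def slack(load):
        return (load["deadline"] - now) - load["remaining"]
    return min(loads, key=slack)

loads = [
    {"name": "dehumidifier", "deadline": 240, "remaining": 60},  # slack 180
    {"name": "fridge",       "deadline": 30,  "remaining": 20},  # slack 10
    {"name": "a/c",          "deadline": 120, "remaining": 45},  # slack 75
]
print(least_slack_first(loads, now=0)["name"])  # fridge
```

A full scheduler would recompute slacks as time advances and defer low-slack-pressure loads out of peak periods, which is how the published system flattens demand.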
An Intermittent Energy Internet Architecture
Barath Raghavan, David Irwin, Jeannie Albrecht, Justin Ma, and Adam Streed
Proceedings of the Third International Conference on Future Energy Systems (e-Energy), May 2012
We examine how to re-design the Internet for an energy-constrained future powered by diffuse, intermittent, and expensive power sources. We consider the types of constraints this might place upon the Internet architecture and the manner in which important network components can function in this new environment. We then attempt to chart a path forward for future research.
A Porous Aperiodic Decagon Tile
Duane A. Bailey and Feng Zhu ‘02
The 10th Gathering for Gardner, March 2012
We consider the development of a single universal aperiodic prototile that tiles the plane without overlap. Here we outline an approach to constructing a porous, positive-measure, non-overlapping prototile from the universal aperiodic decagon tile that covers the plane. Many characteristics of the decagon cover are inherited by this tiling.
Feature Selection via Probabilistic Outputs
Andrea Danyluk and Nicholas Arnosti ‘11
ICML 2012: Proceedings of the Twenty-Ninth International Conference on Machine Learning (2012)
This paper investigates two feature-scoring criteria that make use of estimated class probabilities: one method proposed by Shen et al., and a complementary approach proposed here. We develop a theoretical framework to analyze each criterion and show that both estimate the spread (across all values of a given feature) of the probability that an example belongs to the positive class. Based on our analysis, we predict when each scoring technique will be advantageous over the other and give empirical results validating those predictions.
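The "spread" idea can be sketched in a few lines. This is an illustrative toy, not the paper's criterion or its probability estimator: for a discrete feature, estimate the probability that an example is positive at each feature value, then score the feature by how much that probability varies across values.

```python
from collections import defaultdict

def feature_spread(examples, labels, feature_index):
    """Score a discrete feature by the spread, across its values, of the
    estimated probability that an example belongs to the positive class."""
    counts = defaultdict(lambda: [0, 0])  # feature value -> [positives, total]
    for x, y in zip(examples, labels):
        v = x[feature_index]
        counts[v][0] += y
        counts[v][1] += 1
    probs = [pos / total for pos, total in counts.values()]
    return max(probs) - min(probs)

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 1, 1]
print(feature_spread(X, y, 0))  # feature 0 determines the label: spread 1.0
print(feature_spread(X, y, 1))  # feature 1 is uninformative: spread 0.0
```

A feature whose class probability is flat across its values carries no information about the label, while a large spread indicates a discriminative feature, which is the intuition both criteria formalize.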
Types for Precise Thread Interference
Stephen Freund, Jaeheon Yi, Tim Disney, and Cormac Flanagan
Workshop on Foundations of Object-Oriented Languages, 12 pages, 2011
The potential for unexpected interference between threads makes multithreaded programming notoriously difficult. Programmers use a variety of synchronization idioms such as locks and barriers to restrict where interference may actually occur. Unfortunately, the resulting actual interference points are typically never documented and must be manually reconstructed as the first step in any subsequent programming task (code review, refactoring, etc.). This paper proposes explicitly documenting actual interference points in the program source code, and it presents a type and effect system for verifying the correctness of these interference specifications.
Experimental results on a variety of Java benchmarks show that this approach provides a significant improvement over prior systems based on method-level atomicity specifications. In particular, it reduces the number of interference points one must consider from several hundred points per thousand lines of code to roughly 13 per thousand lines of code. Explicit interference points also serve to highlight all known concurrency defects in these benchmarks.
Cooperative Concurrency for a Multicore World
Stephen Freund, Jaeheon Yi, Caitlin Sadowski, and Cormac Flanagan
Proceedings of the International Conference on Runtime Verification, 3 pages, 2011
Developing reliable multithreaded software is notoriously difficult, due to the potential for unexpected interference between concurrent threads. Much prior work has addressed this problem, mostly focused on verifying the correctness properties of race-freedom and atomicity. We propose an alternative approach whereby all thread interference must be specified with explicit yield annotations.
Rigid Components in Fixed-Lattice and Cone Frameworks
Matthew Berardi, Brent Heeringa, Justin Malestein, and Louis Theran
Proceedings of the 23rd Annual Canadian Conference on Computational Geometry (CCCG), 2011
We study the fundamental algorithmic rigidity problems for generic frameworks periodic with respect to a fixed lattice or a finite-order rotation in the plane. For fixed-lattice frameworks we give an O(n²) algorithm for deciding generic rigidity and an O(n³) algorithm for computing rigid components. If the order of rotation is part of the input, we give an O(n⁴) algorithm for deciding rigidity; in the case where the rotation’s order is 3, a more specialized algorithm solves all the fundamental algorithmic rigidity problems in O(n²) time.
Batch Active Learning via Coordinated Matching
Javad Azimi, Alan Fern, Xiaoli Z. Fern, Glencora Borradaile, and Brent Heeringa
Proceedings of the 29th International Conference on Machine Learning (ICML), 2012
We propose a novel batch active learning method that leverages the availability of high-quality and efficient sequential active-learning policies by approximating their behavior when applied for k steps. Specifically, our algorithm uses Monte-Carlo simulation to estimate the distribution of unlabeled examples selected by a sequential policy over k steps. The algorithm then selects k examples that best match this distribution, leading to a combinatorial optimization problem that we term “bounded coordinated matching.” While we show this problem is NP-hard, we give an efficient greedy solution, which inherits approximation bounds from supermodular minimization theory. Experiments on eight benchmark datasets show that the proposed approach is highly effective.
Scalable Ambient Obscurance
Morgan McGuire, M. Mara ’12, and D. Luebke
ACM SIGGRAPH/Eurographics High Performance Graphics, June 2012
This paper presents a set of architecture-aware performance and integration improvements for a recent screen-space ambient obscurance algorithm. These improvements collectively produce a 7x performance increase at 2560×1600, generalize the algorithm to both forward and deferred renderers, and eliminate the radius- and scene-dependence of the previous algorithm to provide a hard real-time guarantee of fixed execution time. The optimizations build on three strategies: pre-filter the depth buffer to maximize memory hierarchy efficiency; reduce total bandwidth by carefully reconstructing positions and normals at high precision from a depth buffer; and exploit low-level intra- and inter-thread techniques for parallel, floating-point architectures.
A Reconstruction Algorithm for Plausible Motion Blur
Morgan McGuire, P. Hennessy, M. Bukowski, and B. Osman
ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D), February 2012
This paper describes a novel filter for simulating motion blur phenomena in real time by applying ideas from offline stochastic reconstruction. The filter operates as a 2D post-process on a conventional framebuffer augmented with a screen-space velocity buffer. We demonstrate results on video game scenes rendered and reconstructed in real-time on NVIDIA GeForce 480 and Xbox 360 platforms, and show that the same filter can be applied to cinematic post-processing of offline-rendered images and real photographs. The technique is fast and robust enough that we deployed it in a production game engine used at Vicarious Visions.
The Alchemy Screen-Space Ambient Obscurance Algorithm
Morgan McGuire, B. Osman, M. Bukowski, and P. Hennessy
ACM SIGGRAPH/Eurographics High Performance Graphics, 8 pages, August 2011
Ambient obscurance (AO) produces perceptually important illumination effects such as darkened corners, cracks, and wrinkles; proximity darkening; and contact shadows. We present the AO algorithm from the Alchemy engine used at Vicarious Visions in commercial games. It is based on a new derivation of screen-space obscurance for robustness, and the insight that a falloff function can cancel terms in a visibility integral to favor efficient operations. Alchemy creates contact shadows that conform to surfaces, captures obscurance from geometry of varying scale, and provides four intuitive appearance parameters: world-space radius and bias, and aesthetic intensity and contrast. The algorithm estimates obscurance at a pixel from sample points read from depth and normal buffers. It processes dynamic scenes at HD 720p resolution in about 4.5 ms on Xbox 360 and 3 ms on NVIDIA GeForce 580.
Efficient Triangle and Quadrilateral Clipping in Shaders
The Journal of Graphics Tools, Taylor and Francis, 15(4), 2011
Clipping a triangle or a convex quadrilateral to a plane is a common operation in computer graphics. Under most rasterization APIs, this clipping is implemented by fixed-function units within the graphics pipeline. It is increasingly useful to perform clipping in programmable stages as well, for example to clip bounding volumes generated in the Geometry unit to the near plane, or to clip an area light source to the tangent plane of a surface in a Pixel unit.
While clipping a convex polygon is algorithmically trivial, doing so efficiently on vector architectures like GPUs can be tricky. This article presents an efficient implementation of Sutherland-Hodgman clipping for vector processors. It has high branch coherence, uses only register storage (i.e., it does not require a move-relative memory operation), leverages both data and instruction parallelism, and has a peak register count of only two 4-vectors (7 scalars).
I found it to be about five times faster than direct Sutherland-Hodgman and to yield a 45% increase in net throughput when applied in the algorithm from a previous publication on two different GPU architectures. The principles of optimization presented for this class of parallel algorithm extend to other algorithms and architectures.
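For readers unfamiliar with the underlying algorithm, here is a plain scalar sketch of Sutherland-Hodgman clipping against a single plane; it is written in Python for clarity and is not the vectorized GPU formulation the article presents. The polygon is walked edge by edge, keeping inside vertices and emitting intersection points where edges cross the plane:

```python
def clip_polygon(poly, normal, d):
    """Clip a convex polygon (list of 3D vertex tuples) against the plane
    dot(normal, p) + d >= 0, keeping the inside part (Sutherland-Hodgman)."""
    def dist(p):
        # Signed distance of point p from the plane (positive = inside).
        return sum(n * c for n, c in zip(normal, p)) + d

    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]           # previous vertex (wraps around)
        dp, dc = dist(prev), dist(cur)
        if dp >= 0 and dc >= 0:      # edge fully inside: keep current vertex
            out.append(cur)
        elif dp >= 0 or dc >= 0:     # edge crosses the plane: emit intersection
            t = dp / (dp - dc)
            out.append(tuple(a + t * (b - a) for a, b in zip(prev, cur)))
            if dc >= 0:              # entering the inside: also keep current
                out.append(cur)
    return out

tri = [(0, 0, 0), (2, 0, 0), (0, 2, 0)]
print(clip_polygon(tri, (-1, 0, 0), 1))  # keep the part with x <= 1
```

Each input edge contributes zero, one, or two output vertices, so a triangle clipped to one plane yields at most four vertices; the article's contribution is doing this bookkeeping with coherent branches and a handful of registers on a vector architecture.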