Peggy Chi

Staff Research Scientist, Google

I'm a Staff Research Scientist at Google Research, where I lead a team conducting research and launching products. I develop interactive systems that support users' creative activities, including video creation, image sharing, and programming. I received my Ph.D. in Computer Science from UC Berkeley and an M.S. from the MIT Media Lab. My research has received a Best Paper Award at ACM CHI, and I have been awarded a Google PhD Fellowship in Human-Computer Interaction, a Berkeley Fellowship for Graduate Study, and an MIT Media Lab Fellowship. I have published in top HCI venues and served on program committees, including CHI and UIST.

Outside Google, I'm a Visiting Associate Professor at National Taiwan University's Department of Computer Science and Information Engineering, where I teach a graduate-level course.

Creativity and Navigation Tools

TacNote: Tactile and Audio Note-Taking for Non-Visual Access

Wan-Chen Lee, Ching-Wen Hung, Chao-Hsien Ting, Peggy Chi, Bing-Yu Chen (National Taiwan University)

ACM Symposium on User Interface Software and Technology (UIST 2023)

TacNote is a system that enables blind and visually impaired (BVI) users to annotate, explore, and memorize critical information associated with everyday objects.

Slide Gestalt: Automatic Structure Extraction in Slide Decks for Non-Visual Access

Yi-Hao Peng, Peggy Chi, Anjuli Kannan, Meredith Ringel Morris, Irfan Essa (Google and CMU)

ACM Conference on Human Factors in Computing Systems (CHI 2023)

Slide Gestalt is an automatic approach that identifies the hierarchical structure in a slide deck.

Synthesis-Assisted Video Prototyping From a Document

Peggy Chi, Tao Dong, Christian Frueh, Brian Colonna, Vivek Kwatra, Irfan Essa (Google Research)

ACM Symposium on User Interface Software and Technology (UIST 2022)

Doc2Video is a video prototyping approach that converts a document into an interactive script with a preview of synthesized talking-head videos.

Automatic Instructional Video Creation from a Markdown-Formatted Tutorial

Peggy Chi, Nathan Frey, Katrina Panovich, and Irfan Essa (Google Research)

ACM Symposium on User Interface Software and Technology (UIST 2021)

HowToCut is an automatic approach that converts a Markdown-formatted tutorial into an instructional video by presenting the visual instructions with a synthesized voiceover.

HelpViz: Automatic Generation of Contextual Visual Mobile Tutorials from Text-Based Instructions

Mingyuan Zhong, Gang Li, Peggy Chi, and Yang Li (Google Research and University of Washington)

ACM Symposium on User Interface Software and Technology (UIST 2021)

HelpViz is a tool for generating contextual visual mobile tutorials from text-based instructions that are abundant on the web.

Automatic Non-Linear Video Editing Transfer

Nathan Frey, Peggy Chi, Weilong Yang, and Irfan Essa (Google Research)

AI for Content Creation (AICC) Workshop at CVPR 2021

We propose computer vision-based techniques that extract editing styles from a source video and apply the edits to matched footage for video creation.

Automatic Generation of Two-Level Hierarchical Tutorials from Instructional Makeup Videos

Anh Truong, Peggy Chi, David Salesin, Irfan Essa, Maneesh Agrawala (Google Research and Stanford University)

ACM Conference on Human Factors in Computing Systems (CHI 2021)

We present a multi-modal approach for automatically generating hierarchical tutorials from instructional makeup videos.

URL2Video: Automatic Video Creation From a Web Page

Peggy Chi, Zheng Sun, Katrina Panovich, Irfan Essa (Google Research)

ACM Symposium on User Interface Software and Technology (UIST 2020)

URL2Video captures quality materials and design styles from a web page, including fonts, colors, and layouts. Using constraint programming, URL2Video's design engine organizes the visual assets into a sequence of shots and renders them into a video with a user-specified aspect ratio and duration.

Interactive Visual Description of a Web Page for Smart Speakers

Peggy Chi and Irfan Essa (Google Research)

Conversational User Interface Workshop at ACM Conference on Human Factors in Computing Systems (CHI 2020)

Crowdsourcing Images for Global Diversity

Peggy Chi, Matthew Long, Akshay Gaur, Abhimanyu Deora, Anurag Batra, Daphne Luong (Google Research)

ACM Conference on Mobile Human-Computer Interaction (MobileHCI 2019)

DemoDraw: Authoring Illustrations of Human Movements by Iterative Physical Demonstration

Peggy Chi, Daniel Vogel, Mira Dontcheva, Wilmot Li, Björn Hartmann (UC Berkeley, U Waterloo, Adobe Research)

ACM Symposium on User Interface Software and Technology (UIST 2016)

DemoDraw is a multi-modal approach that generates motion illustrations as the user physically demonstrates the movements.

DemoWiz: Re-Performing Software Demonstrations for a Live Presentation

Peggy Chi, Bongshin Lee, and Steven Drucker (UC Berkeley & Microsoft Research)

ACM Conference on Human Factors in Computing Systems (CHI 2014)

DemoWiz is a system with a refined workflow that helps presenters capture software demonstrations, edit and rehearse them, and re-perform them for an engaging live presentation.

DemoCut: Generating Concise Instructional Videos for Physical Demonstrations

Peggy Chi, Joyce Liu, Jason Linder, Mira Dontcheva, Wilmot Li, Björn Hartmann (UC Berkeley & Adobe Research)

ACM Symposium on User Interface Software and Technology (UIST 2013)

DemoCut is a semi-automatic video editing system that improves the quality of amateur instructional videos for physical tasks.

Kinectograph: Body-Tracking Camera Control for Demonstration Videos

Derrick Cheng, Peggy Chi, Taeil Kwak, Björn Hartmann, and Paul Wright (UC Berkeley)

ACM Conference on Human Factors in Computing Systems (CHI 2013) Poster

Kinectograph is a recording device that automatically pans and tilts to follow specific body parts (e.g., a user's hands) in a video.

MixT: Automatic Generation of Step-by-Step Mixed Media Tutorials

Peggy Chi, Sally Ahn, Amanda Ren, Mira Dontcheva, Wilmot Li, and Björn Hartmann (UC Berkeley & Adobe Research)

ACM Symposium on User Interface Software and Technology (UIST 2012)

MixT is a system that automatically generates step-by-step mixed media tutorials from user demonstrations.

Raconteur: From Chat to Stories

Peggy Chi and Henry Lieberman (MIT Media Lab)

ACM Conference on Intelligent User Interfaces (IUI 2011)

ACM Conference on Human Factors in Computing Systems (CHI 2011)

Raconteur is a system for conversational storytelling that encourages people to make coherent points. It performs natural language processing in real time on a text chat between a storyteller and a viewer and recommends appropriate media items from a library.

Programming Tools

Doppio: Tracking UI Flows and Code Changes for App Development

Peggy Chi, Senpo Hu, and Yang Li (Google Research)

ACM Conference on Human Factors in Computing Systems (CHI 2018)

DemoScript: Enhancing Cross-Device Interaction Scripting with Interactive Illustrations

Peggy Chi, Yang Li, Björn Hartmann (UC Berkeley & Google Research)

ACM Conference on Human Factors in Computing Systems (CHI 2016)

Best Paper Award

Weave: Scripting Cross-Device Wearable Interaction

Peggy Chi and Yang Li (UC Berkeley & Google Research)

ACM Conference on Human Factors in Computing Systems (CHI 2015)

Ubiquitous Computing

Enabling Calorie-Aware Cooking in a Smart Kitchen

Peggy Chi, Jen-Hao Chen, Hao-Hua Chu, and Jin-Ling Lo (National Taiwan University)

International Conference on Persuasive Technology (Persuasive 2008)

Ubicomp Technologies for Play-Based Occupational Therapy

Jin-Ling Lo, Peggy Chi, Hao-Hua Chu, Hsin-Yen Wang, and Seng-Cho T. Chou (National Taiwan University)

IEEE Pervasive Computing Magazine (2009)

Playful Toothbrush: Ubicomp Technology for Teaching Tooth Brushing to Kindergarten Children

Yu-Chen Chang, Jin-Ling Lo, Chao-Ju Huang, Nan-Yi Hsu, Hao-Hua Chu, Hsin-Yen Wang, Peggy Chi, and Ya-Lin Hsieh (National Taiwan University)

ACM Conference on Human Factors in Computing Systems (CHI 2008)

Designing Smart Everyday Objects

Peggy Chi, Jen-Hao Chen, Shih-Yen Liu, Hao-Hua Chu (National Taiwan University)

HCI International 2007

Prototypes

Burn Your Memory Away: One-Time Use Video Capture and Storage Device to Encourage Memory Appreciation

Peggy Chi, Xiao Xiao, Keywon Chung, and Carnaven Chiu (MIT Media Lab)

ACM Conference on Human Factors in Computing Systems (CHI 2009) Alt.chi

Stress OutSourced: A Haptic Social Network via Crowdsourcing

Keywon Chung, Carnaven Chiu, Xiao Xiao, Peggy Chi (MIT Media Lab)

ACM Conference on Human Factors in Computing Systems (CHI 2009) Alt.chi

Goal-Oriented Interfaces for Consumer Electronics

Peggy Chi and Henry Lieberman (MIT Media Lab)

MIT LabCAST

Designing Interactive Narrative for Children

Angela Chang, Peggy Chi, Nick Montfort, Cynthia Breazeal, and Henry Lieberman (MIT Media Lab)

Electronic Literature Organization (2010)