Korea AI Summit

About the event

The Korea AI Summit 2022 aims to provide a platform for academic leaders from prominent universities and institutes to share and explore the latest developments and technological trends in AI research. This year's event is themed "AI for Good" and covers focused topics in AI such as Metaverse & AI, Efficient AI, NLP and Speech: Scale and Beyond, and Responsible AI, along with the ADA Workshop. The event will be held both online and offline, and 200 participants from universities, companies, and research institutes will attend the offline sessions.

We sincerely look forward to seeing you at Korea AI Summit 2022.

General Co-Chairs
Prof. Seong-Whan Lee (Korea University)
Dr. Lidong Zhou (Corporate Vice President of Microsoft and Managing Director of Microsoft Research Asia)

Organizing Committee

General Chairs

Prof. Seong-Whan Lee (Korea University)
Dr. Lidong Zhou (Corporate Vice President of Microsoft and Managing Director of Microsoft Research Asia)

Program Chairs

Jinwoo Shin (KAIST)
Alice Oh (KAIST)
Miran Lee (Microsoft Research)
Seungyong Lee (POSTECH)
Meeyoung Cha (KAIST)
Seung-won Hwang (SNU)
Seungryong Kim (Korea University)
PROGRAM SCHEDULE

14 Dec. 2022, Wednesday
09:00 - 10:00 Registration
10:00 - 11:00 Panel Discussion
LIVE STREAMING
Moderator: Alice Oh (KAIST)
Panelists: Finale Doshi-Velez (Harvard University), Edward Choi (KAIST), Yoo-Geun Ham (Chonnam National University)
11:00 - 11:45 Keynote 1
LIVE STREAMING
"Innovating for the future of humanity"
Dr. Lidong Zhou (Corporate Vice President of Microsoft and Managing Director of Microsoft Research Asia)
11:45 - 12:00 Coffee Break
12:00 - 12:30 Opening Ceremony
LIVE STREAMING
Opening remarks: Prof. Seong-Whan Lee (Korea University)
Welcoming address: Yul Uhm (Director General, MSIT)
Congratulatory remarks: Dr. Sung Bae Jun (President, IITP)
Commemorative Photoshoot
12:30 - 14:00 Lunch
14:00 - 15:30 Parallel Sessions
Track A (G/B): Efficient AI (LIVE STREAMING)
Joseph E. Gonzalez (UC Berkeley)
Torsten Hoefler (ETH Zürich)
Track B (Studio 1): NLP & Speech: Scale and Beyond (LIVE STREAMING)
David Reitter (Google Research)
Chanwoo Kim (Samsung Research)
Track C (Studio 2&3): Responsible AI (LIVE STREAMING)
Krishna Gummadi (MPI-SWS)
Virgilio Almeida (UFMG)
Jonathan Stray (Berkeley CHAI)
Track D (Studio 4): Visual AI (LIVE STREAMING)
Angjoo Kanazawa (UC Berkeley)
Junyong Noh (KAIST)
Taehyun Rhee (Victoria University of Wellington)
15:30 - 16:00 Coffee Break
16:00 - 17:30 Parallel Sessions
Track A (G/B): Efficient AI (LIVE STREAMING)
Mingoo Seok (Columbia University)
Joo-Young Kim (KAIST)
Track B (Studio 1): NLP & Speech: Scale and Beyond (LIVE STREAMING)
Shane Moon (Meta AI)
Xingdi (Eric) Yuan (MSR - Montréal)
Track C (Studio 2&3): Responsible AI (LIVE STREAMING)
Asia Biega (MPI-SP)
Diego Sáez-Trumper (Wikimedia)
Yoon Sik Cho (Chung-Ang University)
Track D (Studio 4): Visual AI (LIVE STREAMING)
Jiaya Jia (The Chinese University of Hong Kong)
Bohyung Han (SNU)
Hyun Soo Park (University of Minnesota)
18:00 Banquet
This event is in-person by invitation only, but anyone wishing to watch the sessions can join via the official YouTube channels.
15 Dec. 2022, Thursday
09:00 - 09:30 Registration
09:30 - 10:15 Keynote 2
LIVE STREAMING
"FarmVibes: Democratizing Digital Tools for Sustainable Agriculture"
Dr. Ranveer Chandra (Managing Director for Research for Industry, CTO of Agri-Food at Microsoft, and Head of Networking Research at Microsoft Research Redmond)
10:15 - 12:00 Technology Showcase (G/B) / AI Innovation Hub (Studio 2&3)
Technology Showcase: Collaborative Research Projects with Microsoft Research Asia, supported by the MSIT, Korea, under the High-Potential Individuals Global Training Program supervised by the IITP (LIVE STREAMING)
AI Innovation Hub: 12-research-unit report & the 4th Steering Committee meeting
12:00 - 13:30 Lunch
13:30 - 15:00 Parallel Sessions
Track A (G/B): Efficient AI (LIVE STREAMING)
Dimitris Papailiopoulos (University of Wisconsin-Madison)
Mostafa Dehghani (Google Brain)
Track B (Studio 1): NLP & Speech: Scale and Beyond (LIVE STREAMING)
Colin Raffel (Hugging Face/UNC at Chapel Hill)
Nan Duan (MSR - Beijing)
Track C (Studio 2&3): ADA Workshop (LIVE STREAMING)
Miran Lee (MSR)
Donghee Yvette Wohn (NJIT)
Woo-Sung Jung (POSTECH)
Asia Biega (MPI-SP)
Sungkyu Shaun Park (Kangwon Nat'l University)
Track D (Studio 4): Visual AI (LIVE STREAMING)
Kwang Moo Yi (University of British Columbia)
Seung-Hwan Baek (POSTECH)
Yasutaka Furukawa (Simon Fraser University)
15:00 - 15:30 Coffee Break
15:30 - 16:15 Keynote 3
LIVE STREAMING
"Needed planning for AI and the Information Revolution"
ACM A.M. Turing Award Laureate 1986
Prof. John E. Hopcroft (Cornell University)
16:15 - 16:30 Closing remarks
This event is in-person by invitation only, but anyone wishing to watch the sessions can join via the official YouTube channels.

Keynote Speech

Keynote 1

Dr. Lidong Zhou

Corporate Vice President of Microsoft and Managing Director of Microsoft Research Asia
Profile
Dr. Lidong Zhou is Corporate Vice President of Microsoft and Managing Director of Microsoft Research Asia, responsible for the lab’s overall research and development activities, as well as collaborations with academic and industrial partners in the Asia Pacific region.
Dr. Zhou joined Microsoft in 2002 and has worked at Microsoft Research’s Silicon Valley lab as a researcher, at the Redmond lab as a principal researcher and Research Manager of the Systems Research Group, and at the Asia lab as Assistant Managing Director. In 2021, he was appointed as the Managing Director of Microsoft Research Asia.
Besides his management role, Dr. Zhou is a renowned computer scientist specializing in computer systems research. Throughout his career, he has been continuously advancing the state of the art in scalable, reliable, and trustworthy distributed systems. As a key technical lead for Microsoft in the design and development of large-scale distributed systems, Dr. Zhou has initiated and successfully led a series of important distributed system projects that support a wide range of Microsoft products and services, from search engines and big data infrastructure to cloud systems and AI infrastructure.
Dr. Zhou is both an ACM Fellow and an IEEE Fellow. He serves on the editorial boards of ACM Transactions on Computer Systems, ACM Transactions on Storage, and IEEE Transactions on Computers. He chairs the ACM Software System Award Committee and serves on the steering committee of the biennial ACM Symposium on Operating Systems Principles (SOSP).
Dr. Zhou received his Ph.D. and M.S. in Computer Science from Cornell University and a B.S. in Computer Science from Fudan University.
Title
Innovating for the future of humanity
Abstract
We are on the verge of a major computing paradigm shift, powered by advances in computer science, especially in artificial intelligence. Our innovations today are going to re-shape our lives and our society for the coming decades. It is of paramount importance that we do so responsibly, with the future of humanity in mind. In this talk, we will outline our vision of the future, highlight the work we are doing at Microsoft Research Asia to demonstrate how technology can empower and enrich humanity, and call for interdisciplinary innovations that connect technology and humanity.
Keynote 2

Dr. Ranveer Chandra

Managing Director for Research for Industry, CTO of Agri-Food at Microsoft, and Head of Networking Research at Microsoft Research Redmond
Profile
Ranveer Chandra is the Managing Director for Research for Industry, and the CTO of Agri-Food at Microsoft. He also leads the Networking Research Group at Microsoft Research, Redmond. Previously, Ranveer was the Chief Scientist of Microsoft Azure Global. His research has shipped as part of multiple Microsoft products, including VirtualWiFi in Windows 7 onwards, low power Wi-Fi in Windows 8, Energy Profiler in Visual Studio, Software Defined Batteries in Windows 10, and the Wireless Controller Protocol in XBOX One. His research also led to a new product, called Azure FarmBeats. Ranveer is active in the networking and systems research community, and has served as the Program Committee Chair of IEEE DySPAN 2012, and ACM MobiCom 2013.
Ranveer started the FarmBeats project at Microsoft in 2015. He also leads the battery research project and the white space networking project at Microsoft Research. He was invited to the USDA to present his work on FarmBeats; this work was featured by Bill Gates in GatesNotes and was selected by Satya Nadella as one of 10 projects that inspired him in 2017. Ranveer has also been invited to the FCC to present his work on TV white spaces, and spectrum regulators from India, China, Brazil, Singapore, and the US (including the FCC chairman) have visited the Microsoft campus to see his deployment of the world's first urban white space network. As part of his doctoral dissertation, Ranveer developed VirtualWiFi. The software has over a million downloads and is among the top 5 most downloaded software releases from Microsoft Research. It has shipped as a feature in Windows since 2009.
Ranveer has published more than 100 papers and holds over 150 patents granted by the USPTO. His research has been cited in the popular press, such as the Economist, MIT Technology Review, BBC, Scientific American, the New York Times, and the WSJ. He is a Fellow of the IEEE and has won several awards, including best paper awards at ACM CoNEXT 2008, ACM SIGCOMM 2009, IEEE RTSS 2014, USENIX ATC 2015, Runtime Verification 2016 (RV'16), ACM COMPASS 2019, and ACM MobiCom 2019, the Microsoft Research Graduate Fellowship, the Microsoft Gold Star Award, MIT Technology Review's Top Innovators Under 35 (TR35, 2010), and Fellow in Communications, World Technology Network (2012). He was recently recognized by Newsweek magazine as one of America's 50 most Disruptive Innovators (2021). Ranveer has an undergraduate degree from IIT Kharagpur, India, and a PhD from Cornell University.
Title
FarmVibes: Democratizing Digital Tools for Sustainable Agriculture
Abstract
Agriculture is one of the biggest contributors to climate change. Agriculture and land use degradation, including deforestation, account for about a quarter of global GHG emissions, and agriculture consumes about 70% of the world's fresh water resources. Agriculture is also among the sectors most impacted by climate change. Farmers depend on predictable weather for their farm management practices, and unexpected weather events, e.g., high heat or floods, leave them unprepared. Agriculture could also be a potential solution to the climate problem: if farmers use the right agricultural practices, farming can help remove carbon from the atmosphere. However, making progress on any of the above challenges is difficult due to the lack of data from farms, and existing approaches for estimating emissions or sequestered carbon are very expensive. Through this project, our goal is to build affordable digital technologies to help farmers (1) estimate the amount of emissions on their farm, (2) adapt to climate change by predicting weather variations, and (3) determine the right management practices that can be profitable and also help sequester carbon.
Keynote 3

ACM A.M. Turing Award Laureate

Prof. John E. Hopcroft

IBM Professor of Engineering and Applied Mathematics in Computer Science at Cornell University
Profile
ACM A.M. Turing Award Laureate John E. Hopcroft is the IBM Professor of Engineering and Applied Mathematics in Computer Science at Cornell University. From January 1994 until June 2001, he was the Joseph Silbert Dean of Engineering. After receiving both his M.S. (1962) and Ph.D. (1964) in electrical engineering from Stanford University, he spent three years on the faculty of Princeton University. He joined the Cornell faculty in 1967, was named professor in 1972 and the Joseph C. Ford Professor of Computer Science in 1985. He served as chairman of the Department of Computer Science from 1987 to 1992 and was the associate dean for college affairs in 1993. An undergraduate alumnus of Seattle University, Hopcroft was honored with a Doctor of Humanities Degree, Honoris Causa, in 1990.
Hopcroft’s research centers on theoretical aspects of computing, especially analysis of algorithms, automata theory, and graph algorithms. He has coauthored four books on formal languages and algorithms with Jeffrey D. Ullman and Alfred V. Aho. His most recent work is on the study of information capture and access.
He was honored with the A.M. Turing Award in 1986. He is a member of the National Academy of Sciences (NAS) and the National Academy of Engineering (NAE), a foreign member of the Chinese Academy of Sciences, and a fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science (AAAS), the Institute of Electrical and Electronics Engineers (IEEE), and the Association for Computing Machinery (ACM). In 1992, he was appointed by President Bush to the National Science Board (NSB), which oversees the National Science Foundation (NSF), and served through May 1998. From 1995 to 1998, Hopcroft served on the National Research Council's Commission on Physical Sciences, Mathematics, and Applications.
In addition to these appointments, Hopcroft serves as a member of the SIAM financial management committee, IIIT New Delhi advisory board, Microsoft’s technical advisory board for research Asia, and the Engineering Advisory Board, Seattle University.
Title
Needed planning for AI and the Information Revolution
Abstract
Civilization has undergone an agricultural revolution, an industrial revolution, and now is undergoing an information revolution.
The information revolution will bring many changes to our lives. Jobs will shift from manufacturing to information processing. Corporations will introduce new technologies and make major efforts to operate competitively in a new environment. Research will be applied to many problems, resulting in new and important directions. The most important activity for success will be creating high-quality talent for the information age.
In this talk, I will discuss the efforts of China in creating the talent necessary to be a leading nation in the information age.

SESSIONS INFORMATION

  • Track A: Efficient AI
  • Track B: NLP & Speech: Scale and Beyond
  • Track C: Responsible AI
  • Track D: Visual AI
  • Panel Discussion
  • ADA Workshop

Track A

Efficient AI

Theme

  • Network pruning and quantization
  • Efficient hardware implementations and compiler optimizations
  • Neuromorphic computing and bio-inspired emerging applications
  • Efficient neural architectures, e.g., transformers

Description

Today's world needs orders-of-magnitude more efficient AI solutions to address environmental and energy crises. The challenge is compounded by growing data volumes and model sizes arriving just as Moore's Law and Dennard scaling end. How can we resolve the issue, e.g., via the algorithmic efficiency of deep learning, efficient hardware implementations, or collaborative computing and learning on edge devices?
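
To make the pruning-and-quantization theme above concrete, here is a minimal, hypothetical sketch of global magnitude pruning in PyTorch. The toy model and the 90% sparsity target are illustrative assumptions, not anything specific to this track's talks.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for a real network (illustrative only).
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Gather every Linear weight so pruning is applied globally across layers.
params_to_prune = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]

# Zero out the 90% of weights with the smallest absolute magnitude.
prune.global_unstructured(params_to_prune, pruning_method=prune.L1Unstructured, amount=0.9)

# Fold the pruning masks into the weight tensors permanently.
for module, name in params_to_prune:
    prune.remove(module, name)

zeros = sum(int((m.weight == 0).sum()) for m, _ in params_to_prune)
total = sum(m.weight.numel() for m, _ in params_to_prune)
print(f"global sparsity: {zeros / total:.1%}")  # roughly 90%
```

In practice the pruned model is fine-tuned afterwards, and unstructured sparsity only translates into speedups on hardware or kernels that exploit it, which is precisely the algorithm/hardware co-design question this track raises.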

Program (Track Chair: Jinwoo Shin, KAIST)

Date & Time | Name and Affiliation | Title
Dec-14 (Wed.) 2:00pm - 2:30pm | Joseph E. Gonzalez, Professor (UC Berkeley) | Models and Systems for Efficient Training and Inference
Dec-14 (Wed.) 2:30pm - 3:30pm | Torsten Hoefler, Professor (ETH Zürich) | Efficient AI: From supercomputers to smartphones
Dec-14 (Wed.) 4:00pm - 4:45pm | Mingoo Seok, Professor (Columbia University) | Energy-Efficient AI Hardware
Dec-14 (Wed.) 4:45pm - 5:30pm | Joo-Young Kim, Professor (KAIST) | A Multi-FPGA Appliance for Accelerating Inference of Hyperscale Transformer Models
Dec-15 (Thu.) 1:30pm - 2:00pm | Dimitris Papailiopoulos, Professor (University of Wisconsin-Madison) | Transformers as universal computers and prompts as their programs
Dec-15 (Thu.) 2:00pm - 3:00pm | Mostafa Dehghani, Research Scientist (Google Brain) | Efficiency, the Next Grand Challenge of Artificial Intelligence

Track B

NLP & Speech
Scale and Beyond

Theme

  • Large-scale and trustworthy language models
  • Semantics and grounding
  • Democratization of code and large-scale model development

Description

Large-scale AI models have advanced NLP and Speech, which has naturally motivated us to pursue larger-scale and richer-modality models, but also to look beyond: How can we make generation more trustworthy and controlled? Can we robustly ground models for deeper semantics, as required in games and multimodal conversation? What are the emerging NLP technologies for democratizing code and model intelligence?

Program (Track Chair: Seung-won Hwang, SNU)

Dec-14 (Wed.) 14:00 - 15:30 | Theme: Conversation: Dialogue and Speech
David Reitter, Senior Research Scientist (Google Research) | Trustworthy and controlled dialogue in systems driven by very large language models
Chanwoo Kim, Corporate Executive Vice President (Samsung Research) | Fusion of speech and language technologies to build more natural conversation systems

Dec-14 (Wed.) 16:00 - 17:30 | Theme: Grounding: Multimodality and Games
Shane Moon, Lead Research Scientist (Meta AI) | Towards Multimodal Conversational AI
Xingdi (Eric) Yuan, Senior Researcher (MSR - Montréal) | Towards building machines that can use language as a tool

Dec-15 (Thu.) 13:30 - 15:00 | Theme: Democratization: Open-source and Code Intelligence
Colin Raffel, Assistant Professor (Hugging Face/UNC at Chapel Hill) | Building Machine Learning Models like Open-Source Software
Nan Duan, Senior Principal Researcher (MSR - Beijing) | Code Intelligence: Models, Applications and Future

Track C

Responsible AI

Theme

  • AI ethics and bias
  • Algorithmic fairness
  • Human reasoning
  • Health and well-being
  • Chatbots

Description

Artificial Intelligence (AI) algorithms are increasingly deployed in everyday tasks. As a society, we must ensure that AI systems contribute to the well-being of the global population and uphold the values of various user groups and cultures by designing safe, interpretable, robust, and fair systems. We invite a diverse set of distinguished researchers from academia and industry to discuss pivotal cross-disciplinary topics for developing responsible AI algorithms.

Program (Track Chair: Meeyoung Cha, KAIST)

Session 01 (14:00 - 15:30) Room: Signiel 76F Studio 2&3
14:00 (10 min) | Opening | Meeyoung Cha
14:00 (15 min) | Welcome Remark | Kilnam Chon (Professor Emeritus, KAIST) | Welcome to the Responsible AI Track
14:15 (25 min) | Talk 1 | Krishna Gummadi (Director, MPI-SWS) | Foundations for Fair Social Computing
14:40 (25 min) | Talk 2 | Virgilio Almeida (Professor Emeritus, UFMG) | Social and political challenges for AI in a Global South country
15:05 (25 min) | Talk 3 | Jonathan Stray (Researcher, Berkeley Center for Human-Compatible AI) | Making Recommender Systems Healthy for People and Society

Session 02 (16:00 - 17:30) Room: Studio 2&3
16:00 (20 min) | Talk 5 | Asia Biega (Faculty, MPI-SP) | Designing AI Systems for Digital Well-Being
16:20 (20 min) | Talk 6 | Diego Sáez-Trumper (Researcher, Wikimedia) | Wikipedia and Community Centered Machine Learning
16:40 (20 min) | Talk 7 | Yoon Sik Cho (Professor, Chung-Ang University) | Fair Recommender Systems
17:00 (30 min) | Panel | Towards a healthy AI ecosystem with Responsible AI | Panelists: Kilnam Chon, Virgilio Almeida, Kyung Sin Park (Moderator: Steven Euijong Whang, KAIST)
17:30 | Closing | Meeyoung Cha

Track D

Visual AI

Theme

    Understanding, Synthesis, and Applications of Visual Information and Media:

  • Visual AI: Understanding of visual information augmenting a machine to be smarter
  • Graphical AI: Synthesis of visual media helping everybody become an artist
  • Metaverse & AI: Emerging applications of visual and graphical AI

Description

Visual information is key to how humans understand the world and what goes on in it. Visual media is key to how humans share and enjoy culture in society. For AI to be good for humans, it should be able to understand visual information at the level of human vision and to synthesize visual media with the creativity of a human artist. We share state-of-the-art research results on these aspects of visual AI, together with emerging applications.

Program (Track Chair: Seungyong Lee, POSTECH)

Dec-14 (Wed.) 14:00 - 15:30 | Theme: Metaverse & AI (Virtual Humans, Virtual Reality)
Angjoo Kanazawa, Professor (UC Berkeley) | Towards Capturing Reality: Scenes and 3D People
Junyong Noh, Professor (KAIST) | Learning-based Character and Facial Animation
Taehyun Rhee, Professor (Victoria University of Wellington) | Televerse: Teleport to the Augmented Real-World driven by 3i innovation (#immersive, #interactive, #intelligent)

Dec-14 (Wed.) 16:00 - 17:30 | Theme: Visual AI (Visual Understanding)
Jiaya Jia, Professor (The Chinese University of Hong Kong) | Challenge and Opportunity of 3D Perception
Bohyung Han, Professor (SNU) | Image Retrieval with Deep Learning
Hyun Soo Park, Professor (University of Minnesota) | Self-supervised Behavioral Imaging

Dec-15 (Thu.) 13:30 - 15:00 | Theme: Graphical AI (Visual Synthesis)
Kwang Moo Yi, Professor (University of British Columbia) | Neural field methods for 3D Vision
Seung-Hwan Baek, Professor (POSTECH) | Differentiable computational imaging with light waves
Yasutaka Furukawa, Professor (Simon Fraser University) | Teaching a Computer to be an Architect

Panel Discussion

Theme

  • Recent advances and challenges in AI and ML for healthcare and climate change

Description

Recent advances in artificial intelligence and machine learning have been applied to some of the most difficult problems our society is facing. We will look at two of those problems in this panel: healthcare and climate. We invite three experts who work at the intersection of AI/ML and healthcare/climate. Through this panel, we aim to understand the current state of the research, what challenges remain, and how the scientific community can work together to solve these most pressing problems.

ADA Workshop

Theme

  • Increasing diversity and inclusiveness at workplace
  • Best practices in conducting interdisciplinary research
  • Insights on industry versus academia jobs

Description

The Ada workshop, held in honor of Ada Lovelace (who is known to have written the world's first computer program), is designed to provide students in engineering fields with career and research advice. This event is open to everyone, and we particularly welcome female and minority students.

Program (Moderator: Meeyoung Cha, KAIST)

Session 01 (13:30)
13:30 - 13:35 | Opening | Meeyoung Cha & Diego Sáez-Trumper
13:35 - 13:52 | Talk 1 | Miran Lee (MSRA) | An Introduction to Ada workshops initiated by Microsoft Research Asia
13:52 - 14:09 | Talk 2 | Donghee Yvette Wohn (NJIT) | Navigating Academic Conferences
14:09 - 14:26 | Talk 3 | Woo-Sung Jung (POSTECH) | Social Roles of Science and Technology
14:26 - 14:43 | Talk 4 | Asia Biega (Faculty, MPI-SP) | Finding yourself in the modern research landscape
14:43 - 15:00 | Talk 5 | Sungkyu Shaun Park (KNU) | How to survive in the age of convergence research
15:00 - | Closing | Meeyoung Cha & Diego Sáez-Trumper

Technology showcase

Presentation List

  • Deep Learning for Structure-based Drug Design of A2A Adenosine Receptor
  • Presenter: Sanghee Yoon, Sun Choi (Ewha Womans University) Improvements in biophysical methods have further enhanced the appeal of reverse pharmacology along with structure-based drug design (SBDD), as the three-dimensional structures of proteins become more available, permitting the use of in silico tools to screen compounds for potential binding with the protein. Over the past few years, artificial intelligence (AI) algorithms, particularly machine learning (ML), deep learning (DL), and reinforcement learning (RL), have been used to improve the drug discovery process. G protein-coupled receptors (GPCRs) are located on the cell surface, where they recognize extracellular substances and transmit signals through cell membranes; their dynamics and functions are finely regulated depending on the type of bound ligand. Using the A2A adenosine receptor (A2AAR) as a model system, we directly generate compounds from the given targets and analyze their drug-like properties.
  • Investigating Gender Bias in Multilingual Neural Machine Translation Systems
  • Presenter: Minwoo Lee, Kyomin Jung (Seoul National University) Recent works have shown that multilingual neural machine translation (MNMT) models are effective at improving the translation performance of low-resource languages. This performance benefit has been mainly attributed to language-agnostic knowledge transfer during training. However, it is unclear whether transfer of gender bias, which bilingual NMT models have been shown to exhibit, also occurs. In this work, we find that, contrary to previous claims, gender bias transfer does not necessarily occur in multilingual models, and multilingual models are capable of achieving higher gender accuracy than bilingual models.
  • Language-Driven Image-to-Video Generation with Generative Neural Radiance Fields
  • Presenter: Sunghwan Hong, Seungryong Kim (Korea University) It has been a long-standing goal of Computer Vision, Multimedia, and Graphics to develop the ability to generate and manipulate videos. Recently, a new task called Text-Image-to-Video generation (TI2V) was introduced. Despite its impressive performance, existing work does not properly reason about the 3D structure of the scene and may suffer from entangled representations, preventing it from generating diverse videos with 3D awareness and from explicitly manipulating attributes of a particular object of interest, e.g., its appearance, size, or pose.
  • Question Answering for Supporting Multi-modal Clinical Decision Making
  • Presenter: Seongsu Bae, Yoonjae Choi (KAIST) There is a series of uni-modal EHR QA datasets, such as emrQA (Pampari et al., 2018), MIMICSQL (Wang et al., 2020), and EHRSQL (Lee et al., 2022). However, none can handle access across multi-modal data sources. To make a breakthrough, we aim to build a new challenging benchmark and develop a model for multi-modal EHR QA.
  • Towards Battery-free System for Complex Context Sensing
  • Presenter: Seungwoo Shim, Song Min Kim (KAIST) Remarkable advances in machine learning techniques have enabled many services that improve the quality of everyday life. For example, the rich information in visual data supports applications ranging from public safety to smart shopping and mobile healthcare. However, high-performance sensors such as LiDAR and high-resolution cameras are expensive, difficult to attach to small objects, and significantly limited in deployment due to privacy concerns. Instead, we propose battery-free sensor tags for understanding complex context. Low-quality data due to the limited energy budget can be compensated for with multiple sensors.
  • Efficient Container Networking for Edge Computing
  • Presenter: Yeonho Yoo, Chuck Yoo (Korea University) Edge devices have brought the Internet of Things (IoT) to various industrial fields and have become a new computing paradigm that tackles a set of technological challenges in industrial control, automation, and intelligence. As current edge applications demand real-time control to manage heterogeneous devices properly, it is necessary to offer an efficient computing environment that places computing resources near the edge to reduce network latency and data processing delays. Specifically, we focus on containers, a major building block in edge computing. We analyze the existing container networking stack in smart edge devices and then design and implement a streamlined container networking stack for edge computing called SCON. We observe that SCON provides both high network performance and low CPU usage.
  • Leveraging Large Language Models for Automatic and Traceable Debugging
  • Presenter: Sungmin Kang, Shin Yoo (KAIST) Debugging is an important developer activity, and as such numerous techniques have been proposed to automate the task. Nonetheless, developer surveys consistently indicate the need for explainable techniques, which have been underexplored. In this work, we propose that large language models (LLMs) are an appropriate solution to this challenge: using their reasoning capability, we can automatically generate debugging artifacts and ask the LLM to justify its suggestions in natural language. Our preliminary results indicate competitive performance with existing work and a capability of generating explanations.
  • Enhancing the Local Alignment in Chest X-ray Domain
  • Presenter: Gangwoo Kim, Jaewoo Kang (Korea University) To improve local alignment performance in the chest X-ray domain (i.e., phrase grounding, object detection), we propose a novel method to generate synthetic datasets for the phrase grounding task by leveraging a recent model for aligning image-text pairs, CheXzero [1]. Our method can automatically generate datasets for the phrase grounding task by using bounding box labels from an object detection dataset. Fine-tuned on them, a CLIP-based backbone model provides pseudo-labels on the unlabeled image-report pairs. We finally evaluate our models on a recent benchmark in the chest X-ray domain, MS-CXR [2].
  • Audio Generation using HuBERT and EnCodec
  • Presenter: Youngdo Ahn, Jong Won Shin (GIST) Generating speech that sounds natural is an interesting topic. Recently, audio generation has been studied to produce natural and continuous audio samples for movie production or the development of virtual reality. Recent audio generation models are constructed from an audio language model and tokenizers built on pre-trained models. In this presentation, we introduce our audio generation model, which uses HuBERT and EnCodec as tokenizers. To verify the ability of the audio generation model, we evaluated word error rates (WER) using an automatic speech recognition model.
  • Revisiting and Reformulating Machine Learning-based Drug Repositioning
  • Presenter: Jung-Hyun Won, Lee Hyeong Ki (Seoul National University) In this study, we aim to point out problems in the current task formulation of machine learning (ML)-based drug repositioning (DR) studies. We identified a spurious correlation between the data collection pattern of drug characteristics and the approved indications of drugs by performing an unsupervised clustering analysis. In addition, we found that a train-test split based on random sampling led to overestimation of the performance of existing DR models. We are now planning to reformulate DR problems as predicting investigational diseases and generating eligibility criteria of clinical trials for a drug-disease pair.
  • Accelerating Mixture of Experts Inference via DNN-Based Expert Activation Prediction
  • Presenter: Ranggi Hwang, Minsoo Rhu (KAIST) In recent years, mixture-of-experts (MoE) models have been adopted in various machine learning (ML) applications with remarkable performance, including natural language processing (NLP), image classification, and recognition. Although MoE effectively extends model capacity with a minimal increase in computation requirements, it causes compute under-utilization and huge memory overheads with increased latency for inference. To address these problems, we propose a DNN-based prediction network for expert activation, which can effectively decrease total latency while maximizing resource utilization.
  • Towards accurate and efficient recommender systems
  • Presenter: SeongKu Kang, Hwanjo Yu (POSTECH) Recent recommender systems tend to adopt increasingly complex and large models to better understand the complex nature of user-item interactions. Accordingly, the inference latency increases as well, which has become a major obstacle to deployment. We focus on knowledge distillation to generate a powerful but compact model. Despite its breakthrough in classification problems, knowledge distillation for recommendation models and ranking problems has not been studied well in the previous literature. Our current research is devoted to developing distillation methods tailored for recommender systems.​
  • Panoramic Vision Transformer for Saliency Detection in 360 Videos
  • Presenter: Heeseung Yun, Gunhee Kim (Seoul National University) 360° video saliency detection is one of the challenging benchmarks for 360° video understanding since non-negligible distortion and discontinuity occur, and capture-worthy viewpoint is ambiguous by nature. We present a new framework named Panoramic Vision Transformer (PAVER). We design the encoder using Vision Transformer with deformable convolution, which enables us not only to plug pretrained models from normal videos into our architecture without additional modules or finetuning but also to perform geometric approximation only once, unlike previous deep CNN-based approaches. Thanks to its powerful encoder, PAVER can learn the saliency from three simple relative relations among local patch features, outperforming state-of-the-art models for multiple benchmarks.
  • Avatron: 3D Face Avatar and Voice Generation From Text
  • Presenter: Se-Yun Um, Hong-Goo Kang (Yonsei University) We propose a novel 3D facial animation model that does not require explicit temporal synchronization between talking faces and speech. Due to time-domain differences between videos and speech, most approaches apply resampling to linguistic features from a pre-trained automatic speech recognition (ASR) model, resulting in unnatural facial movements. Instead, we efficiently utilize intermediate features from a text-to-speech (TTS) model and feed them into an avatar model to be converted into the vertices of a 3D face avatar. Since our proposed model does not require an ASR model, we are able to reduce the number of parameters and the computational complexity of the whole solution.
  • Collective Relevance Labeling for Passage Retrieval
  • Presenter: Jihyuk Kim, Seung-won Hwang (Seoul National University) Deep learning for Information Retrieval (IR) requires a large amount of high-quality query-document relevance labels, but such labels are inherently sparse. Label smoothing redistributes some observed probability mass over unobserved instances, often uniformly, uninformed of the true distribution. In contrast, we propose knowledge distillation for informed labeling, without incurring high computation overheads at evaluation time. Our contribution is designing a simple but efficient teacher model which utilizes collective knowledge to outperform the state of the art distilled from a more complex teacher model (see the distillation sketch after this list).
  • Meta-Learning for Single Image Deblurring with Core-Model​
  • Presenter: Hyunjin Son, Kyoung Mu Lee (Seoul National University) Previous end-to-end learning networks for deblurring are trained on large-scale external datasets, so they cannot exploit the internal information within a given test image. In this work, we present a new approach for single image deblurring based on meta-learning using both external and internal data. First, we train the network via meta-learning with a support set from a network pretrained on external data. Then, at test time, this meta-trained model allows fast adaptation to the target image with only a few parameter updates. Furthermore, we finetune the model and perform deblurring in patches. The sub-network, called the core-model, learns the relationships between patches at training time. To utilize this internal information to enhance deblurring performance, we can select important patches at test time.
  • GRIT-VLP: Grouped Mini-batch Sampling for ​Efficient Vision and Language Pre-training
  • Presenter: Jaeseok Byun, Taesup Moon (Seoul National University) Most currently existing vision and language pre-training (VLP) methods have mainly focused on how to extract and align vision and text features. In contrast to the mainstream VLP methods, we highlight that two routinely applied steps during pre-training have a crucial impact on the performance of the pre-trained model: in-batch hard negative sampling for image-text matching (ITM) and assigning a large masking probability for masked language modeling (MLM). After empirically showing the unexpected effectiveness of the above two steps, we systematically devise GRIT-VLP, which adaptively samples mini-batches for more effective mining of hard negative samples for ITM while maintaining the computational cost for pre-training. Our method consists of three components: 1) a GRouped mIni-baTch sampling (GRIT) strategy that collects similar examples in a mini-batch, 2) an ITC consistency loss for improving the mining ability, and 3) an enlarged masking probability for MLM. Consequently, we show that GRIT-VLP achieves new state-of-the-art performance on various downstream tasks with much less computational cost. Furthermore, we demonstrate that our model is essentially on par with ALBEF, the previous state of the art, with only one-third of the training epochs on the same training data.
  • FedX: Unsupervised Federated Learning with Cross Knowledge Distillation​
  • Presenter: Sungwon Park, Meeyoung Cha (KAIST) We present FedX, an unsupervised federated learning framework. Our model learns unbiased representation from decentralized and heterogeneous local data. It employs a two-sided knowledge distillation with contrastive learning as a core component, allowing the federated system to function without requiring clients to share any data features. Furthermore, its adaptable architecture can be used as an add-on module for existing unsupervised algorithms in federated settings. Experiments show that our model improves performance significantly (1.58~5.52pp) on five unsupervised algorithms.​
  • Advances in Polarimetric Appearance​ of 3D Geometry​
  • Presenter: Shinyoung Yi, Min H. Kim (KAIST) Light is an electromagnetic wave that carries polarization information. In computer graphics and vision, polarization has played an important role in reconstructing 3D geometry and material appearance from captured images, and investigating polarimetric appearance itself is also considered an important problem. In this context, our research group has proposed novel methods to deal with polarimetric appearance. Additionally, we introduce key challenges in polarimetric forward and inverse rendering problems, which our research group has been solving.
  • Unsupervised Learning Framework for Fairness-aware Classification
  • Presenter: Sungho Park, Hyeran Byun (Yonsei University) Previous studies on fairness have the limitation that they necessarily require additional sensitive-attribute labels. In this paper, we propose a classification method that ensures fairness with respect to all potential sensitive attributes. The core idea is to dynamically cluster data samples based on whether they are being correctly classified or not. By giving more weight to the group with misclassified samples, we encourage a classification model to be trained fairly with respect to all the unknown sensitive attributes on benchmark datasets.
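
Several of the abstracts above lean on knowledge distillation (e.g., the collective relevance labeling and FedX entries). As background, here is a minimal, generic sketch of a distillation loss in PyTorch: the classic temperature-softened KL term blended with hard-label cross-entropy. It is the textbook formulation, not the specific models described above, and all shapes and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL to the temperature-softened teacher."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)          # soft targets
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage with random logits (batch of 8, 10 classes).
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```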

AI INNOVATION HUB

Research Unit

Unit # | Research Subject | Research Led by (Affiliation)
1 | Neurotalk | Prof. Seong-Whan Lee (Korea University)
2 | HyperModal | Prof. Jinwoo Shin (KAIST)
3 | MetaverseVision | Prof. Minsoo Cho (POSTECH)
4 | HybridAI | Prof. Seon Joo Kim (Yonsei University)
5 | DeepFold | Prof. Eunok Paek (Hanyang University)
6 | AI4Discovery | Prof. Minho Lee (Kyungpook Nat'l University)
7 | Universal Learning Machine | Prof. Byoung-Tak Zhang (Seoul Nat'l University)
8 | Self-Evolving HW Intelligence | Prof. Dongbo Min (Ewha Womans University)
9 | Collaborative Intelligence | Prof. Choong-Seon Hong (Kyung Hee University)
10 | Trans-Medical Intelligence | Prof. Hyunjin Park (Sungkyunkwan University)
11 | Space Observation Intelligence | Prof. Jae-Young Shim (UNIST)
12 | Meta Energy Intelligence | Prof. Jin Sul Kim (Chonnam Nat'l University)

Event venue

SIGNIEL SEOUL

  • Address.

    LOTTE WORLD TOWER 76F-101F, 300 Olympic-ro, Songpa-gu, Seoul, Korea

  • Tel.

    02-3213-1000

  • Subway
  • Line 2, Line 8 Jamsil Station - Exit 1 and Exit
    - Walk: 3 minutes

  • By KAL Limousine Bus
  • Incheon Airport Terminal 1 → Terminal 2 → LOTTE HOTEL WORLD (3-minute walk from SIGNIEL SEOUL)

    ※ Bus No : 6705

    Terminal 1 Platform: 3B, 4A

    05:00 / 05:27 / 06:00 / 06:29 / 07:00 / 07:20 / 07:42 / 08:02 / 08:25 / 08:50 / 09:12 / 09:35 / 09:57 / 10:22 / 10:47 / 11:08 / 11:36 / 12:01 / 12:27 / 12:52 / 13:17 / 13:40 / 14:00 / 14:25 / 14:45 / 15:10 / 15:32 / 15:58 / 16:20 / 16:47 / 17:12 / 17:34 / 17:52 / 18:10 / 18:32 / 18:53 / 19:15 / 19:40 / 20:05 / 20:35 / 21:05 / 21:33 / 22:04 / 22:35 / 23:10

    Terminal 2 Platform: 17,18,19

    05:00 / 05:20 / 05:47 / 06:20 / 06:49 / 07:20 / 07:40 / 08:02 / 08:22 / 08:45 / 09:10 / 09:32 / 09:55 / 10:17 / 10:42 / 11:07 / 11:28 / 11:56 / 12:21 / 12:47 / 13:12 / 13:37 / 14:00 / 14:20 / 14:45 / 15:05 / 15:30 / 15:52 / 16:18 / 16:40 / 17:07 / 17:32 / 17:54 / 18:12 / 18:30 / 18:52 / 19:13 / 19:35 / 20:00 / 20:25 / 20:55 / 21:25 / 21:53 / 22:24 / 22:55 / 23:20

    LOTTE HOTEL WORLD (3-minute walk from SIGNIEL SEOUL) → Incheon Airport Terminal 2 → Incheon Airport Terminal 1

    ※ Bus No : 6705

    04:55 / 05:00 / 05:10 / 05:20 / 05:30 / 05:45 / 06:00 / 06:20 / 06:40 / 07:00 / 07:25 / 07:50 / 08:15 / 08:40 / 09:00 / 09:25 / 09:50 / 10:15

    ※ The 04:55 bus only runs as far as the terminal.

    LOTTE HOTEL WORLD (3-minute walk from SIGNIEL SEOUL) ↔ Gimpo Airport
    Gimpo Airport → LOTTE HOTEL WORLD (3-minute walk from SIGNIEL SEOUL)

    08:40 / 09:20 / 09:50 / 10:30 / 11:40 / 12:20 / 13:00 / 13:40 / 14:20 / 15:10 / 15:50 / 16:30 / 17:20 / 18:10 / 18:50 / 19:30 / 20:20 / 21:00 / 21:30 / 22:00 / 22:30 / 21:50 / 23:20

    LOTTE HOTEL WORLD (3-minute walk from SIGNIEL SEOUL) → Gimpo Airport

    05:05 / 05:35 / 06:05 / 06:35 / 07:10 / 07:45 / 08:20 / 09:00 / 09:40 / 10:20 / 10:50 / 11:30 / 12:10 / 12:40 / 13:10 / 13:50 / 14:30 / 15:10 / 16:00 / 16:50 / 16:30 / 18:20 / 19:10 / 19:30

    ※ Bus No : 6706

    ※ Information about KAL Limousine (operated by Korean Air): Bus No. 6705 runs between Incheon Airport and LOTTE HOTEL WORLD (Jamsil), a three-minute walk from SIGNIEL SEOUL. Service starts at 5:00 am, and the trip takes approximately 80-100 minutes depending on traffic. The fare is KRW 16,000 for adults and KRW 10,000 for children. Bus No. 6706, which runs between LOTTE HOTEL WORLD (Jamsil) and Gimpo Airport, takes about 60 minutes and costs KRW 7,500 for adults and KRW 4,500 for children. Boarding: Bus Stop No. 5 (Domestic Terminal) / Bus Stop No. 6 (International Terminal)

    * For more information, visit https://www.kallimousine.com/eng/schedule_en.asp

  • By Airport Limousine Bus
  • 6000, 6006

  • By AREX
  • Incheon Airport → Seoul Station

    05H : 20 23 33 43 52
    06H : 00 02 10 17 22 30 31 37 46 55
    07H : 00 07 12 20 24 31 43 54
    08H : 02 07 14 22 35 37 54
    09H : 10 12 20 28 35 41 46 54 57
    10H : 06 14 23 32 35 43 56 58
    11H : 12 22 30 36 51 56
    12H : 07 10 16 22 30 37 50 53
    13H : 01 06 14 24 32 35 41 52
    14H : 00 02 12 26 34 41 52
    15H : 00 05 13 26 29 38 45 54 59
    16H : 07 10 18 30 33 41 47 58
    17H : 06 15 20 31 40 51 59
    18H : 06 16 22 35 39 47 53
    19H : 06 15 18 26 36 46 54 58
    20H : 08 16 23 38 40 47
    21H : 00 02 13 23 32 45 49
    22H : 06 21 36 54
    23H : 08 25 42 57

    Gimpo Airport → Seoul Station
    05H : 43 52
    06H : 00 10 20 31 39 47 54
    07H : 01 09 15 20 24 33 38 45 51 57
    08H : 02 09 15 21 27 34 40 45 52
    09H : 00 08 14 23 31 43 52 57
    10H : 09 13 18 25 34 39 43 51
    11H : 03 08 12 20 30 35 42 49
    12H : 01 07 13 20 30 41 47 53
    13H : 01 07 14 25 32 38 43 51
    14H : 03 12 18 25 31 39 49 54
    15H : 05 11 18 31 36 42 51 58
    16H : 06 20 25 31 38 43 47 55
    17H : 05 12 17 24 35 46 50 57
    18H : 03 08 17 23 30 36 43 48 53 59
    19H : 11 18 25 30 35 46 50 55
    20H : 03 08 13 25 29 35 41 47 54
    21H : 00 11 17 20 25 32 39 44 50
    22H : 00 09 18 26 34 43 52 58
    23H : 13 31 45
    24H : 02 19 34
Speaker Profiles
Joseph E. Gonzalez
UC Berkeley
Bio
Joseph Gonzalez is a founding member of the UC Berkeley Sky Computing Lab and the RISE Lab where he studies the design of future cloud architectures and machine learning systems. He is also a member of the Berkeley AI Research Group where he works on new neural architectures for computer vision, natural language processing, and robotics. Gonzalez's research addresses problems in neural network design, efficient inference, computer vision, prediction serving, autonomous vehicles, graph analytics, and distributed systems. Building on his research, Gonzalez co-founded Aqueduct to commercialize a radically simpler production data science platform. Finally, Gonzalez helped develop the Data Science program at UC Berkeley and co-created Data100 which is now taught to over 1500 students a semester. Prior to joining Berkeley, Gonzalez co-founded Turi Inc (formerly GraphLab) based on his thesis work and created the GraphX project (now part of Apache Spark). Gonzalez’s innovative work has earned him significant recognition, including the Okawa Research Grant, NSF Early Career Award, and the NSF Expedition Award.
Title
Models and Systems for Efficient Training and Inference
Abstract
Over the past decade, my group has worked at the intersection of machine learning and systems to develop the models, algorithms, and frameworks needed to accelerate inference, scale training, and enable training on the edge. In this talk, I will highlight some of the key directions we explored and some of the lessons we learned. I will discuss our early work in efficient inference around dynamic models and architecture search and highlight design principles that are becoming increasingly relevant. I will then turn to one of the fundamental challenges of scaling training -- the memory wall. I will highlight our work on optimal strategies to manage memory usage both for GPUs and mobile processors as well as some of the techniques we have developed to leverage quantization during training. Finally, I will conclude by highlighting some ongoing efforts in my group to serve large language models.
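
As background for the memory-wall discussion above, activation (gradient) checkpointing is one standard memory-management technique: intermediate activations are discarded during the forward pass and recomputed during backward. The following is a minimal PyTorch sketch with assumed toy sizes, not the speaker's actual systems.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# A deep stack of blocks standing in for a large model (illustrative sizes).
blocks = nn.ModuleList(
    [nn.Sequential(nn.Linear(1024, 1024), nn.GELU()) for _ in range(24)]
)

def forward_with_checkpointing(x):
    # Each block trades recomputation (extra FLOPs) for activation memory.
    for block in blocks:
        x = checkpoint(block, x, use_reentrant=False)
    return x

x = torch.randn(32, 1024, requires_grad=True)
loss = forward_with_checkpointing(x).pow(2).mean()
loss.backward()  # activations are rematerialized block by block here
```
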
Torsten Hoefler
ETH Zürich
Bio
Torsten Hoefler directs the Scalable Parallel Computing Laboratory (SPCL) at D-INFK ETH Zurich. He received his PhD degree in 2007 at Indiana University and started his first professor appointment in 2011 at the University of Illinois at Urbana-Champaign. Torsten has served as the lead for performance modeling and analysis in the US NSF Blue Waters project at NCSA/UIUC. Since 2013, he has been a professor of computer science at ETH Zurich and has held visiting positions at Argonne National Laboratory, Sandia National Laboratories, and Microsoft Research Redmond (Station Q). Dr. Hoefler's research aims at understanding the performance of parallel computing systems, ranging from parallel computer architecture through parallel programming to parallel algorithms. He is also active in the application areas of weather and climate simulation as well as machine learning, with a focus on distributed deep learning. In those areas, he has coordinated tens of funded projects and an ERC Starting Grant on Data-Centric Parallel Programming. He has been chair of the Hot Interconnects conference and technical program chair of the Supercomputing and ACM PASC conferences. He is an associate editor of the IEEE Transactions on Parallel and Distributed Systems (TPDS) and the Parallel Computing Journal (PARCO) and a key member of the Message Passing Interface (MPI) Forum. He has published more than 200 papers in peer-reviewed international conferences and journals and co-authored the latest versions of the MPI specification. He has received best paper awards at the ACM/IEEE Supercomputing Conference in 2010, 2013, and 2014 (SC10, SC13, SC14), EuroMPI 2013, IPDPS'15, ACM HPDC'15 and HPDC'16, ACM OOPSLA'16, and other conferences. Torsten received ETH Zurich's Latsis Prize in 2015, the SIAM SIAG/Supercomputing Junior Scientist Prize in 2012, the IEEE TCSC Young Achievers in Scalable Computing Award in 2013, the Young Alumni Award 2014 from Indiana University, and the best student award 2005 of the Chemnitz University of Technology. Torsten was elected into the first steering committee of ACM's SIGHPC in 2013 and was re-elected in 2016. His Erdős number is two (via Amnon Barak) and he is an academic descendant of Hermann von Helmholtz.
Title
Efficient AI: From supercomputers to smartphones
Abstract
Billion-parameter artificial intelligence models have shown exceptional performance in a large variety of tasks ranging from natural language processing, computer vision, and image generation to mathematical reasoning and algorithm generation. Such models usually require large parallel computing systems, often called "AI supercomputers", for their initial training. We will outline several techniques, ranging from data ingestion and parallelization to accelerator optimization, that improve the efficiency of such training systems. Yet training large models is only a small fraction of practical artificial intelligence computations. Efficient inference is even more challenging: models with hundreds of billions of parameters are expensive to use. We continue by discussing model compression and optimization techniques, such as fine-grained sparsity and quantization, that reduce model size and significantly improve efficiency during inference. These techniques may eventually enable inference with powerful models on hand-held devices.
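As a concrete illustration of the compression techniques named here, the sketch below applies fine-grained (unstructured) magnitude pruning and post-training dynamic quantization using stock PyTorch utilities; it is a generic example, not the methods presented in the talk.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Fine-grained sparsity: zero out the 80% smallest-magnitude weights.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")  # bake the mask into the tensor

# Quantization: store Linear weights as int8, dequantize on the fly.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)
```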
Mingoo Seok
Columbia University
Bio
Mingoo Seok is an associate professor of Electrical Engineering at Columbia University. He received his B.S. from Seoul National University, South Korea, in 2005, and his M.S. and Ph.D. degrees from the University of Michigan in 2007 and 2011, respectively, all in electrical engineering. His research interests span various aspects of VLSI circuits and architecture, including ultra-low-power integrated systems, cognitive and machine-learning computing, adaptive techniques for process, voltage, and temperature variations and transistor wear-out, integrated power management circuits, event-driven controls, and hybrid continuous and discrete computing. He won the 2015 NSF CAREER Award and the 2019 Qualcomm Faculty Award. He has served as a technical program committee member for multiple conferences, including the IEEE International Solid-State Circuits Conference (ISSCC). In addition, he has been an associate editor for IEEE Transactions on Circuits and Systems Part I (TCAS-I) (2014-2016), IEEE Transactions on VLSI Systems (TVLSI) (2015-present), and IEEE Solid-State Circuits Letters (SSCL) (2017-present), and a guest associate editor for the IEEE Journal of Solid-State Circuits (JSSC) (2019).
Title
Energy-Efficient AI Hardware
Abstract
It is exciting to see state-of-the-art artificial intelligence (AI) models achieving human-level performance. As a hardware engineer, however, it is also concerning that those models incur a tremendous amount of computational complexity, which has put significant pressure on the hardware community to build more energy-efficient hardware. Traditionally, device scaling (Moore's law), Dennard scaling, dynamic voltage and frequency scaling (DVFS), and their combinations were the most effective tools for extracting energy efficiency, but their efficacy has largely waned compared to 10-20 years ago. Recently, hardware researchers have explored many alternatives to continue improving energy efficiency. In this seminar, we will introduce the four most prominent directions, namely non-von Neumann architectures, in-memory computing (IMC), analog mixed-signal (AMS) computing, and energy-efficient algorithms, using recent hardware prototypes as examples.
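To see why the traditional knobs were so effective, recall that dynamic power roughly follows P ≈ αCV²f and that achievable clock frequency roughly tracks supply voltage. The toy arithmetic below (constants are made up; only the scaling behavior matters) shows the near-cubic payoff of scaling voltage and frequency together.

```python
# Illustrative back-of-the-envelope DVFS arithmetic.
def dynamic_power(alpha, c, v, f):
    return alpha * c * v**2 * f  # activity * capacitance * V^2 * frequency

p_nominal = dynamic_power(0.2, 1e-9, 1.0, 2.0e9)  # 1.0 V, 2.0 GHz
p_scaled = dynamic_power(0.2, 1e-9, 0.8, 1.6e9)   # 0.8 V, 1.6 GHz
print(p_scaled / p_nominal)  # ~0.51: 20% slower, but about half the power
```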
Joo-Young Kim
KAIST
Bio
Joo-Young Kim received the B.S., M.S., and Ph.D. degrees in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 2005, 2007, and 2010, respectively. He is currently an Assistant Professor in the School of Electrical Engineering at KAIST and the Director of the AI Semiconductor Systems (AISS) research center. His research interests span various aspects of hardware design, including VLSI design, computer architecture, FPGAs, domain-specific accelerators, hardware/software co-design, and agile hardware development. Before joining KAIST, Joo-Young was a Senior Hardware Engineering Lead at Microsoft Azure, working on hardware acceleration for its hyper-scale big data analytics platform, Azure Data Lake. Before that, he was one of the initial members of the Catapult project at Microsoft Research, where he deployed a fabric of FPGAs in datacenters to accelerate critical cloud services such as machine learning, data storage, and networking.
Title
Energy-Efficient AI Hardware
Abstract
Deep learning technology has made significant progress on various cognitive tasks once believed impossible for computers to do as well as humans, including image classification, object detection, speech recognition, and natural language processing. However, the wide adoption of deep learning also highlights its shortcomings, such as limited generalizability and lack of interpretability. In addition, application-specific deep learning models require many manually annotated training samples and sophisticated learning schemes. With the performance of early models such as MLPs, CNNs, and RNNs saturating, one notable recent innovation in deep learning architecture is the transformer model introduced in 2017. It has two properties that point toward artificial general intelligence beyond conventional models. First, the performance of transformer models continues to grow with their model size and training data. Second, transformers can be pre-trained on large amounts of unlabeled data through unsupervised or self-supervised learning and can be fine-tuned quickly for each application. In this talk, I will present a multi-FPGA acceleration appliance named DFX for accelerating hyperscale transformer-based AI models. Optimized for OpenAI's GPT (Generative Pre-trained Transformer) models, it executes end-to-end inference with low latency and high throughput. DFX uses model parallelism and an optimized, model-and-hardware-aware dataflow for fast simultaneous workload execution among multiple devices. Its compute cores operate on custom instructions and support all GPT operations, including multi-head attention, layer normalization, token embedding, and the LM head. We implement the proposed hardware architecture on four Xilinx Alveo U280 FPGAs and utilize all channels of the high-bandwidth memory (HBM) and the maximum number of compute resources for high hardware efficiency. Finally, DFX achieves a 5.58× speedup and 3.99× higher energy efficiency over four NVIDIA V100 GPUs on the modern GPT-2 model, and it is 8.21× more cost-effective than the GPU appliance, suggesting that it can be a promising alternative in cloud datacenters.
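The inter-device model parallelism DFX relies on can be pictured with a generic layer-partitioning scheme: assign contiguous blocks of transformer layers to different devices and pass activations between them. The sketch below is an illustrative PyTorch analogue of that pattern, not DFX's FPGA implementation.

```python
import torch
import torch.nn as nn

n_layers, n_devices = 24, 4
layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True)
    for _ in range(n_layers))

# Give each device a contiguous block of layers (inter-layer parallelism).
devices = [f"cuda:{i}" for i in range(n_devices)]
per_device = n_layers // n_devices
for i, layer in enumerate(layers):
    layer.to(devices[i // per_device])

def forward(x):
    for i, layer in enumerate(layers):
        x = x.to(devices[i // per_device])  # hop to the owning device
        x = layer(x)
    return x
```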
Dimitris Papailiopoulos
University of Wisconsin-Madison
Bio
Dimitris Papailiopoulos is the Jay & Cynthia Ihlenfeld Associate Professor of Electrical and Computer Engineering at the University of Wisconsin-Madison, a faculty fellow of the Grainger Institute for Engineering, and a faculty affiliate at the Wisconsin Institute for Discovery. His research interests span machine learning, information theory, and distributed systems, with a current focus on efficient large-scale training algorithms. Before coming to Madison, Dimitris was a postdoctoral researcher at UC Berkeley and a member of the AMPLab. He earned his Ph.D. in ECE from UT Austin under the supervision of Alex Dimakis, and received his ECE Diploma and M.Sc. degree from the Technical University of Crete in Greece. Dimitris is a recipient of the NSF CAREER Award (2019), Sony Faculty Innovation Awards in three consecutive years (2018, 2019, and 2020), a joint IEEE ComSoc/ITSoc Best Paper Award (2020), an IEEE Signal Processing Society Young Author Best Paper Award (2015), the Vilas Associate Award (2021), the Emil Steiger Distinguished Teaching Award (2021), and the Benjamin Smith Reynolds Award for Excellence in Teaching (2019). In 2018, he co-founded MLSys, a new conference that targets research at the intersection of machine learning and systems.
Title
Transformers as universal computers and prompts as their programs
Abstract
Large language models (LLMs) like GPT-3 have recently been shown to have impressive few-shot, in-context learning performance across several tasks they were not explicitly trained on, e.g., symbolic reasoning, arithmetic, code interpretation, and others. This surprising behavior is unlocked by appropriately prompting the LLM, e.g., by providing a description of the task along with a few examples. These observations hint at the possibility of LLMs becoming general-purpose computers, programmable through their prompts. In this talk, we will explore this idea and show that, beyond being trainable, modern LLMs are also programmable. We show it is possible to build transformer networks so that prompts (i.e., inputs) become executable programs. We will discuss how we can program an LLM to perform simple algebraic computations, how LLMs can perform in-context learning by simulating learning algorithms at inference time, and how one can build a universal computer out of them. We will explore potential implications of these findings for the future of foundation models.
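The simplest reading of "prompts as programs" is few-shot prompting: the examples in the prompt act as the program, and the model executes it on the final input. A minimal sketch, where `complete()` is a hypothetical stand-in for any LLM API, not a real library call:

```python
# The prompt "programs" the task (two-operand addition) via examples.
PROMPT = """Compute the sum.
Input: 3 + 4   Output: 7
Input: 12 + 9  Output: 21
Input: 25 + 17 Output:"""

def complete(prompt: str) -> str:
    """Hypothetical placeholder for a call to a large language model."""
    raise NotImplementedError

# answer = complete(PROMPT)  # a capable LLM is expected to return "42"
```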
Mostafa Dehghani
Google Brain
Bio
Mostafa Dehghani is a researcher at Google Brain, where he works on scaling neural networks for language, vision, and robotics. Besides large-scale models, he works on improving the allocation of compute in neural networks, in particular Transformers, via adaptive and conditional computation. Mostafa obtained his Ph.D. from the University of Amsterdam, where he worked on training neural networks with imperfect supervision.
Title
Efficiency, the Next Grand Challenge of Artificial Intelligence
Abstract
Scaling machine learning models has become the safest bet for researchers and practitioners to make fast progress toward the ultimate goals of artificial intelligence. In many cases, computational power has become the bottleneck for scaling, as has the lack of sufficient training data. Given this, the "efficiency" of learning algorithms is becoming more important than ever. In this talk, different aspects of efficiency will be covered, from the definition of various efficiency indicators to misnomers in this subject. We talk about the cost-quality trade-off in various setups and problems and discuss ideas for navigating our learning algorithms toward the Pareto frontiers of this trade-off. We zoom into some of the most used algorithms and state-of-the-art architectures in deep learning, such as Transformers, and discuss ideas for relaxing the complexity of different components in order to make these models and algorithms more efficient. Finally, we talk about the smart allocation of computational budget per input, a basic ability of the human brain that no strong learning algorithm possesses, and argue how the benchmark lottery is holding us back from achieving true efficiency via more adaptive algorithms.
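One concrete form of per-input compute allocation mentioned here is early exiting: attach a prediction head to every block and stop as soon as a head is confident. A generic sketch, assuming a simple MLP backbone of my own choosing, not an architecture from the talk:

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, depth=6, width=256, classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(width, width), nn.ReLU())
            for _ in range(depth))
        self.heads = nn.ModuleList(
            nn.Linear(width, classes) for _ in range(depth))
        self.threshold = threshold

    def forward(self, x):  # x: (width,) -- a single example, for clarity
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            probs = head(x).softmax(dim=-1)
            if probs.max() >= self.threshold:
                return probs  # confident enough: skip the deeper blocks
        return probs  # fell through: the input used the full depth
```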
David Reitter
Google
Bio
David Reitter is a research scientist at Google Research. Dr. Reitter has explored "how the mind works", with a focus on big-data psycholinguistics and computational models of conversational interaction. His current research interests include factuality in generative language models. Reitter (PhD, Edinburgh, 2008) previously worked at Carnegie Mellon and taught as a tenured professor at Pennsylvania State University.
Title
Trustworthy and controlled dialogue in systems driven by very large language models
Abstract
Conversational human-computer interaction has seen much interest in research and industry. Large generative language models have become able to convincingly emulate the ease and fluency of human interaction, as demonstrated, for example, by Google's Language Model for Dialog Applications (LaMDA), a 137B-parameter model pre-trained on 1.56T words of public dialog data and web text (Thoppilan et al., 2022). The fluency and apparent intelligence of such models, however, come with safety risks, as we need to prevent hallucination. Substantial progress has been made on the way to practical utility for end-users. I will show how to reliably evaluate whether large pre-trained language models, such as T5 or GPT-3, produce output grounded in provided, trustworthy evidence ("Attributable to Identified Sources", Rashkin et al.). Answering the question of whether a statement is justified by given evidence seems easy, but it becomes intriguingly challenging, even for human annotators, once we consider phenomena that occur in real-world conversation. Human annotators can also delegate this process to large natural language inference models such as T5 and PaLM (Chowdhery et al., 2022), which can detect hallucinations (against a reference) either as end-to-end architectures or by transforming statements into questions and applying question-answering models (Q2, Honovich et al., 2021). I will present experiments examining whether the 540B-parameter model PaLM can enable zero-shot explanation generation for statement-to-evidence attribution, which may soon facilitate cheaper and quicker attribution verification and, in turn, safe and trustworthy natural language generation.
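A lightweight version of the statement-to-evidence check described here treats attribution as natural language inference: does the evidence entail the statement? The sketch below uses a generic off-the-shelf MNLI model from Hugging Face purely for illustration; it is not the evaluation setup of the talk.

```python
from transformers import pipeline

# An MNLI-style model labels a (premise, hypothesis) pair as
# ENTAILMENT / NEUTRAL / CONTRADICTION.
nli = pipeline("text-classification", model="roberta-large-mnli")

evidence = "The Eiffel Tower was completed in 1889."
statement = "The Eiffel Tower opened in the 19th century."

result = nli({"text": evidence, "text_pair": statement})
print(result)  # an ENTAILMENT label suggests the statement is grounded
```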
Chanwoo Kim
Samsung Research
Bio
Chanwoo Kim is a corporate executive vice president at Samsung Research, leading the language and voice team. He joined Samsung Research as a corporate vice president heading the Speech Processing Lab in February 2018. He has been leading research on end-to-end speech recognition, end-to-end text-to-speech (TTS), machine translation, Natural Language Understanding (NLU), Language Modeling (LM), Question Answering (QA), speech enhancement, key-word spotting, and more at Samsung Research, and most of these research outcomes have been commercialized in Samsung products. He was a software engineer on the Google speech team between February 2013 and February 2018, where he worked on acoustic modeling for speech recognition systems and on enhancing noise robustness using deep learning techniques. While at Google, he contributed to data augmentation and acoustic modeling for Google's speech recognition systems and to the commercialization of various Google AI speakers. He was a speech scientist at Microsoft from January 2011 to January 2013. Dr. Kim received his Ph.D. from the Language Technologies Institute of the School of Computer Science at Carnegie Mellon University in December 2010, and his B.S. and M.S. degrees in Electrical Engineering from Seoul National University in 1998 and 2001, respectively. His doctoral research focused on enhancing the robustness of automatic speech recognition systems in noisy environments. Between 2003 and 2005, Dr. Kim was a Senior Research Engineer at LG Electronics, where he worked primarily on embedded signal processing and protocol stacks for multimedia systems. Prior to his employment at LG, he worked at EdumediaTek and SK Teletech as an R&D engineer.
Title
Fusion of speech and language technologies to build more natural conversation systems
Abstract
In this talk, we discuss our recent research combining speech and language technologies. Traditionally, speech and language technologies have evolved as separate fields of study. However, since the introduction of sequence-to-sequence models and neural network blocks such as Transformers, the structures of speech and language processing have become more similar. In addition, there has been growing interest in building end-to-end models that combine speech and language models, rather than implementing pipelined systems consisting of separate speech and language components. In this talk, we introduce our recent work on Spoken Language Understanding (SLU), which combines speech recognition and Natural Language Understanding (NLU), and on speech translation, which integrates speech recognition with Machine Translation (MT). Finally, we discuss how to combine large-scale Language Models (LMs) with speech encoders to build more natural conversation systems.
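The pipelined-versus-end-to-end contrast can be stated in a few lines of schematic Python; `asr`, `nlu`, and `slu_model` are hypothetical placeholders for the respective models, not a real API.

```python
def asr(audio):        # hypothetical speech recognizer
    raise NotImplementedError

def nlu(text):         # hypothetical language-understanding model
    raise NotImplementedError

def slu_model(audio):  # hypothetical joint speech-and-language model
    raise NotImplementedError

def pipelined_slu(audio):
    text = asr(audio)        # recognition errors propagate downstream
    return nlu(text)

def end_to_end_slu(audio):
    return slu_model(audio)  # one network maps audio straight to meaning
```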
Shane Moon
Meta AI
Bio
Dr. Seungwhan Moon is a Lead Research Scientist at Meta Reality Labs, conducting research in multimodal learning for AR/VR applications. His recent projects include multimodal and knowledge-grounded conversational AI, multimodal sensor understanding, and more. He received his PhD from the School of Computer Science, Carnegie Mellon University, under Prof. Jaime Carbonell. His work has been published at leading NLP/ML conferences, including a best paper nomination at ACL'19. Before joining Facebook, he worked at Snapchat Research and Disney Research, among others. He is a recipient of the Samsung Scholarship, the LTI Research Fellowship, and the Olin Merit Full Scholarship.
Title
Towards Multimodal Conversational AI
Abstract
There is growing interest in the recent literature in building virtual assistants with multimodal capabilities. The unique requirements of this setting have inspired new research directions such as (a) understanding users' situated multimodal contexts (e.g., vision, sensor signals) as well as language-oriented conversational contexts, (b) grounding language in growing external and internal knowledge graphs, and (c) developing inference models under on-device constraints and with privacy-secure methods. In this talk, I will highlight several new challenges and state-of-the-art models for the aforementioned areas from recently published literature.
Eric Yuan
MSR
Bio
Eric Yuan is a senior researcher at Microsoft Research, Montreal. He joined Maluuba (acquired by Microsoft in 2017) in October 2015; prior to that, he received his master's degree from New York University in 2015 and his bachelor's degree from Beijing University of Technology in 2011. Since joining Maluuba and Microsoft, he has worked on a diverse set of systems that help machines read, write, and use language as a tool. Recently, he has been very interested in exploring ways of helping neural agents learn procedural knowledge and interact with the human world through language.
Title
Towards building machines that can use language as a tool
Abstract
Thanks to the development of advanced machine learning and natural language processing systems, we can consider leveraging language in ways that go beyond reading and writing. Language has many nice properties that make it a good representation for agents. Leveraging language, agents can access human knowledge in a more natural way, and they can cooperate with various types of social peers. In this talk, I will briefly describe a few examples to give you an idea of what I mean by leveraging language as a tool, especially in an interactive setting. Then, I will introduce a set of environments we designed to facilitate such research. Finally, I will discuss some exciting research directions we plan to pursue.
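The interactive setting described here follows the familiar agent-environment loop, with language on both channels: observations are text and actions are commands. A schematic sketch; `env` and `policy` are hypothetical placeholders rather than a specific environment from the talk.

```python
# Schematic interaction loop for a language-based agent.
def run_episode(env, policy, max_steps=50):
    observation = env.reset()          # e.g. "You are in a kitchen. ..."
    for _ in range(max_steps):
        command = policy(observation)  # e.g. "open the fridge"
        observation, reward, done = env.step(command)
        if done:
            return reward
    return 0.0
```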
Colin Raffel
Hugging Face / UNC Chapel Hill
Bio
Colin Raffel is an Assistant Professor at UNC Chapel Hill and a Faculty Researcher at Hugging Face. His work aims to make it easy to get computers to do new things. Consequently, he works in machine learning (enabling computers to learn from examples) and natural language processing (enabling computers to communicate in natural language).
Title
Building Machine Learning Models like Open-Source Software
Abstract
Pre-trained models have become a cornerstone of modern ML pipelines thanks to the fact that they can provide improved performance with less labeled data on downstream tasks. However, these models are typically created by a resource-rich research group that unilaterally decides how a given model should be built, trained, and released, after which point it is left as-is until a better pre-trained model comes along to completely supplant it. In contrast, open-source development has proven that it is possible for a distributed community of contributors to work together to iteratively build complex and widely-used software. This kind of large-scale distributed collaboration is made possible through a mature set of tools including version control, continuous integration, merging, and more. In this talk, I will present a vision for building machine learning models in the way that open-source software is developed. I will also discuss our preliminary work on model merging, cheaply-communicable patches, hyper-distributed training on volunteer computing, and a version control system for model parameters.
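Of the ingredients listed, model merging has the most compact illustration: in its simplest form it is just element-wise parameter averaging across checkpoints that share one architecture. A minimal sketch of that idea, not the speaker's specific merging method:

```python
import torch

def merge_state_dicts(state_dicts):
    """Average several state dicts with identical keys and shapes."""
    return {key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
            for key in state_dicts[0]}

# usage: model.load_state_dict(merge_state_dicts([sd_a, sd_b, sd_c]))
```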
Nan Duan
MSR
Bio
Dr. Nan Duan is a senior principal researcher and research manager at Microsoft Research Asia. He is an adjunct Ph.D. supervisor at the University of Science and Technology of China and an adjunct professor at Tianjin University. His research interests include natural language processing, code intelligence, multimodal intelligence, and machine reasoning.
Title
Code Intelligence: Models, Applications and Future
Abstract
As we all know, self-supervised pre-training has become the new paradigm in NLP and has been extended to other areas such as software engineering. Due to the surface-form similarity between programming languages and natural languages and the easy access to large-scale code corpora from the Web, foundation models for code have been developed over the past two years. In this talk, we will systematically introduce the latest code intelligence research from Microsoft Research Asia, covering foundation models for code and their applications in various scenarios such as code retrieval, completion, review, and refinement. We will summarize our observations and insights in this area and highlight possible future directions.
Kilnam Chon
Professor Emeritus, KAIST
Bio
Professor Chon contributed to the Internet's growth in Asia through his extensive work in advancing Internet initiatives, research, and development. He developed the first Internet in Asia, called SDN, in 1982, and his pioneering work inspired many others to promote the Internet's further growth in the region. Chon has worked on networking systems, including the Internet, since the early 1980s. He founded, and currently chairs, various regional Internet organizations such as the Asia Pacific Networking Group (APNG), the Asia Pacific Advanced Network (APAN), and the Asia Pacific Top Level Domain Name Forum (APTLD). Professor Chon received a PhD in computer science from the University of California, Los Angeles in 1974. He joined the Korea Institute of Electronics Technology in 1979 to work on computer system and Internet development, and moved to the Korea Advanced Institute of Science and Technology (KAIST), the nation's leading science and technology institution, in 1982 as a professor in the Computer Science Department. He has taught at KAIST, Keio University in Japan, and Tsinghua University in China.
https://www.internethalloffame.org/inductees/kilnam-chon
https://en.wikipedia.org/wiki/Kilnam_Chon
Title
Welcome to the Responsible AI Track
Abstract
This welcome remark gives an overview of "Responsible AI" to kick off the track. Like many other science and technology issues, responsible AI is not easy to implement; I give two examples: responsible nuclear science and technology, including nuclear power systems, and responsible automobile systems. Then, I present Korean cases: ethics classes in computer science and AI governance conferences. Next, I compare the histories of AI development and Internet development in Korea. Finally, I comment on the AI ecosystem; a healthy AI ecosystem may be the ultimate goal of responsible AI.
Krishna Gummadi
Director, MPI-SWS
Bio
Krishna P. Gummadi received the B.Tech. degree in computer science and engineering from IIT Madras, Chennai, India, in 2000, and the Ph.D. degree in computer science and engineering from the University of Washington, Seattle, WA, USA, in 2005. He is currently the Scientific Director and the Head of the Networked Systems Research Group at the Max Planck Institute for Software Systems (MPI-SWS), Saarbrücken, Germany. He also holds an honorary professorship at Saarland University, Saarbrücken. His research interests are in the measurement, analysis, design, and evaluation of complex Internet-scale systems. His current projects focus on understanding and building social computing systems. Specifically, they tackle the challenges associated with assessing the credibility of information shared by anonymous online crowds; understanding and controlling privacy risks for users sharing data on online forums; understanding, predicting, and influencing human behaviors on social media sites (e.g., viral information diffusion); and enhancing the fairness and transparency of machine (data-driven) decision-making in social computing systems. Dr. Gummadi received an ERC Advanced Grant in 2017 to investigate "Foundations for Fair Social Computing" (No. 789373).
https://people.mpi-sws.org/~gummadi/
Title
Foundations for Fair Social Computing
Abstract
Over the past two decades, the Internet has enabled (and continues to enable) numerous disruptive socio-technical systems like BitTorrent, Facebook, Amazon, and Bitcoin that have transformed the media landscape; personal, corporate, and political communications; trade; and monetary systems. The scale and societal impact of these Internet-scale systems raise fundamental questions about their transparency and their potential for unfairness and bias against some of their users. Understanding these threats requires us to define measures and develop methods to quantify unfairness and bias, often via black-box auditing of opaque systems. In this talk, I will discuss some of our attempts to measure bias and unfairness and to tackle the challenges of designing fair and unbiased socio-technical systems while maintaining their innovative potential.
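As a toy example of what "quantifying unfairness" can mean at its most basic, the sketch below measures the demographic parity gap of a black-box system's binary decisions across two user groups; real audits of opaque systems are, of course, far more involved.

```python
def demographic_parity_gap(decisions, groups):
    """decisions: 0/1 outcomes; groups: parallel list of group labels."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

# Group "a" gets positive decisions 2/3 of the time, group "b" 1/3.
print(demographic_parity_gap([1, 0, 1, 1, 0, 0],
                             ["a", "a", "a", "b", "b", "b"]))  # ~0.333
```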
Virgilio Almeida
Professor Emeritus, UFMG
Bio
Virgilio Almeida is an emeritus professor of Computer Science at the Federal University of Minas Gerais (UFMG). He is also a Faculty Associate at the Berkman Klein Center at Harvard University. Virgilio received his PhD in Computer Science from Vanderbilt University, a Master's degree in Computer Science from PUC-Rio, and a bachelor's degree in Electrical Engineering from UFMG. He has held visiting positions at several universities and research labs, such as Harvard University (School of Engineering and Applied Sciences), New York University, Boston University, the Santa Fe Institute, and HP Research Labs. Virgilio was the National Secretary for Information Technology Policies of the Brazilian government from 2011 to 2015, the chair of the Brazilian Internet Steering Committee (CGI.br) from 2011 to 2016, and the chair of NETmundial, the Global Multistakeholder Conference on the Future of Internet Governance, held in Sao Paulo in 2014. Virgilio is a member of the Brazilian Academy of Sciences (ABC) and the World Academy of Sciences (TWAS). His list of publications is available at:
https://scholar.google.com/citations?user=sPKpIPwAAAAJ&hl=en&oi=ao
Title
Social and political challenges for AI in the Global South
Abstract
In addition to classic problems such as privacy and personal data protection, AI regulation and governance must face new problems that need to be framed through a social and ethical lens, such as bias, justice, facial recognition, autonomous weapons, job destruction, and others. AI governance for the Global South should have goals and characteristics that differ from AI regulation in developed economies. In this talk, I will address the following question: how can computing research contribute to the development of AI regulation and governance in the Global South?
Jonathan Stray
Researcher, Berkeley Center for Human-Compatible AI
Bio
I’m a Senior Scientist at the Berkeley Center for Human-Compatible AI (CHAI), working on recommender systems — the algorithms that select and rank content across social media, news apps, streaming music and video, and online shopping. I study how their operation affects well-being, polarization, and other things, and try to design recommenders that are better for people and society. For a decade I taught the double masters in computer science and journalism at Columbia Journalism School (lectures online). I led the development of Workbench, a visual programming system for data journalism, and built Overview, an open-source document set analysis system for investigative journalists. For a while I was an editor at the Associated Press, and I’ve also written for the New York Times, Foreign Policy, ProPublica, MIT Tech Review, and Wired. Before that, I did computer graphics R&D at Adobe Systems.
http://jonathanstray.com/me
Title
Making Recommender Systems Healthy for People and Society
Abstract
Recommender systems are the algorithms which select, filter, and personalize content across many of the world’s largest platforms and apps. As such, their positive and negative effects on individuals and on societies have been extensively theorized and studied. Our overarching question is how to ensure that recommender systems enact the values of the individuals and societies that they serve. Addressing this question in a principled fashion requires technical knowledge of recommender design and operation, and also critically depends on insights from diverse fields including social science, ethics, economics, psychology, policy and law. This talk reports on a multidisciplinary effort to synthesize theory and practice from different perspectives, with the goal of providing a shared language, articulating current design approaches, and identifying open problems.
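One way to make "enacting values" concrete is at the ranking stage: blend the engagement score with an independent well-being or quality signal before sorting. The sketch below is purely illustrative of that design pattern, not a system described in the talk.

```python
def rerank(candidates, weight=0.3):
    """candidates: (item, engagement_score, wellbeing_score) triples."""
    def blended(c):
        _, engagement, wellbeing = c
        return (1 - weight) * engagement + weight * wellbeing
    return sorted(candidates, key=blended, reverse=True)

items = [("clip_a", 0.9, 0.2), ("clip_b", 0.7, 0.8), ("clip_c", 0.5, 0.9)]
print(rerank(items))  # clip_b overtakes clip_a once well-being counts
```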
Asia Biega
Faculty, MPI-SP
Bio
Asia J. Biega is a tenure-track faculty member at the Max Planck Institute for Security and Privacy (MPI-SP) leading the Responsible Computing group. Her research centers around developing, examining and computationally operationalizing principles of responsible computing, data governance & ethics, and digital well-being. Before joining MPI-SP, Asia worked at Microsoft Research Montréal in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) Group. She completed her PhD in Computer Science at the MPI for Informatics and the MPI for Software Systems, winning the DBIS Dissertation Award of the German Informatics Society. In her work, Asia engages in interdisciplinary collaborations while drawing from her traditional CS education and her industry experience, including consulting and engineering stints at Microsoft, Google and in e-commerce.
https://asiabiega.github.io/
Title
Responsible AI: Designing AI Systems for Digital Well-Being
ADA Workshop: Finding yourself in the modern research landscape
Abstract
Approaches to responsible computing often reveal the complex interrelations between algorithms, data, human factors, and policy. Despite our embracing of this complexity, we have an opportunity to more fundamentally rethink our relationship with technology. Instead of developing band-aid interventions, we might ask what digital well-being should mean and how we might proactively design our platforms to promote it. In this talk, I will examine this question through the lens of user engagement. Is engagement an adequate and sufficient proxy for digital well-being? What are the limits of quantifying well-being through behaviorist measurements of users? What tools, interventions, and practices might support platforms in designing for digital well-being?
Diego Sáez-Trumper
Researcher, Wikimedia
Bio
Diego Sáez Trumper is a Senior Research Scientist at the Wikimedia Foundation and a Visiting Research Fellow at Universitat Pompeu Fabra, where he obtained his Ph.D. in 2013 under the supervision of Ricardo Baeza-Yates. Before that, Diego worked as a researcher and data scientist at NTENT, Eurecat, QCRI, and Yahoo Labs. He has also been a visitor and collaborator at several universities, such as UFMG (Brazil), Cambridge (UK), and UCU (Ukraine). His research focuses on using data science to understand and deal with the diffusion of (dis)information on online platforms.
https://meta.wikimedia.org/wiki/User:Diego_(WMF)
Title
Wikipedia and Community Centered Machine Learning
Abstract
Nowadays, Wikipedia is one of the largest existing efforts of human collaboration. One of the main strengths of this project is the amount and quality of human time spent on curating and organizing knowledge. At the Wikimedia Foundation Research Team, our work centers on developing technology that can support our communities and make them stronger and more efficient. Technical challenges such as providing tools that work in 300+ languages are as important as creating explainable algorithms that our communities can understand. Moreover, given the huge disparity of content and contributors across languages, we need to be able to measure these gaps and build systems that solve or mitigate potential biases. Also, while we want content to flow across languages, we need to be careful not to propagate unreliable information, while maintaining a neutral point of view and respecting cultural differences. In this talk, we will briefly explain how we measure biases and how we build tools that support our communities in making Wikipedia as complete and reliable as possible.
Yoon Sik Cho
Professor, Chung-Ang University
Bio
Yoon-Sik Cho received his BS degree in Electrical Engineering from Seoul National University, South Korea, and his Ph.D. in Electrical Engineering from the University of Southern California. He was an Academic Mentor for the RIPS program at the Institute for Pure and Applied Mathematics, University of California, Los Angeles, and a Postdoctoral Scholar at the Information Sciences Institute, University of Southern California. He has also worked as a Data Scientist at Apple (Applied Machine Learning Division), a Research Scientist at Medallia (Text Analytics), and an Engineering Intern at Qualcomm (Corporate Research). He is currently an Assistant Professor in the Department of AI, Chung-Ang University, South Korea. His research interests include large-scale data science, link prediction in complex networks, recommender systems, and AI fairness.
https://sites.google.com/aicampus.cau.ac.kr/dsl/members
Title
Fair Recommender Systems
Abstract
Recommender systems assist users in finding preferred items or relevant information by suggesting content, items, or services. With their success in various services and the convenience they bring, recommender systems have attracted growing attention in both the research and industry communities. We live in a world where many of our decisions are recommended (or guided) by AI. In this talk, we review how recommender systems can be affected by bias, resulting in unfair outcomes. We discuss methods for fair machine learning and how they can be applied to recommender systems. We also discuss our roles as AI practitioners in steering AI toward fair solutions.
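A simple diagnostic for the kind of bias discussed here compares the exposure different item groups receive in a ranking, discounting by position as ranking metrics usually do. A toy sketch (the 1/log2 discount is a common convention, not a method claimed by the speaker):

```python
import math

def group_exposure(ranking, group_of):
    """ranking: ordered item ids; group_of: item id -> group label."""
    exposure = {}
    for rank, item in enumerate(ranking, start=1):
        g = group_of[item]
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(rank + 1)
    return exposure

groups = {"i1": "popular", "i2": "popular", "i3": "niche", "i4": "niche"}
print(group_exposure(["i1", "i2", "i3", "i4"], groups))
# the top slots give "popular" noticeably more exposure than "niche"
```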
Steven Euijong Whang
KAIST
Bio
Steven Euijong Whang is an associate professor at KAIST EE and AI. His research interests include Responsible AI and Data-centric AI. Previously he was a Research Scientist at Google Research and co-developed the data infrastructure of the TensorFlow Extended (TFX) machine learning platform. Steven received his Ph.D. in computer science in 2012 from Stanford University and his B.S. in computer science from KAIST in 2003. He is a Kwon Oh-Hyun Endowed Chair Professor (2020-2023) and received a Google Research Award (2022) and a Google AI Focused Research Award (2018, the first in Asia).
Homepage: https://stevenwhang.com
Title
Towards a healthy AI ecosystem with Responsible AI
Abstract
In this panel, we invite a diverse set of distinguished researchers and discuss how Responsible AI can be used to realize a healthy AI ecosystem. We will first discuss important legal, social, and policy issues in our world. We will then discuss whether current technological advances in Responsible AI including safety, interpretability, robustness, and fairness can be used to solve these issues. Finally, we discuss how humans can play a key role in this endeavor.
Kyung Sin Park
Korea University
Bio
Kyung Sin “KS” Park is a professor at the Korea University School of Law, and co-founder and Executive Director of www.opennetkorea.org. He served as Commissioner at the Korean Communication Standards Commission, a presidentially appointed internet content regulation body (2011-2014), and as a member of the National Media Commission, a Parliament-appointed advisory body on newspaper-broadcasting co-ownership bans and other media and Internet regulations (2010). He also served as International Relations Counsel to the Korea Film Council, arranged the Korea-France Film Co-production Treaty, and advised on the UNESCO Cultural Diversity Convention (2002-2007). He is Executive Director for both the PSPD Law Center (2008-) and Open Net Korea (2013-), which have pursued and won several high-profile litigation and legislative actions in the areas of freedom of speech, privacy, net neutrality, web accessibility, digital innovation, and intellectual property. He founded the Korea University Law Review and the Law School's Clinical Legal Education Center, and spearheaded www.internetlawclinic.org and www.transparency.or.kr under that Center. Dr. Park has been a visiting lecturer in Internet Law at UCI Law (2017) and in Global Censorship at UC Davis School of Law (2017). He has an AB in Physics from Harvard University and a JD from UCLA School of Law.
Homepage: https://faculty.korea.ac.kr/kufaculty/kyungsinpark/index.do
https://prostasia.org/blog/team/kyung-sin-park/
Angjoo Kanazawa
UC Berkeley
Bio
Angjoo Kanazawa is an Assistant Professor in the Department of Electrical Engineering and Computer Science at the University of California at Berkeley. Her research is at the intersection of computer vision, computer graphics, and machine learning, focusing on the visual perception of the dynamic 3D world behind everyday photographs and video. Previously, she was a research scientist at Google NYC, and prior to that she was a BAIR postdoc at UC Berkeley. She completed her PhD in Computer Science at the University of Maryland, College Park, where she also spent time at the Max Planck Institute for Intelligent Systems. She has been named a Rising Star in EECS and is a recipient of the Anita Borg Memorial Scholarship, the Best Paper Award at Eurographics 2016, and a Google Research Scholar Award (2021), and was named a Spark Fellow in 2022. She also serves on the advisory board of Wonder Dynamics, whose goal is to utilize AI technologies to make VFX effects more accessible for indie filmmakers.
Title
Towards Capturing Reality: Scenes and 3D People
Abstract
In this talk, I will give an overview of the line of research my group is conducting on capturing reality: namely, how can we digitize the 3D world from visual observations? Neural Radiance Fields (NeRF) demonstrated exciting potential for photorealistic 3D reconstruction; however, their original form has many shortcomings that make them impractical for casual photorealistic 3D capture. I will discuss the series of works my group has been conducting to make NeRF more practical. I will also talk about directions in dynamic 3D capture, specifically perceiving 3D people that move about from images and video, and how to perceive both humans and the 3D environment at the same time.
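For readers new to NeRF, the core of the method is a differentiable volume-rendering quadrature: densities and colors sampled along a camera ray are composited into a single pixel. A textbook-style sketch of that step (generic, not code from the speaker's group):

```python
import torch

def render_ray(densities, colors, deltas):
    """densities: (N,), colors: (N, 3), deltas: (N,) sample spacings."""
    alpha = 1.0 - torch.exp(-densities * deltas)     # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)
    trans = torch.cat([torch.ones(1), trans[:-1]])   # light surviving so far
    weights = alpha * trans                          # compositing weights
    return (weights[:, None] * colors).sum(dim=0)    # (3,) pixel color
```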
Junyong Noh
KAIST
Bio
Junyong Noh is a professor in the Graduate School of Culture Technology (GSCT) at the Korea Advanced Institute of Science and Technology (KAIST). He received a Ph.D. in computer science from the University of Southern California (USC) in 2002. From 2003 to 2006, he worked at a Hollywood visual effects company, Rhythm and Hues Studios, as a graphics scientist; movies whose CGI he helped create include Garfield, Superman Returns, and The Chronicles of Narnia. In 2006, he joined KAIST, where he leads the Visual Media Lab. His current research interests lie in facial/character animation, image/video processing, and immersive content creation. His work has been published in premier venues such as IEEE TPAMI, ACM TOG, and CGF. Tens of his research outcomes, including ScreenX, the first multi-projection movie viewing system, have been transferred to industry for commercialization.
Title
Learning-based Character and Facial Animation
Abstract
The recent development in learning-based approaches has allowed the creation, animation, and manipulation of high-quality virtual avatars that can be used in diverse applications such as movies, games, and metaverse. In this presentation, I will first talk about a real-time motion control method that can generate high-quality and complex motion from various sets of unstructured data ranging from 1 to 48 minutes without any manual intervention using reinforcement learning. I will demonstrate the results for a character achieving different tasks, from simple direction control to complex avoidance of moving obstacles. Another important area for creating a virtual avatar is facial animation retargeting. Unlike traditional approaches that heavily rely on manually paired data, I will introduce an unsupervised learning method that reformulates the retargeting of 3D facial blendshape-based animations in the image domain, inspired by recent developments in face swapping and reenactment. Finally, I will also briefly go over how to edit a portrait video in the wild to produce a temporally coherent and natural motion after editing, based on StyleGAN.
Taehyun Rhee
Victoria Univ. of Wellington
Bio
Taehyun James (TJ) Rhee is the Director of the Computational Media Innovation Centre and an Associate Professor (tenured full professor in the US system) in the Faculty of Engineering at Victoria University of Wellington, New Zealand, where he co-founded the Computer Graphics degrees at the School of Engineering and Computer Science. He is also the founder of the mixed reality start-up DreamFlux. He has worked in the immersive and interactive technology sector for over 25 years, across academia and industry. At Samsung (1996-2012), he was a Principal Researcher and General Manager leading Computer Graphics and Medical Physics research at the Samsung Advanced Institute of Technology (SAIT), and a Senior Researcher and Senior Manager of the Research Innovation Centre at Samsung Electronics. He served as the general chair of Pacific Graphics 2020-2021, the XR chair of SIGGRAPH Asia 2018, and an executive committee member of the Asia Graphics Association. His current research focuses on: Post Metaverse (immersive telepresence, augmented telecollaboration, extended reality, multimedia streaming, multiuser communication, and live visual effects); Digital Twins (high-fidelity real-time computer graphics, 3D volumetric scanning, environment modeling, photo-realistic rendering, and cinematic composition); and AI effects (training data generation, machine learning for computer vision and graphics, and deep neural networks).
Title
Televerse: Teleport to the Augmented Real-World driven by 3i innovation (#immersive, #interactive, #intelligent)
Abstract
New Zealand is well-known for its beautiful nature, which has contributed to it being a prime destination for many movies and commercials. A strong media and technology ecosystem has been built to support this industry.
Computer graphics (CG) and visual effects (VFX) enable seamless blending of computer-generated imagery with recorded real footage. Recent advancements in real-time technologies are driving the transition from off-line post-production to real-time production. Immersive media technologies transform the end-user experience from observing a story to having a strong sense of presence within it. High-speed networking changes media distribution from a pre-recorded medium to live streaming. Modern AI contributes automated pipelines and smart solutions for better human-media interaction. This talk will introduce our research in real-time live visual effects, immersive telepresence, augmented telecollaboration, volumetric environment capture and modelling, and appearance modelling and reconstruction, driven by 3i (immersive, interactive, intelligent) innovation and interdisciplinary research across computer graphics, vision, data science, machine learning, and more. We will further discuss the convergence of 3i innovation, introduce the concept of augmented telepresence, and present the new framework and platform, "televerse", which gives users the illusion of virtually teleporting, augmenting their telepresence to communicate with people at a distance. Potential applications and future extensions are discussed alongside our recent case studies with public end-users.
Jiaya Jia
CUHK
Bio
Jiaya Jia is a professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK) and an IEEE Fellow. His research focuses on image/video understanding, detection and segmentation, multi-modal AI, computational imaging, practical optimization, and advanced learning for visual content. His papers have been cited 50,000+ times on Google Scholar, and 40+ PhDs and fellows from his group are now active in academia and industry. Jiaya Jia has served as Associate Editor-in-Chief (AEiC) of IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), one of IEEE's flagship journals and one of the premier journals across all of computer science, since 2021. He is also on the editorial board of the International Journal of Computer Vision (IJCV). He has served as an area chair of ICCV, CVPR, AAAI, ECCV, and several other conferences for more than 10 years, and has been on the program committees of major conferences in graphics and computational imaging, including ICCP, SIGGRAPH, and SIGGRAPH Asia. His research has been funded by Microsoft, Qualcomm, Adobe, Intel, NVIDIA, Amazon, Lenovo, and several other companies.
Title
Challenge and Opportunity of 3D Perception
Abstract
As a critical part of AI, accurately understanding our 3D surroundings has attracted intensive attention and research. Despite many opportunities to use 3D semantic data for modeling and perception, there are still challenges in designing effective and general methods with high economic and social value. In this talk, I will discuss the stage that current 3D perception research has reached, the limitations of previous methods in meeting the requirements of real-life applications, and the efforts we have made to move the area toward higher practicality, generality, and efficiency.
Bohyung Han
SNU
Bio
Bohyung Han is a Professor in the Department of Electrical and Computer Engineering at Seoul National University, Korea. Before his current position, he was an Associate Professor in the Department of Computer Science and Engineering at POSTECH and a visiting research scientist at Google AI and Snap Research, both in Venice, CA, USA. He received a Ph.D. from the Department of Computer Science at the University of Maryland, College Park, MD, USA, in 2005. He has served, or will serve, in conference organizing and technical program committee roles, including as a Senior Area Chair for CVPR, NeurIPS, and ICLR; an Area Chair for CVPR, ICCV, ECCV, NIPS/NeurIPS, ICLR, and IJCAI; a General Chair of ACCV 2022; a Tutorial Chair of ICCV 2019; a Workshop Chair of CVPR 2021; and a Demo Chair of ECCV 2022. He is also an Associate Editor of TPAMI and MVA and an Area Editor of CVIU. He received the Google AI Focused Research Award in 2018, and his research group won the Visual Object Tracking (VOT) Challenge in 2015 and 2016.
Title
Image Retrieval with Deep Learning
Abstract
The advance of deep learning has made a significant impact on various problems in computer vision, and image retrieval research has also benefited from this rapid progress. This talk discusses image retrieval problems at various levels, including feature representations, application-oriented tasks, and system upgrades. Specifically, I first present a local feature descriptor based on deep neural networks, and then introduce two image retrieval applications: image geolocalization and image search with text feedback. Finally, I briefly discuss how to seamlessly upgrade image retrieval systems in the backend without sacrificing accuracy.
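These applications share the same retrieval core: embed images with a deep network and rank a gallery by similarity to the query embedding. A minimal sketch, where `embed` is a hypothetical stand-in for any learned feature extractor:

```python
import numpy as np

def embed(image) -> np.ndarray:
    """Hypothetical deep feature extractor returning a D-dim vector."""
    raise NotImplementedError

def search(query_image, gallery, k=10):
    """gallery: (N, D) array of L2-normalized image embeddings."""
    q = embed(query_image)
    q = q / np.linalg.norm(q)
    scores = gallery @ q            # cosine similarity per gallery item
    return np.argsort(-scores)[:k]  # indices of the top-k matches
```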
Hyun Soo Park
University of Minnesota
Bio
Hyun Soo Park is an Associate Professor at the Department of Computer Science and Engineering, the University of Minnesota (UMN), and CEO of Playtag. He is interested in computer vision approaches for behavioral imaging. He has received NSF's CRII, NSF's CAREER Awards, and CVPR 2021 Best Paper Honorable Mention Award. Prior to UMN, he was a Postdoctoral Fellow in GRASP Lab at University of Pennsylvania. He earned his Ph.D. from Carnegie Mellon University.
Title
Self-supervised Behavioral Imaging
Abstract
Humans transmit social signals through a number of nonverbal cues, including gaze direction, facial expression, and body gesture. These cues are often subtle, e.g., the microscopic facial muscle movements of a cynical smile, and very difficult for AI to read. This poses a critical challenge in collecting, annotating, and learning from data. In this talk, I will present a few instances of our recent effort to address this challenge through self-supervised learning, including leveraging multiview geometry, temporal coherence, and semantic priors. At the end, I will discuss our new commercialization effort that pushes behavioral understanding in the wild.
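Among the self-supervision signals listed, temporal coherence has the most compact form: embeddings of adjacent video frames are pulled together while randomly paired frames are pushed apart, requiring no manual labels. A generic sketch of such a loss (not the speaker's exact formulation):

```python
import torch
import torch.nn.functional as F

def temporal_coherence_loss(emb, margin=1.0):
    """emb: (T, D) embeddings of T consecutive video frames."""
    # Adjacent frames should embed nearby...
    near = F.pairwise_distance(emb[:-1], emb[1:]).mean()
    # ...while randomly paired frames should stay at least `margin` apart.
    perm = torch.randperm(emb.size(0))
    far = F.relu(margin - F.pairwise_distance(emb, emb[perm])).mean()
    return near + far
```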
Kwang Moo Yi
UBC
Bio
Kwang Moo Yi is an assistant professor in the Department of Computer Science at the University of British Columbia (UBC), and a member of the Computer Vision Lab, CAIDA, and ICICS at UBC. Before that, he was an assistant professor at the University of Victoria, where he is currently an adjunct professor. Prior to becoming a professor, he was a post-doctoral researcher at the Computer Vision Lab at École Polytechnique Fédérale de Lausanne (EPFL, Switzerland), working with Prof. Pascal Fua and Prof. Vincent Lepetit. He received his Ph.D. from Seoul National University under the supervision of Prof. Jin Young Choi, and also received his B.Sc. from the same university. He serves as an area chair for top computer vision conferences (CVPR, ICCV, and ECCV), as well as AAAI, and is part of the organizing committee for CVPR 2023.
Title
Neural field methods for 3D Vision
Abstract
In this talk, I will introduce our recent work on applying Neural Fields, with a focus on controllable rendering of humans. Neural Field-based methods, spearheaded by Neural Radiance Fields (NeRF), have recently gained substantial attention thanks to their versatility and the ease of integrating them into existing vision pipelines. In our group, much of the focus has been on bending and lifting the volumetric rendering process into one that can be controlled by the user or by a driving signal such as the human pose. I will introduce two of our attempts at achieving parametric and non-parametric controllable NeRF models, which we presented at CVPR and ECCV this year. More specifically, I will discuss CoNeRF, where we integrate few-shot learning and lifting into a NeRF framework so that various changes in the scene can be annotated by the user and controlled. I will also talk about NeuMan and show how a parametric human model, specifically the SMPL model, can be combined with NeRF so that one can build a controllable human NeRF model from a single video.
Seung-Hwan Baek
POSTECH
Bio
Seung-hwan Baek is an assistant professor at POSTECH. Before joining POSTECH, he worked as a post-doctoral research associate at Princeton University and holds a Ph.D. degree in Computer Science from KAIST. His research interests lie in computer graphics and computer vision with a particular focus on computational imaging and display. His work aims to capture, model, and analyze the high-dimensional visual information of the real world originating from complex interplays between light, material appearance, and geometry. To this end, he designs end-to-end computational imaging and display systems for fundamental scientific analysis as well as diverse application domains.
Title
Differentiable computational imaging with light waves
Abstract
Modern camera systems have evolved to effectively capture light and have become essential tools for many applications. Developing such imaging systems has commonly required hand-crafted or heuristic rules set by human experts, with post-processing algorithms for restoration, reconstruction, and recognition devised in isolation from the design of the imaging systems. This separated design principle often results in sub-optimal performance and fundamentally limits the application domains. In this presentation, I will share our recent effort on re-designing camera systems in an end-to-end manner, from optics to processing algorithms, specifically to capture, analyze, and exploit the overlooked dimensions of light waves such as polarization and spectrum. Our key idea is to make the rendering process, i.e., light transport, differentiable in simulation and to jointly optimize the system parameters with subsequent reconstruction algorithms, e.g., neural networks. We demonstrate that our joint optics-reconstruction design allows us to better understand the high-dimensional visual information of the real world originating from complex interplays between light, material appearance, and geometry.
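The end-to-end idea can be illustrated with a toy differentiable imaging model: make the "optics" differentiable and optimize an optical parameter jointly with a reconstruction network. The blur model below is purely illustrative of the pattern, not the speaker's actual system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

blur_width = torch.tensor(2.0, requires_grad=True)      # optical parameter
recon = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))    # reconstruction net

opt = torch.optim.Adam([blur_width, *recon.parameters()], lr=1e-3)

def differentiable_camera(scene, width):
    # Toy "optics": a soft 1D blur whose strength depends on `width`.
    kernel = torch.softmax(-torch.arange(5.0) ** 2 / width, dim=0)
    kernel = kernel.view(1, 1, 1, 5)
    return F.conv2d(scene, kernel, padding=(0, 2))

scene = torch.rand(4, 1, 32, 32)                        # ground-truth scenes
measurement = differentiable_camera(scene, blur_width)  # simulated capture
loss = F.mse_loss(recon(measurement), scene)
loss.backward()   # gradients reach the optical parameter too
opt.step()
```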
Yasutaka Furukawa
SFU
Bio
Dr. Yasutaka Furukawa is an associate professor in the School of Computing Science at Simon Fraser University (SFU). Dr. Furukawa's group has made fundamental and practical contributions to 3D reconstruction algorithms, improved localization techniques, and computational architectural modeling. Their open-source software has been widely adopted by tech companies and used in surprising applications such as 3D printing of turtle shells and archaeological reconstruction. Dr. Furukawa received the best student paper award at ECCV 2012, the NSF CAREER Award in 2015, the CS-CAN Outstanding Young CS Researcher Award in 2018, Google Faculty Research Awards in 2016, 2017, and 2018, and the PAMI Longuet-Higgins Prize in 2020.
Title
Teaching a Computer to be an Architect
Abstract
I will present our recent work on structured geometry reconstruction and generation, which helps architects with their workflows. For reconstruction, I will talk about vector floorplan reconstruction from scanned floorplan images or RGBD images acquired on-site: what the key insights were and how we changed the landscape of floorplan reconstruction over the last 5 years. For generation, I will talk about our graph-constrained floorplan generation work (House-GAN): how we fused a reconstruction technique with a GAN to build the system. Lastly, I will share my views on how the relationship between structured reconstruction and generation (two once very distant fields) has been changing recently.
Prof. Finale Doshi-Velez
Harvard University
Bio
Finale Doshi-Velez is a Gordon McKay Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She completed her MSc from the University of Cambridge as a Marshall Scholar, her PhD from MIT, and her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability.
Prof. Edward Choi
KAIST
Bio
Edward Yoonjae Choi is an Assistant Professor in the KAIST Graduate School of AI. He completed his MS in Computer Science at KAIST and his PhD at the Georgia Institute of Technology, and gained professional experience at ETRI, DeepMind, Google Research, and Google Health Research. His research areas are machine learning for healthcare, natural language processing, and multimodal learning.
Prof. Yoo-Geun Ham
Chonnam National University
Bio
Yoo-Geun Ham is a faculty member in Earth System and Environmental Science at Chonnam National University. He completed both his BS and PhD in Atmospheric Sciences at Seoul National University. He has worked at the Global Modeling and Assimilation Office, NASA Goddard Space Flight Center. His research areas are deep learning for climate forecasting, seasonal/decadal forecasts using atmosphere-ocean coupled models, climate variability, climate change, and ensemble-based data assimilation systems.
Miran Lee
MSRA
Bio
Miran Lee is a Director of the Microsoft Research Outreach Group at Microsoft Research, responsible for academic collaboration in Korea and the Asia-Pacific region. Miran joined Microsoft Research Asia in 2005 as a university relations manager to build long-term and mutually beneficial relations with academia. She is based in Korea, where she engages with leading research universities, research institutes, and relevant government agencies. She establishes strategies and directions, identifies business opportunities, designs various programs and projects, and manages the budget. She works with students, researchers, faculty members, and university administrators to build strong partnerships, and works closely with the research groups at Microsoft Research, focusing on research collaboration, curriculum development, talent fostering, and academic exchanges. She has successfully run many global and regional programs, such as Gaming & Graphics, Web-Scale NLP, Machine Translation, eHealth, SORA (Software Radio), Kinect, Microsoft Azure for Research, and Contents Creation. She is currently leading two themes, 'Discovery' and 'Health and Life Science', as a member of a global v-team. Before her current role, Miran Lee co-founded Smart Systems, an IT outsourcing services company in Illinois, United States; as its CEO, she led the business to more than 100 percent annual growth. From 1993 to 2002, she worked at British Telecom Korea in positions ranging from systems engineer to account director to vice president. Lee also worked at Samsung SDS, where she was responsible for international VAN (Value Added Network) businesses and led the International VAN business team. She started her career as a system developer at General Electric Information Services, where she developed email, EDI, and in-house applications. Miran Lee was an adjunct professor in the Telecommunication Department at Anyang University for two years (2001-2002).
Title
An Introduction to the Ada Workshops Initiated by Microsoft Research Asia
Abstract
In this talk, I will introduce the origin, objectives, previous events, and achievements of the Ada Workshops, a series of events initiated by Microsoft Research Asia in 2016 and named after the first computer programmer, Ada Lovelace. Furthermore, I will briefly introduce Microsoft’s initiatives for empowering Women in Computing.
Donghee Yvette Wohn
NJIT
Bio
Dr. Yvette Wohn (she/her) is an associate professor at NJIT and director of the Social Interaction Lab (socialinteractionlab.com). Her research is in the area of Human-Computer Interaction (HCI), where she studies the characteristics and consequences of social interactions in online environments such as livestreaming, esports, virtual worlds, and social media. Funded by the National Science Foundation, Mozilla Foundation, and Yahoo, her main projects examine 1) content moderation, online harassment, and the creation/maintenance of online safe spaces; 2) social exchange in digital economies and digital patronage (creator-supporter dynamics); and 3) news consumption via social media.
https://yvettewohn.com/
Title
Navigating Academic Conferences
Abstract
Students usually think that diligently attending paper sessions is the most important part of conferences. Right? Wrong! Conferences are about much more than obtaining knowledge. Networking at conferences is essential for one’s career and more important than most students realize. However, networking is hard, especially for shy people and non-native speakers. In this talk, I will cover some basic information about how to make the most of conference experiences.
Woo-Sung Jung
POSTECH
Bio
Woo-Sung Jung is a professor in the Department of Industrial and Management Engineering at Pohang University of Science and Technology (POSTECH). He earned his Ph.D. in Physics from KAIST (Korea Advanced Institute of Science and Technology) in 2006. His research interest is in understanding society through social data. He employs complex network theory and its application tools to analyze the data, focusing on societal and urban development using bibliometric data and various data collected from the city. He is a member of the Young Korea Academy of Science and Technology.
Title
Social Roles of Science and Technology
Abstract
This talk introduces the interaction between society and S&T (Science and Technology). The issues facing our society have diversified in recent years. Science and technology have answered society’s demands and must be ready to solve the questions to come. The importance of the co-evolution of technology and society will also be discussed.
Sungkyu Shaun Park
KNU
Bio
Sungkyu (Shaun) Park is an assistant professor in the Department of AI Convergence at Kangwon National University. He received his Ph.D. from the KAIST Graduate School of Culture Technology in 2020 and then worked as a senior researcher in the IBS Data Science Group until February 2022. Shaun is interested in understanding human behaviors in the real world through large-scale data retrieved from wearable devices, clinical tests, social media, and so on. In particular, his research focuses on the development of mobile and wearable applications for data collection and the development of interpretable AI predictive models for various clinical indicators in the field of mental health. He also owned a startup aiming to launch an intervention app for insomnia, developed from his latest research.
Title
How to survive in the age of convergence research
Abstract
In this talk, I will briefly discuss how to do well in convergence research through successful collaboration with experts in various fields. I will support my point of view with my diverse experiences, drawn not only from research venues but also from startup ventures.