9th Workshop on Efficient Deep Learning for Computer Vision

ECV at CVPR 2026

Full-Day Workshop | In-Person | June 2026 | Denver, CO

2026 LPCVC Registration is now open

Overview & Topics

Mainstream computer vision research frequently overlooks factors like on-device latency, resource use, and power consumption. This year's ECV workshop addresses true efficiency in the era of Foundation Models and Embodied AI.

Efficient and Scalable Perception Systems

  • Foundation model efficiency, latency-aware training, and tunable accuracy-efficiency tradeoffs
  • Scalable architectures (e.g., mixture-of-experts, vision transformers)
  • Efficient representation learning for image, multimodal, 3D, and 4D data
  • Optimization for edge deployment and low-power hardware
  • Annotation-efficient pipelines (simulation, augmentation, active selection)

Generative and Foundation Models on Edge Devices

  • Generative AI for language models: parallel decoding, long-context handling, and KV-cache management
  • Multimodal generative models: VQA and fusion across text, vision, audio, and sensor data
  • Diffusion models: step distillation, adversarial distillation, knowledge distillation, and consistency models
  • Vision-Language-Action models and interactive environments

3D Scene Understanding and Reconstruction in AR/VR

  • 3D from multi-view, sensors, or single images
  • Real-time 3D perception, mapping, reconstruction, and rendering
  • Compositional and open-vocabulary 3D scene understanding
  • Robust segmentation and affordance prediction in complex scenes
  • Efficient 3D/4D representation learning and compression (point cloud, 6DoF)
  • Multi-view and multi-temporal consistency in 3D/4D generation

Embodied AI and Interactive Agents

  • Robotics and embodied vision: active agents, simulation, causal/object-centric representations
  • Dynamic modeling of environments and objects for manipulation and navigation with vision-language-action (VLA) models
  • Few-shot and adaptive learning for VLA agents in open environments
  • Closed-loop policy learning for task completion in cluttered environments
  • Integration of human cues (gaze, pose) into perception and decision-making
  • Continual learning, adaptive memory, and long-horizon reasoning
  • Evaluation frameworks for perception, action, and memory in embodied systems

Avatars, Neural Rendering, and Personalization

  • Expressive avatar synthesis and display for human-robot interaction (HRI)
  • Neural rendering for dynamic humans and avatars
  • Personalization from limited data (few snapshots, generalizable enrollment)
  • Relighting, appearance adaptation, and editing under varied conditions
  • Consistency and realism across environments and devices
  • Efficient streaming and transmission of animated avatars

Egocentric and Social Perception

  • Egocentric hand/body tracking and human-object interaction
  • Emotion-aware communication and feedback in interactive systems
  • Strategies for social presence and trust in companion robots
  • Methods for continual adaptation and personalization in interactive systems

Potential Impacts

Efficient computer vision on mobile, wearable, and robotic devices unlocks transformative possibilities across industries.

  • Healthcare & Fitness
    • Remote Monitoring: Wearable devices that monitor vital signs and physical activity.
    • Early Diagnosis: Enhanced image processing for detecting skin conditions, assessing eye health, and more.
  • Industrial Applications
    • Quality Control: Automated product inspection for manufacturing.
    • Maintenance and Repair: Wearable devices guiding technicians through complex repairs.
  • Accessibility
    • Assistance for the Visually Impaired: Identifying objects and reading text aloud.
    • Gesture Recognition: Enabling touchless interaction for users with physical disabilities.
  • Environmental Monitoring
    • Wildlife Tracking: Monitoring animal movements and detecting activities via edge devices.
  • Robotics
    • Extended Operational Time: Reduced energy consumption enabling longer battery life.
    • Real-Time Processing: Faster visual processing for responsive actions.
    • Increased Mobility: Compact processing units allow for smaller robots.

Highlights from the 8th ECV Workshop (CVPR 2025)

Invited Speakers

Hai Li
Duke University

Topic: Intelligent Edge Computing

Forrest Iandola
Meta

Topic: On Device Augmented Reality

Joseph Spisak
Meta

Topic: Efficient Generative AI

Dashan Gao
Qualcomm

Topic: Agentic AI on devices

Song Han
MIT

Topic: Efficient Visual Generation on the Edge

Xiaolong Wang
UCSD

Topic: Efficient Robot Learning from Human Videos

Oncel Tuzel
Apple

Topic: Advancing the Frontiers of Edge AI

Stella Yu
University of Michigan

Topic: Context as Efficiency in Embodied AI

Schedule

Time       Topic
09:00 AM   Welcome by Organizers
09:10 AM   Invited Talk
09:40 AM   Invited Talk
10:10 AM   Invited Talk
10:40 AM   Break and Onsite Demos
11:00 AM   Invited Talk
11:30 AM   Invited Talk
12:00 PM   Lunch
01:00 PM   Invited Talk
01:30 PM   Invited Talk
02:00 PM   Invited Talk
02:30 PM   Break and Onsite Demos
03:00 PM   Competition: 2026 Low-Power Computer Vision Challenge
03:30 PM   Workshop Paper Presentations and Poster Session
05:30 PM   Conclusion by Organizers
05:40 PM   Adjourn

2026 IEEE Low-Power Computer Vision Challenge (LPCVC)

Since 2015, the Low Power Computer Vision Challenge (LPCVC) has been the premier venue for optimizing computer vision not just for accuracy, but for execution time, energy consumption, and memory efficiency. Unlike cloud-based competitions, LPCVC 2026 focuses on practical, real-world applications running on edge devices (mobile phones and AI PCs).

Prizes (Per Track)

  • Champion: $6,000
  • Runner-up: $3,000
  • Third Place: $1,000

Early Bird: First 5 valid submissions in each track receive $200.

Important Dates

  • Registration Open: Feb 1 - Apr 30, 2026
  • Submission Open: Mar 1 - Apr 30, 2026
  • Winners Announced: May 15, 2026
  • Presentations: June 2026 (at the ECV Workshop)

Call For Papers

We invite researchers, practitioners, and industry experts to submit original contributions. Join us to foster discussion and innovation around scalable, resource-conscious solutions that enable cutting-edge performance while reducing computational cost. This year’s ECV workshop addresses a range of topics including Efficient Perception, Generative AI on Edge, Embodied AI, and more.

View Detailed Topics

Submission Guidelines

Short Paper
  • Length: Max 4 pages (excluding refs).
  • Purpose: Share exciting and novel early-stage ideas.
  • Content: Describe ideas with preliminary experiments. Extensive comparisons are not required.
Long Paper
  • Length: Max 8 pages (excluding refs).
  • Purpose: Present mature works.
  • Content: Novel ideas with sufficient experiments, analyses, and comparisons.
  • Publication: CVPR Workshop Proceedings.
Formatting & Review
  • Template: Must follow CVPR 2026 Author Guidelines.
  • Anonymity: Double-blind peer review. No author names or affiliations in the manuscript.
  • Rejection: Papers beyond 8 pages (excluding refs) will be desk rejected.
  • Presentation: Oral talks or posters. Demo videos/code highly encouraged.

Important Dates

  • Submission Deadline: March 6, 2026 (11:59 PM PST)
  • Notification: March 20, 2026 (11:59 PM PST)
  • Camera Ready: April 10, 2026 (11:59 PM PST)

Tips for a Strong Submission
  • Clearly demonstrate efficiency gains (FLOPs, speedup, memory); see the measurement sketch after this list.
  • Compare against state-of-the-art baselines.
  • Include real-world deployment scenarios if possible.
  • Share open-source code for reproducibility.
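
To make such numbers easy to reproduce, here is a minimal measurement sketch, assuming PyTorch and torchvision; the MobileNetV3-Small model, the 224x224 input, and the optional fvcore FLOP count are placeholders for illustration, not a required or official benchmarking protocol. Adapt the model, input size, and device to your own submission.

import time
import torch
import torchvision

def benchmark(model, example, warmup=10, iters=50, device="cpu"):
    # Average forward-pass latency over `iters` runs after `warmup` runs.
    model = model.eval().to(device)
    example = example.to(device)
    with torch.no_grad():
        for _ in range(warmup):
            model(example)
        if device == "cuda":
            torch.cuda.synchronize()
            torch.cuda.reset_peak_memory_stats()
        start = time.perf_counter()
        for _ in range(iters):
            model(example)
        if device == "cuda":
            torch.cuda.synchronize()
        latency_ms = (time.perf_counter() - start) / iters * 1e3
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    peak_mb = torch.cuda.max_memory_allocated() / 2**20 if device == "cuda" else float("nan")
    return params_m, latency_ms, peak_mb

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torchvision.models.mobilenet_v3_small(weights=None)  # placeholder model
    example = torch.randn(1, 3, 224, 224)                        # placeholder input size
    params_m, latency_ms, peak_mb = benchmark(model, example, device=device)
    print(f"params: {params_m:.2f} M | latency: {latency_ms:.2f} ms | peak mem: {peak_mb:.1f} MiB")
    try:
        # Optional FLOP count; fvcore (pip install fvcore) is one common tool.
        from fvcore.nn import FlopCountAnalysis
        print(f"FLOPs: {FlopCountAnalysis(model, (example,)).total() / 1e9:.2f} G")
    except ImportError:
        pass

Reporting parameters, latency, peak memory, and FLOPs alongside accuracy, with the measurement device stated, makes efficiency claims much easier for reviewers to verify.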
Ready to submit?
Submit via CMT

CMT ACKNOWLEDGMENT: The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.

Organizing Committee

Shuai Zhang
Qualcomm
shuazhan@qti.qualcomm.com

Yung-Hsiang Lu
Purdue University
yunglu@purdue.edu

Chen Chen
University of Central Florida
chen.chen@crcv.ucf.edu

Xiao Hu
Qualcomm

Ning Bi
Qualcomm

Pavlo Molchanov
NVIDIA

Raghuraman Krishnamoorthi
Meta

Technical Program Committee

  • Joe Spisak, Meta
  • Rameswar Panda, IBM
  • Weiwei Li, Xpeng
  • Taotao Jing, Qualcomm
  • Yi Li, Qualcomm
  • Yuan Li, Qualcomm
  • Shuangjun Liu, Qualcomm
  • Apratim Bhattacharyya, Qualcomm
  • Soonhoi Ha, Seoul National University
  • Xiaoli Li, Singapore University of Technology and Design
  • Mengran Gou, Qualcomm
  • Siwei Nie, Qualcomm
  • Fanghui Xue, Qualcomm
  • Zihao Ye (Student), Purdue University
  • Yuanhao Zou (Student), University of Central Florida