Entanglement-verified time distribution in a metropolitan network
The precise synchronization of distant clocks is a fundamental requirement for a wide range of applications. Here, we experimentally demonstrate a novel approach to quantum clock synchronization utilizing entangled and correlated photon pairs generated by a quantum dot at telecom wavelength. By distributing these entangled photons through a metropolitan fiber network in the Stockholm area and measuring the remote correlations, we achieve a synchronization accuracy of tens of picoseconds by leveraging the tight time correlation between the entangled photons. We show that our synchronization scheme is secure against spoofing attacks by performing remote quantum state tomography to verify the origin of the entangled photons. We measure a maximum distributed entanglement fidelity of 0.817 ± 0.040 to the |Φ⁺⟩ Bell state and a concurrence of 0.660 ± 0.086. These results highlight the potential of quantum dot-generated entangled pairs as a shared resource for secure time synchronization and quantum key distribution in real-world quantum networks.
7 authors · Apr 1, 2025
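The synchronization step lends itself to a compact illustration: because the two photons of a pair are created together, the unknown clock offset between the two sites appears as the delay at which the two timestamp streams show the most coincidences. The sketch below is a minimal illustration of that idea on simulated timestamps, not the authors' pipeline; estimate_clock_offset and all its parameters are hypothetical.

# Illustrative sketch (not the authors' code): estimate the clock offset
# between two detectors from entangled-photon arrival timestamps. Pair
# photons share an emission time, so the true offset is the delay that
# maximizes the coincidence count between the two streams.
import numpy as np

def estimate_clock_offset(ts_a, ts_b, search_window_ns=1e4, bin_ns=10.0):
    """Cross-correlate two photon arrival-time streams (ns) and return
    the relative delay (ns) with the highest coincidence count."""
    ts_b = np.sort(ts_b)
    delays = []
    for t in ts_a:
        # Collect all B-detections within the search window around each A-detection.
        lo = np.searchsorted(ts_b, t - search_window_ns)
        hi = np.searchsorted(ts_b, t + search_window_ns)
        delays.extend(ts_b[lo:hi] - t)
    # Histogram the pairwise delays; the coincidence peak sits at the offset.
    # In practice one would refine around the peak with sub-ns bins.
    bins = np.arange(-search_window_ns, search_window_ns + bin_ns, bin_ns)
    counts, edges = np.histogram(delays, bins=bins)
    peak = np.argmax(counts)
    return 0.5 * (edges[peak] + edges[peak + 1])

# Toy usage: site B's clock lags by 1234 ns; detector jitter is ~50 ps.
rng = np.random.default_rng(0)
emission = np.sort(rng.uniform(0, 1e9, 5000))        # shared emission times (ns)
ts_a = emission + rng.normal(0, 0.05, emission.size)
ts_b = emission + 1234.0 + rng.normal(0, 0.05, emission.size)
print(estimate_clock_offset(ts_a, ts_b))             # ≈ 1234 ns, within one bin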
A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild
In this work, we investigate the problem of lip-syncing a talking face video of an arbitrary identity to match a target speech segment. Current works excel at producing accurate lip movements on a static image or on videos of specific people seen during the training phase. However, they fail to accurately morph the lip movements of arbitrary identities in dynamic, unconstrained talking face videos, resulting in significant parts of the video being out-of-sync with the new audio. We identify the key reasons for this failure and resolve them by learning from a powerful lip-sync discriminator. Next, we propose new, rigorous evaluation benchmarks and metrics to accurately measure lip synchronization in unconstrained videos. Extensive quantitative evaluations on our challenging benchmarks show that the lip-sync accuracy of the videos generated by our Wav2Lip model is almost as good as that of real synced videos. We provide a demo video clearly showing the substantial impact of our Wav2Lip model and evaluation benchmarks on our website: cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild. The code and models are released at github.com/Rudrabha/Wav2Lip, and an interactive demo is available at bhaasha.iiit.ac.in/lipsync.
4 authors · Aug 23, 2020
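The "lip-sync expert" idea can be sketched compactly: embed an audio window and a face window, score their agreement with cosine similarity, and train with binary cross-entropy on in-sync versus off-sync pairs. The PyTorch sketch below conveys that shape under stated assumptions; the flat placeholder encoders and tensor sizes are illustrative stand-ins, not the paper's architecture.

# Minimal sketch of a SyncNet-style lip-sync expert loss (assumptions, not
# the released Wav2Lip code): cosine similarity between audio and face
# embeddings, supervised with BCE on sync labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LipSyncExpert(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        # Placeholder encoders; real models use 2D conv stacks over a
        # mel-spectrogram window and a window of lower-face crops.
        self.audio_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))
        self.face_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))

    def forward(self, mel_window, face_window):
        a = F.normalize(self.audio_encoder(mel_window), dim=-1)
        v = F.normalize(self.face_encoder(face_window), dim=-1)
        # Cosine similarity mapped to (0, 1): probability the pair is in sync.
        return (F.cosine_similarity(a, v, dim=-1) + 1) / 2

def sync_loss(prob_in_sync, is_synced):
    # BCE against ground-truth sync labels (1 = audio matches the lips).
    return F.binary_cross_entropy(prob_in_sync.clamp(1e-7, 1 - 1e-7), is_synced)

# Toy usage: random tensors stand in for a 5-frame face window and the
# corresponding mel-spectrogram slice.
expert = LipSyncExpert()
mel = torch.randn(8, 1, 80, 16)       # (batch, 1, mel bins, time steps)
faces = torch.randn(8, 15, 48, 96)    # (batch, 5 frames x 3 channels, H, W)
labels = torch.randint(0, 2, (8,)).float()
loss = sync_loss(expert(mel, faces), labels)
loss.backward()

Once trained and frozen, such an expert can supervise a generator: the generator's output frames are scored against the driving audio, and a low sync probability is penalized alongside the reconstruction loss.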
Multimodal Disease Progression Modeling via Spatiotemporal Disentanglement and Multiscale Alignment
Longitudinal multimodal data, including electronic health records (EHR) and sequential chest X-rays (CXRs), is critical for modeling disease progression, yet remains underutilized due to two key challenges: (1) redundancy in consecutive CXR sequences, where static anatomical regions dominate over clinically meaningful dynamics, and (2) temporal misalignment between sparse, irregular imaging and continuous EHR data. We introduce DiPro, a novel framework that addresses these challenges through region-aware disentanglement and multi-timescale alignment. First, we disentangle static (anatomy) and dynamic (pathology progression) features in sequential CXRs, prioritizing disease-relevant changes. Second, we hierarchically align these static and dynamic CXR features with asynchronous EHR data via local (pairwise interval-level) and global (full-sequence) synchronization to model coherent progression pathways. Extensive experiments on the MIMIC dataset demonstrate that DiPro effectively extracts temporal clinical dynamics and achieves state-of-the-art performance on both disease progression identification and general ICU prediction tasks.
5 authors · Oct 13, 2025
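The static/dynamic disentanglement can be made concrete with a toy example. The sketch below shows one plausible way to split sequential CXR features into a shared static component and per-scan dynamic residuals; StaticDynamicSplit and the reconstruction objective are assumptions for illustration, not DiPro's published design.

# Illustrative sketch (hypothetical, not DiPro's architecture): split a
# sequence of CXR features into a sequence-level static part and
# per-timestep dynamic residuals carrying progression signal.
import torch
import torch.nn as nn

class StaticDynamicSplit(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.static_proj = nn.Linear(feat_dim, feat_dim)
        self.dynamic_proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, cxr_feats):
        # cxr_feats: (batch, time, feat_dim) from a frozen image encoder.
        # Static: anatomy shared across the sequence (temporal mean).
        static = self.static_proj(cxr_feats.mean(dim=1, keepdim=True))
        # Dynamic: per-scan deviation from the shared anatomy.
        dynamic = self.dynamic_proj(cxr_feats - static)
        return static, dynamic

def reconstruction_loss(cxr_feats, static, dynamic):
    # Encourage static + dynamic to explain the original features, so the
    # dynamic branch only keeps what changes between consecutive scans.
    return torch.mean((static + dynamic - cxr_feats) ** 2)

feats = torch.randn(4, 6, 256)        # 4 patients, 6 CXRs each
model = StaticDynamicSplit()
static, dynamic = model(feats)
loss = reconstruction_loss(feats, static, dynamic)
loss.backward()

The dynamic residuals would then be the natural inputs to the interval-level and full-sequence alignment with EHR data that the abstract describes.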