CHAN: Cross-Modal Hybrid Attention Network for Temporal Language Grounding in Videos (IEEE Conference Publication)