
Data-driven Digital Lighting Design for Residential Indoor Spaces

Published: 17 March 2023

Abstract

Interior lighting design is technically complex and challenging, requiring both the professional knowledge and the aesthetic discipline of trained designers. This article presents a new digital lighting design framework for virtual interior scenes that allows novice users to automatically obtain lighting layouts and interior renderings with visually pleasing lighting effects. The framework uses neural networks to retrieve and learn the underlying design guidelines and principles of existing lighting designs, drawing on a newly constructed dataset of 6,000 3D interior scenes created by professional designers with dense annotations of lights. Given a furniture-populated 3D indoor scene as input, the framework performs lighting design in two stages: (1) lights are iteratively placed in the room; (2) the colors and intensities of the lights are optimized by an adversarial scheme, yielding lighting designs with aesthetically pleasing effects. Quantitative and qualitative experiments show that the framework effectively learns these guidelines and principles and generates lighting designs that are preferred over a rule-based baseline and comparable to those of professional human designers.
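To make the two-stage pipeline described above easier to picture, here is a purely illustrative PyTorch-style sketch. It is not the authors' code: every module name, tensor shape, and hyper-parameter below is a hypothetical placeholder, and the "render" is a stand-in scalar perturbation rather than a real differentiable renderer. It only shows the structure of stage 1 (iterative light placement with a learned stopping signal) and stage 2 (refining colors/intensities against a discriminator-style objective).

import torch
import torch.nn as nn

SCENE_FEAT = 128   # hypothetical per-scene feature size
LIGHT_PARAMS = 7   # hypothetical per-light parameters: xyz position, RGB color, intensity

class PlacementNet(nn.Module):
    """Stage 1 (illustrative): proposes the next light given the scene and the last placed light."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(SCENE_FEAT + LIGHT_PARAMS, 64), nn.ReLU(),
            nn.Linear(64, LIGHT_PARAMS + 1))

    def forward(self, scene_feat, last_light):
        out = self.mlp(torch.cat([scene_feat, last_light], dim=-1))
        # light parameters and a "stop placing lights" probability
        return out[..., :LIGHT_PARAMS], torch.sigmoid(out[..., LIGHT_PARAMS])

class LightingDiscriminator(nn.Module):
    """Stage 2 (illustrative): scores how plausible the lit result looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(SCENE_FEAT, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, render_feat):
        return self.net(render_feat)

def design_lighting(scene_feat, placer, disc, max_lights=8, opt_steps=50):
    # Stage 1: iteratively place lights until the placement network signals "stop".
    lights, last = [], torch.zeros(LIGHT_PARAMS)
    for _ in range(max_lights):
        params, stop = placer(scene_feat, last)
        lights.append(params)
        last = params.detach()
        if stop.item() > 0.5:
            break
    # Stage 2: refine colors/intensities against the discriminator (adversarial-style objective).
    colors = torch.stack([l[3:].detach().clone() for l in lights]).requires_grad_(True)
    optimizer = torch.optim.Adam([colors], lr=1e-2)
    for _ in range(opt_steps):
        render_feat = scene_feat + colors.mean()   # crude stand-in for a differentiable render
        loss = -disc(render_feat).mean()           # push the result toward "looks well designed"
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return lights, colors

if __name__ == "__main__":
    torch.manual_seed(0)
    lights, colors = design_lighting(torch.randn(SCENE_FEAT), PlacementNet(), LightingDiscriminator())
    print(f"placed {len(lights)} lights; refined color/intensity tensor: {tuple(colors.shape)}")

In the actual framework, stage 2 would be driven by rendered images of the lit scene and a learned critic rather than the toy feature perturbation used here; the sketch only mirrors the control flow the abstract describes.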


Supplemental Material

tog-22-0029-file003.mp4 (MP4, 86.7 MB)



  • Published in

    ACM Transactions on Graphics, Volume 42, Issue 3
    June 2023
    181 pages
    ISSN: 0730-0301
    EISSN: 1557-7368
    DOI: 10.1145/3579817

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 17 March 2023
    • Online AM: 31 January 2023
    • Accepted: 22 December 2022
    • Revised: 26 October 2022
    • Received: 15 May 2022
    Published in TOG Volume 42, Issue 3


    Qualifiers

    • research-article
  • Article Metrics

    • Downloads (last 12 months): 813
    • Downloads (last 6 weeks): 102

