Abstract
As data grow more complex and machine learning (ML) algorithms are applied to an ever-broader range of decision-making problems, traditional ML methods that minimize an unconstrained or simply constrained convex objective are becoming increasingly unsatisfactory. To address this challenge, recent ML research has sparked a paradigm shift in learning predictive models toward non-convex learning and heavily constrained learning. Non-Convex Learning (NCL) refers to a family of learning methods that optimize non-convex objectives. Heavily Constrained Learning (HCL) refers to a family of learning methods whose constraints (e.g., data-dependent functional constraints, non-convex constraints) are far more complicated than the simple norm constraints of conventional learning. This paradigm shift has already produced many promising outcomes: (i) non-convex deep learning has brought breakthroughs in learning representations from large-scale structured data (e.g., images, speech) (LeCun, Bengio, & Hinton, 2015; Krizhevsky, Sutskever, & Hinton, 2012; Amodei et al., 2016; Deng & Liu, 2018); (ii) non-convex regularizers (e.g., for enforcing sparsity or low rank) can be more effective than their convex counterparts for learning high-dimensional structured models (C.-H. Zhang & Zhang, 2012; J. Fan & Li, 2001; C.-H. Zhang, 2010; T. Zhang, 2010); (iii) constrained learning is being used to learn predictive models that satisfy various constraints in order to respect social norms such as fairness (B. E. Woodworth, Gunasekar, Ohannessian, & Srebro, 2017; Hardt, Price, Srebro, et al., 2016; Zafar, Valera, Gomez Rodriguez, & Gummadi, 2017; A. Agarwal, Beygelzimer, Dudík, Langford, & Wallach, 2018), to improve interpretability (Gupta et al., 2016; Canini, Cotter, Gupta, Fard, & Pfeifer, 2016; You, Ding, Canini, Pfeifer, & Gupta, 2017), and to enhance robustness (Globerson & Roweis, 2006; Sra, Nowozin, & Wright, 2011; T. Yang, Mahdavi, Jin, Zhang, & Zhou, 2012). Despite the great promise of these new learning paradigms, they also raise emerging challenges for the design of computationally efficient algorithms for big data and for the analysis of their statistical properties.
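To make the contrast between NCL and conventional convex learning concrete, the sketch below compares a convex ℓ1 (Lasso) penalty with the non-convex MCP penalty (C.-H. Zhang, 2010), each minimized by proximal gradient descent on a least-squares loss. This is a minimal illustrative sketch rather than an algorithm from the article; the toy data, step size, and hyperparameters (`lam`, `gamma`) are arbitrary choices for demonstration.

```python
# Illustrative sketch (not from the article): proximal gradient descent on
#   min_w (1/2n) ||X w - y||^2 + penalty(w),
# comparing the convex L1 penalty with the non-convex MCP penalty.
import numpy as np

def prox_l1(z, t):
    """Soft-thresholding: proximal operator of t * ||w||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_mcp(z, t, lam, gamma):
    """Proximal operator of the MCP penalty (assumes gamma > t)."""
    return np.where(
        np.abs(z) <= gamma * lam,
        np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0) / (1.0 - t / gamma),
        z,  # large entries are left unshrunk -> nearly unbiased estimates
    )

def prox_gradient(X, y, prox, steps=500):
    n, d = X.shape
    eta = n / np.linalg.norm(X, 2) ** 2   # 1/L step for the smooth part
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n       # gradient of the least-squares loss
        w = prox(w - eta * grad, eta)      # proximal (possibly non-convex) step
    return w

# Toy sparse-recovery problem with assumed sizes and hyperparameters.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:5] = 3.0
y = X @ w_true + 0.1 * rng.standard_normal(200)

lam, gamma = 0.1, 3.0
w_l1 = prox_gradient(X, y, lambda z, t: prox_l1(z, t * lam))
w_mcp = prox_gradient(X, y, lambda z, t: prox_mcp(z, t, lam, gamma))
print("L1  bias on true support:", np.round(w_l1[:5] - w_true[:5], 2))
print("MCP bias on true support:", np.round(w_mcp[:5] - w_true[:5], 2))
```

Under these assumptions, MCP leaves large coefficients essentially unshrunk while the ℓ1 penalty biases them toward zero, which illustrates why non-convex regularizers can outperform their convex counterparts in the high-dimensional settings cited above.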
References
- Agarwal, A., Beygelzimer, A., Dudík, M., Langford, J., & Wallach, H. (2018). A reductions approach to fair classification. In Proceedings of the 35th International Conference on Machine Learning (ICML).
- Agarwal, N., Allen-Zhu, Z., Bullins, B., Hazan, E., & Ma, T. (2017). Finding approximate local minima faster than gradient descent. In ACM Symposium on Theory of Computing (STOC) (pp. 1195--1199).
- Allen-Zhu, Z., Li, Y., & Song, Z. (2018). A convergence theory for deep learning via over-parameterization. CoRR, abs/1811.03962.
- Allen-Zhu, Z. (2017). Natasha 2: Faster non-convex optimization than SGD. CoRR, abs/1708.08694.
- Amodei, D., Ananthanarayanan, S., Anubhai, R., Bai, J., Battenberg, E., Case, C., ... Zhu, Z. (2016). Deep Speech 2: End-to-end speech recognition in English and Mandarin. In Proceedings of the 33rd International Conference on Machine Learning (ICML) (pp. 173--182).
- An, N. T., & Nam, N. M. (2017). Convergence analysis of a proximal point algorithm for minimizing differences of functions. Optimization, 66(1), 129--147.
- Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein generative adversarial networks. In International Conference on Machine Learning (ICML) (pp. 214--223).
- Arora, S., Cohen, N., & Hazan, E. (2018). On the optimization of deep networks: Implicit acceleration by overparameterization. arXiv preprint arXiv:1802.06509.
- Attouch, H., Bolte, J., & Svaiter, B. F. (2013). Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Mathematical Programming, 137(1), 91--129.
- Belagiannis, V., Rupprecht, C., Carneiro, G., & Navab, N. (2015). Robust optimization for deep regression. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) (pp. 2830--2838).
- Bolte, J., Sabach, S., & Teboulle, M. (2014). Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Mathematical Programming, 146, 459--494.
- Bot, R. I., Csetnek, E. R., & László, S. C. (2016). An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions. EURO Journal on Computational Optimization, 4(1), 3--25.
- Candès, E. J., Wakin, M. B., & Boyd, S. P. (2008). Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier Analysis and Applications, 14(5), 877--905.
- Canini, K., Cotter, A., Gupta, M. R., Fard, M. M., & Pfeifer, J. (2016). Fast and flexible monotonic functions with ensembles of lattices. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS) (pp. 2927--2935).
- Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP) (pp. 39--57).
- Carmon, Y., Duchi, J. C., Hinder, O., & Sidford, A. (2016). Accelerated methods for non-convex optimization. CoRR, abs/1611.00756.
- Cartis, C., Gould, N. I. M., & Toint, P. L. (2011a). Adaptive cubic regularisation methods for unconstrained optimization. Part II: Worst-case function- and derivative-evaluation complexity. Mathematical Programming, 130(2), 295--319.
- Cartis, C., Gould, N. I. M., & Toint, P. L. (2011b). Adaptive cubic regularisation methods for unconstrained optimization. Part I: Motivation, convergence and numerical results. Mathematical Programming, 127(2), 245--295.
- Chartrand, R. (2012). Nonconvex splitting for regularized low-rank + sparse decomposition. IEEE Transactions on Signal Processing, 60(11), 5810--5819.
- Chartrand, R., & Yin, W. (2016). Nonconvex sparse regularization and splitting algorithms. In Splitting methods in communication, imaging, science, and engineering (pp. 237--249). Springer.
- Chen, J., & Gu, Q. (2018). Closing the generalization gap of adaptive gradient methods in training deep neural networks. arXiv preprint arXiv:1806.06763.
- Chen, Z., Yuan, Z., Yi, J., Zhou, B., Chen, E., & Yang, T. (2019). Universal stage-wise learning for non-convex problems with convergence on averaged solutions. In 7th International Conference on Learning Representations (ICLR).
- Cherukuri, A., Gharesifard, B., & Cortes, J. (2017). Saddle-point dynamics: Conditions for asymptotic stability of saddle points. SIAM Journal on Control and Optimization, 55(1), 486--511.
- Cisse, M., Bojanowski, P., Grave, E., Dauphin, Y., & Usunier, N. (2017). Parseval networks: Improving robustness to adversarial examples. In Proceedings of the 34th International Conference on Machine Learning (ICML) (pp. 854--863).
- Daskalakis, C., Ilyas, A., Syrgkanis, V., & Zeng, H. (2017). Training GANs with optimism. CoRR, abs/1711.00141.
- Davis, D., & Drusvyatskiy, D. (2018). Stochastic subgradient method converges at the rate O(k^{-1/4}) on weakly convex functions. arXiv preprint arXiv:1802.02988.
- Deng, L., & Liu, Y. (2018). Deep learning in natural language processing. Springer.
- Du, S. S., Zhai, X., Poczos, B., & Singh, A. (2018). Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054.
- Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12, 2121--2159.
- Fan, J., & Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456), 1348--1360.
- Fan, Y., Lyu, S., Ying, Y., & Hu, B. (2017). Learning with average top-k loss. In Advances in Neural Information Processing Systems 30 (NIPS) (pp. 497--505).
- Ghadimi, S., & Lan, G. (2013). Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4), 2341--2368.
- Globerson, A., & Roweis, S. (2006). Nightmare at test time: Robust learning by feature deletion. In Proceedings of the 23rd International Conference on Machine Learning (ICML) (pp. 353--360).
- Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS) (pp. 2672--2680).
- Gouk, H., Frank, E., Pfahringer, B., & Cree, M. (2018). Regularisation of neural networks by enforcing Lipschitz continuity. arXiv preprint arXiv:1804.04368.
- Grnarova, P., Levy, K. Y., Lucchi, A., Hofmann, T., & Krause, A. (2017). An online learning approach to generative adversarial networks. CoRR, abs/1706.03269.
- Gupta, M. R., Cotter, A., Pfeifer, J., Voevodski, K., Canini, K. R., Mangylov, A., ... Esbroeck, A. V. (2016). Monotonic calibrated interpolated look-up tables. Journal of Machine Learning Research (JMLR), 17, 109:1--109:47.
- Hardt, M., Price, E., Srebro, N., et al. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems (NIPS) (pp. 3315--3323).
- He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 770--778).
- Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems 30 (NIPS) (pp. 6629--6640).
- Hillar, C. J., & Lim, L.-H. (2013). Most tensor problems are NP-hard. Journal of the ACM, 60(6), 45:1--45:39.
- Khalaf, W., Astorino, A., d'Alessandro, P., & Gaudioso, M. (2017). A DC optimization-based clustering technique for edge detection. Optimization Letters, 11(3), 627--640.
- Kingma, D. P., & Ba, J. (2015). Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1412.6980
- Kiryo, R., Niu, G., du Plessis, M. C., & Sugiyama, M. (2017). Positive-unlabeled learning with non-negative risk estimator. In Advances in Neural Information Processing Systems 30 (NIPS) (pp. 1675--1685).
- Kohler, J. M., & Lucchi, A. (2017). Sub-sampled cubic regularization for non-convex optimization. In Proceedings of the International Conference on Machine Learning (ICML) (pp. 1895--1904).
- Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS) (pp. 1106--1114).
- LeCun, Y., Bengio, Y., & Hinton, G. E. (2015). Deep learning. Nature, 521(7553), 436--444.
- Le Thi, H. A., & Dinh, T. P. (2014). DC programming in communication systems: Challenging problems and methods. Vietnam Journal of Computer Science, 1(1), 15--28.
- Le Thi, H. A., Dinh, T. P., & Belghiti, M. (2014). DCA based algorithms for multiple sequence alignment (MSA). Central European Journal of Operations Research, 22(3), 501--524.
- Li, H., & Lin, Z. (2015). Accelerated proximal gradient methods for nonconvex programming. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS) (pp. 379--387).
- Li, X., & Orabona, F. (2018). On the convergence of stochastic gradient descent with adaptive stepsizes. arXiv preprint arXiv:1805.08114.
- Li, Y., & Liang, Y. (2018). Learning overparameterized neural networks via stochastic gradient descent on structured data. In Advances in Neural Information Processing Systems (NeurIPS) (pp. 8157--8166).
- Lin, Q., Liu, M., Rafique, H., & Yang, T. (2018). Solving weakly-convex-weakly-concave saddle-point problems as weakly-monotone variational inequality. arXiv preprint arXiv:1810.10207.
- Lin, Q., Nadarajah, S., Soheili, N., & Yang, T. (2019). A data efficient and feasible level set method for stochastic convex optimization with expectation constraints. CoRR, abs/1908.03077.
- Liu, M., & Yang, T. (2017a). On noisy negative curvature descent: Competing with gradient descent for faster non-convex optimization. CoRR, abs/1709.08571.
- Liu, M., & Yang, T. (2017b). Stochastic non-convex optimization with strong high probability second-order convergence. CoRR, abs/1710.09447.
- Liu, T., Pong, T. K., & Takeda, A. (2018). A successive difference-of-convex approximation method for a class of nonconvex nonsmooth optimization problems. Mathematical Programming.
- Luo, L., Xiong, Y., Liu, Y., & Sun, X. (2019). Adaptive gradient methods with dynamic bound of learning rate. arXiv preprint arXiv:1902.09843.
- Ma, R., Lin, Q., & Yang, T. (2019). Proximally constrained methods for weakly convex optimization with weakly convex constraints. arXiv preprint arXiv:1908.01871.
- Mahdavi, M., Yang, T., Jin, R., & Zhu, S. (2012). Stochastic gradient descent with only one projection. In Advances in Neural Information Processing Systems (NIPS) (pp. 503--511).
- Nagarajan, V., & Kolter, J. Z. (2017). Gradient descent GAN optimization is locally stable. In Advances in Neural Information Processing Systems 30 (NIPS) (pp. 5591--5600).
- Namkoong, H., & Duchi, J. C. (2016). Stochastic gradient methods for distributionally robust optimization with f-divergences. In Advances in Neural Information Processing Systems (NIPS) (pp. 2208--2216).
- Namkoong, H., & Duchi, J. C. (2017). Variance-based regularization with convex objectives. In Advances in Neural Information Processing Systems (NIPS) (pp. 2971--2980).
- Nesterov, Y., & Polyak, B. T. (2006). Cubic regularization of Newton method and its global performance. Mathematical Programming, 108(1), 177--205.
- Nitanda, A., & Suzuki, T. (2017). Stochastic difference of convex algorithm and its application to training deep Boltzmann machines. In Artificial Intelligence and Statistics (AISTATS) (pp. 470--478).
- Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
- Rafique, H., Liu, M., Lin, Q., & Yang, T. (2018). Non-convex min-max optimization: Provable algorithms and applications in machine learning. CoRR, abs/1810.02060.
- Ravi, S. N., Dinh, T., Lokhande, V. S. R., & Singh, V. (2018). Constrained deep learning using conditional gradient and applications in computer vision. arXiv preprint arXiv:1803.06453.
- Real, E., Aggarwal, A., Huang, Y., & Le, Q. V. (2019). Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, pp. 4780--4789).
- Reddi, S. J., Zaheer, M., Sra, S., Poczos, B., Bach, F., Salakhutdinov, R., & Smola, A. J. (2017). A generic approach for escaping saddle points. arXiv preprint arXiv:1709.01434.
- Rigollet, P., & Tong, X. (2011). Neyman-Pearson classification, convexity and stochastic constraints. Journal of Machine Learning Research, 12, 2831--2855.
- Royer, C. W., & Wright, S. J. (2017). Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization. CoRR, abs/1706.03131.
- Sra, S., Nowozin, S., & Wright, S. J. (2011). Optimization for machine learning. The MIT Press.
- Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946.
- Thi, H. A. L., Le, H. M., Phan, D. N., & Tran, B. (2017). Stochastic DCA for the large-sum of non-convex functions problem and its application to group variable selection in classification. In Proceedings of the 34th International Conference on Machine Learning (ICML) (pp. 3394--3403).
- Tian, Y., Pei, K., Jana, S., & Ray, B. (2018). DeepTest: Automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th International Conference on Software Engineering (ICSE) (pp. 303--314).
- Wen, F., Chu, L., Liu, P., & Qiu, R. C. (2018). A survey on nonconvex regularization-based sparse and low-rank recovery in signal processing, statistics, and machine learning. IEEE Access, 6, 69883--69906.
- Woodworth, B., Gunasekar, S., Ohannessian, M. I., & Srebro, N. (2017). Learning non-discriminatory predictors. arXiv preprint arXiv:1702.06081.
- Woodworth, B. E., Gunasekar, S., Ohannessian, M. I., & Srebro, N. (2017). Learning non-discriminatory predictors. In Proceedings of the 30th Conference on Learning Theory (COLT) (pp. 1920--1953).
- Wu, Y., & Liu, Y. (2007). Robust truncated hinge loss support vector machines. Journal of the American Statistical Association, 102(479), 974--983.
- Xu, P., Roosta-Khorasani, F., & Mahoney, M. W. (2017). Newton-type methods for non-convex optimization under inexact Hessian information. CoRR, abs/1708.07164.
- Xu, Y., Jin, R., & Yang, T. (2019). Stochastic proximal gradient methods for non-smooth non-convex regularized problems. arXiv preprint arXiv:1902.07672.
- Xu, Y., Lin, Q., & Yang, T. (2017). Stochastic convex optimization: Faster local growth implies faster global convergence. In Proceedings of the 34th International Conference on Machine Learning (ICML) (pp. 3821--3830).
- Xu, Y., Qi, Q., Lin, Q., Jin, R., & Yang, T. (2019). Stochastic optimization for DC functions and non-smooth non-convex regularizers with non-asymptotic convergence. In Proceedings of the 36th International Conference on Machine Learning (ICML) (pp. 6942--6951).
- Xu, Y., Rong, J., & Yang, T. (2018). First-order stochastic algorithms for escaping from saddle points in almost linear time. In Advances in Neural Information Processing Systems (NeurIPS) (pp. 5530--5540).
- Xu, Y., Zhu, S., Yang, S., Zhang, C., Jin, R., & Yang, T. (2019). Learning with non-convex truncated losses by SGD. In Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI) (p. 244).
- Yan, Y., Yang, T., Li, Z., Lin, Q., & Yang, Y. (2018). A unified analysis of stochastic momentum methods for deep learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI) (pp. 2955--2961).
- Yang, L. (2018). Proximal gradient method with extrapolation and line search for a class of nonconvex and nonsmooth problems. CoRR, abs/1711.06831.
- Yang, T., Lin, Q., & Zhang, L. (2017). A richer theory of convex constrained optimization with reduced projections and improved rates. In Proceedings of the 34th International Conference on Machine Learning (ICML).
- Yang, T., Mahdavi, M., Jin, R., Zhang, L., & Zhou, Y. (2012). Multiple kernel learning from noisy labels by stochastic programming. In Proceedings of the International Conference on Machine Learning (ICML) (pp. 233--240).
- You, S., Ding, D., Canini, K. R., Pfeifer, J., & Gupta, M. R. (2017). Deep lattice networks and partial monotonic functions. In Advances in Neural Information Processing Systems 30 (NIPS) (pp. 2985--2993).
- Yu, Y., Zheng, X., Marchetti-Bowick, M., & Xing, E. P. (2015). Minimizing nonconvex non-separable functions. In The 17th International Conference on Artificial Intelligence and Statistics (AISTATS).
- Zafar, M. B., Valera, I., Gomez Rodriguez, M., & Gummadi, K. P. (2017). Fairness beyond disparate treatment and disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web (WWW) (pp. 1171--1180).
- Zaheer, M., Reddi, S., Sachan, D., Kale, S., & Kumar, S. (2018). Adaptive methods for nonconvex optimization. In Advances in Neural Information Processing Systems 31 (NeurIPS) (pp. 9793--9803). Retrieved from http://papers.nips.cc/paper/8186-adaptive-methods-for-nonconvex-optimization.pdf
- Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38, 894--942.
- Zhang, C.-H., & Zhang, T. (2012). A general theory of concave regularization for high-dimensional sparse estimation problems. Statistical Science, 27(4), 576--593.
- Zhang, S., & Xin, J. (2014). Minimization of transformed L1 penalty: Theory, difference of convex function algorithm, and robust application in compressed sensing. CoRR, abs/1411.5735.
- Zhang, T. (2010). Analysis of multistage convex relaxation for sparse regularization. Journal of Machine Learning Research, 11, 1081--1107.
- Zhong, W., & Kwok, J. T. (2014). Gradient descent with proximal average for nonconvex and composite regularization. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (pp. 2206--2212).
- Zhou, D., Tang, Y., Yang, Z., Cao, Y., & Gu, Q. (2018). On the convergence of adaptive gradient methods for nonconvex optimization. arXiv preprint arXiv:1808.05671.
- Zhu, D., Li, Z., Wang, X., Gong, B., & Yang, T. (2019). A robust zero-sum game framework for pool-based active learning. In The 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) (pp. 517--526).
- Zou, D., Cao, Y., Zhou, D., & Gu, Q. (2018). Stochastic gradient descent optimizes over-parameterized deep ReLU networks. CoRR, abs/1811.08888.