Abstract
Deep neural networks are a family of statistical learning models, inspired by biological neural networks, used to estimate functions that can depend on a large number of inputs and are generally unknown. In this paper we build upon the work of Katz, Bommarito and Blackman (2014), who used extremely randomized trees and feature engineering to predict the behaviour of the Supreme Court of the United States. We explore several machine learning techniques, including SVMs and shallow neural networks, but attain state-of-the-art accuracy with deep neural networks trained using momentum methods and regularized with dropout. Using only data available prior to each decision, we predict outcomes with 70.4 percent accuracy across 7,700 cases comprising nearly 70,000 justice votes. Our model is simple yet robust, trains on far fewer features, still provides excellent accuracy, and, most importantly, requires no feature engineering.
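The two training ingredients the abstract names, dropout regularization and momentum-based gradient descent, can be sketched together in a minimal NumPy example. This is an illustrative toy only: the network size, hyperparameters, and synthetic data below are assumptions for demonstration, not the paper's actual architecture or the Supreme Court Database features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a case/justice feature matrix; sizes are illustrative.
n, d, h = 512, 20, 32
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)  # binary outcome (e.g. reverse vs. affirm)

# One-hidden-layer ReLU network.
W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=h);      b2 = np.zeros(1)
params = [W1, b1, W2, b2]
velocity = [np.zeros_like(p) for p in params]

def forward(X, train=True, p_drop=0.5):
    z1 = X @ W1 + b1
    a1 = np.maximum(z1, 0.0)                       # ReLU activation
    mask = np.ones_like(a1)
    if train:                                      # "inverted" dropout: scale at train time
        mask = (rng.random(a1.shape) > p_drop) / (1.0 - p_drop)
    a1 = a1 * mask
    prob = 1.0 / (1.0 + np.exp(-(a1 @ W2 + b2[0])))  # sigmoid output
    return z1, a1, mask, prob

lr, mu = 0.1, 0.9
losses = []
for epoch in range(200):
    z1, a1, mask, prob = forward(X)
    losses.append(-np.mean(y * np.log(prob + 1e-9) + (1 - y) * np.log(1 - prob + 1e-9)))
    # Backpropagation for the binary cross-entropy loss.
    dlogits = (prob - y) / n
    dW2 = a1.T @ dlogits
    db2 = np.array([dlogits.sum()])
    dz1 = dlogits[:, None] * W2[None, :] * mask * (z1 > 0)
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)
    # Classical momentum update; Nesterov's accelerated variant differs
    # only in where the gradient is evaluated.
    for p, v, g in zip(params, velocity, [dW1, db1, dW2, db2]):
        v *= mu
        v -= lr * g
        p += v
```

At evaluation time, dropout is disabled (`train=False`) and no rescaling is needed, because the inverted-dropout mask already compensated during training.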
References
Katz, D.M., Bommarito, M.J., Blackman, J.: Predicting the Behavior of the Supreme Court of the United States: A General Approach. Available at SSRN 2463244
Haykin, S.: Neural Networks: A Comprehensive Foundation. Prentice Hall (2004)
Ruger, T.W., Kim, P.T., Martin, A.D., Quinn, K.M.: The supreme court forecasting project: legal and political science approaches to predicting supreme court decisionmaking. Columbia Law Rev. 104(4), 1150–1210 (2004)
Guimera, R., Sales-Pardo, M.: Justice blocks and predictability of us supreme court votes. PloS one 6(11), e27188 (2011)
Martin, A.D., Quinn, K.M., Ruger, T.W., Kim, P.T.: Competing approaches to predicting supreme court decision making. Perspect. Polit. 2(04), 761–767 (2004)
Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I., Bergeron, A., Bouchard, N., Warde-Farley, D., Bengio, Y.: Theano: new features and speed improvements. In: NIPS 2012 deep learning workshop (2012)
Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., Bengio, Y.: Theano: a CPU and GPU math expression compiler. In: Proceedings of the Python for Scientific Computing Conference (SciPy), 30 June–3 July, Austin, TX (2010)
Tieleman, T., Hinton, G.: Lecture 6.5-rmsprop: divide the gradient by a running average of its recent magnitude. In: COURSERA: Neural Networks for Machine Learning, vol. 4 (2012)
Dauphin, Y.N., de Vries, H., Chung, J., Bengio, Y.: RMSProp and equilibrated adaptive learning rates for non-convex optimization (2015). arXiv preprint arXiv:1502.04390
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
Nesterov, Y.: A method of solving a convex programming problem with convergence rate O(1/k²). Sov. Math. Dokl. 27, 372–376 (1983)
Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer, New York (1995)
Spaeth, H.J., Epstein, L., Martin, A.D., Segal, J.A., Ruger, T.J., Benesh, S.C.: Supreme Court Database, Version 2014 Release 01. http://Supremecourtdatabase.org
Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. In: International Conference on Artificial Intelligence and Statistics, pp. 315–323 (2011)
LeCun, Y.A., Bottou, L., Orr, G.B., Müller, K.-R.: Efficient BackProp. In: Orr, G.B., Müller, K.-R. (eds.) NIPS-WS 1996. LNCS, vol. 1524, pp. 9–48. Springer, Heidelberg (1998)
Copyright information
© 2015 Springer International Publishing Switzerland
About this paper
Cite this paper
Sharma, R.D., Mittal, S., Tripathi, S., Acharya, S. (2015). Using Modern Neural Networks to Predict the Decisions of Supreme Court of the United States with State-of-the-Art Accuracy. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science(), vol 9490. Springer, Cham. https://doi.org/10.1007/978-3-319-26535-3_54
DOI: https://doi.org/10.1007/978-3-319-26535-3_54
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-26534-6
Online ISBN: 978-3-319-26535-3
eBook Packages: Computer Science (R0)