After taking note of the conceptual fact that robots may well carry humans inside them, and more specifically that modern AI-infused cars, jets, spaceships, etc. can be viewed as such robots, we present a case study in which inconsistent attitude measurements resulted in the tragic crash of such a jet in Sweden and the death of both pilots. After setting out desiderata for an automated defeasible inductive reasoner able to suitably prevent such tragedies, we formalize the scenario in a first-order defeasible reasoner, OSCAR, and find that it can quickly generate a partial solution to the dilemma the pilots could not conquer. We then note and address the shortcomings of OSCAR relative to the desiderata, and adumbrate a solution supplied by a more expressive reasoner based on an inductive defeasible multi-operator cognitive calculus (ℐ𝒟𝒞ℰ𝒞) that is inspired by a merely deductive (monotonic) precursor (𝒟𝒞ℰ𝒞). Our solution in this calculus exploits both the social and cultural aspects of the jet/robot we suggest be engineered in the future. After describing our solution, we offer some remarks on related prior work, present and rebut two objections, and then wrap up with a brief conclusion.
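The core pattern the abstract describes, a default conclusion from an attitude source that can be defeated by conflicting independent evidence, can be illustrated with a toy sketch. This is purely a hypothetical illustration, not the paper's OSCAR or ℐ𝒟𝒞ℰ𝒞 formalization; the `defeasible_attitude` function, its sensor names, and the 5-degree tolerance are all invented for the example:

```python
# Toy sketch of defeasible inference over attitude readings (illustrative only;
# not the paper's formalization). The default inference "accept the primary
# instrument's reading" holds unless a defeater fires: another independent
# source disagreeing beyond a tolerance.

def defeasible_attitude(readings):
    """readings: dict mapping source name -> pitch reading in degrees.

    Returns (conclusion, defeated): `defeated` is True when the default
    inference from the primary source is blocked by conflicting evidence,
    in which case we fall back to the median reading as a cross-check.
    """
    values = list(readings.values())
    primary = values[0]  # default (defeasible) conclusion
    # Defeater: any independent source disagreeing by more than 5 degrees.
    conflict = any(abs(v - primary) > 5.0 for v in values[1:])
    if conflict:
        values.sort()
        return values[len(values) // 2], True  # defeated: use the median
    return primary, False  # default stands

# Consistent sensors: the default inference stands.
print(defeasible_attitude({"primary": 2.0, "standby": 2.5}))  # (2.0, False)
# Inconsistent sensors, as in the crash scenario: the default is defeated.
print(defeasible_attitude({"primary": -30.0, "standby": 2.0, "inertial": 1.5}))  # (1.5, True)
```

A genuine defeasible reasoner such as OSCAR builds and retracts full argument structures rather than applying a numeric fallback; the sketch only shows the defeat-the-default shape of the inference.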