
Intelligent agents and liability: is it a doctrinal problem or merely a problem of explanation?

Published in: Artificial Intelligence and Law

Abstract

The question of liability arising from the use of intelligent agents is far from simple, and cannot be answered satisfactorily by deeming the human user automatically responsible for every action and mistake of his agent. This paper is therefore concerned with the significant difficulties that might arise in this regard, especially as the technology behind software agents evolves or comes to be used on a larger scale. Furthermore, the paper considers whether responsibility can be shared with these agents, and examines the main objections to treating such agents as responsible entities. It is not intended to provide a final answer to every question and challenge in this regard, but rather to identify the main components of the problem and to offer some perspectives on how to deal with it.


Notes

  1. For more information, see Russell and Norvig (1995).

  2. They are also technically capable of representing buyers and sellers, negotiating with human parties or even with each other, and concluding transactions without any human intervention or knowledge during the conclusion of transactions. For example, see Tete-a-Tete (http://ecommerce.media.mit.edu/tete-a-tete/), an online negotiation system in which price and other terms of transactions are handled entirely by software agents.

  3. See, for example, AuctionBot, where the user specifies a number of parameters, after which it is up to the agent to manage the auction, monitor price changes, interact with other bidding agents, and compete autonomously in the marketplace for the best bids. Unlike popular online auction sites such as eBay’s AuctionWeb, which require consumers to manage their own negotiation strategies over an extended period of time, AuctionBot can perform tasks that require immediate response to events, with no delay, while its user is away from a Web interaction. For more information, see R. Guttman, et al., ‘Agent-mediated electronic commerce: A survey’, Knowledge Engineering Review, vol. 13 (2), 1998. See also P. R. Wurman, et al., ‘The Michigan Internet AuctionBot: A Configurable Auction Server for Human and Software Agents’, in Proceedings of the Second International Conference on Autonomous Agents (ICAA-98), New York, ACM Press, 1998, pp. 301–308.

  4. Consider, for example, the recent cases in which Argos.com mistakenly offered Sony televisions for £2.99, Amazon.co.uk erroneously listed the HP iPAQ pocket PC at £7.32 instead of £287 each, and Kodak advertised digital cameras on its website at £100 instead of £329 each.

  5. A good example of such analyses is the “Guide to Enactment” accompanying the UNCITRAL Model Law, which provides that “[d]ata messages that are generated automatically by computers without human intervention should be regarded as ‘originating’ from the legal entity on behalf of which the computer is operated”.

  6. Davis (1998), p. 1148.

  7. Johnson (1985).

  8. Such as users, designers, distributors, administrators of the platform, trusted third parties, and owners of the servers.

  9. Would it be fair to assign liability to the human user when that user is far removed from the transactional environment, is no longer aware of the form and structure of the software agent, and has no awareness of the agent’s decision-making processes?

  10. Given that some intelligent agents may have mutated from the original version written by the programmer, or may persist as a result of self-programming, placing the human in the causal chain of liability becomes both difficult and unduly harsh.

  11. Karnow (1996), p. 189.

  12. For more information, see Bechtel (1985).

  13. Deborah G. Johnson, supra note 7, p. 55.

  14. W. Bechtel, supra note 12, p. 305.

  15. Ibid.

  16. For more information, see Dennett (1984).

  17. This becomes more acute if we take into account the fact that intelligent agents can modify their code, and even create new instructions.

  18. Allen and Widdison (1996).

  19. P. Hayes, et al., ‘Human Reasoning about Artificial Intelligence’, in E. Dietrich (ed.), Thinking Computers and Virtual Persons (San Diego, CA: Academic Press, 1994), p. 333.

  20. Kerr (1999).

  21. From this perspective, Karnow proposed a scheme akin to the registration system for companies, under which a software agent would be submitted to certification procedures in order to guarantee coverage for risks arising out of its use. Such a system has been criticised, however, on the grounds that it does not completely solve the problem of identification, and has been seen as an unnecessary expense that would be unwelcome and superfluous to the needs of those engaging in e-commerce. For more information, see Karnow, supra note 11.

  22. A person owes a duty of care not to injure those who it can be reasonably foreseen would be affected by his acts or omissions. However, there is still a difficulty in determining what is reasonably foreseeable and what is not. The term “reasonably foreseeable” can be construed and interpreted broadly, and the scope of the “duty of care” is still not completely clear. For an excellent discussion of “reasonable foreseeability” and “duty of care”, see Donoghue v. Stevenson [1932] AC 562, which concerned a decomposed snail found in a bottle of ginger beer. This case raised the question of how far the law of negligence extends, and of the extent to which an action or harm can be considered reasonably foreseeable. Its practical effect was to confirm that a manufacturer of products owes a duty to the consumer (end-user) to take reasonable care to prevent any damage or injury to the consumer arising from the product. The other point this case made clear is that there is no need for a contract between plaintiff and defendant for liability in tort to arise.

  23. See, for example, Cooper v. Horn, 448 S.E.2d 403 (Va.1994). If a flood is reasonably foreseeable, then the law imposes liability on the builders of the dam that fails because it was inadequately constructed and was thus unable to withstand heavy rainfall.

  24. C. Karnow, supra note 11, p. 179.

  25. For more information, see G. Sartor, ‘Intentional concepts and the legal discipline of software agents’, in J. Pitt (ed.), Open Agent Societies: Normative Specifications in Multi-Agent Systems (Chichester: John Wiley & Sons Inc, 2003).

  26. T. Allen and R. Widdison, supra note 18, p. 46.

  27. Giovanni Sartor, ‘Agents in Cyberlaw’, Workshop on the Law of Electronic Agents (LEA2002), available at http://www.cirfid.unibo.it/~agsw/lea02/pp/Sartor.pdf, last accessed 29 March 2004.

  28. Most philosophers and commentators deny the possibility of computers being subjects of responsibility. See, for example, Jordan (1963).

  29. See, for example, State Farm Mutual Auto. Ins. Co. v. Brockhurst, 453 F.2d 533 (10th Cir. 1972). In this case, the court ruled that the insurance company was bound by the contract formed by its computer (an insurance renewal), since the computer only operated as programmed by the company.

  30. W. Bechtel, supra note 12, p. 305.

  31. Ibid, p. 297.

  32. Leon E. Wein, ‘The responsibility of intelligent artefacts: toward an automated jurisprudence’, Harvard Journal of Law and Technology, Vol.6, 1992, p. 116.

  33. See Lopez v. McDonald's, 238 Cal. Rptr. 436, 445–446 (Cal. Ct. App. 1987). In this case, it was held that McDonald's owed no duty to the plaintiffs and was not liable for their deaths, which were caused by an unforeseeable mass murder assault at its restaurant.

  34. See Beard v London General Omnibus Co [1900] 2 QB 530, in which the employer of a bus conductor who, in the absence of the driver, negligently drove the bus himself was held not vicariously liable. See also Twine v Bean's Express Ltd [1946] 1 All ER 202, in which a hitchhiker who had been given a lift contrary to express instructions was fatally injured. In this case, it was held that the employer was not vicariously liable, since the servant was doing something totally outside the scope of his employment, namely, giving a lift to a person who had no right whatsoever to be there.

  35. See Ready Mixed Concrete (South East) Ltd v Minister of Pensions and National Insurance [1968] 2 QB 497, in which it was held that three conditions must be fulfilled for a contract of service to exist. First, the servant agrees, in consideration of a wage or other remuneration, to provide his own work and skill in the performance of some service for his master; secondly, he agrees, expressly or impliedly, that in the performance of that service he will be subject to the other's control in a sufficient degree to make that other master; thirdly, the other provisions of the contract are consistent with its being a contract of service.

  36. Emily M. Weitzenboeck, ‘Electronic Agents and the Formation of Contracts’, International Journal of Law and Information Technology, Vol. 9. Issue 3, 2001, p. 209.

  37. W. Bechtel, supra note 12, p. 297.

  38. See, for example, Kurzweil (1999). See also Moravec (1999).

References

Cases

  • Beard v. London General Omnibus Co [1900] 2 QB 530

  • Cooper v. Horn, 448 S.E.2d 403 (Va. 1994)

  • Donoghue v. Stevenson [1932] AC 562

  • Lopez v. McDonald’s, 238 Cal. Rptr. 436, 445–446 (Cal. Ct. App. 1987)

  • Ready Mixed Concrete (South East) Ltd v. Minister of Pensions and National Insurance [1968] 2 QB 497

  • State Farm Mutual Auto. Ins. Co. v. Brockhurst, 453 F.2d 533 (10th Cir. 1972)

  • Twine v. Bean’s Express Ltd [1946] 1 All ER 202

Books

  • Johnson DG (1985) Computer ethics. Prentice Hall, Englewood Cliffs

  • Kurzweil R (1999) The age of spiritual machines. Viking Press, New York

  • Moravec H (1999) Robot: mere machine to transcendent mind. Oxford University Press, New York

  • Russell SJ, Norvig P (1995) Artificial intelligence: a modern approach. Prentice Hall, New Jersey

Articles

  • Allen T, Widdison R (1996) Can computers make contracts? Harv J Law Technol 9:25

  • Bechtel W (1985) Attributing responsibility to computer systems. Metaphilosophy 16(4):296–306

  • Davis JR (1998) On self-enforcing contracts, the right to hack, and willfully ignorant agents. Berkeley Technol Law J 13

  • Dennett DC (1984) I could not have done otherwise—so what? J Philos 81(10):599

  • Jordan N (1963) Allocation of functions between man and machines in automated systems. J Appl Psychol 47(3):161–165

  • Karnow CEA (1996) Liability for distributed artificial intelligences. Berkeley Technol Law J 11:147

  • Kerr IR (1999) Spirits in the material world: intelligent agents as intermediaries in electronic commerce. Dalhousie Law J 22(2):217–218

  • Sartor G (2003) Intentional concepts and the legal discipline of software agents. In: Pitt J (ed) Open agent societies: normative specifications in multi-agent systems. Wiley, Chichester

  • Wein LE (1992) The responsibility of intelligent artefacts: toward an automated jurisprudence. Harv J Law Technol 6:116

  • Weitzenboeck EM (2001) Electronic agents and the formation of contracts. Int J Law Inf Technol 9(3):209


Author information


Correspondence to Emad Abdel Rahim Dahiyat.


Cite this article

Dahiyat, E.A.R. Intelligent agents and liability: is it a doctrinal problem or merely a problem of explanation? Artif Intell Law 18, 103–121 (2010). https://doi.org/10.1007/s10506-010-9086-8
