Video streaming applications in wireless camera networks: A change detection based approach targeted to 6LoWPAN

https://doi.org/10.1016/j.sysarc.2013.05.009

Abstract

Video streaming applications in wireless camera networks composed of low-end devices are attractive for enabling new pervasive high-value services. When complying with the 6LoWPAN standard, the reduced amount of available bandwidth imposes the transmission of low-resolution images. In this paper we present a low-complexity algorithm based on background subtraction and error resilience techniques aimed at reducing the transmission bandwidth of a video stream of uncompressed images, thus permitting a higher frame rate. By means of realistic simulation studies, the performance of the presented algorithm is analyzed against state-of-the-art solutions such as JPEG. These results can be considered for designing a next generation of smart cameras suited for 6LoWPAN.

Introduction

Wireless camera networks (WCNs) based on low-end devices have experienced rapid growth in the last several years due to the hardware miniaturization process fostered by the electronics industry and the simultaneous advances in the computer vision, embedded computing and sensor network disciplines. Research efforts are nowadays oriented towards replacing high-cost and high-performance camera network devices [1] with low-cost, low-complexity embedded devices organized in distributed systems [2].

The possibility of integrating multimedia and computer vision capabilities on low-end devices makes it possible to create innovative and pervasive WCNs [3] able to perform high-value applications such as distributed visual surveillance [4], object tracking [5] and traffic monitoring [6]. Considering video-streaming services targeted to video surveillance applications, a pervasive network would allow gathering complementary views of a given scene at a reduced installation cost, thus significantly increasing the available information content. However, low-complexity camera network devices are strongly constrained in terms of battery, memory, processing capability, and achievable data rate, so that each video-streaming protocol is expected to be fully customized with respect to state-of-the-art solutions. Indeed, because of these severe limitations, the highest frame rate reachable by a smart camera node is the result of a strong trade-off between the frame rate achievable by a given compression algorithm, determined by the adopted hardware architecture, and the available transmission bandwidth. To better support multimedia-based applications in pervasive WCNs, several research projects have prototyped embedded devices suited for image acquisition, elaboration, and transmission. A comprehensive survey is given in [7], where applications and features of the most common WCN devices (e.g., Cyclops [8], MeshEye [9], CITRIC [10]) are presented. Along with the devices presented in [7], the Seed-Eye board [11] must be cited: an innovative camera network device in which all the computational tasks are delegated to a microcontroller embedding 128 KBytes of memory, used both for image storage and for run-time device functionality. As with the previously presented boards, the Seed-Eye communication capabilities are based on the IEEE 802.15.4 standard [12], a key building block towards the full accomplishment of the so-called Internet of Things vision [13], in which all devices are part of the global Internet. The Seed-Eye is an output of the IPERMOB research project [14], in which a first example of a camera network has been deployed with the aim of monitoring parking lots and analyzing traffic flows [15].

Considering WCN devices compliant with the IEEE 802.15.4 standard, the amount of available bandwidth at the physical (PHY) layer is equal to 250 Kbit/s, with even lower values at the medium access control (MAC) layer (e.g., less than 200 Kbit/s in the case of both unslotted transmissions [16] and slotted transmissions [17]). To face the strong network bandwidth constraints, new network architectures for WCNs must be defined. A reference architecture for wireless camera networks based on low-cost embedded devices has been preliminarily investigated in [18], where, along with multimedia-enabled sensor devices, called camera nodes (CNs), the concept of a multimedia processing hub (MPH) is introduced. An MPH is a node of the network with higher computational capabilities with respect to simple multimedia nodes, able to aggregate video streams. In a hierarchical view of the whole network, CNs send video data to the MPH, which works as a multimedia sink sending aggregated data towards external network gateways.
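To make this trade-off concrete, the following back-of-the-envelope sketch (not taken from the paper) estimates the frame rate achievable when streaming uncompressed frames at the PHY- and MAC-layer throughputs quoted above. The 8-bit grayscale QQVGA format is an assumption consistent with the resolution used later in the evaluation, and all protocol overheads are ignored.

```c
/*
 * Back-of-the-envelope frame-rate estimate for uncompressed streaming.
 * Illustrative sketch only: assumes 8-bit grayscale QQVGA (160x120) frames
 * and ignores all protocol overheads.
 */
#include <stdio.h>

#define FRAME_WIDTH    160U   /* QQVGA */
#define FRAME_HEIGHT   120U
#define BITS_PER_PIXEL   8U   /* assumed 8-bit grayscale */

static double max_fps(double throughput_kbit_s)
{
    const double frame_bits =
        (double)FRAME_WIDTH * FRAME_HEIGHT * BITS_PER_PIXEL;  /* 153,600 bits */
    return (throughput_kbit_s * 1000.0) / frame_bits;
}

int main(void)
{
    printf("PHY 250 Kbit/s -> %.2f fps\n", max_fps(250.0));  /* ~1.6 fps */
    printf("MAC 200 Kbit/s -> %.2f fps\n", max_fps(200.0));  /* ~1.3 fps */
    return 0;
}
```

Even before 6LoWPAN overheads are considered, raw QQVGA streaming barely exceeds 1 fps, which is why in-node compression is mandatory for any higher frame rate.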

Although the above proposed WCN architecture is a first step towards an effective development of pervasive camera network systems, it does not take into account the advantages and drawbacks of new protocol solutions aiming at realizing full interoperability with the Internet world. In 2007 the Internet Engineering Task Force (IETF) standardized IPv6 over Low power Wireless Personal Area Networks (6LoWPAN) [19], an adaptation of the IPv6 protocol targeted to low-cost sensor devices and permitting full interoperability among them in the spirit of the Internet of Things. A 6LoWPAN network (whose architecture is defined in [20] and pictorially sketched in Fig. 1) is mainly composed of three types of nodes: the “simple” Host (H), not implementing forwarding and routing services; the 6LoWPAN Router (6LR), having forwarding and routing capabilities; and the 6LoWPAN Border Router (6LBR), connecting each subnet to the Internet by translating 6LoWPAN packets into IPv6 packets and vice versa. The adoption of 6LoWPAN in a WCN permits full interoperability with the Internet at the cost of a further reduction in the available communication bandwidth (i.e., less than 100 Kbit/s on average), thus requiring high-performance data compression schemes, possibly used jointly with low-complexity traffic shaping algorithms to overcome data bursts in the network.
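As a rough illustration of why the available bandwidth shrinks further, the sketch below counts the IEEE 802.15.4 frames needed to carry one datagram once 6LoWPAN fragmentation is applied. The 127-byte 802.15.4 frame size and the 4- and 5-byte FRAG1/FRAGN headers follow RFC 4944; the MAC overhead is an assumed typical value, and header compression as well as the 8-byte alignment of fragment offsets are deliberately omitted.

```c
/*
 * Illustrative 6LoWPAN fragmentation count over IEEE 802.15.4 (not the
 * authors' implementation). Simplifications: fixed MAC overhead, no header
 * compression, fragment payloads not rounded to 8-byte multiples.
 */
#include <stdio.h>

#define IEEE802154_FRAME_SIZE  127U  /* max PHY payload, bytes           */
#define MAC_OVERHEAD            21U  /* assumed MAC header + FCS, bytes  */
#define FRAG1_HDR                4U  /* first-fragment header (RFC 4944) */
#define FRAGN_HDR                5U  /* subsequent-fragment header       */

/* Number of link-layer frames needed to carry a datagram of 'len' bytes. */
static unsigned frames_for_datagram(unsigned len)
{
    const unsigned first = IEEE802154_FRAME_SIZE - MAC_OVERHEAD - FRAG1_HDR;
    const unsigned next  = IEEE802154_FRAME_SIZE - MAC_OVERHEAD - FRAGN_HDR;

    if (len <= first)
        return 1U;

    unsigned frames = 1U;
    len -= first;
    frames += (len + next - 1U) / next;   /* ceiling division */
    return frames;
}

int main(void)
{
    const unsigned len = 1024U;           /* e.g., one compressed image slice */
    printf("%u-byte datagram -> %u frames\n", len, frames_for_datagram(len));
    return 0;
}
```

Under these assumptions a 1024-byte datagram occupies 11 link-layer frames, so roughly a quarter of the air time carries headers rather than image data; together with CSMA/CA back-offs and multi-hop routing, this pushes the effective goodput well below the raw MAC-layer throughput.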

In the case of video streaming applications over 6LoWPAN-based WCNs, this constraint imposes source coding techniques with a strong trade-off between compression efficiency and complexity. Traditional video compression relies on both intra-frame techniques, removing redundancy within standalone frames, and inter-frame techniques, in which temporal redundancy is further removed. Such approaches are widely used by state-of-the-art video encoders, such as MPEG, H.263, and H.264, that require powerful processing algorithms and high levels of energy consumption, thus making their use completely unfeasible in low-cost wireless camera devices. State-of-the-art video compression techniques in wireless camera networks based on low-cost embedded devices are mainly focused on in-node intra-frame algorithms, such as standard JPEG or derived optimized versions. In [21] a survey of JPEG-based video compression techniques targeted to low-end devices is presented. Along with the benefits of possible JPEG fixed-point implementations [22] and of modified versions with change detection approaches specifically developed for JPEG [23], the survey also points out the main disadvantages of using this standard. The main issue in using JPEG in low-end WCNs is the lack of error resilience properties [24], which can be overcome by Forward Error Correction (FEC) [25] techniques, Erasure Correction (EC) codes [26], interleaving schemes [27] and Variable-Length Coding (VLC) [28], thus increasing the required transmission bandwidth [29] and lowering the highest reachable frame rate. The same issues can be experienced in other JPEG-based compression approaches [30], as well as in solutions in which the coding scheme relies on frequency transformation functions, such as [31], where a new hybrid DPCM/DCT coding scheme is proposed to achieve an acceptable compression gain with low computational complexity, and [32], where a wavelet-based coding technique is proposed and its performance jointly evaluated with Unequal Error Protection (UEP) [33] techniques. Although all the above mentioned techniques focus on in-node low-complexity algorithms, only few works have proposed at the design stage the use of distributed techniques [18], while specific compression schemes targeted to 6LoWPAN WCNs have not been deployed yet.
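As a concrete illustration of one of the error-resilience tools listed above, the following minimal row-column interleaver (a generic textbook scheme, not the specific technique of [27]) shows how a burst of consecutive channel errors is spread across distant positions of the original stream, so that each image block is hit by at most a few corrupted bytes.

```c
/*
 * Minimal row-column byte interleaver (generic illustration only).
 * Bytes are written row by row and read column by column, so a burst of
 * consecutive corrupted bytes on the channel maps back to isolated errors
 * scattered across the original stream.
 */
#include <stddef.h>

#define IL_ROWS 8U
#define IL_COLS 16U   /* one interleaving block = IL_ROWS * IL_COLS bytes */

/* 'in' and 'out' must each hold IL_ROWS * IL_COLS bytes. */
void interleave(const unsigned char *in, unsigned char *out)
{
    for (size_t r = 0; r < IL_ROWS; r++)
        for (size_t c = 0; c < IL_COLS; c++)
            out[c * IL_ROWS + r] = in[r * IL_COLS + c];
}

/* Inverse permutation applied at the receiver before decoding. */
void deinterleave(const unsigned char *in, unsigned char *out)
{
    for (size_t r = 0; r < IL_ROWS; r++)
        for (size_t c = 0; c < IL_COLS; c++)
            out[r * IL_COLS + c] = in[c * IL_ROWS + r];
}
```

The price paid for this protection is the extra latency and memory of buffering a full interleaving block at both ends, a non-negligible cost on memory-constrained nodes.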

In this work we present an innovative video compression approach to be used in video-surveillance applications, implementing: (i) in-node video compression based on low-complexity computer vision techniques, and (ii) a reduction of the transmission bandwidth obtained by removing inter-frame redundancy. The presented solution is an extension of the work presented in [34]. In particular, we extend the “basic” codec with an error-resilient streaming technique applicable to a 6LoWPAN network, with the evident benefit of improving QoS. Moreover, the achieved compression and error-resilience performance is compared, in terms of perceived video quality, with a JPEG-based transmission. All performance results have been obtained by means of a simulative approach, and to be as realistic as possible (i) we developed a “C” library emulating the compression and decompression algorithms of a full-fledged device, while considering microcontroller-based constraints (e.g., no dynamic memory, no floating point operations); (ii) we imported an image dataset [35] from the IPERMOB project [14], characterized by real images acquired by sensor devices equipped with a low-cost camera [11]; (iii) the packet format is compliant with the 6LoWPAN specifications defined in IETF RFC 6282 [36]; (iv) packet corruption is taken into account by using real network loss traces taken from the IPERMOB data acquisition campaigns. We propose our approach as a candidate for streaming applications in WCNs based on 6LoWPAN, in which video streams originated by multimedia sensors can be consumed by high-end devices in the spirit of the Internet of Things (see Fig. 2).
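To fix ideas, the sketch below shows the kind of block-based change detection such an approach builds on. It is not the authors' codec: the 8×8 block size, the threshold value and the absence of background updating are illustrative assumptions. Only static buffers and integer arithmetic are used, in line with the microcontroller constraints listed above.

```c
/*
 * Illustrative block-based change detector (not the authors' codec).
 * Assumptions: 8x8 blocks, fixed SAD threshold, static background frame.
 * Static buffers and integer arithmetic only, mirroring the constraints
 * mentioned in the text (no dynamic memory, no floating point).
 */
#include <stdint.h>
#include <stdlib.h>

#define IMG_W     160U                 /* QQVGA */
#define IMG_H     120U
#define BLK         8U                 /* assumed block side, pixels         */
#define BLK_COLS  (IMG_W / BLK)
#define BLK_ROWS  (IMG_H / BLK)
#define THRESHOLD (8U * BLK * BLK)     /* assumed mean abs. diff. of 8/pixel */

static uint8_t background[IMG_H][IMG_W];   /* reference (background) frame */

/*
 * Marks every block whose sum of absolute differences (SAD) against the
 * background exceeds THRESHOLD; only marked blocks would be packetized and
 * transmitted. Returns the number of changed blocks.
 */
unsigned detect_changes(const uint8_t frame[IMG_H][IMG_W],
                        uint8_t changed[BLK_ROWS][BLK_COLS])
{
    unsigned count = 0;

    for (unsigned by = 0; by < BLK_ROWS; by++) {
        for (unsigned bx = 0; bx < BLK_COLS; bx++) {
            uint32_t sad = 0;
            for (unsigned y = by * BLK; y < (by + 1U) * BLK; y++)
                for (unsigned x = bx * BLK; x < (bx + 1U) * BLK; x++)
                    sad += (uint32_t)abs((int)frame[y][x] -
                                         (int)background[y][x]);

            changed[by][bx] = (uint8_t)(sad > THRESHOLD);
            count += changed[by][bx];
        }
    }
    return count;
}
```

Transmitting only the blocks marked as changed, each tagged with its position, removes inter-frame redundancy and lets the decoder paste updates onto its last reconstructed frame, which also limits the damage caused by individual lost packets.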

The remainder of the article is organized as follows: in Section 2 we present the overall video streaming chain and its features with respect to a JPEG-based approach, as well as the proposed error resilience technique targeted to 6LoWPAN networks. The error resilience capabilities of the proposed codec are further compared with JPEG in Section 3 by considering real loss traces. Conclusions follow in Section 4.


Video compression and transmission

State-of-the-art video-surveillance applications are based on a streaming service between a static camera and a remote control point; the service is required to fulfill the temporal constraints usually defined in terms of frame rate. The minimum frame rate required by such applications is about 1 fps. The diagram shown in Fig. 3 describes the main functional blocks, and their relations, in a standard video-surveillance application based on a streaming service. In this situation, on the …

Performance evaluation

In this section we comment upon the effects introduced by environmental noise in point-to-point communication. For all performed simulations we used a loss trace characterized by a Bit Error Rate (BER) equal to 5·10⁻⁵, as obtained from the best fit of the data acquisition campaigns of the IPERMOB project [14]. It must be stressed that all the results discussed in this section derive from simulative experiments in which the impact of bit errors is evaluated by using images with a QQ-VGA resolution coming …
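Although the evaluation relies on real loss traces, the effect of a given BER can be reproduced with a simple Bernoulli bit-flipping model; the sketch below is an assumption about how such corruption could be injected in simulation, not the authors' simulator.

```c
/*
 * Illustrative Bernoulli bit-error injection for simulation purposes.
 * Each bit of the buffer is flipped independently with probability 'ber';
 * with ber = 5e-5 roughly one bit in 20,000 is corrupted.
 */
#include <stdint.h>
#include <stdlib.h>

void inject_bit_errors(uint8_t *buf, size_t len, double ber)
{
    for (size_t i = 0; i < len; i++) {
        for (int bit = 0; bit < 8; bit++) {
            if ((double)rand() / (double)RAND_MAX < ber)
                buf[i] ^= (uint8_t)(1U << bit);   /* flip this bit */
        }
    }
}
```

Real channels tend to be bursty, a behavior that the measured traces capture better than this independent-error model.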

Conclusions

In this paper an innovative approach for a streaming technique fitting the usual constraints of a WCN is discussed. We presented a scheme combining in-node compression and inter-frame removal of redundant information. At the network level, special techniques for implementing error resilience behaviors are suggested and tested against relevant metrics related to computational complexity, bandwidth usage, and perceived reconstruction quality. Realistic simulations compare JPEG, appointed as …

Claudio Salvadori was born in Arezzo, Italy, in 1978. He received the Laurea degree in Telecommunication Engineering in 2006 from the Università degli Studi di Siena, Italy. Since November 2009 he has been a Ph.D. student in Wireless Sensor Networks at Scuola Superiore Sant’Anna, Pisa, Italy. His research interests include Wireless Multimedia Sensor Networks and embedded and distributed signal processing.

References (38)

  • I. Akyildiz et al., A survey on wireless multimedia sensor networks, Computer Networks (Elsevier), 2007.
  • C. Duran-Faundez et al., Tiny block-size coding for energy-efficient image compression and communication in wireless camera sensor networks, Signal Processing: Image Communication, 2011.
  • M. Valera et al., Intelligent distributed surveillance systems: a review, IEE Proceedings – Vision, Image and Signal Processing, 2005.
  • B. Rinner et al., Toward Pervasive Smart Camera Networks, 2009.
  • Z.R.K. Shafique et al., Distributed Sensor Networks for Visual Surveillance, 2011.
  • C. Arth, C. Leistner, H. Bischof, Object reacquisition and tracking in large-scale smart camera networks, in: ...
  • C. Salvadori et al., On-board Image Processing in Wireless Multimedia Sensor Networks: A Parking Space Monitoring Solution for Intelligent Transportation Systems, 2012.
  • B. Tavli et al., A survey of visual sensor network platforms, Multimedia Tools and Applications, 2011.
  • M. Rahimi, R. Baer, O. Iroezi, J. García, J. Warrior, D. Estrin, M. Srivastava, Cyclops: in situ image sensing and ...
  • S. Hengstler, D. Prashanth, S. Fong, H. Aghajan, MeshEye: a hybrid-resolution smart camera mote for applications in ...
  • P. Chen, P. Ahammad, C. Boyer, S. Huang, L. Lin, E. Lobaton, M. Meingast, O. Songhwai, S. Wang, Y. Posu, A. Yang, C. ...
  • Seed-Eye board, A Multimedia WSN device. ...
  • IEEE Computer Society, Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate ...
  • G. Kortuem et al., Smart objects as building blocks for the internet of things, IEEE Internet Computing, 2009.
  • R. Mambrini, A. Rossi, P. Pagano, P. Ancilotti, O. Salvetti, A. Bertolino, P. Gai, L. Costalli, IPERMOB: towards an ...
  • A. Alessandrelli, A. Azzarà, M. Petracca, C. Nastasi, P. Pagano, ScanTraffic: Smart Camera Network for Traffic ...
  • B. Latré et al., Throughput and delay analysis of unslotted IEEE 802.15.4, Journal of Networks, 2006.
  • T. Park et al., Throughput and energy consumption analysis of IEEE 802.15.4 slotted CSMA/CA, Electronics Letters, 2005.
  • T. Melodia et al., Research Challenges for Wireless Multimedia Sensor Networks, 2011.

Matteo Petracca received the M.S. degree in Telecommunication Engineering in 2003 and the Ph.D. degree in Information and System Engineering in 2007, both from the Politecnico di Torino, Turin, Italy. From January 2008 to November 2009 he was a post-doc researcher at the Politecnico di Torino, working on multimedia processing and transmission over packet networks. In 2009 he joined the Scuola Superiore Sant’Anna in Pisa, Italy, and in 2010 the CNIT (National Inter-University Consortium for Telecommunications) as a research fellow. Dr. Petracca has been actively involved in many R&D projects in the US (UTMOST, UTDRIVE) and in Italy (VICSUM, IPERMOB). He was the leader of the Work Package related to WSN implementation in the IPERMOB project. He is co-author of papers published in international journals, peer-reviewed conference proceedings and book chapters.

Simone Madeo received the M.S. degree in Computer Engineering in 2011 from the University of Pisa, Italy. He is currently a research assistant at the CNIT National Laboratory of Photonic Networks. His research activities have a specific focus on Wireless Sensor Networks applied to multimedia streaming, Computer Vision topics and distributed applications.

Stefano Bocchino received the B.S. degree in Informatics and Automation Engineering and the M.S. degree in Informatics Engineering from the Università Politecnica delle Marche, Ancona, Italy. He is currently a Ph.D. student in Embedded Systems at the Scuola Superiore Sant’Anna, Pisa, Italy. His research interests include Wireless Sensor Networks, routing protocols for 6LoWPAN networks and node localization.

Paolo Pagano received his M.S. degree in Physics in 1999 from Trieste University (I). In 2003 he received his Ph.D. degree in High Energy Physics from Trieste University, having worked for the COMPASS collaboration at CERN (CH). In 2004 he was hired by HISKP at Bonn University (D). In 2006 he received a Master in Computer Science from Scuola Superiore Sant’Anna in Pisa (I). In the same year he joined the REal-TIme System (RETIS) laboratory of the Scuola. Since 2009 he has been with the CNIT (National Inter-University Consortium for Telecommunications). He is leading the Real-time Wireless Networks area at the CNIT National Laboratory of Photonic Networks in Pisa. His research activities have a specific focus on Wireless Sensor Networks applied to traffic monitoring. He is responsible for public and private research grants in the domain of Intelligent Transport Systems.
