Zhifeng Lai and S.C. Cheung†
Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong
{zflai, scc}@cse.ust.hk

W.K. Chan
Department of Computer Science, City University of Hong Kong, Tat Chee Avenue, Hong Kong
wkchan@cs.cityu.edu.hk
ABSTRACT
NesC is a programming language for applications that run on top of networked sensor nodes. Such an application mainly uses interrupts to trigger sequences of operations, known as contexts, to perform its actions. However, a high degree of inter-context interleaving in an application can make it error-prone. For instance, one context may mistakenly alter another context's data kept in a shared variable. Existing concurrency testing techniques target programs written in general-purpose programming languages, and the small scale of inter-context interleaving between program executions may make these techniques inapplicable. We observe that nesC blocks new context interleaving when handling interrupts, and this feature significantly restricts the scale of inter-context interleaving that may occur in a nesC application. This paper models how operations on different contexts may interleave as inter-context flow graphs. Based on these graphs, it proposes two test adequacy criteria, one on inter-context data flows and another on inter-context control flows. It evaluates the proposal on a real-life open-source nesC application. The empirical results show that the new criteria detect significantly more failures than their conventional counterparts.
1. INTRODUCTION
The nesC language [10] is designed for programming wireless sensor network (WSN) applications that are deployed on a collection of small, low-power, low-capability devices known as motes. Each mote usually has sensing and wireless communication capabilities. WSN applications are useful for monitoring their
REFERENCES
[1] Aho, A. V., Sethi, R., and Ullman, J. D. 1988. Compilers: Principles, Techniques, and Tools, Chapter 10. Addison-Wesley Pub. Co., 1988.
[2] Ammons, G., Ball, T., and Larus, J. R. 1997. Exploiting hardware performance counters with flow and context sensitive profiling. SIGPLAN Not. 32, 5 (May 1997), 85-96.
[3] Andrews, J. H., Briand, L. C., Labiche, Y., and Namin, A. S. 2006. Using mutation analysis for assessing and comparing testing coverage criteria. IEEE Trans. Softw. Eng. 32, 8 (Aug. 2006), 608-624.
[4] Archer, W., Levis, P., and Regehr, J. 2007. Interface contracts for TinyOS. In Proc. of IPSN '07, 158-165.
[5] Brylow, D., Damgaard, N., and Palsberg, J. 2001. Static checking of interrupt-driven software. In Proc. of ICSE '01, 47-56.
[6] Cheong, E., Liebman, J., Liu, J., and Zhao, F. 2003. TinyGALS: a programming model for event-driven embedded systems. In Proc. of SAC '03, 698-704.
[7] Cooprider, N., Archer, W., Eide, E., Gay, D., and Regehr, J. 2007. Efficient memory safety for TinyOS. In Proc. of SenSys '07, 205-218.
[8] Frankl, P. G. and Weiss, S. N. 1993. An experimental comparison of the effectiveness of branch testing and data flow testing. IEEE Trans. Softw. Eng. 19, 8 (Aug. 1993), 774-787.
[9] Frankl, P. G. and Weyuker, E. J. 1988. An applicable family of data flow testing criteria. IEEE Trans. Softw. Eng. 14, 10 (Oct. 1988), 1483-1498.
[10] Gay, D., Levis, P., von Behren, R., Welsh, M., Brewer, E., and Culler, D. 2003. The nesC language: A holistic approach to networked embedded systems. In Proc. of PLDI '03, 1-11.
[11] Harrold, M. J. and Soffa, M. L. 1994. Efficient computation of interprocedural definition-use chains. ACM Trans. Program. Lang. Syst. 16, 2 (Mar. 1994), 175-204.
[12] Han, C., Kumar, R., Shea, R., Kohler, E., and Srivastava, M. 2005. SOS: A dynamic operating system for sensor networks. In Proc. of MobiSys '05.
[13] Henzinger, T. A., Jhala, R., and Majumdar, R. 2004. Race checking by context inference. In Proc. of PLDI '04, 1-13.
[14] Hill, J., Szewczyk, R., Woo, A., Hollar, S., Culler, D., and Pister, K. 2000. System architecture directions for networked sensors. SIGOPS Oper. Syst. Rev. 34, 5 (Dec. 2000), 93-104.
[15] Hill, T. and Lewicki, P. 2007. STATISTICS Methods and Applications. StatSoft, Tulsa, OK, 2007.
[16] Huang, J. C. 1975. An approach to program testing. ACM Comput. Surv. 7, 3 (Sep. 1975), 113-128.
[17] Hutchins, M., Foster, H., Goradia, T., and Ostrand, T. 1994. Experiments of the effectiveness of dataflow- and controlflow-based test adequacy criteria. In Proc. of ICSE '94, 191-200.
[18] Kanungo, T., Mount, D. M., Netanyahu, N. S., Piatko, C. D., Silverman, R., and Wu, A. Y. 2002. An efficient k-means clustering algorithm: analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell. 24, 7 (Jul. 2002), 881-892.
[19] Kim, S., Pakzad, S., Culler, D., Demmel, J., Fenves, G., Glaser, S., and Turon, M. 2007. Health monitoring of civil infrastructures using wireless sensor networks. In Proc. of IPSN '07, 254-263.
[20] Lei, Y. and Carver, R. H. 2006. Reachability testing of concurrent programs. IEEE Trans. Softw. Eng. 32, 6 (Jun. 2006), 382-403.
[21] Levis, P., Lee, N., Welsh, M., and Culler, D. 2003. TOSSIM: accurate and scalable simulation of entire TinyOS applications. In Proc. of SenSys '03, 126-137.
[22] Lu, H., Chan, W. K., and Tse, T. H. 2006. Testing context-aware middleware-centric programs: a data flow approach and an RFID-based experimentation. In Proc. of SIGSOFT '06/FSE-14, 242-252.
[23] Lu, H., Chan, W. K., and Tse, T. H. 2008. Testing pervasive software in the presence of context inconsistency resolution services. In Proc. of ICSE '08, 61-70.
[24] Memon, A. M., Soffa, M. L., and Pollack, M. E. 2001. Coverage criteria for GUI testing. In Proc. of ESEC/FSE-9, 256-267.
[25] Nguyen, N. T. and Soffa, M. L. 2007. Program representations for testing wireless sensor network applications. In Proc. of DOSTA '07, 20-26.
[26] Regehr, J. 2005. Random testing of interrupt-driven software. In Proc. of EMSOFT '05, 290-298.
[27] Rutherford, M. J., Carzaniga, A., and Wolf, A. L. 2006. Simulation-based test adequacy criteria for distributed systems. In Proc. of SIGSOFT '06/FSE-14, 231-241.
[28] TinyOS Tutorials. Modules and the TinyOS execution model. http://docs.tinyos.net/index.php/Modules_and_the_TinyOS_Execution_Model.
[29] Titzer, B. L., Lee, D. K., and Palsberg, J. 2005. Avrora: scalable sensor network simulation with precise timing. In Proc. of IPSN '05, 477-482.
[30] Tse, T. H., Yau, S. S., Chan, W. K., Lu, H., and Chen, T. Y. 2004. Testing context-sensitive middleware-based software applications. In Proc. of COMPSAC '04, 458-466.
[31] Wang, Z., Elbaum, S., and Rosenblum, D. S. 2007. Automated generation of context-aware tests. In Proc. of ICSE '07, 406-415.
[32] Xie, Q. and Memon, A. M. 2007. Designing and comparing automated test oracles for GUI-based software applications. ACM Trans. Softw. Eng. Methodol. 16, 1 (Feb. 2007), 4.
[33] Yang, C. D., Souter, A. L., and Pollock, L. L. 1998. All-du-path coverage for parallel programs. In Proc. of ISSTA '98, 153-162.
[34] Zhang, Y. and West, R. 2006. Process-aware interrupt scheduling and accounting. In Proc. of RTSS '06, 191-201.
[35] Zhu, H., Hall, P. A., and May, J. H. 1997. Software unit test coverage and adequacy. ACM Comput. Surv. 29, 4 (Dec. 1997), 366-427.