Slowing Down The Error Propagation Speed: Case Study
3.3. Slowing Down The Error Propagation Speed
In this section, we explain the studies carried out to slow down error propagation. In this project, we attack this research problem from two angles. First, we consider different algorithms and gauge whether such choices have a significant impact on error propagation profiles. Second, we consider compiler optimizations that slow error propagation.
3.3.1 Impact of the algorithms
Error propagation behavior is connected with assignment statements at the assembly level. Since different algorithms perform different operations to solve the same problem, they are expected to exhibit different error propagation behavior. Therefore, by replacing an algorithm with an alternative, the error propagation profile can potentially be changed.
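As a minimal illustration of this connection (not taken from the study itself; the variables and the injected fault are purely hypothetical), a single bit-flip spreads to every value that is later computed from the corrupted one through assignments:

    #include <stdio.h>

    int main(void)
    {
        int x = 42;
        x ^= 1 << 3;                    /* injected fault: flip bit 3 of x  */
        int y = x + 1;                  /* assignment: corruption reaches y */
        int z = y * 2;                  /* assignment: corruption reaches z */
        printf("%d %d %d\n", x, y, z);  /* all three values are now wrong   */
        return 0;
    }

An algorithm whose data flow contains more such dependent assignments will, all else being equal, spread a corrupted value to more locations.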
To compare algorithms that differ in their characteristics, these differences must be taken into account.
Regarding the different variable sets, errors are injected only into data common to all algorithms, and, likewise, propagation is analyzed only for that common data. Regarding the different execution times, the error propagation speed is defined as the amount of corruption per proportion of execution time elapsed. This provides a fairer comparison between different algorithms.
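As a minimal sketch of this normalization in C (the function name and parameters are assumptions for illustration, not taken from the framework):

    #include <stddef.h>

    /* Error propagation speed: corrupted memory locations per percent of
     * total execution time, so that algorithms with different run times
     * can be compared fairly. */
    double propagation_speed(size_t corrupted_locations,
                             double elapsed_time,
                             double total_time)
    {
        double time_percentage = 100.0 * elapsed_time / total_time;
        return (double)corrupted_locations / time_percentage;
    }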
3.3.2 Impact of the compiler optimization techniques
The compiler helps improve performance by applying optimization techniques such as reducing code size or increasing memory reuse. In this section, we study the relation between compiler optimization and error propagation. Two loop transformation optimizations, loop tiling and loop unrolling, are examined in detail.
To analyze the relation between these two optimization techniques and the error propagation speed, we use the framework examined in detail in the previous section.
The optimizations are performed using the Intel C Compiler [31]; they are enabled with “#pragma block_loop factor(n)” and “#pragma unroll(n)”.
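As a rough sketch of how these pragmas might be attached to a loop (the loop bodies, array sizes, and factors here are assumptions chosen only for illustration):

    #define N 1024

    void tiled_scale(double dst[N], const double src[N])
    {
        /* Loop tiling: ask icc to split the loop into blocks of 64. */
        #pragma block_loop factor(64)
        for (int i = 0; i < N; i++)
            dst[i] = 2.0 * src[i];
    }

    void unrolled_scale(double dst[N], const double src[N])
    {
        /* Loop unrolling: ask icc to replicate the body four times. */
        #pragma unroll(4)
        for (int i = 0; i < N; i++)
            dst[i] = 2.0 * src[i];
    }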
To rule out any possibility of the optimizations affecting the measurement code, the workflow is ordered as follows: first, the optimizations are performed on the application source code; then the resulting executable is decompiled to C with the Hopper disassembler [32]; finally, the speed function explained in the previous section is added to the decompiled C code.
As with the algorithms, the optimizations can also change the execution time. Therefore, when comparing the error propagation speed of the optimized code with that of the original code, the speed is calculated with respect to the percentage of total execution time.
3.3.2.1. Loop tiling
Strip-mining (one-dimensional loop tiling) is a loop transformation used to improve memory management; it applies to a single loop and transforms that loop into a nested loop [27]. With this optimization, the loop is divided into two loops with smaller index sets: the outer loop iterates over the blocks, and the inner loop iterates along each block [27, 28]. Clean-up code is needed if the iteration count is not evenly divisible by the strip size [29]. Code 1 is an example of strip-mine optimization: the optimization is performed on the original code in 1a, and the code in 1b is the resulting strip-mined version.
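Since Code 1 itself is not reproduced in this preview, the following sketch illustrates the general shape of the transformation; the loop body, bounds, and strip size are assumptions for illustration, not the paper's listing:

    #define N     1000
    #define STRIP 64        /* strip (block) size, assumed for illustration */

    /* Original form (analogous to 1a): one loop over the whole index set. */
    void original(double a[N])
    {
        for (int i = 0; i < N; i++)
            a[i] *= 2.0;
    }

    /* Strip-mined form (analogous to 1b): outer loop over the blocks,
     * inner loop along each block, plus clean-up code because N is not
     * evenly divisible by STRIP [29]. */
    void strip_mined(double a[N])
    {
        int is;
        for (is = 0; is + STRIP <= N; is += STRIP)   /* over the blocks  */
            for (int i = is; i < is + STRIP; i++)    /* along each block */
                a[i] *= 2.0;
        for (int i = is; i < N; i++)                 /* clean-up code    */
            a[i] *= 2.0;
    }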
