
Pt1420 Unit 1 Problem Solving Paper

5. Let us now assume that the values of n, m, and p stay the same (n = 100, m = 1000, and p = 100). However, the time needed to execute compute_something_expensive_and_non_parallelizable(i, j) varies at random between 1 second and 20 hours. How would you now map the tasks to processors to make the computations efficient?
Solution:
This is a load-balancing problem: the execution time of each task varies at random, so a static assignment of iterations to processors would leave some processors idle while others are still busy with long tasks. A dynamic mapping technique can be used to solve it. In dynamic mapping, tasks are managed by a master node, and all other nodes, which request work from the master, are called slave (worker) nodes.
In the given problem, each iteration of the inner loop takes a random amount of execution time. Tasks can be created based on iterations of the inner
…
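The dynamic-mapping idea can be sketched with a worker pool that hands out one task at a time, so a processor that draws a short task immediately picks up the next one instead of idling. This is a minimal sketch using Python's standard library; the shortened function name, the scaled-down problem sizes, and the simulated random sleep are illustrative assumptions, not part of the original problem:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def expensive_task(i, j):
    """Stand-in for compute_something_expensive_and_non_parallelizable(i, j);
    its runtime varies at random per (i, j) pair."""
    time.sleep(random.uniform(0.0, 0.005))  # simulated variable cost
    return i * j

n, m = 10, 20  # scaled down from n = 100, m = 1000 for the demo

with ThreadPoolExecutor(max_workers=4) as pool:
    # Submitting every (i, j) iteration as its own task lets the pool act
    # as the master: whichever worker finishes first picks up the next
    # task, so no processor sits idle behind one long-running task.
    futures = [pool.submit(expensive_task, i, j)
               for i in range(n) for j in range(m)]
    results = [f.result() for f in as_completed(futures)]

print(len(results))  # all n * m tasks completed
```

With 1-second-to-20-hour task times, the same pattern applies at scale: the master hands out one (i, j) task per request, which is exactly the dynamic mapping described above.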
Is the following statement valid? "This setup solves the problem of concept drift." Justify your answer.
Given: an updatable Naïve Bayes model that is re-trained on incoming data, using all the data from the start of the stream.
Solution:
This may mitigate concept drift, but not effectively.
If the stream is young and the accumulated data is not yet stale, the updatable Naïve Bayes model can adapt quickly to concept changes and handle the drift.
If the stream started long ago, it accumulates old data along with the new incoming data, and the old data tends to become outdated. When the model is re-trained on the complete stream's data, it reacts to concept changes more slowly, and training on this stale accumulated data can make its predictions inaccurate.
The ideal solution is to maintain a sliding window that stores the most recent examples in first-in, first-out (FIFO) fashion. A small window size allows fast adaptation in times of concept drift but performs poorly in stable periods; a large window size reacts more slowly to drift but gives better performance in stable periods.
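The sliding-window idea can be sketched as follows. This is a minimal, self-contained Naïve Bayes over a single binary feature; the class name, the window size of 20, and the toy drift stream are all illustrative assumptions, not part of the original problem:

```python
import math
from collections import Counter, deque

class SlidingWindowNB:
    """Minimal categorical Naive Bayes over a FIFO window of the most
    recent examples; predictions are computed from the current window
    contents, so 're-training' happens implicitly as examples expire."""

    def __init__(self, window_size):
        # Oldest examples fall out automatically once the window is full.
        self.window = deque(maxlen=window_size)

    def update(self, x, y):
        self.window.append((tuple(x), y))

    def predict(self, x):
        class_counts = Counter(y for _, y in self.window)
        best_label, best_score = None, float("-inf")
        for y, n_y in class_counts.items():
            # log P(y) + sum_i log P(x_i | y), with add-one smoothing
            score = math.log(n_y / len(self.window))
            for i, v in enumerate(x):
                matches = sum(1 for ex, ey in self.window
                              if ey == y and ex[i] == v)
                score += math.log((matches + 1) / (n_y + 2))
            if score > best_score:
                best_label, best_score = y, score
        return best_label

model = SlidingWindowNB(window_size=20)

# Old concept: the label equals the (single) feature value.
for t in range(100):
    v = t % 2
    model.update([v], v)
pre_drift = model.predict([1])   # learned under the old concept

# Concept drift: the label flips. Only the 20 most recent examples
# remain in the window, so the model forgets the old concept quickly.
for t in range(100):
    v = t % 2
    model.update([v], 1 - v)
post_drift = model.predict([1])  # learned under the new concept

print(pre_drift, post_drift)  # prints "1 0"
```

Because `deque(maxlen=...)` discards the oldest example on every append, the window size is the single knob that trades adaptation speed against stability, exactly as described above.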
