Introduction to Computational Thinking 1st Edition Thomas Mailund
The document provides information about algorithms including:
- Algorithms are step-by-step procedures to solve problems and are represented in programs, pseudocode, or flowcharts.
- Common algorithm applications include sorting, data retrieval, routing, and games.
- Pseudocode is an informal way to describe algorithms using a combination of English and programming languages.
- Algorithm analysis evaluates correctness, efficiency, complexity, and other factors.
A gentle introduction to algorithm complexity analysis
This document introduces algorithm complexity analysis and "Big O" notation. It aims to help programmers and students understand this theoretical computer science topic in a practical way. The document motivates algorithm complexity analysis by explaining how it allows formal comparison of algorithms' speed independently of implementation details. It then provides an example analysis of finding the maximum value in an array to illustrate counting the number of basic instructions an algorithm requires.
This document provides information about the CS 331 Data Structures course. It includes the contact information for the professor, Dr. Chandran Saravanan, as well as online references and resources about data structures. It then covers topics like structuring and organizing data, different types of data structures suitable for different applications, basic principles of data structures, language support for data structures, selecting an appropriate data structure, analyzing algorithms, and provides an example analysis of a sample algorithm's runtime complexity.
Linear search examines each element of a list sequentially, one by one, and checks if it is the target value. It has a time complexity of O(n) as it requires searching through each element in the worst case. While simple to implement, linear search is inefficient for large lists as other algorithms like binary search require fewer comparisons.
The document provides an overview of a Python programming presentation on the Joy of Computing with Python. It discusses why programming is important, the importance of clear instructions, and introduces concepts like Scratch and loops. The presentation is split into 8 weeks, with topics covered including crowd computing, genetic algorithms, searching and sorting algorithms, recursion, and simulations of games like snakes and ladders and the lottery.
The document discusses the analysis of algorithms, including time and space complexity analysis. It covers key aspects of analyzing algorithms such as determining the basic operation, input size, and analyzing best-case, worst-case, and average-case time complexities. Specific examples are provided, such as analyzing the space needed to store real numbers and analyzing the time complexity of sequential search. Order of growth and asymptotic analysis techniques like Big-O, Big-Omega, and Big-Theta notation are also explained.
Problem solving using computers - Unit 1 - Study material
Problem solving using computers involves transforming a problem description into a solution using problem-solving strategies, techniques, and tools. Programming is a problem-solving activity where instructions are written for a computer to solve something. The document then discusses the steps in problem solving like definition, analysis, approach, coding, testing etc. It provides examples of algorithms, flowcharts, pseudocode and discusses concepts like top-down design, time complexity, space complexity and ways to swap variables and count values.
This document provides an overview of algorithm analysis and asymptotic complexity. It discusses learning outcomes related to analyzing algorithm efficiency using Big O, Omega, and Theta notation. Key points covered include:
- Defining the problem size n and relating algorithm running time to n
- Distinguishing between best-case, worst-case, and average-case complexity
- Using asymptotic notation like Big O to give upper bounds on complexity rather than precise calculations
- Common asymptotic categories like O(n), O(n^2), O(n log n) that classify algorithm growth rates
The document provides an overview of algorithms, including definitions, types, characteristics, and analysis. It begins with step-by-step algorithms to add two numbers and describes the difference between algorithms and pseudocode. It then covers algorithm design approaches, characteristics, classification based on implementation and logic, and analysis methods like a priori and posteriori. The document emphasizes that algorithm analysis estimates resource needs like time and space complexity based on input size.
Design & Analysis of Algorithms Lecture Notes
These lecture notes cover the design and analysis of algorithms over 4 modules. Module I introduces algorithms, their characteristics, expectations and analysis. It discusses asymptotic analysis using big O, Ω and Θ notations to analyze the growth of algorithms like insertion sort, which has a worst-case running time of Θ(n^2). Subsequent modules cover dynamic programming, greedy algorithms, graphs, and NP-completeness. The notes provide an overview of key algorithm design and analysis topics.
These lecture notes cover algorithms and their analysis over 4 modules. Module I introduces algorithms, their properties, analysis of complexity and asymptotic notations. It covers analysis of sorting algorithms like merge sort, quicksort and binary search. Module II covers dynamic programming and greedy algorithms. Module III covers graph algorithms like BFS, DFS and minimum spanning trees. Module IV covers advanced topics like fast Fourier transform, string matching, NP-completeness and approximation algorithms.
This document provides an overview and introduction to the concepts taught in a data structures and algorithms course. It discusses the goals of reinforcing that every data structure has costs and benefits, learning commonly used data structures, and understanding how to analyze the efficiency of algorithms. Key topics covered include abstract data types, common data structures, algorithm analysis techniques like best/worst/average cases and asymptotic notation, and examples of analyzing the time complexity of various algorithms. The document emphasizes that problems can have multiple potential algorithms and that problems should be carefully defined in terms of inputs, outputs, and resource constraints.
This document provides information about the book "Python for Informatics: Exploring Information". It was originally based on the book "Think Python: How to Think Like a Computer Scientist" by Allen B. Downey but has been significantly revised by Charles Severance to focus on using Python for data analysis and exploring information. The revisions include replacing code examples and exercises with data-oriented problems, reorganizing some topics, and adding new chapters on real-world Python applications like web scraping, APIs, and databases. The goal is to teach useful data skills to students who may not become professional programmers.
This document is the preface to the book "Python for Informatics: Remixing an Open Book". It discusses how the book was created by modifying the open source book "Think Python" by Allen B. Downey to have a stronger focus on data analysis and exploring information using Python. Key changes included replacing number examples with data examples, reorganizing topics to get to data analysis quicker, and adding new chapters on data-related Python topics like regular expressions, web scraping, and databases. The goal was to produce a text suitable for a first technology course with an informatics rather than computer science focus.
The document discusses an algorithms analysis and design course. The major objectives are to design and analyze modern algorithms, compare their efficiencies, and solve real-world problems. Students will learn to prove algorithm correctness, analyze running times, and apply techniques like dynamic programming and graph algorithms. While algorithms can differ in efficiency, even on faster hardware, the computational model used makes reasonable assumptions for comparing algorithms asymptotically.
The document discusses data structures and algorithms. It defines key concepts like primitive data types, data structures, static vs dynamic structures, abstract data types, algorithm design, analysis of time and space complexity, and recursion. It provides examples of algorithms and data structures like arrays, stacks and the factorial function to illustrate recursive and iterative implementations. Problem solving techniques like defining the problem, designing algorithms, analyzing and testing solutions are also covered.
The document discusses data structures and algorithms. It defines key concepts like primitive data types, data structures, static vs dynamic structures, abstract data types, algorithm design, analysis of time and space complexity, recursion, stacks and common stack operations like push and pop. Examples are provided to illustrate factorial calculation using recursion and implementation of a stack.
This document discusses data structures and algorithms. It begins by defining data structures as the logical organization of data and primitive data types like integers that hold single pieces of data. It then discusses static versus dynamic data structures and abstract data types. The document outlines the main steps in problem solving as defining the problem, designing algorithms, analyzing algorithms, implementing, testing, and maintaining solutions. It provides examples of space and time complexity analysis and discusses analyzing recursive algorithms through repeated substitution and telescoping methods.
The document discusses data structures and algorithms. It defines key concepts like primitive data types, data structures, static vs dynamic structures, abstract data types, algorithm analysis including time and space complexity, and common algorithm design techniques like recursion. It provides examples of algorithms and data structures like stacks and using recursion to calculate factorials. The document covers fundamental topics in data structures and algorithms.
6. Python features
• no compiling or linking: rapid development cycle
• no type declarations: simpler, shorter, more flexible
• automatic memory management: garbage collection
• high-level data types and operations: fast development
• object-oriented programming: code structuring and reuse, C++
• embedding and extending in C: mixed language systems
• classes, modules, exceptions: "programming-in-the-large" support
• dynamic loading of C modules: simplified extensions, smaller binaries
• dynamic reloading of C modules: programs can be modified without stopping
7. Python features
• universal "first-class" object model: fewer restrictions and rules
• run-time program construction: handles unforeseen needs, end-user coding
• interactive, dynamic nature: incremental development and testing
• access to interpreter information: metaprogramming, introspective objects
• wide portability: cross-platform programming without ports
• compilation to portable byte-code: execution speed, protecting source code
• built-in interfaces to external services: system tools, GUIs, persistence, databases, etc.
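As an illustrative sketch (not from the slides), three of the features listed above, dynamic typing, first-class objects, and introspection, can be seen in a handful of lines:

```python
def greet(name):
    """Return a greeting; works for any value with a str() form."""
    return "Hello, " + str(name)

# Dynamic typing: the same variable may hold values of different types,
# with no type declarations needed.
x = 42            # an int
x = "forty-two"   # now a str

# First-class objects: functions can be stored and passed around like data.
actions = {"greet": greet}
result = actions["greet"]("Python")

# Introspection: the interpreter can describe its own objects at run time.
print(result)               # Hello, Python
print(type(x).__name__)     # str
print(callable(greet))      # True
```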
9. Uses of Python
• shell tools
• system admin tools, command line programs
• extension-language work
• rapid prototyping and development
• language-based modules
• instead of special-purpose parsers
• graphical user interfaces
• database access
• distributed programming
• Internet scripting
10. Brief History of Python
• Invented in the Netherlands in the early 90s by Guido van Rossum
• Named after Monty Python
• Open sourced from the beginning
• Considered a scripting language, but is much more
• Scalable, object-oriented and functional from the beginning
• Used by Google from the beginning
• Increasingly popular
11. Installing
• Python is pre-installed on most Unix systems, including Linux and Mac OS X
• Download from http://python.org/download/
• Python comes with a large library of standard modules
• There are several options for an IDE:
  • IDLE – works well with Windows
  • Emacs with python-mode, or your favorite text editor
  • Eclipse with Pydev (http://pydev.sourceforge.net/)
14. What is Computation?
In this part, we will discuss two points:
• Computational Thinking
• Computational Problem
"One can major [i.e. graduate] in computer science and do anything. One can major in English or mathematics and go on to a multitude of different careers. Ditto computer science. One can major in computer science and go on to a career in medicine, law, business, politics, any type of science or engineering, and even the arts."
Wing (2006)
Jeannette M. Wing, Professor of Computer Science (Carnegie Mellon University, United States) and Head of Microsoft Research International
15. What does each of them mean (try to write something down in your own words, without looking them up)?
• Computational thinking
• Computational problem
Answer:
• Computational thinking is not merely knowing how to use an algorithm or a data structure; it is the ability, when faced with a problem, to analyze it with the techniques and skills that computer science puts at our disposal.
• A computational problem is a problem expressed sufficiently precisely that it is possible to attempt to build an algorithm to solve it.
16. The point is that computational thinking is not about thinking like a computer; rather, computational thinking is first and foremost thinking about computation. Computational thinking consists of the skills to:
• formulate a problem as a computational problem
• construct a good computational solution (i.e. an algorithm) for the problem, or explain why there is no such solution.
A computational thinker won't, however, be satisfied with just any solution: the solution has to be a 'good' one. You have already seen that some solutions for finding a word in a dictionary are much better (in particular, faster) than others. The search for good computational solutions is a theme that runs throughout this module. Finally, computational thinking goes beyond finding solutions: if no good solution exists, one should be able to explain why this is so. This requires insight into the limits of computational problem solving.
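The dictionary example can be made concrete. Both functions below solve the same computational problem, finding a word in a sorted list, but the second needs far fewer comparisons on large lists. This is an illustrative sketch (the word list is made up), not code from the module:

```python
from bisect import bisect_left

def linear_search(words, target):
    """Check every entry in turn: up to len(words) comparisons."""
    for i, w in enumerate(words):
        if w == target:
            return i
    return -1

def binary_search(words, target):
    """Repeatedly halve the sorted list: about log2(len(words)) comparisons."""
    i = bisect_left(words, target)
    if i < len(words) and words[i] == target:
        return i
    return -1

words = ["algorithm", "binary", "complexity", "data", "search"]  # sorted
print(linear_search(words, "data"))   # 3
print(binary_search(words, "data"))   # 3
```

For a 5-word list the difference is invisible; for a million-word dictionary, binary search needs about 20 comparisons where linear search may need a million.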
17. Computational thinking (Automation)
'... the feedback loop that one has when you're abstracting from some physical-world phenomenon, creating a mathematical model of this physical-world phenomenon, and then analyzing the abstraction, doing sorts of manipulations of those abstractions, and in fact automating the abstraction, that then tells us more about the physical-world phenomenon that we're actually modelling.'
22. Running Time
• Most algorithms transform input objects into output objects.
• The running time of an algorithm typically grows with the input size.
• Average-case time is often difficult to determine.
• We focus on the worst-case running time:
  • easier to analyze
  • crucial to applications such as games, finance and robotics
[Chart: running time (0–120 ms) versus input size (1000–4000), with best-case, average-case and worst-case curves]
23. Why discard the average case and choose the worst case instead?
• An algorithm may run faster on some inputs than on others of the same size. We therefore wish to express the running time of an algorithm as a function of the input size.
• Average-case analysis is typically quite challenging: it requires us to define a probability distribution on the set of inputs, which is often a difficult task.
• An average-case analysis usually requires that we calculate expected running times for a given input distribution, which usually involves sophisticated probability theory. Therefore we will characterize running times in terms of the worst case, as a function of the input size n.
• Worst-case analysis is much easier than average-case analysis, as it requires only the ability to identify the worst-case input, which is often simple.
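To make best case versus worst case concrete, here is a minimal linear-search sketch (illustrative, not from the slides) that counts comparisons on each run:

```python
def linear_search(items, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

# Best case: target is the first element  -> 1 comparison.
# Worst case: target is last or absent    -> n comparisons.
```

On a list of size n, the worst-case input (a missing target) always costs n comparisons, which is the bound worst-case analysis reports.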
24. Experimental Studies
• Write a program implementing the algorithm.
• Run the program with inputs of varying size and composition, noting the time needed.
• Plot the results.
[Chart: measured time (ms) versus input size]
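The experimental procedure above can be sketched with the standard library's time.perf_counter; the measure helper and the choice of the built-in sorted as the algorithm under test are illustrative assumptions:

```python
import time

def measure(algorithm, n):
    """Run algorithm once on an input of size n; return elapsed milliseconds."""
    data = list(range(n, 0, -1))          # reverse-sorted input of size n
    start = time.perf_counter()
    algorithm(data)
    return (time.perf_counter() - start) * 1000

# Run with inputs of varying size, noting the time needed, ready to plot.
timings = [(n, measure(sorted, n)) for n in (1000, 2000, 4000, 8000)]
```

A plot of timings against n is exactly the chart sketched on the slide.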
25. Limitations of Experiments
• It is necessary to implement the whole algorithm before conducting any experiment, which may be difficult.
• Results may not be indicative of the running time on inputs not included in the experiment.
• In order to compare two algorithms, the same hardware and software environments must be used.
26. So we need another way to measure the performance of algorithms
• We therefore turn to theoretical (asymptotic) analysis, which:
• uses a high-level description of the algorithm (pseudocode) instead of an implementation;
• characterizes running time as a function of the input size, n;
• takes into account all possible inputs;
• allows us to evaluate the speed of an algorithm independently of the hardware/software environment.
Pseudocode
• High-level description of an algorithm.
• More structured than English prose.
• Less detailed than a program.
• Preferred notation for describing algorithms.
• Hides program design issues.
27. Big-Oh Notation
• Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
• Example: 2n + 10 is O(n)
• 2n + 10 ≤ cn
• (c − 2)n ≥ 10
• n ≥ 10/(c − 2)
• Pick c = 3 and n0 = 10
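The example constants can be spot-checked numerically (a finite check over a range of n, not a proof):

```python
# Verify 2n + 10 <= c*n with c = 3 for every n >= n0 = 10, up to a finite bound.
c, n0 = 3, 10
assert all(2 * n + 10 <= c * n for n in range(n0, 100_000))

# Below n0 the bound can fail, e.g. at n = 5: 2*5 + 10 = 20 > 15 = 3*5.
assert not (2 * 5 + 10 <= c * 5)
```

This mirrors the definition: the inequality need only hold from n0 onwards, not for every n.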
28. Big-Oh and Growth Rate
• The big-Oh notation gives an upper bound on the growth rate of a function.
• The statement "f(n) is O(g(n))" means that the growth rate of f(n) is no more than the growth rate of g(n).
• We can use the big-Oh notation to rank functions according to their growth rate.

                    f(n) is O(g(n))   g(n) is O(f(n))
g(n) grows more     Yes               No
f(n) grows more     No                Yes
Same growth         Yes               Yes
29. Relatives of Big-Oh
big-Omega
• f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for all n ≥ n0.
big-Theta
• f(n) is Θ(g(n)) if there are constants c′ > 0 and c″ > 0 and an integer constant n0 ≥ 1 such that c′·g(n) ≤ f(n) ≤ c″·g(n) for all n ≥ n0.
31. Essential seven functions to estimate algorithm performance
g(n) = n
for i in range(0, n):
    print(i)
32. Essential seven functions to estimate algorithm performance
g(n) = lg n
def power_of_2(a):
    x = 0
    while a > 1:
        a = a / 2
        x = x + 1
    return x
33. Essential seven functions to estimate algorithm performance
g(n) = n lg n
def power_of_2(a):
    x = 0
    while a > 1:
        a = a / 2
        x = x + 1
    return x

for i in range(0, n):
    power_of_2(n)    # an O(lg n) body executed n times
34. Essential seven functions to estimate algorithm performance
g(n) = n²
for i in range(0, n):
    for j in range(0, n):
        print(i * j)
35. Essential seven functions to estimate algorithm performance
g(n) = n³
for i in range(0, n):
    for j in range(0, n):
        for k in range(0, n):
            print(i * j * k)
36. Essential seven functions to estimate algorithm performance
g(n) = 2ⁿ
def F(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return F(n - 1) + F(n - 2)
37. Seven Important Functions
• Seven functions that often appear in algorithm analysis:
• Constant: 1
• Logarithmic: log n
• Linear: n
• N-log-N: n log n
• Quadratic: n²
• Cubic: n³
• Exponential: 2ⁿ
• In a log-log chart, the slope of the line corresponds to the growth rate.
38. Why Growth Rate Matters (slide by Matt Stallmann, included with permission)

if runtime is...   time for n + 1        time for 2n            time for 4n
c lg n             c lg(n + 1)           c(lg n + 1)            c(lg n + 2)
c n                c(n + 1)              2c n                   4c n
c n lg n           ~ c n lg n + c n      2c n lg n + 2c n       4c n lg n + 4c n
c n²               ~ c n² + 2c n         4c n²                  16c n²
c n³               ~ c n³ + 3c n²        8c n³                  64c n³
c 2ⁿ               c 2ⁿ⁺¹                c 2²ⁿ                  c 2⁴ⁿ

For the quadratic row, the runtime quadruples when the problem size doubles.
39. Comparison of Two Algorithms
Insertion sort takes n²/4 operations; merge sort takes 2n lg n.
Sort a million items? Insertion sort takes roughly 70 hours, while merge sort takes roughly 40 seconds.
This is a slow machine, but even on one 100× as fast it is 40 minutes versus less than 0.5 seconds.
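The slide's figures can be reproduced with a quick back-of-the-envelope computation; the machine speed of one million operations per second is an assumption chosen to match the slide's numbers:

```python
import math

n = 1_000_000
ops_per_second = 1e6                       # assumed "slow machine" speed

insertion_ops = n ** 2 / 4                 # insertion sort: n²/4 operations
merge_ops = 2 * n * math.log2(n)           # merge sort: 2 n lg n operations

insertion_hours = insertion_ops / ops_per_second / 3600   # ~69 hours
merge_seconds = merge_ops / ops_per_second                # ~40 seconds
```

Scaling ops_per_second up by 100 gives the slide's second scenario: about 40 minutes versus under half a second.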
40. How to calculate the algorithm's complexity
We may not be able to predict to the nanosecond how long a Python program will take, but we do know some things about timing.
This loop takes time k·n, for some constant k:
k: how long it takes to go through the loop once
n: the number of times through the loop (we can use this as the "size" of the problem)
The total time k·n is linear in n.
for i in range(0, n):
    print(i)
41. Constant Time
• Constant time means there is some constant k such that the operation always takes k nanoseconds.
• A statement takes constant time if:
• it does not include a loop;
• it does not include a call to a method whose time is unknown or not constant.
• If a statement involves a choice (if or switch) among operations, each of which takes constant time, we consider the statement to take constant time.
• This is consistent with worst-case analysis.
43. Prefix Averages 2 (Looks Better)
The following algorithm uses a built-in Python function to simplify the code.
Algorithm prefixAverage2 still runs in O(n²) time!
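The slide does not reproduce the code; here is a sketch consistent with the description (function and variable names are assumptions), where the built-in sum does O(i) work on iteration i:

```python
def prefix_average2(X):
    """A[i] = average of X[0..i]; still O(n^2) because sum() rescans a prefix."""
    n = len(X)
    A = [0.0] * n
    for i in range(n):
        A[i] = sum(X[0:i + 1]) / (i + 1)   # sum() here costs O(i) time
    return A
```

The loop body looks like a single step, but the hidden O(i) cost of sum makes the total 1 + 2 + … + n, i.e. O(n²).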
44. Prefix Averages 3 (Linear Time)
The following algorithm computes prefix averages in linear time by keeping a running sum.
Algorithm prefixAverage3 runs in O(n) time.
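Again the code is not reproduced on the slide; a sketch consistent with the description (names assumed) replaces the repeated rescans with a single running sum:

```python
def prefix_average3(X):
    """A[i] = average of X[0..i], computed in O(n) time via a running sum."""
    n = len(X)
    A = [0.0] * n
    total = 0
    for i in range(n):
        total += X[i]                      # O(1) update instead of a rescan
        A[i] = total / (i + 1)
    return A
```

Each iteration now does a constant amount of work, so the whole algorithm is O(n).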