2. Definition
A data structure is a particular way of organizing data in a
computer so that it can be used effectively.
A data structure is a specialized format for organizing,
processing, retrieving and storing data.
5. Contd…
Primitive data structures are basic structures that are directly operated upon by
machine instructions.
Primitive data structures have different representations on different computers.
Integers, floats, characters, and pointers are examples of primitive data structures.
These data types are available in most programming languages as built-in types.
• Integer: a data type that holds whole numbers, i.e., values without a fractional part.
• Float: a data type used for storing fractional (real) numbers.
• Character: a data type used for single character values.
• Pointer: a variable that holds the memory address of another variable.
7. Non-primitive Data Structures
These are derived from primitive data structures.
Non-primitive data structures emphasize the structuring of a group of
homogeneous or heterogeneous data items.
Non-primitive data types are further divided into linear and
non-linear data structures.
8. Linear data structures
A data structure is said to be linear if its elements are connected
in a linear fashion, either logically or through sequential memory
locations.
There are two ways to represent a linear data structure in
memory,
• Static memory allocation
• Dynamic memory allocation
9. Static memory allocation
Array
In an array, elements are arranged in contiguous memory locations.
All the elements of an array are of the same type, and the types of
elements that can be stored as arrays are determined by the
programming language.
10. Dynamic memory allocation
Stack
In the stack data structure, elements are stored according to the LIFO
(last in, first out) principle: the last element stored in a stack is
removed first.
In a stack, operations can be performed from only one end (the
top).
11. Contd…
Queue
Unlike a stack, the queue data structure works on the FIFO (first in,
first out) principle: the first element stored in the queue is
removed first.
In a queue, addition and removal are performed at separate
ends.
12. Contd…
Linked List
In the linked list data structure, data elements are connected
through a series of nodes, and each node contains a data item
and the address of the next node.
13. Non linear data structures
Unlike linear data structures, elements in non-linear data
structures are not arranged in any sequence.
Instead, they are arranged in a hierarchical manner, where one
element can be connected to one or more elements.
Graph
In the graph data structure, each node is called a vertex, and each
vertex is connected to other vertices through edges.
14. Contd…
Trees
Similar to a graph, a tree is also a collection of vertices and
edges. However, in the tree data structure there is exactly one
path between any two vertices, so a tree contains no cycles.
15. Operations on data structure
1. Create: The create operation reserves memory for program
elements. This can be done with a declaration statement. Creation
of a data structure may take place either at compile time or at
run time; the malloc() function of the C language is used for
run-time creation.
2. Destroy: The destroy operation releases the memory space allocated
for a specified data structure. The free() function of the C language is
used to destroy a data structure.
3. Selection: The selection operation deals with accessing a
particular data item within a data structure.
4. Updation: Updates or modifies the data in the data structure.
5. Splitting: Splitting is the process of partitioning a single list into
multiple lists.
16. Contd…
6. Traversing: Every data structure contains a set of data elements.
Traversing the data structure means visiting each element of the data
structure in order to perform some specific operation, such as searching
or sorting.
7. Insertion: Insertion is the process of adding elements to the data
structure at any location.
A data structure of fixed size n can hold at most n elements.
A condition in which a user tries to insert a new element into a data
structure that has no space left for it is called overflow.
8. Deletion: The process of removing an element from the data structure
is called deletion. We can delete an element from the data structure at
any location.
If we try to delete an element from an empty data structure,
underflow occurs.
17. Contd…
9. Searching: The process of finding the location of an element
within the data structure is called searching. Two commonly used
searching algorithms are linear search and binary search.
10. Sorting: The process of arranging the elements of a data structure
in a specific order is known as sorting. Many algorithms can be used
to perform sorting, for example insertion sort, selection sort, and
bubble sort.
11. Merging: When two lists, List A and List B, of sizes M and N
respectively and containing elements of a similar type, are joined to
produce a third list, List C, of size (M+N), the process is
called merging.
18. Algorithm
An algorithm is a process or a set of rules required to perform
calculations or some other problem-solving operations,
especially by a computer.
The formal definition of an algorithm is a finite set of
instructions that are carried out in a specific order to perform a
specific task.
It is not the complete program or code; it is just the solution
(logic) of a problem, which can be represented informally using a
flowchart or as pseudocode.
19. Characteristics of an Algorithm
The following are the characteristics of an algorithm:
1. Input: An algorithm has some input values; we can pass zero or more
inputs to an algorithm.
2. Output: We will get one or more outputs at the end of an algorithm.
3. Unambiguity: An algorithm should be unambiguous which means
that the instructions in an algorithm should be clear and simple.
4. Finiteness: An algorithm should have finiteness. Here, finiteness
means that the algorithm should contain a limited number of
instructions, i.e., the instructions should be countable.
20. Contd…
5. Effectiveness: An algorithm should be effective as each
instruction in an algorithm affects the overall process.
6. Language independent: An algorithm must be language-
independent so that the instructions in an algorithm can be
implemented in any of the languages with the same output.
21. Dataflow of an Algorithm
1. Problem: A problem can be a real-world problem or any instance from
the real-world problem for which we need to create a program or the set
of instructions. The set of instructions is known as an algorithm.
2. Algorithm: An algorithm, which is a step-by-step procedure, is
designed for the problem.
3. Input: After the algorithm is designed, the required and desired
inputs are provided to it.
4. Processing unit: The input will be given to the processing unit, and the
processing unit will produce the desired output.
5. Output: The output is the outcome or the result of the program.
22. Example
Problem: Add two numbers entered by the user.
Step 1: Start
Step 2: Declare three variables a, b, and sum.
Step 3: Enter the values of a and b.
Step 4: Add the values of a and b and store the result in the sum
variable, i.e., sum= a+ b.
Step 5: Print sum
Step 6: Stop
23. Asymptotic Notations
The asymptotic notations commonly used for describing the
running-time complexity of an algorithm are given below:
1. Big O Notation (O)
2. Big Omega Notation (Ω)
3. Theta Notation (Θ)
24. Big O Notation (O)
Big O notation is an asymptotic notation that measures the performance
of an algorithm by providing the order of growth of a function.
This notation provides an upper bound on a function, which ensures that
the function never grows faster than that bound. Formally, f(n) = O(g(n))
if there exist positive constants c and n0 such that f(n) ≤ c·g(n) for
all n ≥ n0.
It is the formal way to express an upper bound on an algorithm's
running time.
It measures the worst case of time complexity, i.e., the longest
amount of time the algorithm may take to complete its operation.
26. Big Omega Notation (Ω)
It basically describes the best-case scenario, which is the opposite of
big O notation.
It is the formal way to represent a lower bound on an algorithm's
running time; it measures the best-case time complexity, i.e., the
least amount of time an algorithm can possibly take to complete.
Formally, f(n) = Ω(g(n)) if there exist positive constants c and n0
such that f(n) ≥ c·g(n) for all n ≥ n0.
It is used when we require that an algorithm take at least a certain
amount of time, without stating an upper bound; it bounds the growth
of running time from below for large input sizes.
28. Theta Notation (Θ)
The theta notation mainly describes average-case scenarios and
represents the realistic time complexity of an algorithm. An
algorithm does not perform at its worst or best every time; in
real-world problems, its running time mainly fluctuates between the
worst case and the best case, which gives the average case of the
algorithm.
Big theta is mainly used when the worst-case and best-case values
are the same.
It is the formal way to express both an upper bound and a lower
bound on an algorithm's running time: f(n) = Θ(g(n)) if there exist
positive constants c1, c2, and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n)
for all n ≥ n0.
30. Algorithm Complexity
The performance of the algorithm can be measured in two factors:
1. Time complexity: The time complexity of an algorithm is the
amount of time required to complete its execution.
The time complexity of an algorithm is commonly expressed using big O
notation, the asymptotic notation for representing time complexity.
The time complexity is mainly calculated by counting the number of
steps needed to finish the execution.
31. Contd…
2. Space complexity: An algorithm's space complexity is the amount
of space required to solve a problem and produce an output.
Similar to the time complexity, space complexity is also expressed in
big O notation.
For an algorithm, the space is required for the following purposes:
1. To store program instructions
2. To store constant values
3. To store variable values
4. To track the function calls, jumping statements, etc.
32. Contd…
Auxiliary space: The extra space required by the algorithm,
excluding the space taken by the input, is known as auxiliary space.
Space complexity considers both of these, i.e., the auxiliary
space and the space used by the input.
Space complexity = Auxiliary space + Input size.