This PowerPoint presentation explains the merge sort algorithm and how it works through a worked example. It also describes the time complexity of merge sort and presents the program in C.
This document discusses different types of sorting algorithms. It describes internal sorting and external sorting: internal sorting handles all data in memory, while external sorting requires external memory. Bubble sort, selection sort, and insertion sort are briefly explained as examples of sorting methods. Bubble sort works by comparing adjacent elements and swapping them if they are out of order, selection sort repeatedly finds the minimum element, and insertion sort inserts each element into its sorted position. Pseudocode and examples are provided for each algorithm.
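Of the three elementary sorts mentioned above, insertion sort is perhaps the simplest to sketch. A minimal Python version (an illustrative sketch, not taken from the document itself):

```python
def insertion_sort(a):
    """Insert each element into its correct place in the sorted prefix."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements one slot to the right
            j -= 1
        a[j + 1] = key
    return a
```

Each pass grows the sorted prefix by one element, which is why the method is O(n^2) in the worst case but fast on nearly sorted input.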
The document discusses Python's four main collection data types: lists, tuples, sets, and dictionaries. It provides details on lists, including that they are ordered and changeable collections that allow duplicate members. Lists can be indexed, sliced, modified using methods like append() and insert(), and have various built-in functions that can be used on them. Examples are provided to demonstrate list indexing, slicing, changing elements, adding elements, removing elements, and built-in list methods.
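The list operations described above can be shown in a few lines (values here are arbitrary examples):

```python
nums = [10, 20, 30, 40]   # an ordered, changeable list
nums[1] = 25              # change an element by index
nums.append(50)           # add at the end
nums.insert(0, 5)         # insert at a given position
nums.remove(30)           # remove the first matching value
middle = nums[1:3]        # slicing returns a sub-list
```

After these steps `nums` is `[5, 10, 25, 40, 50]` and `middle` is `[10, 25]`.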
The document discusses various searching and sorting algorithms. It describes linear search, binary search, and interpolation search for searching unsorted and sorted lists. It also explains different sorting algorithms like bubble sort, selection sort, insertion sort, quicksort, shellsort, heap sort, and merge sort. Linear search searches sequentially while binary search uses divide and conquer. Sorting algorithms like bubble sort, selection sort, and insertion sort are in-place and have quadratic time complexity in the worst case. Quicksort, mergesort, and heapsort generally have better performance.
1) Stacks are linear data structures that follow the LIFO (last-in, first-out) principle. Elements can only be inserted or removed from one end called the top of the stack.
2) The basic stack operations are push, which adds an element to the top of the stack, and pop, which removes an element from the top.
3) Stacks have many applications, including evaluating arithmetic expressions by converting them to postfix notation and implementing backtracking in recursive problems like the Tower of Hanoi.
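The push/pop behaviour described in these points can be modelled with a plain Python list, using the end of the list as the top of the stack (a common idiom, not the document's own code):

```python
stack = []           # the end of the list acts as the top
stack.append("A")    # push
stack.append("B")    # push
top = stack.pop()    # pop removes the most recently pushed item ("B")
```

Because both operations touch only the top, each runs in O(1) time.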
Linked lists are linear data structures where each node contains a data field and a pointer to the next node. There are two types: singly linked lists where each node has a single next pointer, and doubly linked lists where each node has next and previous pointers. Common operations on linked lists include insertion and deletion which have O(1) time complexity for singly linked lists but require changing multiple pointers for doubly linked lists. Linked lists are useful when the number of elements is dynamic as they allow efficient insertions and deletions without shifting elements unlike arrays.
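The O(1) insertion and deletion claimed above hold at the head of a singly linked list, as a small sketch shows (the `Node` class and helper names are illustrative):

```python
class Node:
    def __init__(self, data, next=None):
        self.data, self.next = data, next

def push_front(head, data):
    """O(1) insertion at the head of a singly linked list."""
    return Node(data, head)

def delete_front(head):
    """O(1) deletion of the head node."""
    return head.next if head else None

def to_list(head):
    """Walk the chain and collect the data fields (for inspection)."""
    out = []
    while head:
        out.append(head.data)
        head = head.next
    return out
```

Unlike an array, no elements are shifted: only one pointer changes per operation.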
This document discusses function overloading, inline functions, and friend functions in C++. Function overloading allows functions to have the same name but different parameters, enabling polymorphism. Inline functions have their body inserted at call sites for efficiency. Friend functions can access private members of a class but are external functions without object access. Examples are provided to illustrate each concept.
This document discusses priority queues. It defines a priority queue as a queue where insertion and deletion are based on some priority property. Items with higher priority are removed before lower priority items. There are two main types: ascending priority queues remove the smallest item, while descending priority queues remove the largest item. Priority queues are useful for scheduling jobs in operating systems, where real-time jobs have highest priority and are scheduled first. They are also used in network communication to manage limited bandwidth.
The document discusses stacks, which are a fundamental data structure used in programs. It defines a stack as a linear list of items where additions and deletions are restricted to one end, called the top. Common stack operations include push, which adds an element to the top, and pop, which removes an element from the top. Stacks have applications in parsing expressions, reversing strings, implementing depth-first search algorithms, and calculating arithmetic expressions in prefix and postfix notation. Stacks can be implemented using static arrays or dynamic arrays/linked lists.
This document discusses hashing techniques for implementing symbol tables. It begins by reviewing the motivation for symbol tables in compilers and describing the basic operations of search, insertion and deletion that a hash table aims to support efficiently. It then discusses direct addressing and its limitations when key ranges are large. The concept of a hash function is introduced to map keys to a smaller range to enable direct addressing. Collision resolution techniques of chaining and open addressing are covered. Analysis of expected costs for different operations on chaining hash tables is provided. Various hash functions are described including division and multiplication methods, and the importance of choosing a hash function to distribute keys uniformly is discussed. The document concludes by mentioning universal hashing as a technique to randomize the hash function.
NumPy is a Python package that provides multidimensional array and matrix objects as well as tools to work with these objects. It was created to handle large, multi-dimensional arrays and matrices efficiently. NumPy arrays enable fast operations on large datasets and facilitate scientific computing using Python. NumPy also contains functions for Fourier transforms, random number generation and linear algebra operations.
This document provides information about priority queues and binary heaps. It defines a binary heap as a nearly complete binary tree where the root node has the maximum/minimum value. It describes heap operations like insertion, deletion of max/min, and increasing/decreasing keys. The time complexity of these operations is O(log n). Heapsort, which uses a heap data structure, is also covered and has overall time complexity of O(n log n). Binary heaps are often used to implement priority queues and for algorithms like Dijkstra's and Prim's.
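A min-heap-backed priority queue of the kind described here is available directly in Python's standard `heapq` module; the job tuples below are made-up examples:

```python
import heapq

heap = []
for job in [(3, "low"), (1, "urgent"), (2, "normal")]:
    heapq.heappush(heap, job)      # O(log n) insertion
first = heapq.heappop(heap)        # O(log n) removal of the minimum
```

`first` is `(1, "urgent")`: the smallest priority value comes out first, matching the ascending-priority-queue behaviour.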
The document provides an overview of recursive and iterative algorithms. It discusses key differences between recursive and iterative algorithms such as definition, application, termination, usage, code size, and time complexity. Examples of recursive algorithms like recursive sum, factorial, binary search, tower of Hanoi, and permutation generator are presented along with pseudocode. Analysis of recursive algorithms like recursive sum, factorial, binary search, Fibonacci number, and tower of Hanoi is demonstrated to determine their time complexities. The document also discusses iterative algorithms, proving an algorithm's correctness, the brute force approach, and store and reuse methods.
This document discusses sparse matrices. It defines a sparse matrix as a matrix with more zero values than non-zero values. Sparse matrices can save space by only storing the non-zero elements and their indices rather than allocating space for all elements. Two common representations for sparse matrices are the triplet representation, which stores the non-zero values and their row and column indices, and the linked representation, which connects the non-zero elements. Applications of sparse matrices include solving large systems of equations.
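The triplet representation mentioned above can be built in one pass over the matrix; a minimal sketch (function name and sample matrix are illustrative):

```python
def to_triplet(matrix):
    """Store only (row, col, value) for the non-zero entries."""
    return [(i, j, v)
            for i, row in enumerate(matrix)
            for j, v in enumerate(row) if v != 0]

m = [[0, 0, 3],
     [4, 0, 0],
     [0, 0, 0]]
```

For `m`, the triplet form is `[(0, 2, 3), (1, 0, 4)]`: two entries instead of nine stored values.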
Triggers are stored database procedures that are automatically invoked in response to certain events like data changes. They allow flexible management of data integrity by enforcing business rules. Triggers can be used to log events, gather statistics, modify data when views are updated, enforce referential integrity across nodes, publish database events, prevent operations during certain hours, and enforce complex integrity rules that cannot be defined with constraints alone. Unlike stored procedures, triggers are not explicitly invoked but rather automatically fire in response to triggering events like data modifications.
There are two broad categories of sorting methods based on merging: internal merge sort and external merge sort. Internal merge sort handles small lists that fit into primary memory, including simple merge sort and two-way merge sort. External merge sort is for very large lists that exceed primary memory, including balanced two-way merge sort and multi-way merge sort. The simple merge sort uses a divide-and-conquer approach to recursively split lists in half, sort each sublist, and then merge the sorted sublists.
This document discusses and provides examples of depth-first search (DFS) and breadth-first search (BFS) algorithms for traversing graphs. It explains that DFS involves recursively exploring all branches of the graph as deep as possible before backtracking, while BFS involves searching the neighbors of the starting node first before moving to the next level. Examples are given showing the step-by-step process of applying DFS and BFS to traverse graphs and mark visited vertices.
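The two traversal orders can be contrasted on an adjacency-list graph; a minimal sketch (the graph representation and function names are assumptions, not the document's own examples):

```python
from collections import deque

def bfs(graph, start):
    """Visit neighbours level by level using a FIFO queue."""
    visited, queue, order = {start}, deque([start]), []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

def dfs(graph, start, visited=None):
    """Recursively go as deep as possible before backtracking."""
    if visited is None:
        visited = []
    visited.append(start)
    for w in graph[start]:
        if w not in visited:
            dfs(graph, w, visited)
    return visited
```

On `{"A": ["B", "C"], "B": ["D"], "C": [], "D": []}`, BFS yields `A, B, C, D` while DFS yields `A, B, D, C`: DFS follows the `B -> D` branch to the end before visiting `C`.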
The document discusses hashing techniques for storing and retrieving data from memory. It covers hash functions, hash tables, open addressing techniques like linear probing and quadratic probing, and closed hashing using separate chaining. Hashing maps keys to memory addresses using a hash function to store and find data independently of the number of items. Collisions may occur and different collision resolution methods are used like open addressing that resolves collisions by probing in the table or closed hashing that uses separate chaining with linked lists. The efficiency of hashing depends on factors like load factor and average number of probes.
PPT On Sorting And Searching Concepts In Data Structure | In Programming Lang... (Umesh Kumar)
The document discusses various sorting and searching algorithms:
- Bubble sort, selection sort, merge sort, quicksort are sorting algorithms that arrange data in a particular order like numerical or alphabetical.
- Linear search and binary search are searching algorithms where linear search sequentially checks each item while binary search divides the data set in half with each comparison.
- Examples are provided to illustrate how each algorithm works step-by-step on sample data sets.
Arrays in Python can hold multiple values and each element has a numeric index. Arrays can be one-dimensional (1D), two-dimensional (2D), or multi-dimensional. Common operations on arrays include accessing elements, adding/removing elements, concatenating arrays, slicing arrays, looping through elements, and sorting arrays. The NumPy library provides powerful capabilities to work with n-dimensional arrays and matrices.
A linked list is a linear data structure where each element (node) is a separate object, connected to the previous and next elements by references. The first element is referred to as the head of the linked list and the last element is referred to as the tail. The nodes in a linked list can store data or simply act as a reference to the next node. Linked lists have several advantages, such as dynamic sizing and easy insertion and deletion of elements. They are commonly used in a variety of applications, such as implementing stacks, queues, and dynamic memory allocation.
There are several types of linked lists, including:
1. Singly linked list: Each node has a reference to the next node in the list.
2. Doubly linked list: Each node has a reference to both the next and previous node in the list.
3. Circular linked list: The last node in the list points back to the first node, creating a loop.
4. Multilevel linked list: Each node in a linked list can contain another linked list.
5. Doubly circular linked list: The first and last nodes point to each other, forming a circular loop.
6. Skip list: A probabilistic data structure where each node has multiple references to other nodes.
7. XOR linked list: Each node stores the XOR of the addresses of the previous and next nodes, rather than actual addresses.
This document discusses different searching methods like sequential, binary, and hashing. It defines searching as finding an element within a list. Sequential search searches lists sequentially until the element is found or the end is reached, with efficiency of O(n) in worst case. Binary search works on sorted arrays by eliminating half of remaining elements at each step, with efficiency of O(log n). Hashing maps keys to table positions using a hash function, allowing searches, inserts and deletes in O(1) time on average. Good hash functions uniformly distribute keys and generate different hashes for similar keys.
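The separate-chaining idea mentioned above can be sketched as a list of buckets, each holding the key/value pairs that hash to it (the class name and table size of 11 are arbitrary choices, not from the document):

```python
class ChainedHash:
    """Hash table with separate chaining; collisions share a bucket list."""
    def __init__(self, size=11):
        self.buckets = [[] for _ in range(size)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))

    def search(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return None
```

With a good hash function and a low load factor, each bucket stays short, giving the O(1) average behaviour described above.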
The document discusses several sorting algorithms including selection sort, insertion sort, bubble sort, merge sort, and quick sort. It provides details on how each algorithm works including pseudocode implementations and analyses of their time complexities. Selection sort, insertion sort and bubble sort have a worst-case time complexity of O(n^2) while merge sort divides the list into halves and merges in O(n log n) time, making it more efficient for large lists.
B-Trees are tree data structures used to store data on disk storage. They allow for efficient retrieval of data compared to binary trees when using disk storage due to reduced height. B-Trees group data into nodes that can have multiple children, reducing the height needed compared to binary trees. Keys are inserted by adding to leaf nodes or splitting nodes and promoting middle keys. Deletion involves removing from leaf nodes, borrowing/promoting keys, or joining nodes.
A queue is a non-primitive linear data structure that follows the FIFO (first-in, first-out) principle. Elements are added to the rear of the queue and removed from the front. Common operations on a queue include insertion (enqueue) and deletion (dequeue). Queues have many real-world applications like waiting in lines and job scheduling. They can be represented using arrays or linked lists.
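The enqueue/dequeue behaviour maps directly onto Python's `collections.deque` (the job names are made-up examples):

```python
from collections import deque

q = deque()
q.append("job1")     # enqueue at the rear
q.append("job2")
first = q.popleft()  # dequeue from the front (FIFO)
```

`first` is `"job1"`: the element that waited longest leaves first, and both operations are O(1).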
The document discusses different types of linked lists including:
- Singly linked lists that can only be traversed in one direction.
- Doubly linked lists that allow traversal in both directions using forward and backward pointers.
- Circular linked lists where the last node points back to the first node allowing continuous traversal.
- Header linked lists that include a header node at the beginning for simplified insertion and deletion. Header lists can be grounded where the last node contains a null pointer or circular where the last node points to the header.
- Two-way or doubly linked lists where each node contains a forward and backward pointer allowing bidirectional traversal through the list.
Chapter 8: Advanced Sorting and Hashing, for print (Abdii Rashid)
Shell sort improves on insertion sort by first sorting elements that are far apart, so the list is nearly sorted before the final pass. It does this by using a sequence of decreasing increment values to sort interleaved sublists within the main list. With some increment sequences, the time complexity of shell sort is O(n^(3/2)).
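A compact sketch of shell sort using the simple gap-halving sequence (one common choice of increments; other sequences give different bounds):

```python
def shell_sort(a):
    """Gap-insertion sort: sort elements `gap` apart, then shrink the gap."""
    gap = len(a) // 2
    while gap > 0:
        for i in range(gap, len(a)):
            key, j = a[i], i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]  # shift within the gap-separated sublist
                j -= gap
            a[j] = key
        gap //= 2
    return a
```

The final pass with `gap == 1` is plain insertion sort, but by then the earlier passes have already moved elements close to their final positions.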
Quicksort uses a divide and conquer approach. It chooses a pivot element and partitions the list into two sublists based on element values relative to the pivot. The sublists are then recursively sorted. The average time complexity of quicksort is O(n log n), but it can be O(n^2) in the worst case.
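A short (non-in-place) version of the recursion just described, taking the first element as pivot (the pivot choice is an illustrative assumption; real implementations usually pick differently):

```python
def quicksort(a):
    """Partition around a pivot, then recursively sort each side."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x <= pivot]   # values not greater than the pivot
    right = [x for x in rest if x > pivot]   # values greater than the pivot
    return quicksort(left) + [pivot] + quicksort(right)
```

The O(n^2) worst case arises when every partition is lopsided, e.g. a first-element pivot on already-sorted input.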
Mergesort follows the same divide and conquer strategy as quicksort. It recursively divides the list into halves until single elements remain, then merges the sorted halves back together.
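The divide-and-merge steps can be sketched directly (an illustrative implementation, not the document's own code):

```python
def merge_sort(a):
    """Split until single elements remain, then merge sorted halves."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step: take the smaller head
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever half remains
```

Every level of recursion does O(n) merging work across O(log n) levels, giving the O(n log n) bound in all cases.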
The document discusses four sorting algorithms: selection sort, insertion sort, bubble sort, and shellsort. It provides explanations of how each algorithm works, examples of pseudocode and walking through examples, and analyses of the time and space complexity of each algorithm. Selection sort and insertion sort have quadratic time complexity while bubble sort and shellsort have various improvements but are still generally quadratic.
This document provides an overview of several advanced sorting algorithms: Shell sort, Quick sort, Heap sort, and Merge sort. It describes the key ideas, time complexities, and provides examples of implementing each algorithm to sort sample data sets. Shell sort improves on insertion sort by sorting elements in a two-dimensional array. Quick sort uses a pivot element and partitions elements into left and right subsets. Heap sort uses a heap data structure and sorts by swapping elements. Merge sort divides the list recursively and then merges the sorted halves.
Ch2 Part III: Advanced Sorting Algorithms.pptx (Mohammed472103)
The document discusses various sorting algorithms including quick sort, shell sort, and heap sort. Quick sort works by selecting a pivot element and partitioning the array into two sub-arrays of elements less than and greater than the pivot. It has an average time complexity of O(n log n) but a worst case of O(n^2). Shell sort is similar to insertion sort but operates on elements far apart first to improve performance. Its average and best cases are O(n log n) but its worst is O(n^2). Heap sort works by converting the array into a heap data structure, removing the maximum element from the root, and rebuilding the heap each time. It has a time complexity of O(n log n) in all cases.
The document discusses several sorting algorithms:
1. Shell sort improves on insertion sort by sorting elements farther apart before closer elements.
2. Bubble sort compares adjacent elements and swaps them if out of order, repeating until sorted.
3. Quicksort chooses a "pivot" element and partitions the array into subarrays of smaller and larger elements, recursively sorting them.
4. Selection sort finds the minimum element and swaps it into the sorted portion of the array.
The document summarizes various sorting algorithms:
- Bubble sort works by repeatedly swapping adjacent elements that are in the wrong order until the list is fully sorted. It requires O(n^2) time.
- Insertion sort iterates through the list and inserts each element into its sorted position. It is an adaptive algorithm with O(n) time for nearly sorted inputs.
- Quicksort uses a divide and conquer approach, recursively partitioning the list around a pivot element and sorting the sublists. It has average-case performance of O(n log n) time.
Linear Search, Binary Search
Bubble Sort, Merge Sort
Quick Sort, Selection Sort
Insertion Sort, Radix Sort, Bucket Sort
Complexity Issues of algorithms
Linear search is a sequential search algorithm that starts at one end and goes through each element of a list until the desired element is found or the end of the data set is reached. It is the simplest searching algorithm.
In binary search, the best case occurs when the element to search for is found in the first comparison, i.e., when the first middle element itself is the target. The best-case time complexity of binary search is O(1).
Average Case: The average-case time complexity of binary search is O(log n).
Worst Case: In binary search, the worst case occurs when we must keep reducing the search space until it has only one element. The worst-case time complexity of binary search is O(log n).
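The three cases above can be seen in a standard iterative implementation (an illustrative sketch over a sorted list):

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == target:
            return mid        # best case: hit on the first middle element
        elif a[mid] < target:
            low = mid + 1     # discard the left half
        else:
            high = mid - 1    # discard the right half
    return -1
```

Each iteration halves the remaining range, so at most about log2(n) + 1 comparisons are ever needed.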
Traverse a collection of elements
Move from the front to the end
“Bubble” the largest value to the end using pair-wise comparisons and swapping
Algorithm Bubble_Sort(a, n)
{
    for i := 1 to n - 1 do           // one pass per outer iteration
    {
        for j := 0 to n - i - 1 do   // compare adjacent pairs in this pass
        {
            if (a[j] > a[j + 1]) then
                swap(a[j], a[j + 1]); // swap if an element is greater than its neighbour
        }
    }
}
Best Case: The best case occurs when the array is already sorted. With an optimized bubble sort that stops as soon as a pass makes no swaps, only N-1 comparisons and 0 swaps are required, so the best-case complexity is O(N).
Worst Case: The worst case for bubble sort occurs when the elements of the array are arranged in decreasing order. In the worst case, the total number of passes required to sort a given array is (N-1), where 'N' is the number of elements present in the array.
In each pass, the total number of swaps = the total number of comparisons.
Total number of comparisons (worst case) = N(N-1)/2
Total number of swaps (worst case) = N(N-1)/2
So the worst-case and average-case time complexity is O(N²), as N² is the highest-order term.
Partitioning a[1:n] into two subarrays in such a way that sorted subarrays do not need to be merged later.
This is accomplished by rearranging the elements in a[1:n] such that a[i] <= a[j] for all i between 1 and m and for all j between m+1 and n, where 1 <= m <= n.
Thus, the elements in a[1:m] and a[m+1:n] can be sorted independently
No need for merging
The rearrangement of the elements is accomplished by picking some element of a[],say t = a[s],and then reordering the other elements so that all elements appearing before t in a[1:n] are less than or equal to t and all elements appearing after t are greater than or equal to t.
This rearranging is referred to as partitioning.
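The partitioning described above can be sketched in Python. This is a minimal illustration (a Lomuto-style scheme with the pivot t chosen as the first element), not necessarily the exact reordering scheme the author has in mind:

```python
def partition(a, lo, hi):
    """Rearrange a[lo..hi] around the pivot a[lo]; return the pivot's final index m.
    Afterwards every element before m is <= a[m] and every element after is >= a[m]."""
    pivot = a[lo]
    m = lo
    for j in range(lo + 1, hi + 1):
        if a[j] < pivot:
            m += 1
            a[m], a[j] = a[j], a[m]   # grow the "less than pivot" region
    a[lo], a[m] = a[m], a[lo]         # place the pivot between the two regions
    return m

def quicksort(a, lo=0, hi=None):
    """Sort a in place; the two halves sort independently, so no merge is needed."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        m = partition(a, lo, hi)
        quicksort(a, lo, m - 1)
        quicksort(a, m + 1, hi)
    return a
```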
In selection sort, the smallest value among the unsorted elements of the array is selected in every pass and inserted into its appropriate position in the array.
In this algorithm, the array is divided into two parts: the first is the sorted part, and the other is the unsorted part.
Analysis and Design of Algorithms - Sorting Algorithms and analysis
1. SHREE SWAMI ATMANAND SARASWATI INSTITUTE
OF TECHNOLOGY
Analysis and Design of Algorithms(2150703)
PREPARED BY: (Group:2)
Bhumi Aghera(130760107001)
Monika Dudhat(130760107007)
Radhika Talaviya(130760107029)
Rajvi Vaghasiya(130760107031)
Sorting Algorithms and analysis
GUIDED BY:
Prof. Vrutti Shah
Prof. Zinal Solanki
3. Bubble Sort
• The list is divided into two sub lists: sorted and unsorted.
• The largest element is bubbled from the unsorted list and moved to the sorted sub list.
• Each time an element moves from the unsorted part to the sorted part one sort pass is
completed.
• Given a list of n elements, bubble sort requires up to n-1 passes to sort the data.
• Compare each element (except the last one) with its neighbor to the right.
• If they are out of order, swap them.
• This puts the largest element at the very end.
• The last element is now in the correct and final place.
• Continue as above until you have no unsorted elements on the left.
4. Bubble Sort Algorithm
• Bubble_Sort(A)
for i = 1 to A.length-1
for j = A.length downto i+1
if A[j] < A[j-1]
exchange A[j] with A[j-1]
• Analysis of algorithm:
1. Best case: O(n)
- The number of key comparisons is (n-1)
2. Worst case: O(n²)
- The number of key comparisons is n*(n-1)/2
3. Average case: O(n²)
- We have to look at all possible initial data organizations
6. Selection Sort
• The list is divided into two sub lists, sorted and unsorted, which are divided by an
imaginary wall.
• We find the smallest element from the unsorted sub list and swap it with the element
at the beginning of the unsorted data.
• After each selection and swapping, the imaginary wall between the two sub lists move
one element ahead, increasing the number of sorted elements and decreasing the
number of unsorted ones.
• Each time we move one element from the unsorted sub list to the sorted sub list, we
say that we have completed a sort pass.
• A list of n elements requires n-1 passes to completely rearrange the data.
7. Selection Sort Algorithm
• Procedure select(T[1…n])
for i = 1 to n - 1 do
minj = i
minx = T[i]
for j = i + 1 to n do
if T[j] < minx then
minj = j
minx = T[j]
T[minj] = T[i]
T[i] = minx
• Analysis of algorithm:
The best case, the worst case, and the average case of the selection sort algorithm are the same.
The number of key comparisons is n*(n-1)/2.
So, selection sort is O(n²).
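The procedure above can be rendered as runnable Python (0-based indices; variable names kept close to the pseudocode, with the two-assignment swap written as a tuple swap):

```python
def selection_sort(t):
    """Selection sort: each pass moves the smallest unsorted element
    to the front of the unsorted part, advancing the imaginary wall."""
    n = len(t)
    for i in range(n - 1):
        minj = i                       # index of the smallest unsorted element
        for j in range(i + 1, n):
            if t[j] < t[minj]:
                minj = j
        t[i], t[minj] = t[minj], t[i]  # swap it into the sorted part
    return t

print(selection_sort([7, 2, 8, 5, 4]))  # [2, 4, 5, 7, 8]
```

This reproduces the pass-by-pass example on the next slide.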
8. Selection Sort Example
7 2 8 5 4
2 7 8 5 4
2 4 8 5 7
2 4 5 8 7
2 4 5 7 8
• List is sorted by selecting list element and
moving it to its proper position.
• Algorithm finds position of smallest element
and moves it to top of unsorted portion of list.
• Repeats process above until entire list is
sorted.
9. Insertion Sort
• Insertion sort keeps the left side of the array sorted until the whole array is
sorted. It sorts the values seen so far and repeatedly inserts each unseen value
into the sorted left part of the array.
• It is the simplest of all sorting algorithms.
• Although it has the same complexity as Bubble Sort, the insertion sort is a little over
twice as efficient as the bubble sort.
• An example of an insertion sort occurs in everyday life while playing cards. To sort
the cards in your hand you extract a card, shift the remaining cards, and then insert the
extracted card in the correct place. This process is repeated until all the cards are in
the correct sequence.
10. Insertion Sort Algorithm
• insertion_sort(A)
for j ← 2 to n
do key ← A[j]
i ← j-1
while i > 0 and A[i] > key
A[i+1] ← A[i]
i ← i-1
A[i+1] ← key
• Analysis of algorithm
Best case : O(n). It occurs when the data is in sorted order. After making one pass
through the data and making no insertions, insertion sort exits.
Average case : θ(n²), since there is wide variation in the running time.
Worst case : O(n²), if the numbers are sorted in reverse order.
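The card-sorting idea above maps directly onto Python (0-based indices, so the outer loop starts at the second element):

```python
def insertion_sort(a):
    """Insertion sort: grow a sorted prefix, inserting each new key into place."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:  # shift larger sorted elements one slot right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                # drop the key into the gap
    return a
```

On already-sorted input the while loop never runs, which is the O(n) best case.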
12. Shell sort
• Shell sort works by comparing elements that are distant rather than adjacent elements
in an array or list where adjacent elements are compared.
• Shell sort uses a sequence h1, h2, …, ht called the increment sequence. Any increment
sequence is fine as long as h1 = 1 and some other choices are better than others.
• Shell sort makes multiple passes through a list and sorts a number of equally sized
sets using the insertion sort.
• The distance between comparisons decreases as the sorting algorithm runs until the
last phase in which adjacent elements are compared.
• After each phase with some increment hk, we have a[ i ] ≤ a[ i + hk ] for every i;
all elements spaced hk apart are sorted.
13. Shell Sort Algorithm
Shell_sort(a[0…n-1])
for(gap=n/2 ; gap>0 ; gap=gap/2)
for(p=gap ; p<n ; p++)
temp=a[p];
for(j=p ; j>=gap && temp<a[j-gap] ; j=j-gap)
a[j]=a[j-gap];
a[j]=temp;
• Analysis of algorithm
Best Case: The best case in the shell sort is when the array is already sorted in the
right order. The number of comparisons is less.
Worst Case: The running time of shell sort depends on the choice of increment
sequence. The problem with shell’s increments is that pairs of increments are not
necessarily relatively prime and smaller increments can have little effect.
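The pseudocode above, with the n/2, n/4, …, 1 increment sequence, becomes the following Python sketch:

```python
def shell_sort(a):
    """Shell sort with Shell's original gap sequence n/2, n/4, ..., 1.
    Each pass is a gapped insertion sort; the final gap-1 pass is plain insertion sort."""
    n = len(a)
    gap = n // 2
    while gap > 0:
        for p in range(gap, n):
            temp = a[p]
            j = p
            while j >= gap and temp < a[j - gap]:  # shift gap-spaced elements right
                a[j] = a[j - gap]
                j -= gap
            a[j] = temp
        gap //= 2
    return a
```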
14. Shell Sort Example
18 32 12 5 38 33 16 2
compare
18<38. so, no swap.
32<33. so, no swap.
12<16. so, no swap.
5>2. so, swap to each other.
compare
compare
compare
Step-1 18 32 12 5 38 33 16 2
18 32 12 2 38 33 16 5
15. Shell Sort Example
18 32
compare
18>12. so, swap to each other.
32>2. so, swap to each other.
38>16. so, swap to each other.
33>5. so, swap to each other.
compare
compare
compare
12 2 38 33 16 5
12 2 18 32 16 5 38 33
Step-2 18 32 12 5 38 33 16 2
16. Shell Sort Example
• The last increment or phase of Shell sort is basically an Insertion Sort algorithm.
• Using insertion sort:
Step-3: 12 2 18 32 16 5 38 33
Final sorted array: 2 5 12 16 18 32 33 38
17. Heap Sort
• A heap is essentially a complete binary tree, each of whose nodes holds an element
of information called the value of the node, and which has the property that the value
of each internal node is greater than or equal to the values of its children. This is
called the heap property.
• Heap sort has O(n log n) worst- case running time.
18. Heap Sort Algorithm
Procedure Heap-Sort (T [1 .. n]: real)
begin {of the procedure}
Make-Heap (T[1..n])
For i ← n downto 2 do
begin {for loop}
temp ← T[i]
T[i] ← T[1]
T[1] ← temp
Sift-Down (T [1.. (i - 1)], 1)
end {for-loop}
end {procedure}
Procedure Make-Heap (T[1..m]: real)
begin {of procedure}
for i ← ⌊m/2⌋ downto 1 do
begin {of for-loop}
Sift-Down(T[1..m], i)
end {for-loop}
end {procedure}
19. Heap Sort Algorithm
Procedure Sift-Down (T[1..m]: real, i)
begin {of procedure}
k ← i
repeat
j ← k
if 2j ≤ m and T[2j] > T[k] then
k ← 2j
if 2j + 1 ≤ m and T[2j + 1] > T[k] then
k ← 2j + 1
{exchange T[j] and T[k]}
temp ← T[j]
T[j] ← T[k]
T[k] ← temp
until j = k {the node has arrived at its final position when j = k}
end {procedure}
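A runnable 0-based Python version of the three procedures above (array indices differ from the 1-based pseudocode, so the children of node i sit at 2i+1 and 2i+2):

```python
def sift_down(t, i, n):
    """Push t[i] down until the max-heap property holds within t[0:n]."""
    while True:
        largest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and t[left] > t[largest]:
            largest = left
        if right < n and t[right] > t[largest]:
            largest = right
        if largest == i:                  # node has reached its final position
            return
        t[i], t[largest] = t[largest], t[i]
        i = largest

def heap_sort(t):
    """Build a max-heap, then repeatedly move the root to the end of the array."""
    n = len(t)
    for i in range(n // 2 - 1, -1, -1):   # Make-Heap
        sift_down(t, i, n)
    for end in range(n - 1, 0, -1):
        t[0], t[end] = t[end], t[0]       # largest element goes to its final slot
        sift_down(t, 0, end)              # restore the heap on the remaining prefix
    return t
```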
20. Heap Sort
• Complete binary tree: when a complete binary tree is built, its first node must be
the root. The second node is always the left child of the root, the third node is
always the right child of the root, and each following node fills the next position
on the lowest level, from left to right.
21. Heap Sort
• A heap is a certain kind of complete binary tree. Each node in a heap contains a
key that can be compared to other nodes' keys.
• The "heap property" requires that each node's key is >= the keys of its children.
(The slide shows an example heap with 45 at the root.)
22. Adding a new node to heap
• Put the new node in the
next available spot.
• Push the new node upward,
swapping with its parent
until the new node reaches
an acceptable location.
(Diagram: the new node 42 is placed in the next available position.)
23. Adding a new node to heap
(Diagram: 42 swaps upward with its parent 27.)
• Put the new node in the
next available spot.
• Push the new node upward,
swapping with its parent
until the new node reaches
an acceptable location.
24. Adding a new node to heap
(Diagram: 42 swaps upward again with 35 and stops below the root 45.)
• Put the new node in the
next available spot.
• Push the new node upward,
swapping with its parent
until the new node reaches
an acceptable location.
25. Removing the Top of a Heap
• Move the last node onto the root.
• The root node is placed at the end of the queue.
(Diagram: 45 is removed to the queue and the last node takes its place at the root. Queue: 45)
• The upward push stops when the parent has a key that is >= the new node, or the
node reaches the root.
• The process of pushing the new node upward is called reheapification upward.
26. Removing the Top of a Heap
• Move the last node
onto the root.
• Push the out-of-place
node downward,
swapping with its
larger child until the
new node reaches an
acceptable location.
(Diagram: the out-of-place node at the root begins moving downward.)
27. Removing the Top of a Heap
(Diagram: the out-of-place node swaps with its larger child.)
• Move the last node
onto the root.
• Push the out-of-place
node downward,
swapping with its
larger child until the
new node reaches an
acceptable location.
28. Removing the Top of a Heap
(Diagram: 42 is now at the root; 27 has reached an acceptable position.)
• Move the last node
onto the root.
• Push the out-of-place
node downward,
swapping with its
larger child until the
new node reaches an
acceptable location.
29. Removing the Top of a Heap
• The downward push stops when the children all have keys <= the out-of-place node,
or the node reaches a leaf.
• The process of pushing the new node downward is called reheapification downward.
• Now swap 42 and 19, then delete node 42 and place it in the queue. (Queue: 42 45)
30. Heap Sort
• Repeat this process until every node has been deleted.
• After this, the sorted data in the queue is:
4 19 21 22 23 27 35 42 45
31. Bucket Sort
• Assumption: the keys are in the range [0, N)
• Basic idea:
1. Create N linked lists (buckets) to divide interval [0,N) into subintervals of
size 1
2. Add each input element to appropriate bucket
3. Concatenate the buckets
32. Bucket Sort Algorithm
procedure bucketSort (array, n) is
buckets <- new array of n empty lists
for i = 0 to (length(array)-1) do
insert array[i] into buckets [msbits(array[i],k)]
for i = 0 to n-1 do
nextSort (buckets[i]);
return the concatenation of buckets[0], … buckets[n-1]
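Under the slide's assumption that keys lie in [0, N) and each bucket covers a subinterval of size 1, each bucket holds equal keys and needs no nested sort, so the `msbits` and `nextSort` steps of the pseudocode collapse away. A minimal Python sketch under that assumption:

```python
def bucket_sort(array, n):
    """Bucket sort for integer keys in the range [0, n):
    one bucket per key value, then concatenate the buckets in order."""
    buckets = [[] for _ in range(n)]  # step 1: n empty lists
    for x in array:
        buckets[x].append(x)          # step 2: distribute each element
    result = []
    for b in buckets:                 # step 3: concatenate the buckets
        result.extend(b)
    return result

print(bucket_sort([2, 1, 3, 1, 2], 4))  # [1, 1, 2, 2, 3]
```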
33. Bucket Sort Example
Input: 2 1 3 1 2
• Each element of the array is put into one of the N "buckets":
bucket 1: 1 1
bucket 2: 2 2
bucket 3: 3
34. Bucket Sort Example
• After distribution, each element is in its proper bucket.
• Concatenating buckets 1, 2, 3 gives the sorted output: 1 1 2 2 3
36. Radix sort
• Radix sort is a multiple pass distribution sort.
• It distributes each item to a bucket according to part of the item’s key.
• After each pass, items are collected from the buckets, keeping the items in order,
then redistributed according to the next most significant part of the key.
• This sorts keys digit-by-digit or if keys are strings that we want to sort alphabetically,
it sorts character-by-character.
• Number of passes or bucket sort stages will depend on the number of digits in the
maximum value.
• The algorithm takes O(n) time per bucket-sort pass. There are ⌈log₁₀ k⌉ = O(log k) passes, where k is the maximum key value, so the total time is O(n log k).
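The pass-by-pass distribution can be sketched as an LSD (least-significant-digit-first) radix sort in Python; this is an illustrative rendering for non-negative decimal integers:

```python
def radix_sort(a):
    """LSD radix sort: one stable bucket-sort pass per decimal digit,
    from the 1's place up to the highest digit of the maximum key."""
    if not a:
        return a
    exp = 1
    while max(a) // exp > 0:
        buckets = [[] for _ in range(10)]
        for x in a:
            buckets[(x // exp) % 10].append(x)   # distribute by current digit
        a = [x for b in buckets for x in b]      # collect, keeping order (stable)
        exp *= 10
    return a

print(radix_sort([9, 179, 239, 38, 10, 5, 36]))  # [5, 9, 10, 36, 38, 179, 239]
```

This reproduces the three passes of the worked example on the next slides.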
38. Radix Sort Example
Pass-1: put elements into buckets according to the digit in the 1's place.
Input: 9 179 239 38 10 5 36
Buckets 0-9: 10 → bucket 0; 5 → bucket 5; 36 → bucket 6; 38 → bucket 8; 9, 179, 239 → bucket 9
Collecting from the buckets in order: 10 5 36 38 9 179 239
39. Radix Sort Example
Pass-2: put elements into buckets according to the digit in the 10's place.
10 5 36 38 9 179 239 → 5, 9 → bucket 0; 10 → bucket 1; 36, 38, 239 → bucket 3; 179 → bucket 7
Collecting from the buckets in order: 5 9 10 36 38 239 179
40. Radix Sort Example
Pass-3: put elements into buckets according to the digit in the 100's place.
5 9 10 36 38 239 179 → 5, 9, 10, 36, 38 → bucket 0; 179 → bucket 1; 239 → bucket 2
Collecting from the buckets in order gives the sorted array: 5 9 10 36 38 179 239
41. • Assumption: input is in the range 1..k
• Basic idea:
• Count, for each key value i, how many elements are ≤ i
• Use that count to place each element directly at its final position in the sorted array
• No comparisons! Runs in time O(n + k)
• Stable sort
• Does not sort in place:
• O(n) array to hold the sorted output
• O(k) array for scratch storage
Counting Sort
42. Counting Sort Algorithm
1. Initialize:
for i ← 1 to k
do C[i] ← 0
2. Count:
for j ← 1 to n
do C[A[ j]] ← C[A[ j]] + 1
3. Compute running sum:
for i ← 2 to k
do C[i] ← C[i] + C[i–1]
4. Re-arrange:
for j ← n downto 1
do B[C[A[ j]]] ← A[ j]
C[A[ j]] ← C[A[ j]] – 1
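The four steps above can be sketched in Python, assuming keys in 1..k as the slide does (the count array is 1-indexed, so index 0 is left unused):

```python
def counting_sort(a, k):
    """Counting sort for integer keys in 1..k, following the four steps:
    initialize, count, running sum, re-arrange. Stable: equal keys keep order."""
    n = len(a)
    c = [0] * (k + 1)             # step 1: counts for keys 1..k (c[0] unused)
    for x in a:                   # step 2: count occurrences of each key
        c[x] += 1
    for i in range(2, k + 1):     # step 3: running sum = last position of each key
        c[i] += c[i - 1]
    b = [0] * n                   # step 4: scan from the right for stability
    for x in reversed(a):
        b[c[x] - 1] = x           # c[x] is 1-based, b is 0-based
        c[x] -= 1
    return b

print(counting_sort([4, 1, 3, 4, 3], 4))  # [1, 3, 3, 4, 4]
```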
52. Counting Sort Example
Sorted elements: 1 3 3 4 4
• The initialization of the Count array, and the second for loop which performs a
prefix sum on the count array, each iterate at most k + 1 times and therefore
take O(k) time.
• The other two for loops, and the initialization of the output array, each
take O(n) time. Therefore the time for the whole algorithm is the sum of the
times for these steps, O(n + k).