Data structures, which are often specified as abstract data types (ADTs), provide powerful options for the programmer. This tutorial covers several ADTs and related algorithms: linked lists, stacks, queues, and sorting algorithms.
The document discusses three sorting algorithms: bubble sort, selection sort, and insertion sort. Bubble sort works by repeatedly swapping adjacent elements that are in the wrong order. Selection sort finds the minimum element and swaps it into the sorted portion of the array. Insertion sort inserts elements into the sorted portion of the array, swapping as needed to put each element in the correct position. All three algorithms have a worst-case time complexity of O(n^2).
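As a concrete illustration of the last of these, here is a minimal insertion sort sketch in C; the array contents and function names are illustrative, not taken from the summarized slides.

    #include <stdio.h>

    /* Insertion sort: shift larger elements right until the current
     * key reaches its correct position. Worst case O(n^2). */
    void insertion_sort(int a[], int n) {
        for (int i = 1; i < n; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];   /* shift one slot to the right */
                j--;
            }
            a[j + 1] = key;
        }
    }

    int main(void) {
        int a[] = {5, 2, 4, 6, 1, 3};
        int n = sizeof a / sizeof a[0];
        insertion_sort(a, n);
        for (int i = 0; i < n; i++) printf("%d ", a[i]);
        printf("\n");
        return 0;
    }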
The document discusses different sorting algorithms including merge sort and quicksort. Merge sort takes a divide-and-conquer approach: the array is divided into halves and the halves are merged back together in sorted order, giving a runtime of O(n log n). Quicksort uses a partitioning approach, choosing a pivot element and partitioning the array into subarrays of elements less than or greater than the pivot. In the best case this splits the array in half at each step, giving a runtime of O(n log n); the average case is also O(n log n). In the worst case, for example an already-sorted array with a naive pivot choice, the partitions are unbalanced and the runtime becomes quadratic, O(n^2).
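A minimal quicksort sketch in C using the Lomuto partition scheme; the summarized slides do not state a pivot rule, so taking the last element as the pivot is an assumption here, and it is exactly the choice that degrades to O(n^2) on already-sorted input.

    /* Lomuto partition: elements <= pivot end up left of the returned index. */
    static int partition(int a[], int lo, int hi) {
        int pivot = a[hi];              /* assumed pivot rule: last element */
        int i = lo - 1;
        for (int j = lo; j < hi; j++) {
            if (a[j] <= pivot) {
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[i + 1]; a[i + 1] = a[hi]; a[hi] = t;
        return i + 1;
    }

    void quicksort(int a[], int lo, int hi) {
        if (lo < hi) {
            int p = partition(a, lo, hi);
            quicksort(a, lo, p - 1);    /* average O(n log n), worst O(n^2) */
            quicksort(a, p + 1, hi);
        }
    }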
The document discusses various searching and sorting algorithms. It describes linear search, binary search, and interpolation search for searching unsorted and sorted lists. It also explains different sorting algorithms like bubble sort, selection sort, insertion sort, quicksort, shellsort, heap sort, and merge sort. Linear search searches sequentially while binary search uses divide and conquer. Sorting algorithms like bubble sort, selection sort, and insertion sort are in-place and have quadratic time complexity in the worst case. Quicksort, mergesort, and heapsort generally have better performance.
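For reference, a minimal binary search sketch in C over a sorted array; this is illustrative and not the code from the summarized document.

    /* Binary search on a sorted array: returns the index of key, or -1. */
    int binary_search(const int a[], int n, int key) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
            if (a[mid] == key) return mid;
            if (a[mid] < key)  lo = mid + 1;
            else               hi = mid - 1;
        }
        return -1;
    }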
This document discusses recursion in programming. It defines recursion as a procedure that calls itself, with different parameters each time. It explains the key components of a recursive method including base cases and recursive calls. It provides examples of different types of recursion like single/multiple and direct/indirect recursion. Examples of recursively defined sequences and functions like factorials and Fibonacci series are given. Contact details are provided at the end.
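The factorial and Fibonacci examples mentioned above can be sketched in C as follows; the function names are illustrative.

    /* Single (direct) recursion: one recursive call per invocation. */
    unsigned long factorial(unsigned int n) {
        if (n <= 1) return 1;            /* base case */
        return n * factorial(n - 1);     /* recursive call */
    }

    /* Multiple recursion: two recursive calls per invocation. */
    unsigned long fib(unsigned int n) {
        if (n < 2) return n;             /* base cases: fib(0)=0, fib(1)=1 */
        return fib(n - 1) + fib(n - 2);
    }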
The document discusses implementation of stacks. It describes stacks as linear data structures that follow LIFO principles. Key stack operations like push and pop are outlined. Stacks are often implemented using arrays or linked lists. Examples of stack applications include recursion handling, expression evaluation, parenthesis checking, and backtracking problems. Conversion from infix to postfix notation using a stack is also demonstrated.
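A minimal array-based stack sketch in C showing push and pop; the fixed capacity and the names are assumptions made for illustration.

    #include <stdbool.h>

    #define STACK_MAX 100

    typedef struct {
        int data[STACK_MAX];
        int top;                 /* index of the top element, -1 when empty */
    } Stack;

    void stack_init(Stack *s)           { s->top = -1; }
    bool stack_is_empty(const Stack *s) { return s->top == -1; }
    bool stack_is_full(const Stack *s)  { return s->top == STACK_MAX - 1; }

    bool push(Stack *s, int value) {
        if (stack_is_full(s)) return false;    /* overflow */
        s->data[++s->top] = value;
        return true;
    }

    bool pop(Stack *s, int *out) {
        if (stack_is_empty(s)) return false;   /* underflow */
        *out = s->data[s->top--];
        return true;
    }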
Programming involves developing programs by specifying computational steps in a programming language. An algorithm is a logical list of steps to solve a problem. Developing good algorithms involves specifying clear input/output, variables, and ensuring the algorithm terminates in a finite number of steps. Flowcharts provide a pictorial representation of algorithm steps and are useful for explaining programs. A computer program consists of instructions provided to the computer to solve a problem.
The document discusses linked lists, which are a linear data structure consisting of nodes connected to each other via pointers. Each node contains data and a pointer to the next node. There are several types of linked lists including singly linked lists where each node has a next pointer, doubly linked lists where each node has next and previous pointers, and circular linked lists where the last node points to the first node. The document covers terminology, advantages and disadvantages, operations, and implementations of different types of linked lists such as dynamic vs static memory allocation and uses in applications.
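A minimal C sketch of a singly linked list node with insertion at the head and traversal; the int payload and the names are illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    /* One node: data plus a pointer to the next node. */
    typedef struct Node {
        int data;
        struct Node *next;
    } Node;

    /* Insert a new node at the beginning of the list; returns the new head. */
    Node *insert_front(Node *head, int value) {
        Node *n = malloc(sizeof *n);
        if (!n) return head;            /* allocation failed: list unchanged */
        n->data = value;
        n->next = head;
        return n;
    }

    /* Traverse the list, printing each node until the NULL terminator. */
    void traverse(const Node *head) {
        for (const Node *p = head; p != NULL; p = p->next)
            printf("%d -> ", p->data);
        printf("NULL\n");
    }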
This document contains a data structures question paper from Anna University. It has two parts:
Part A contains 10 short answer questions covering topics like ADT, linked stacks, graph theory, algorithm analysis, binary search trees, and more.
Part B contains 5 long answer questions each worth 16 marks. Topics include algorithms for binary search, linear search, recursion, sorting, trees, graphs, files, and more. Students are required to write algorithms, analyze time complexity, and provide examples for each question.
Hashing is a one-way process, widely used in programming languages, that maps data to fixed-size values and is often applied for security purposes.
This presentation will help you understand hashing and the different techniques used in it.
Garbage collection is a form of automatic memory management used in computer programs to reclaim memory occupied by objects that are no longer needed. John McCarthy invented garbage collection for Lisp in 1959. Languages either use garbage collection or require manual memory management through techniques like allocating and freeing memory. Common garbage collection algorithms include reference counting, mark and sweep, and generational collection. The Java Virtual Machine uses different garbage collectors like serial, parallel, concurrent mark and sweep, and Garbage First collectors to reclaim memory in the Java heap.
Infix to Postfix Conversion Using Stack (Soumen Santra)
Infix-to-postfix conversion using a stack is one of the most significant examples of an application of the stack, an abstract data type (ADT) based on the LIFO concept.
It relates to the Analysis and Design of Algorithms subject. It covers the basics of topological sorting, its algorithm, and a step-by-step process for solving a topological-sort example.
Linked list
Advantages and disadvantages
Types of linked lists
Singly linked list
Doubly linked list
Header linked lists
Applications of linked list
Algorithm to search a value
Example of LinkedList
Algorithm for inserting a node
Singly linked list
Applications of Arrays
data in contiguous memory
queues
stacks
beginning of a linked list
traversing a linked list
Algorithm for traversing
Grounded header linked list
Circular Header linked list
The document discusses different notation styles for representing arithmetic expressions, including infix, prefix, and postfix notations. It provides examples of converting expressions between these notations. Infix notation is the conventional style that humans use, but prefix and postfix notations are better suited for computer parsing. The document also covers parsing expressions, operator precedence, and the steps to convert between infix and prefix and infix and postfix notations.
The document discusses stacks and queues. It defines stacks as LIFO data structures and queues as FIFO data structures. It describes basic stack operations like push and pop and basic queue operations like enqueue and dequeue. It then discusses implementing stacks and queues using arrays and linked lists, outlining the key operations and memory requirements for each implementation.
This document discusses data structures and linked lists. It provides definitions and examples of different types of linked lists, including:
- Single linked lists, which contain nodes with a data field and a link to the next node.
- Circular linked lists, where the last node links back to the first node, forming a loop.
- Doubly linked lists, where each node contains links to both the previous and next nodes.
- Operations on linked lists such as insertion, deletion, traversal, and searching are also described.
The document discusses various indexing techniques used to improve data access performance in databases, including ordered indices like B-trees and B+-trees, as well as hashing techniques. It covers the basic concepts, data structures, operations, advantages and disadvantages of each approach. B-trees and B+-trees store index entries in sorted order to support range queries efficiently, while hashing distributes entries uniformly across buckets using a hash function but does not support ranges.
Insertion Sort, Bubble Sort, Selection Sort (Ummar Hayat)
The document discusses three sorting algorithms: insertion sort, bubble sort, and selection sort. Insertion sort has best-case linear time but worst-case quadratic time, sorting elements in place. Bubble sort repeatedly compares and swaps adjacent elements, having quadratic time in all cases. Selection sort finds the minimum element and exchanges it into the sorted portion of the array in each pass, with quadratic time.
The document discusses heap data structures and their use in priority queues and heapsort. It defines a heap as a complete binary tree stored in an array. Each node stores a value, with the heap property being that a node's value is greater than or equal to its children's values (for a max heap). Algorithms like Max-Heapify, Build-Max-Heap, Heap-Extract-Max, and Heap-Increase-Key are presented to maintain the heap property during operations. Priority queues use heaps to efficiently retrieve the maximum element, while heapsort sorts an array by building a max heap and repeatedly extracting elements.
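A minimal C sketch of Max-Heapify and Build-Max-Heap over a 0-based array; the 0-based layout is an assumption, since textbook pseudocode often uses 1-based indexing.

    /* Restore the max-heap property at index i, assuming the subtrees
     * rooted at its children already satisfy it (0-based array layout). */
    void max_heapify(int a[], int n, int i) {
        int left = 2 * i + 1, right = 2 * i + 2, largest = i;
        if (left  < n && a[left]  > a[largest]) largest = left;
        if (right < n && a[right] > a[largest]) largest = right;
        if (largest != i) {
            int t = a[i]; a[i] = a[largest]; a[largest] = t;
            max_heapify(a, n, largest);    /* value sifts down one level */
        }
    }

    /* Build-Max-Heap: heapify every internal node, bottom up. */
    void build_max_heap(int a[], int n) {
        for (int i = n / 2 - 1; i >= 0; i--)
            max_heapify(a, n, i);
    }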
We will discuss the following: Sorting Algorithms, Counting Sort, Radix Sort, Merge Sort.Algorithms, Time Complexity & Space Complexity, Algorithm vs Pseudocode, Some Algorithm Types, Programming Languages, Python, Anaconda.
The document summarizes how garbage collection works in Java. It describes the marking phase where referenced and unreferenced objects are identified. Unreferenced objects are then deleted in the normal deletion step. For better performance, referenced objects can also be compacted together. The document further explains generational garbage collection, where new objects are allocated to the young generation and aged objects are promoted to the old generation. Minor and major garbage collections handle each generation. Different garbage collectors, like serial, parallel, CMS and G1, are also summarized regarding their implementation and suitability for different applications.
PPT On Sorting And Searching Concepts In Data Structure | In Programming Lang... (Umesh Kumar)
The document discusses various sorting and searching algorithms:
- Bubble sort, selection sort, merge sort, quicksort are sorting algorithms that arrange data in a particular order like numerical or alphabetical.
- Linear search and binary search are searching algorithms where linear search sequentially checks each item while binary search divides the data set in half with each comparison.
- Examples are provided to illustrate how each algorithm works step-by-step on sample data sets.
The document discusses various data structures and their classification. It begins by stating the objectives of understanding how data structures can be classified, basic data types and arrays, and problem-oriented data structures used to solve specific problems. It then defines key terms like data, information, and data structures. It describes primitive and non-primitive, linear and non-linear data structures. It also discusses basic and problem-oriented data structures like lists, stacks, queues, and trees. It provides examples and applications of different data structures.
Linear search and binary search: a class lecture from Data Structures and Algorithms with Python.
Stack, Queue, Tree, Python, Python Code, Computer Science, Data, Data Analysis, Machine Learning, Artificial Intelligence, Deep Learning, Programming, Information Technology, Pseudocode, Tree, pseudocode, Binary Tree, Binary Search Tree, implementation, Binary search, linear search, Binary search operation, real-life example of binary search, linear search operation, real-life example of linear search, example bubble sort, sorting, insertion sort example, stack implementation, queue implementation, binary tree implementation, priority queue, binary heap, binary heap implementation, object-oriented programming, def, in BST, Binary search tree, Red-Black tree, Splay Tree, Problem-solving using Binary tree, problem-solving using BST, inorder, preorder, postorder
This document discusses Java wrapper classes. It explains that wrapper classes allow primitive types to be used as objects. Each primitive type has a corresponding wrapper class (e.g. Integer for int). Wrapper classes provide methods to convert between primitive types and their object equivalents. They allow primitives to be used in contexts that require objects, like collections, and provide additional functionality like constants and parsing/formatting methods.
Hashing is a technique used to uniquely identify objects by assigning each object a key, such as a student ID or book ID number. A hash function converts large keys into smaller keys that are used as indices in a hash table, allowing for fast lookup of objects in O(1) time. Collisions, where two different keys hash to the same index, are resolved using techniques like separate chaining or linear probing. Common applications of hashing include databases, caches, and object representation in programming languages.
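A minimal separate-chaining hash table sketch in C; the bucket count and the djb2-style hash function are arbitrary illustrative choices, not taken from the summarized document.

    #include <stdlib.h>
    #include <string.h>

    #define TABLE_SIZE 101    /* illustrative prime number of buckets */

    typedef struct Entry {
        char *key;
        int value;
        struct Entry *next;   /* chain of entries sharing a bucket */
    } Entry;

    static Entry *table[TABLE_SIZE];

    /* djb2-style string hash: maps a key to a bucket index. */
    static unsigned hash(const char *key) {
        unsigned h = 5381;
        while (*key) h = h * 33 + (unsigned char)*key++;
        return h % TABLE_SIZE;
    }

    /* Insert or update a key; collisions are resolved by chaining. */
    void put(const char *key, int value) {
        unsigned i = hash(key);
        for (Entry *e = table[i]; e; e = e->next)
            if (strcmp(e->key, key) == 0) { e->value = value; return; }
        Entry *e = malloc(sizeof *e);
        if (!e) return;
        e->key = malloc(strlen(key) + 1);
        if (!e->key) { free(e); return; }
        strcpy(e->key, key);
        e->value = value;
        e->next = table[i];
        table[i] = e;
    }

    /* Look up a key; returns 1 and fills *out on success, 0 otherwise. */
    int get(const char *key, int *out) {
        for (Entry *e = table[hash(key)]; e; e = e->next)
            if (strcmp(e->key, key) == 0) { *out = e->value; return 1; }
        return 0;
    }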
The document discusses different types of queues, including simple, circular, priority, and double-ended queues. It describes the basic queue operations of enqueue and dequeue, where new elements are added to the rear of the queue and existing elements are removed from the front. Circular queues are more memory efficient than linear queues by connecting the last queue element back to the first, forming a circle. Priority queues remove elements based on priority rather than order of insertion. Double-ended queues allow insertion and removal from both ends. Common applications of queues include CPU and disk scheduling, synchronization between asynchronous processes, and call center phone systems.
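A minimal circular (ring-buffer) queue sketch in C; the capacity and the count-based full/empty test are illustrative choices.

    #define QUEUE_CAP 8

    typedef struct {
        int data[QUEUE_CAP];
        int front;   /* index of the oldest element */
        int count;   /* number of stored elements */
    } CircularQueue;

    void cq_init(CircularQueue *q) { q->front = 0; q->count = 0; }

    /* Enqueue at the rear; the index wraps around with modulo arithmetic. */
    int enqueue(CircularQueue *q, int value) {
        if (q->count == QUEUE_CAP) return 0;            /* full */
        q->data[(q->front + q->count) % QUEUE_CAP] = value;
        q->count++;
        return 1;
    }

    /* Dequeue from the front; the freed slot is immediately reusable. */
    int dequeue(CircularQueue *q, int *out) {
        if (q->count == 0) return 0;                    /* empty */
        *out = q->data[q->front];
        q->front = (q->front + 1) % QUEUE_CAP;
        q->count--;
        return 1;
    }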
This document discusses priority queues. It defines a priority queue as a queue where insertion and deletion are based on some priority property. Items with higher priority are removed before lower priority items. There are two main types: ascending priority queues remove the smallest item, while descending priority queues remove the largest item. Priority queues are useful for scheduling jobs in operating systems, where real-time jobs have highest priority and are scheduled first. They are also used in network communication to manage limited bandwidth.
The document discusses sorting algorithms. It begins by defining the sorting problem as taking an unsorted sequence of numbers and outputting a permutation of the numbers in ascending order. It then discusses different types of sorts like internal versus external sorts and stable versus unstable sorts. Specific algorithms covered include insertion sort, bubble sort, and selection sort. Analysis is provided on the best, average, and worst case time complexity of insertion sort.
The document provides information on various sorting and searching algorithms, including bubble sort, insertion sort, selection sort, quick sort, sequential search, and binary search. It includes pseudocode to demonstrate the algorithms and example implementations with sample input data. Key points covered include the time complexity of each algorithm (O(n^2) for bubble/insertion/selection sort, O(n log n) for quick sort, O(n) for sequential search, and O(log n) for binary search) and how they work at a high level.
The document describes several sorting algorithms:
1) Bubble sort repeatedly compares and swaps adjacent elements, moving the largest values to the end over multiple passes. It has a complexity of O(n^2).
2) Insertion sort inserts elements one by one into the sorted portion of the array by shifting elements and comparing. It is O(n^2) in the worst case but O(n) if the input is nearly sorted.
3) Selection sort finds the minimum element and swaps it into the first position in each pass to build the sorted array. It has complexity O(n^2).
4) Merge sort divides the array into halves recursively, then merges the sorted halves to produce the fully sorted array.
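A minimal C sketch of the merge step and recursive driver described in item 4 above; the temporary buffer and the names are illustrative.

    #include <stdlib.h>
    #include <string.h>

    /* Merge two sorted halves a[lo..mid] and a[mid+1..hi] into sorted order. */
    static void merge(int a[], int lo, int mid, int hi) {
        int n = hi - lo + 1;
        int *tmp = malloc(n * sizeof *tmp);
        int i = lo, j = mid + 1, k = 0;
        while (i <= mid && j <= hi)
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= hi)  tmp[k++] = a[j++];
        memcpy(a + lo, tmp, n * sizeof *tmp);
        free(tmp);
    }

    /* Recursively split, then merge: O(n log n) overall. */
    void merge_sort(int a[], int lo, int hi) {
        if (lo >= hi) return;
        int mid = lo + (hi - lo) / 2;
        merge_sort(a, lo, mid);
        merge_sort(a, mid + 1, hi);
        merge(a, lo, mid, hi);
    }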
The document discusses sorting algorithms. It begins by defining sorting as arranging data in logical order based on a key. It then discusses internal and external sorting methods. For internal sorting, all data fits in memory, while external sorting handles data too large for memory. The document covers stability, efficiency, and time complexity of various sorting algorithms like bubble sort, selection sort, insertion sort, and merge sort. Merge sort uses a divide-and-conquer approach to sort arrays with a time complexity of O(n log n).
Comparison sorting algorithms work by making pairwise comparisons between elements to determine the order in a sorted list. They have a lower bound of Ω(n log n) time complexity due to needing to traverse a decision tree with a minimum of n log n comparisons. Counting sort is a non-comparison sorting algorithm that takes advantage of key assumptions about the data to count and place elements directly into the output array in linear time O(n+k), where n is the number of elements and k is the range of possible key values.
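A minimal counting sort sketch in C for keys in the range 0..k, matching the O(n + k) behavior described; the names are illustrative.

    #include <stdlib.h>

    /* Counting sort for integer keys in 0..k: O(n + k) time, stable. */
    void counting_sort(const int in[], int out[], int n, int k) {
        int *count = calloc(k + 1, sizeof *count);
        for (int i = 0; i < n; i++)          /* count occurrences of each key */
            count[in[i]]++;
        for (int v = 1; v <= k; v++)         /* prefix sums give final positions */
            count[v] += count[v - 1];
        for (int i = n - 1; i >= 0; i--)     /* place elements, back to front */
            out[--count[in[i]]] = in[i];
        free(count);
    }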
This document discusses algorithms and analysis of algorithms. It covers key concepts like time complexity, space complexity, asymptotic notations, best case, worst case and average case time complexities. Examples are provided to illustrate linear, quadratic and logarithmic time complexities. Common sorting algorithms like quicksort, mergesort, heapsort, bubblesort and insertionsort are summarized along with their time and space complexities.
The document discusses the bubble sort algorithm. It begins by explaining how bubble sort works by repeatedly stepping through a list and swapping adjacent elements that are out of order until the list is fully sorted. It then provides a step-by-step example showing the application of bubble sort to sort an array from lowest to highest. The document concludes by presenting pseudocode for a bubble sort implementation.
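A minimal bubble sort sketch in C matching the description above; the early exit when a pass performs no swaps is an added refinement, not something stated in the summary.

    #include <stdbool.h>

    /* Bubble sort: each pass bubbles the largest remaining value to the end. */
    void bubble_sort(int a[], int n) {
        for (int pass = 0; pass < n - 1; pass++) {
            bool swapped = false;
            for (int i = 0; i < n - 1 - pass; i++) {
                if (a[i] > a[i + 1]) {
                    int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                    swapped = true;
                }
            }
            if (!swapped) break;    /* no swaps: already sorted */
        }
    }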
This document discusses the complexity of algorithms and the tradeoff between algorithm cost and time. It defines algorithm complexity as a function of input size that measures the time and space used by an algorithm. Different complexity classes are described such as polynomial, sub-linear, and exponential time. Examples are given to find the complexity of bubble sort and linear search algorithms. The concept of space-time tradeoffs is introduced, where using more space can reduce computation time. Genetic algorithms are proposed to efficiently solve large-scale construction time-cost tradeoff problems.
The document discusses time and space complexity analysis of algorithms. Time complexity measures the number of steps to solve a problem based on input size, with common orders being O(log n), O(n), O(n log n), O(n^2). Space complexity measures memory usage, which can be reused unlike time. Big O notation describes asymptotic growth rates to compare algorithm efficiencies, with constant O(1) being best and exponential O(c^n) being worst.
Introduction to Data Structures and Algorithms (Dhaval Kaneria)
This document provides an introduction to algorithms and data structures. It defines algorithms as step-by-step processes to solve problems and discusses their properties, including being unambiguous, composed of a finite number of steps, and terminating. The document outlines the development process for algorithms and discusses their time and space complexity, noting worst-case, average-case, and best-case scenarios. Examples of iterative and recursive algorithms for calculating factorials are provided to illustrate time and space complexity analyses.
This document discusses different sorting algorithms including bubble sort, insertion sort, and selection sort. It provides details on each algorithm, including time complexity, code examples, and graphical examples. Bubble sort is an O(n^2) algorithm that works by repeatedly comparing and swapping adjacent elements. Insertion sort also has O(n^2) time complexity but is more efficient than bubble sort for small or partially sorted lists. Selection sort finds the minimum value and swaps it into place at each step.
The document discusses arrays and linked lists as abstract data types (ADTs). It describes arrays as the simplest data structure, storing elements in sequential memory locations. Linked lists store elements using pointers, with each node containing data and a pointer to the next node. The document outlines common operations on arrays and linked lists like traversal, insertion, deletion, and searching. It also discusses different types of linked lists like singly linked, doubly linked, and circular linked lists.
This document provides an overview of common data structures and algorithms. It discusses static and dynamic data structures, including arrays, linked lists, stacks, and queues. Arrays allow storing multiple elements of the same type and can be one-dimensional, two-dimensional, or multidimensional. Linked lists connect nodes using pointers and can be singly linked, doubly linked, or circular linked. Stacks follow LIFO principles using push and pop operations, while queues use enqueue and dequeue following FIFO order. These data structures find applications in areas like memory management, expression evaluation, job scheduling, and graph searches.
Data structures and Algorithm analysis_Lecture 2.pptx (AhmedEldesoky24)
This document discusses different data structures for lists, including abstract data types (ADTs), arrays, singly linked lists, and doubly linked lists. It describes common list operations like insert, remove, find, and their time complexities for each implementation. Array-based lists allow direct access but expensive updates. Linked lists have efficient insertion/deletion but slow random access. Doubly linked lists and circular lists add previous pointers for easier traversal in both directions.
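A minimal doubly linked node sketch in C with insertion after a given node; the int payload and the names are illustrative.

    #include <stdlib.h>

    /* Doubly linked node: prev and next pointers allow traversal both ways. */
    typedef struct DNode {
        int data;
        struct DNode *prev;
        struct DNode *next;
    } DNode;

    /* Insert a new node immediately after pos (pos must not be NULL). */
    DNode *insert_after(DNode *pos, int value) {
        DNode *n = malloc(sizeof *n);
        if (!n) return NULL;
        n->data = value;
        n->prev = pos;
        n->next = pos->next;
        if (pos->next) pos->next->prev = n;
        pos->next = n;
        return n;
    }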
This document discusses different data structures and algorithms. It provides examples of common data structures like arrays, linked lists, stacks, queues, trees, and graphs. It describes what each data structure is, how it stores and organizes data, and examples of its applications. It also discusses abstract data types, algorithms, and how to analyze the time and space complexity of algorithms.
The document discusses data structures and linked lists. It defines data structures as logical ways of organizing and storing data in computer memory for efficient use. Linked lists are introduced as a linear data structure where elements are connected via pointers and can grow/shrink dynamically. Algorithms for traversing, inserting, and deleting nodes from singly linked lists using both recursion and iteration are provided with pseudocode.
This document summarizes a massive open online course on Udemy about fundamental data structures and algorithms using the C language. The 15-hour course covers key topics like stacks, queues, linked lists, trees, recursion, and analyzing algorithm efficiency. It aims to help students strengthen programming skills and prepare for technical interviews at top companies. The course consists of 14 sections and includes weekly quizzes on the Udemy platform.
The document discusses different data structures including stacks, queues, and linked lists. It provides examples and definitions for each type of data structure. Stacks follow LIFO order, queues follow FIFO order, and linked lists connect nodes using pointers. Single linked lists only connect nodes forward, while double linked lists connect nodes both forward and backward. Circular lists connect the last node to the first node to form a loop.
A data structure is a way of storing data in computer memory so that it can be retrieved and manipulated efficiently. There are two main categories of data structures: linear and non-linear. Linear data structures include arrays, stacks, and queues where elements are stored in a linear order. Non-linear structures include trees and graphs where elements are not necessarily in a linear order. Common operations on data structures include traversing, searching, insertion, deletion, sorting, and merging. Algorithms use data structures to process and solve problems in an efficient manner.
This document discusses the key concepts and operations related to linked lists. It describes the different types of linked lists including singly linked lists, doubly linked lists, circular linked lists, and circular doubly linked lists. It provides algorithms for common linked list operations like insertion, deletion, and traversal. Memory allocation and various applications of linked lists are also covered.
Data can exist in various forms such as numbers, text, images, and more. Data itself has little meaning until it is processed to extract useful information. There are different types of data including categorical/qualitative data, which represents characteristics like gender, and numerical/quantitative data, which can be discrete like coin flips or continuous like measurements. Common data structures used to organize and store data include arrays, linked lists, stacks, queues, trees and graphs. Efficient searching of data structures is important and can be done using methods like linear search, which sequentially checks each element, and binary search, which can more quickly find elements in a sorted data set.
Linked lists are a dynamic data structure that store elements sequentially using pointers. Each element contains data and a pointer to the next element. This allows efficient insertion and removal of elements but inefficient random access. There are different types of linked lists including singly, doubly, and circularly linked lists. Linked lists offer more flexible memory allocation than arrays but have higher overhead and slower access times.
Linear data structures include arrays, strings, stacks, queues, and lists. Arrays store elements contiguously in memory, allowing efficient random access. Linked lists store elements non-contiguously, with each element containing a data field and pointer to the next element. This allows dynamic sizes but less efficient random access. Linear data structures are ordered, with each element having a single predecessor and successor except for the first and last elements.
This document discusses arrays and linked lists. It provides information on:
- What arrays and linked lists are and how they are used to store data in memory. Arrays use indexes to access data while linked lists use nodes connected through pointers.
- Common operations for each including insertion, deletion, and searching. For arrays this includes shifting elements, while for linked lists it involves manipulating the pointers between nodes.
- Specific implementation details for single linked lists, including defining node structures and performing operations at different points in the list.
Student will be able to learn linear data structures concepts in detailed manner. This PPT comprises of the following topics: LIST ADT, Singly Linked List, Doubly Linked List, Circular Linked List, Applications of Linked List, Applications of Stack and Queue, Tower of Hanoi, Balancing Parenthesis
The document discusses stacks and queues, which are linear data structures. It defines a stack as a first-in, last-out (FILO) structure where elements can only be inserted or removed from one end. A queue is defined as a first-in, first-out (FIFO) structure where elements can only be inserted at one end and removed from the other. The document then describes common stack and queue operations like push, pop, enqueue, dequeue and provides examples of their applications. It also discusses two common implementations of stacks and queues using arrays and linked lists.
The document discusses implementing a function to check if a character is a hexadecimal digit. It explains that a hexadecimal digit ranges from 0-9, A-F, a-f in the ASCII table. It provides examples of inputting different characters and checking if they are hexadecimal digits or not. The sample execution section is empty. It lists functions as the prerequisite for understanding how to create a custom function to check for hexadecimal digits.
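One way the described check could be written in C, assuming an ASCII character set as the assignment does; the function name is illustrative.

    /* Return 1 if c is a hexadecimal digit (0-9, A-F, a-f), else 0. */
    int is_hex_digit(char c) {
        return (c >= '0' && c <= '9') ||
               (c >= 'A' && c <= 'F') ||
               (c >= 'a' && c <= 'f');
    }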
The document provides an example program to implement a student record system using an array of structures. It involves reading the number of students and subjects, student names and marks for each subject, calculating averages and grades. The program displays menus to view all student details or a particular student's details based on roll number or name. It demonstrates declaring a structure for student records, reading input into an array of structures, calculating averages and grades, and printing the student records with options to search by roll number or name.
This document discusses writing a macro called swap(t,x,y) that swaps two arguments of any data type t. It asks the user to input a data type and two values of that type, then swaps the values and displays the output. It explains how to swap two integers by using a temporary variable and applying the same concept to arguments of any type t by using macros. The objective is to understand macro preprocessing in C.
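A minimal sketch of such a type-parameterized swap macro in C; the do/while wrapper is an assumption added to make the macro statement-safe.

    /* Type-generic swap: t is the data type, x and y are lvalues of that type. */
    #define swap(t, x, y) do { t _tmp = (x); (x) = (y); (y) = _tmp; } while (0)

    /* Usage:
     *   int a = 1, b = 2;       swap(int, a, b);
     *   double p = 1.5, q = 9;  swap(double, p, q);
     */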
This document discusses defining a macro called SIZEOF to return the size of a data type without using the sizeof operator. It explains that by taking the difference of the addresses of a variable and the variable plus one, cast to char pointers, you can get the size in bytes. An example is provided using an integer variable x, showing how taking the difference of (&x+1) and &x after casting to char pointers returns the size of an int, which is 4 bytes. Background on macros and pointers is provided. The objective is stated as understanding macro usage in preprocessing.
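A minimal sketch of the described SIZEOF macro in C; it works on variables (not type names) and relies on the pointer-difference idea explained above.

    #include <stdio.h>

    /* Size of a variable in bytes without sizeof: (&x + 1) points one whole
     * object past x, so the difference of the two addresses, taken as char
     * pointers, is the object's size in bytes. */
    #define SIZEOF(x) ((char *)(&(x) + 1) - (char *)&(x))

    int main(void) {
        int x = 0;
        double d = 0.0;
        printf("%ld %ld\n", (long)SIZEOF(x), (long)SIZEOF(d));  /* e.g. 4 8 */
        return 0;
    }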
The document describes a C program to multiply two matrices. It explains that the program takes input of rows and columns for Matrix A and B, reads in the element values, and checks that the column of the first matrix equals the row of the second before calculating the product. An example is provided where the matrices can be multiplied, producing the output matrix, and another where they cannot due to mismatched dimensions. Requirements for the program include pointers, 2D arrays, and dynamic memory allocation.
The document describes an assignment to read in an unspecified number (n) of names of up to 20 characters each, sort the names alphabetically, and print the sorted list. It provides examples of reading in 3 names ("Arunachal", "Bengaluru", "Agra"), sorting them using a custom string comparison function, and printing the sorted list ("Agra", "Arunachal", "Bengaluru"). Pre-requisites for the assignment include functions, dynamic arrays, and pointers. The objective is to understand how to use functions, arrays and pointers to complete the task.
This document provides instructions for an assignment to implement fragments using an array of pointers. It asks the student to write a program that reads the number of rows and columns for each row, reads the elements for each row, calculates the average for each row, sorts the rows based on average, and prints the results. It includes examples that show reading input values, storing them in an array using pointers, calculating averages, sorting rows, and sample output. The prerequisites are listed as pointers, functions, and dynamic memory allocation, and the objective is stated as understanding dynamic memory allocation and arrays of pointers.
The document describes an algorithm to generate a magic square of size n×n. It takes the integer n as input from the user and outputs the n×n magic square. A magic square is an arrangement of distinct numbers in a square grid where the sum of each row, column and diagonal is equal. The algorithm uses steps like starting from the middle of the grid and moving element by element in a pattern, wrapping around when reaching the boundaries.
This document discusses endianness and provides an example program to convert between little endian and big endian formats. It defines endianness as the order of bytes in memory, and describes little endian as having the least significant byte at the lowest memory address and big endian as the opposite. An example shows inputting a 2-byte number in little endian format and outputting it in big endian. Pre-requisites of pointers and the objective of understanding endianness representations are also stated.
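A minimal C sketch of the byte swap for a 2-byte value as described; the fixed-width types from stdint.h are an assumption for clarity.

    #include <stdio.h>
    #include <stdint.h>

    /* Swap the two bytes of a 16-bit value: little endian <-> big endian. */
    uint16_t swap16(uint16_t v) {
        return (uint16_t)((v << 8) | (v >> 8));
    }

    int main(void) {
        uint16_t x = 0x1234;
        printf("0x%04X -> 0x%04X\n", (unsigned)x, (unsigned)swap16(x));
        return 0;
    }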
The document provides steps to calculate variance of an array using dynamic memory allocation in C. It explains what variance is, shows an example to calculate variance of a sample array by finding the mean, deviations from mean, squaring the deviations and calculating the average of squared deviations. The key steps are: 1) Read array size and elements, 2) Calculate mean, 3) Find deviations from mean, 4) Square the deviations and store in another array, 5) Calculate average of squared deviations to get variance.
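A minimal C sketch of the variance computation described, assuming the data is already stored in an array; dividing by n gives the population variance (divide by n - 1 for a sample).

    /* Variance of a[0..n-1]: the mean of squared deviations from the mean. */
    double variance(const double a[], int n) {
        double mean = 0.0, var = 0.0;
        for (int i = 0; i < n; i++) mean += a[i];
        mean /= n;
        for (int i = 0; i < n; i++) var += (a[i] - mean) * (a[i] - mean);
        return var / n;
    }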
This document provides examples for an assignment to create a menu-driven program that stores and manipulates different data types (char, int, float, double) in dynamically allocated memory. It allocates 8 consecutive bytes to store the variables and uses flags to track which data types are stored. The menu allows the user to add, display, and remove elements as well as exit the program. Examples demonstrate initializing the flags, adding/removing elements, updating the flags, and displaying only elements whose flags are set. The objective is to understand dynamic memory allocation using pointers.
The document discusses generating non-repetitive pattern strings (NRPS) of length n using k distinct characters. It explains that an NRPS has a pattern that is not repeated consecutively. It provides steps to check if a string is an NRPS, including comparing characters and resetting a count if characters do not match. It also describes how to create an NRPS by starting with an ordered pattern and then copying subsequent characters to generate new patterns without repetition until the string reaches the desired length n. Sample inputs and outputs are provided.
The document discusses how to check if a string is a pangram, which is a sentence containing all 26 letters of the English alphabet. It provides an example of implementing the algorithm to check for a pangram by initializing an array to track letter occurrences, iterating through the input string to mark letters in the array, and checking if all letters are marked to determine if it is a pangram.
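A minimal C sketch of the pangram check described, using a 26-entry array to mark which letters have occurred; the names are illustrative.

    #include <ctype.h>
    #include <stdbool.h>

    /* A pangram contains every letter of the English alphabet at least once. */
    bool is_pangram(const char *s) {
        bool seen[26] = { false };
        for (; *s; s++)
            if (isalpha((unsigned char)*s))
                seen[tolower((unsigned char)*s) - 'a'] = true;
        for (int i = 0; i < 26; i++)
            if (!seen[i]) return false;
        return true;
    }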
The document explains how to print all possible combinations of a given string by swapping characters. It provides an example of generating all six combinations of the string "ABC" through a step-by-step process of swapping characters. It also lists the prerequisites as strings, arrays, and pointers and the objective as understanding string manipulations.
The document describes an assignment to write a program that squeezes characters from one string (s1) that match characters in a second string (s2). It provides examples of input/output and step-by-step demonstrations of the program removing matching characters from s1. It also lists prerequisites of functions, arrays, and pointers and the objective of understanding these concepts as they relate to strings.
The document discusses implementing the strtok() string tokenization function. It explains that strtok() breaks a string into tokens based on delimiters. The document then provides pseudocode to implement a custom strtok() function by iterating through the string, overwriting delimiter characters with null terminators to create tokens, and returning a pointer to each token. Sample input/output is provided. The objective is stated as understanding string functions, with prerequisites of strings, storage classes, and pointers.
The document provides details on an assignment to write a program that recursively reverses a given string without using static variables, global variables, or loops. It includes the input, output, and examples of reversing the strings "Extreme" and "hello world". It also provides sample execution and pre-requisites of strings and recursive functions, with the objective being to understand reversing a string recursively.
The document provides code and examples for reversing a string using an iterative method in C++. It explains taking in a string as input, declaring output and input strings of the same length, and swapping the first and last characters, second and second to last, and so on through multiple iterations until the string is reversed. Examples show reversing the strings "Extreme" to "emertxE" and "hello world" to "dlrow olleh" through this iterative swap process. Pre-requisites of strings and loops are noted, with the objective stated as understanding string reversal using an iterative approach.
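A minimal version of the iterative swap approach described; the summarized slides use C++, but this illustrative sketch uses C for consistency with the other examples here.

    #include <string.h>

    /* Reverse a string in place by swapping symmetric characters. */
    void reverse(char *s) {
        size_t i = 0, j = strlen(s);
        if (j == 0) return;
        for (j--; i < j; i++, j--) {
            char t = s[i];
            s[i] = s[j];
            s[j] = t;
        }
    }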
3. Abstract Data Types:
ADT
✔ A set of data values and associated operations that are precisely specified independent of any particular implementation.
✔ Example: stack, queue, priority queue.
4. Data Structures
✔ The term data structure refers to a scheme for organizing related pieces of information.
✔ The basic types of data structures include: files, lists, arrays, records, trees, tables.
✔ A data structure is the concrete implementation of an abstract data type, specifying how much memory is required and, crucially, how fast each operation will execute.
5. Timing
✔ We need to estimate how long a program will run, because the running time varies with the input values.
✔ The worst-case running time is the maximum running time over all possible inputs.
✔ We call the worst-case timing "big Oh", written O(n), where n represents the size of the input in the worst case.
7. Complexities
✔ Linear for loops
✔ Example:
✔ Complexity : O(n)
✔ for loops are considered n time units because they repeat a programming statement n times.
✔ The term linear means the loop counter increments or decrements by 1.
k = 0;
for(i = 0; i < n; i++)
    k++;
8. Complexities
✔ Non-linear loops
✔ Example:
✔ Complexity : O(log n)
✔ On every iteration of the loop the counter i is divided by 2.
✔ If i starts at 16, the successive values of i are 16, 8, 4, 2, 1, so the final value of k is 5, roughly log₂ 16 = 4 halvings plus the last iteration. Non-linear loops of this kind are logarithmic.
✔ The timing is log₂ n because 2⁴ = 16. The same reasoning applies when the counter is multiplied by a constant instead of divided.
k=0;
for(i=n; i>0; i=i/2)
    k++;
9. Complexities
✔ Nested for loops
✔ Example:
✔ Complexity : O(n²)
✔ Nested for loops are considered n² time units because they represent a loop executing inside another loop.
k=0;
for(i=0; i<n; i++)
    for(j=0; j<n; j++)
        k++;
11. Complexities
✔ Loops with non-linear inner loops
✔ Example:
✔ Complexity : O(n log n)
✔ The outer loop is O(n) since it increments linearly.
✔ The inner loop is O(log n) and is non-linear because j is repeatedly divided by 2.
✔ The final worst-case timing is: O(n) * O(log n) = O(n log n)
k=0;
for(i=0;i<n;i++)
    for(j=i; j>0; j=j/2)
        k++;
12. Complexities
✔ Inner loop counter initialized to the outer loop counter
✔ Example:
✔ Complexity : O(n²)
✔ In this situation we calculate the worst-case timing from both loops together.
✔ For successive values of i the inner loop runs n, n-1, n-2, ... times, and the sum n + (n-1) + ... + 1 = n(n+1)/2 is O(n²).
k=0;
for(i=0;i<n;i++)
    for(j=i;j<n;j++)
        k++;
13. Complexities
✔ Power Loops
✔ Example:
✔ Complexity : O(2ⁿ), where n here counts the outer-loop iterations (doublings).
✔ To calculate the worst-case timing we need to combine the results of both loops.
✔ On every iteration of the outer loop the counter i is multiplied by 2, and the inner loop then runs i times.
✔ With 5 doublings the values of i are 1, 2, 4, 8, 16, and k ends up as their sum, 31, which is 2⁵ - 1.
k=0;
for(i=1;i<=n;i=i*2)
    for(j=1;j<=i;j++)
        k++;
16. Stages: Program Design
✔ Identify the data structures.
✔ Operations: Algorithms
✔ Efficiency (Complexity)
✔ Implementation
✔ What goes into the header file?
✔ What goes into the C program?
✔ What are libraries? Why do we need them?
✔ How to create libraries.
18. Abstract
✔ A collection of items accessible one after another, beginning at the head and ending at the tail, is called a list.
✔ A linked list is a data structure consisting of a group of nodes which together represent a sequence.
✔ In its simplest form, each node is composed of data and a reference (in other words, a link) to the next node in the sequence.
Example: 12 -> 24 -> 45 -> 56 (each value is stored in a node)
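As an illustration of that node structure, here is a minimal C sketch; the struct, function, and variable names are assumptions for the example, not taken from the slides:

#include <stdio.h>
#include <stdlib.h>

/* One node: the data plus a link to the next node in the sequence. */
struct node {
    int data;
    struct node *next;
};

/* Allocate a node holding the given value; next is initially NULL. */
struct node *make_node(int value)
{
    struct node *n = malloc(sizeof *n);
    n->data = value;
    n->next = NULL;
    return n;
}

int main(void)
{
    /* Build the list 12 -> 24 -> 45 -> 56 from the example above. */
    struct node *head = make_node(12);
    head->next = make_node(24);
    head->next->next = make_node(45);
    head->next->next->next = make_node(56);

    /* Traverse from head to tail, then release the nodes. */
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->data);
    printf("\n");
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}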
19. Why : Linked List
Linked List Vs Arrays
✔ Elements can be inserted into linked lists indefinitely, while an array will eventually either fill up or need to be resized.
✔ Further memory savings can be achieved, since only the nodes actually in use need to be allocated.
✔ A linked list is a simple example of a persistent data structure.
✔ On the other hand, arrays allow random access, while linked lists allow only sequential access to elements.
✔ Another disadvantage of linked lists is the extra storage needed for references, which often makes them impractical for lists of small data items such as characters or boolean values.
20. Linked List : Types
✔ Linear: singly linked, doubly linked
✔ Circular: singly linked, doubly linked
21. Singly Linked List
✔ The simplest kind of linked list is a singly-linked list (or slist for short), which has one link per node.
✔ This link points to the next node in the list, or to a null value or empty list if it is the final node.
Example: 12 -> 24 -> 45 -> 56
22. Doubly Linked List
✔ A variant of a linked list in which each item has a link to the previous item as well as the next.
✔ This allows easily accessing list items backward as well as forward and deleting any item in constant time; also known as a two-way linked list or symmetrically linked list.
Example: 12 <-> 34 <-> 56
23. Singly Circular Linked List
✔ Similar to an ordinary singly-linked list, except that the next link of the last node points back to the first node.
Example: 12 -> 24 -> 45 -> 56 -> (back to 12)
25. Doubly Circular Linked List
✔ Similar to a doubly-linked list, except that the previous link of the first node points to the last node and the next link of the last node points to the first node.
Example: 12 <-> 34 <-> 56 <-> (back to 12)
26. TradeOffs
Doubly Linked List Vs Singly Linked List
✔ Singly linked list: less space per node; elementary operations are less expensive; a bit more difficult to manipulate.
✔ Doubly linked list: more space per node; elementary operations are more expensive; easier to manipulate.
Circular Linked List Vs Linear Linked List
✔ Allows quick access to the first and last records through a single pointer (the address of the last element).
✔ Their main disadvantage is the complexity of iteration, which has subtle special cases.
30. Abstract
✔ A stack is a collection of items in which only the most recently added item may be removed.
✔ The latest added item is at the top.
✔ Basic operations are push and pop.
✔ Also known as last-in, first-out or LIFO.
31. Abstract ...
✔ Simply put, a stack is memory in which values are stored and retrieved in last-in, first-out order using operations called push and pop.
32. Stack : Operations
Push Operations
Starting from an empty stack, push(A), push(B) and push(C) place A, B and C on the stack in turn; Top always refers to the most recently pushed item (first A, then B, then C).
OVERFLOW STATE
If the stack is full and does not contain enough space to accept the given item, the push results in overflow.
33. Stack : Operations
Pop Operations
Starting from a stack holding A, B and C (with C on top), successive pop() calls remove C, then B, then A, leaving the stack empty; Top always refers to the item that will be removed next.
UNDERFLOW STATE
If the stack is empty, performing a pop() operation results in underflow.
36. Stack : Operations
1. Create the stack
2. Add to the stack
3. Delete from the stack
4. Print the stack
5. Destroy the stack
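These operations can be sketched with a simple fixed-capacity array implementation in C; the names and capacity are illustrative, and the overflow and underflow checks correspond to the states described above:

#include <stdio.h>

#define MAX 100

static int stack[MAX];   /* storage for the stack items                          */
static int top = -1;     /* index of the most recently pushed item; -1 means empty */

/* Add an item; fails with an overflow message if the stack is full. */
void push(int value)
{
    if (top == MAX - 1) {
        printf("OVERFLOW: stack is full\n");
        return;
    }
    stack[++top] = value;
}

/* Remove and return the top item; fails with an underflow message if empty. */
int pop(void)
{
    if (top == -1) {
        printf("UNDERFLOW: stack is empty\n");
        return -1;                 /* sentinel value for this sketch */
    }
    return stack[top--];
}

int main(void)
{
    push(10); push(20); push(30);      /* stack now holds 10 20 30, top item is 30 */
    printf("%d\n", pop());             /* prints 30 */
    printf("%d\n", pop());             /* prints 20 */
    printf("%d\n", pop());             /* prints 10 */
    pop();                             /* triggers the underflow message */
    return 0;
}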
37. Applications
1. Decimal to Binary conversion
2. Conversion of expressions
Infix – Postfix
Infix – Prefix
3. Evaluation of expressions
Infix expression evaluation.
Prefix expression evaluation.
Postfix expression evaluation.
4. Function calls in C
39. Search Algorithms
"A search algorithm is a method of locating a specific item of information in a larger collection of data."
40. Linear Search
✔ This is a very simple algorithm.
✔ It uses a loop to sequentially step through an array, starting with the first element.
✔ It compares each element with the value being searched for and stops when that value is found or the end of the array is reached.
41. Efficiency: Linear Search
The advantage is its simplicity.
✔ It is easy to understand
✔ Easy to implement
✔ Does not require the array to be in order
The disadvantage is its inefficiency.
✔ If there are 20,000 items in the array and the item you are looking for is the 19,999th element, you need to search through nearly the entire list.
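A C sketch of linear search as described above (illustrative, not taken from the slides); it returns the index of the value, or -1 when the value is absent:

#include <stdio.h>

/* Step through the array sequentially; return the index of key, or -1 if not found. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = { 7, 3, 9, 1, 5 };
    printf("%d\n", linear_search(a, 5, 9));   /* prints 2  */
    printf("%d\n", linear_search(a, 5, 4));   /* prints -1 */
    return 0;
}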
43. Binary Search OR Half-Interval Search
✔ The binary search is much more efficient than the linear search.
✔ It requires the list to be in order.
✔ The algorithm starts searching with the middle element.
● If the item is less than the middle element, the search continues in the first half of the list.
● If the item is greater than the middle element, the search continues in the second half of the list.
● It then continues halving the list until the item is found (see the C sketch below).
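An iterative C sketch of this search (assumed, not from the slides); the array must already be sorted in ascending order:

#include <stdio.h>

/* Repeatedly halve the search interval [low, high]; return the index of key or -1. */
int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* middle element of the current interval */
        if (a[mid] == key)
            return mid;
        else if (key < a[mid])
            high = mid - 1;                 /* continue in the first half  */
        else
            low = mid + 1;                  /* continue in the second half */
    }
    return -1;
}

int main(void)
{
    int a[] = { 1, 2, 4, 5, 8 };            /* list must be in order */
    printf("%d\n", binary_search(a, 5, 5)); /* prints 3  */
    printf("%d\n", binary_search(a, 5, 7)); /* prints -1 */
    return 0;
}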
45. Efficiency: Binary Search
Advantage
✔ It uses the "Divide & Conquer" technique.
✔ Binary search uses the result of each comparison to eliminate half of the list from further searching.
✔ Binary search reveals whether the target is before or after the current position in the list, and that information can be used to narrow the search.
✔ Binary search is significantly better than linear search for large lists of data.
✔ Binary search maintains a contiguous subsequence of the starting sequence where the target value is surely located.
46. Efficiency: Binary Search
Disadvantage
✔ Binary search can interact poorly with the memory hierarchy (i.e. caching), because of its random-access nature.
✔ For in-memory searching, if the interval to be searched is small, a linear search may have superior performance simply because it exhibits better locality of reference.
✔ The binary search algorithm is often implemented recursively, and the recursive approach requires more stack space.
48. Linear Vs Binary
✔ The benefit of binary search over linear search becomes significant for lists of over about 100 elements.
✔ For smaller lists linear search may be faster because of the speed of the simple increment compared with the divisions needed in binary search.
✔ The general moral is that for large lists binary search is very much faster than linear search, but it is not worthwhile for small lists.
✔ Note that binary search is not appropriate for linked list structures (no random access to the middle element).
52. Bubble Sort
✔ Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm.
✔ Works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order.
✔ The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.
✔ The algorithm gets its name from the way smaller elements "bubble" to the top of the list.
53. Algorithm
procedure bubbleSort( A : list of sortable items ) defined as:
    do
        swapped := false
        for each i in 0 to length( A ) - 2 do:
            if A[ i ] > A[ i + 1 ] then
                swap( A[ i ], A[ i + 1 ] )
                swapped := true
            end if
        end for
    while swapped
end procedure
54. Example
Let us take the array of numbers "5 1 4 2 8"
First Pass:
( 5 1 4 2 8 ) -> ( 1 5 4 2 8 ), Swap since 5 > 1 (the algorithm compares the first two elements and swaps them)
( 1 5 4 2 8 ) -> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) -> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) -> ( 1 4 2 5 8 ), No swap since these elements are already in order (8 > 5)
55. Example
Second Pass:
( 1 4 2 5 8 ) -> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) -> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) -> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) -> ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is completed.
The algorithm needs one whole pass without any swap to know it is sorted.
57. Performance Analysis
✔ Bubble sort has worst-case and average complexity both O(n²), where n is the number of items being sorted.
✔ There exist many sorting algorithms with substantially better worst-case or average complexity of O(n log n).
✔ Even other O(n²) sorting algorithms, such as insertion sort, tend to have better performance than bubble sort.
✔ Therefore, bubble sort is not a practical sorting algorithm when n is large.
58. Performance Analysis
Advantage
✔ The one significant advantage that bubble sort has over most other implementations, even quicksort (though not insertion sort), is that the ability to detect that the list is already sorted is built into the algorithm efficiently.
✔ Although bubble sort is one of the simplest sorting algorithms to understand and implement, its O(n²) complexity means that its efficiency decreases dramatically on lists of more than a small number of elements.
✔ Due to its simplicity, bubble sort is often used to introduce the concept of an algorithm, or a sorting algorithm, to introductory computer science students.
60. Insertion Sort
“Insertion sort is a simple sorting algorithm that
builds the final sorted array (or list) one
item at a time”
61. Algorithm
insertionSort(array A)
    for i = 1 to length[A]-1 do
        value = A[i]
        j = i-1
        while j >= 0 and A[j] > value do
            swap( A[j + 1], A[j] )
            j = j-1
        end
        A[j+1] = value
    end
63. Performance Analysis
Advantages
✔ Simple implementation
✔ Efficient for (quite) small data sets
✔ Adaptive (i.e., efficient) for data sets that are already substantially sorted: the time complexity is O(n + d), where d is the number of inversions
✔ More efficient in practice than most other simple quadratic (i.e., O(n²)) algorithms such as selection sort or bubble sort; the best case (nearly sorted input) is O(n)
✔ Stable; i.e., does not change the relative order of elements with equal keys
✔ In-place; i.e., only requires a constant amount O(1) of additional memory space
✔ Online; i.e., can sort a list as it receives it
64. Performance Analysis
Dis-Advantages
✔ Although insertion sort typically makes fewer comparisons than selection sort, it requires more writes because the inner loop can shift large sections of the sorted portion of the array.
✔ In general, insertion sort will write to the array O(n²) times, whereas selection sort will write only O(n) times.
✔ For this reason selection sort may be preferable in cases where writing to memory is significantly more expensive than reading, such as with EEPROM or flash memory.
66. Selection Sort
“selection sort is a sorting algorithm,
specifically an in-place comparison sort”
The idea of selection sort is rather simple:
we repeatedly find the next largest (or smallest) element in
the array and move it to its final position in the sorted array
67. Algorithm
for i = 1:n,
    k = i
    for j = i+1:n, if a[j] < a[k], k = j
    → invariant: a[k] smallest of a[i..n]
    swap a[i], a[k]
    → invariant: a[1..i] in final position
end
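The same selection sort written as a C sketch with 0-based indexing (an illustrative translation of the pseudocode above, not taken from the slides):

#include <stdio.h>

/* Repeatedly find the smallest element of the unsorted part and
   swap it into its final position at the front. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int k = i;                                 /* index of the smallest seen so far */
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[k])
                k = j;
        int tmp = a[i]; a[i] = a[k]; a[k] = tmp;   /* a[0..i] is now in final position */
    }
}

int main(void)
{
    int a[] = { 5, 1, 4, 2, 8 };
    selection_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);                       /* prints 1 2 4 5 8 */
    printf("\n");
    return 0;
}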
68. Example
✔ The list is divided into two sublists, sorted and unsorted, which are divided by an imaginary wall.
✔ We find the smallest element from the unsorted sublist and swap it with the element at the beginning of the unsorted data.
✔ After each selection and swapping, the imaginary wall between the two sublists moves one element ahead, increasing the number of sorted elements and decreasing the number of unsorted ones.
✔ Each time we move one element from the unsorted sublist to the sorted sublist, we say that we have completed a sort pass.
✔ A list of n elements requires n-1 passes to completely rearrange the data.
70. Performance Analysis
✔ Selecting the lowest element requires scanning all n elements (this takes n - 1 comparisons) and then swapping it into the first position.
✔ Repeating this for the remaining positions gives (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 comparisons in total, i.e. O(n²).
72. Quick Sort
"As the name implies, it is quick, and it is the algorithm generally preferred for sorting."
Basic Idea [ Divide & Conquer Technique ]
1. Pick an element, say P (the pivot)
2. Re-arrange the elements into 3 sub-blocks:
✔ those less than or equal to P (left block)
✔ P (the only element in the middle block)
✔ those greater than P (right block)
3. Repeat the process recursively for the left & right sub-blocks.
73. Algorithm
function partition(array, left, right, pivotIndex)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right] // Move pivot to end
    storeIndex := left
    for i from left to right - 1
        if array[i] <= pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right] // Move pivot to its final place
    return storeIndex
74. Algorithm
procedure quicksort(array, left, right)
    if right > left
        select a pivot index (e.g. pivotIndex := left)
        pivotNewIndex := partition(array, left, right, pivotIndex)
        quicksort(array, left, pivotNewIndex - 1)
        quicksort(array, pivotNewIndex + 1, right)
76. Advantages
Plus Points
✔ It is in-place since it uses only a small auxiliary stack.
✔ It requires only n log(n) time on average to sort n items.
✔ It has an extremely short inner loop.
✔ This algorithm has been subjected to a thorough mathematical analysis, so a very precise statement can be made about performance issues.
77. Disadvantages
Minus Points
✔ It is recursive. Especially if recursion is not available, the implementation is extremely complicated.
✔ It requires quadratic (i.e., n²) time in the worst case.
✔ It is fragile, i.e., a simple mistake in the implementation can go unnoticed and cause it to perform badly.
79. Merge Sort
“MergeSort is a Divide and Conquer algorithm. It divides
input array in two halves, calls itself for the two halves
and then merges the two sorted halves.”
Basic Idea
Divide & Conquer Technique
80. Algorithm
Conceptually, a merge sort works as follows:
✔
Divide the unsorted list into n sublists, each containing 1 element (a list of 1
element is considered sorted).
✔
Repeatedly merge sublists to produce new sorted sublists until there is only 1
sublist remaining. This will be the sorted list.
81. Algorithm
function mergesort(m)
    var list left, right, result
    if length(m) <= 1
        return m
    var middle = length(m) / 2
    for each x in m up to middle
        add x to left
    for each x in m after middle
        add x to right
    left = mergesort(left)
    right = mergesort(right)
    result = merge(left, right)
    return result
82. Algorithm
function merge(left,right)
    var list result
    while length(left) > 0 and length(right) > 0
        if first(left) <= first(right)
            append first(left) to result
            left = rest(left)
        else
            append first(right) to result
            right = rest(right)
    end while
    if length(left) > 0
        append left to result
    if length(right) > 0
        append right to result
    return result
84. Applications
1) Merge Sort is useful for sorting linked lists in O(n log n) time. Other O(n log n) algorithms such as Heap Sort and Quick Sort (average case O(n log n)) cannot easily be applied to linked lists.
2) Inversion Count Problem
3) Used in External Sorting
86. Bucket Sort
“Bucket sort, or bin sort, is a sorting algorithm
that works by partitioning an array into a
number of buckets”
87. Bucket Sort : Idea
Idea:
✔
suppose the values are in the range 0..m-1; start with m empty buckets
numbered 0 to m-1, scan the list and place element s[i] in bucket s[i],
and then output the buckets in order.
✔ Will need an array of buckets, and the values in the list to be sorted will
be the indexes to the buckets
✔ No comparisons will be necessary
88. Algorithm
Bucket sort works as follows:
✔
Set up an array of initially empty "buckets".
✔
Scatter: Go over the original array, putting each object in its bucket.
✔
Sort each non-empty bucket.
✔ Gather: Visit the buckets in order and put all elements back into the original
array.
89. Algorithm: Pseudocode
function bucketSort(array, n) is
    buckets ← new array of n empty lists
    for i = 0 to (length(array)-1) do
        insert array[i] into buckets[msbits(array[i], k)]   // msbits(x, k): the k most significant bits of x
    for i = 0 to n - 1 do
        nextSort(buckets[i]);
    return the concatenation of buckets[0], ...., buckets[n-1]
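For the simple case described in the idea slide, where the values lie in 0..m-1 and each value indexes its own bucket, a C sketch might look like this (a counting-style variant with no comparisons; names and the range M are illustrative assumptions):

#include <stdio.h>

#define M 10    /* values are assumed to lie in the range 0..M-1 */

/* Place each value in its bucket (here just a count), then read the
   buckets back in order to produce the sorted output. */
void bucket_sort(int a[], int n)
{
    int count[M] = { 0 };
    for (int i = 0; i < n; i++)
        count[a[i]]++;                 /* scatter: element a[i] goes into bucket a[i] */
    int out = 0;
    for (int b = 0; b < M; b++)        /* gather: visit the buckets in order */
        while (count[b]-- > 0)
            a[out++] = b;
}

int main(void)
{
    int a[] = { 5, 1, 4, 2, 8, 1 };
    bucket_sort(a, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);           /* prints 1 1 2 4 5 8 */
    printf("\n");
    return 0;
}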
92. Radix Sort
“Radix sort is a non-comparative integer sorting
algorithm that sorts data with integer keys by
grouping keys by the individual digits which
share the same significant position and value”
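One common way to realize this in C is a least-significant-digit radix sort that applies a stable counting pass per decimal digit; the following sketch is an assumed illustration for non-negative integers, not taken from the slides:

#include <stdio.h>

/* One stable counting pass on the digit selected by exp (1, 10, 100, ...). */
static void counting_pass(int a[], int n, int exp)
{
    int output[n];                             /* C99 variable-length array */
    int count[10] = { 0 };
    for (int i = 0; i < n; i++)
        count[(a[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)
        count[d] += count[d - 1];              /* prefix sums give final positions */
    for (int i = n - 1; i >= 0; i--)           /* walk backwards so the pass stays stable */
        output[--count[(a[i] / exp) % 10]] = a[i];
    for (int i = 0; i < n; i++)
        a[i] = output[i];
}

/* Sort non-negative integers by grouping keys digit by digit,
   least significant digit first. */
void radix_sort(int a[], int n)
{
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    for (int exp = 1; max / exp > 0; exp *= 10)
        counting_pass(a, n, exp);
}

int main(void)
{
    int a[] = { 170, 45, 75, 90, 802, 24 };
    radix_sort(a, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);                   /* prints 24 45 75 90 170 802 */
    printf("\n");
    return 0;
}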
97. Abstract
“A tree is a widely used abstract data type (ADT) or data structure
implementing this ADT that simulates a hierarchical tree structure, with
a root value and subtrees of children, represented as a set of linked
nodes. “
98. Abstract
✔ A data structure accessed beginning at the root node.
✔ Each node is either a leaf or an internal node.
✔
An internal node has one or more child nodes and is called the parent of its child
nodes.
✔
All children of the same node are siblings.
✔ Contrary to a physical tree, the root is usually depicted at the top of the
structure, and the leaves are depicted at the bottom.
99. Operations:
Binary Tree
✔
Create new node
✔
Insert into the tree
✔ Delete from the tree
✔ Deleting a leaf
✔ Deleting the node with one child
✔ Deleting the node with two children
103. Traversal: Binary Tree
Inorder:
    if the tree is not empty
        traverse the left subtree
        visit the root
        traverse the right subtree
Preorder:
    if the tree is not empty
        visit the root
        traverse the left subtree
        traverse the right subtree
Postorder:
    if the tree is not empty
        traverse the left subtree
        traverse the right subtree
        visit the root
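These traversals can be sketched in C on a small hand-built tree; the node and function names are illustrative, not from the slides:

#include <stdio.h>
#include <stdlib.h>

struct tnode {
    int data;
    struct tnode *left, *right;
};

struct tnode *make_tnode(int value)
{
    struct tnode *t = malloc(sizeof *t);
    t->data = value;
    t->left = t->right = NULL;
    return t;
}

/* Inorder: left subtree, root, right subtree. */
void inorder(const struct tnode *t)
{
    if (t == NULL) return;
    inorder(t->left);
    printf("%d ", t->data);
    inorder(t->right);
}

/* Preorder: root, left subtree, right subtree. */
void preorder(const struct tnode *t)
{
    if (t == NULL) return;
    printf("%d ", t->data);
    preorder(t->left);
    preorder(t->right);
}

/* Postorder: left subtree, right subtree, root. */
void postorder(const struct tnode *t)
{
    if (t == NULL) return;
    postorder(t->left);
    postorder(t->right);
    printf("%d ", t->data);
}

int main(void)
{
    /* Build a three-node tree with 2 at the root, 1 as left child, 3 as right child. */
    struct tnode *root = make_tnode(2);
    root->left = make_tnode(1);
    root->right = make_tnode(3);

    inorder(root);   printf("\n");   /* 1 2 3 */
    preorder(root);  printf("\n");   /* 2 1 3 */
    postorder(root); printf("\n");   /* 1 3 2 */
    return 0;
}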
104. Applications:
Binary Tree
✔
Storing a set of names, and being able to lookup based on a prefix of the
name. (Used in internet routers.)
✔ Storing a path in a graph, and being able to reverse any subsection of
the path in O(log n) time. (Useful in travelling salesman problems).
✔ Use in Heap sorting.
105. Heap Sort:
✔ A sort algorithm that builds a heap, then repeatedly extracts the maximum item.
✔ Run time is O(n log n).
✔ A kind of in-place sort.
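That description can be sketched in C as follows (an assumed illustration, not from the slides): build a max-heap inside the array, then repeatedly swap the maximum to the end and restore the heap on the remaining prefix:

#include <stdio.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Sift the element at index i down until the subtree rooted there is a max-heap. */
static void sift_down(int a[], int n, int i)
{
    for (;;) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) return;
        swap(&a[i], &a[largest]);
        i = largest;
    }
}

/* Build a max-heap, then repeatedly extract the maximum to the end of the array. */
void heap_sort(int a[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)    /* heapify the whole array */
        sift_down(a, n, i);
    for (int end = n - 1; end > 0; end--) { /* n-1 extractions: O(n log n) */
        swap(&a[0], &a[end]);               /* move the current maximum into place */
        sift_down(a, end, 0);               /* restore the heap on the prefix */
    }
}

int main(void)
{
    int a[] = { 5, 1, 4, 2, 8 };
    heap_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);                /* prints 1 2 4 5 8 */
    printf("\n");
    return 0;
}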
107. Abstract
"Hashing is a method to store data in an array so that storing, searching, inserting and deleting data is fast (in theory it's O(1)). For this every record needs a unique key."
Basic Idea
Not to search for the correct position of a record with comparisons, but to compute the position within the array.
The function that returns the position is called the 'hash function' and the array is called a 'hash table'.
108. Hash Function
✔ A function that maps keys to integers, usually to get an even distribution on a
smaller set of values.
✔ A hash table or hash map is a data structure that uses a hash function
to map identifying values, known as keys (e.g., a person’s name), to their
associated values (e.g., their telephone number).
✔ Thus, a hash table implements an associative array.
✔ The hash function is used to transform the key into the index (the hash) of an
array element (the slot or bucket) where the corresponding value is to be
sought.
109. Hash Function: Example
For example: John Smith ==> sum of ASCII values % 10
= (74+111+104+110+83+109+105+116+104) % 10
= 916 % 10
= 6 (index)
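The same calculation written as a small C sketch (illustrative; the table size of 10 matches the example, and the space character is skipped as in the sum above):

#include <stdio.h>

#define TABLE_SIZE 10

/* Hash a string by summing the ASCII values of its characters (skipping
   spaces) and taking the remainder modulo the table size. */
unsigned int hash(const char *key)
{
    unsigned int sum = 0;
    for (; *key != '\0'; key++)
        if (*key != ' ')
            sum += (unsigned char)*key;
    return sum % TABLE_SIZE;
}

int main(void)
{
    printf("%u\n", hash("John Smith"));   /* 916 % 10 = 6 */
    return 0;
}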
110. Collision Handling
✔ Ideally, the hash function should map each possible key to a unique slot index, but this ideal is rarely achievable in practice (unless the hash keys are fixed, i.e. new entries are never added to the table after it is created).
✔ Instead, most hash table designs assume that hash collisions (different keys that map to the same hash value) will occur and must be accommodated in some way.
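One common way to accommodate collisions is separate chaining, where each slot holds a linked list of the entries that hashed to it. The following C sketch is an assumed illustration (the names, key length, and sample values are made up), not part of the original slides:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 10

/* Each slot of the table points to a chain of entries that hashed to it. */
struct entry {
    char key[32];
    int value;
    struct entry *next;
};

static struct entry *table[TABLE_SIZE];

static unsigned int hash(const char *key)
{
    unsigned int sum = 0;
    for (; *key != '\0'; key++)
        sum += (unsigned char)*key;
    return sum % TABLE_SIZE;
}

/* Insert by pushing a new entry onto the front of the slot's chain. */
void put(const char *key, int value)
{
    struct entry *e = malloc(sizeof *e);
    strncpy(e->key, key, sizeof e->key - 1);
    e->key[sizeof e->key - 1] = '\0';
    e->value = value;
    unsigned int i = hash(key);
    e->next = table[i];
    table[i] = e;
}

/* Look up by walking the chain in the key's slot; returns -1 if the key is absent. */
int get(const char *key)
{
    for (struct entry *e = table[hash(key)]; e != NULL; e = e->next)
        if (strcmp(e->key, key) == 0)
            return e->value;
    return -1;
}

int main(void)
{
    put("John Smith", 5551234);          /* illustrative key/value pairs */
    put("Lisa Smith", 5558976);
    printf("%d\n", get("John Smith"));   /* prints 5551234 */
    printf("%d\n", get("Sam Doe"));      /* not present: prints -1 */
    return 0;
}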