To excel in data science coding interviews, it's essential to master a variety of questions that test your programming skills and understanding of data science concepts. We have prepared a list of the Top 50 Data Science Interview Questions, along with their answers, to help you ace your interviews.
Q.1 Write a function to reverse a string in Python
Reversing a string means flipping the order of its characters. You can do this with Python slicing ([::-1]) or by iterating from the end to the start. This is a common operation in string manipulation.
Python
def reverse_string(s):
    return s[::-1]

print(reverse_string("hello"))
Q.2 Check if a string is a palindrome.
Compare the original string with its reversed version. If they match, the string is a palindrome; return True or False based on this comparison. Here is how you can check whether a string is a palindrome in Python:
Python
def is_palindrome(s):
    return s == s[::-1]

print(is_palindrome("madam"))
Q 3. Write a function to find the nth Fibonacci number using recursion.
The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, usually starting with 0 and 1. Let's see the code to find the nth Fibonacci number:
Python
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

n = 5
print(f"The {n}th Fibonacci number is: {fibonacci(n)}")
Output:
The 5th Fibonacci number is: 5
Q.4 Find indices of two numbers that add up to a specific target in an array.
First, we create a dictionary to store numbers and their indices as we iterate through the array. For each number, we check if its complement (target minus the number) exists in the dictionary. If it does, we return their indices.
Let's see the code:
Python
def two_sum(nums, target):
    num_map = {}
    for index, num in enumerate(nums):
        complement = target - num
        if complement in num_map:
            return [num_map[complement], index]
        num_map[num] = index

print(two_sum([2, 7, 3, 15], 10))
Q.5 Write a Python function to calculate the factorial of a number.
This function calculates the factorial of a number using recursion. If the number is 0 or 1, it returns 1. Otherwise, it multiplies the number by the factorial of the number minus 1.
Python
def factorial(n):
    if n == 0 or n == 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))
Q.6 How would you count the occurrences of each element in a list?
We use Python's Counter from the collections module to count how often each element appears in a list. Here is the implementation:
Python
from collections import Counter

def count_occurrences(lst):
    return Counter(lst)

print(count_occurrences([1, 2, 2, 3, 3, 3]))
Output:
Counter({3: 3, 2: 2, 1: 1})
Q.7 How would you load a CSV file into a Pandas DataFrame?
We can load a CSV file into a pandas DataFrame with read_csv, as shown below. The same approach works for any CSV file.
Python
import pandas as pd
df = pd.read_csv('file.csv')
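Once the file is loaded, it is common to take a quick look at the data before doing anything else. A minimal follow-up sketch, assuming 'file.csv' is a placeholder path to an existing CSV file:
Python
import pandas as pd

# 'file.csv' is a placeholder; replace it with the path to a real CSV file
df = pd.read_csv('file.csv')

print(df.head())   # first five rows
print(df.shape)    # (rows, columns)
df.info()          # column names, dtypes and non-null counts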
Q.8 Write a function to calculate the element-wise sum of two Numpy arrays.
This adds corresponding elements of two Numpy arrays, creating a new array with the results.
Python
import numpy as np
arr1 = np.array([1, 2])
arr2 = np.array([4, 5])
result = np.add(arr1, arr2)
print(result)
Q.9 How would you extract the diagonal elements of a Numpy matrix?
In this question we have to retrieve the diagonal elements of a NumPy matrix, which we can do with the np.diagonal function.
Python
import numpy as np
matrix = np.array([[1, 2, 3], [4, 5, 6]])
print(np.diagonal(matrix))
Q.10 Write code to reshape a 1D Numpy array into a 2D array with 3 rows.
To reshape a 1D NumPy array into a 2D array we use the array.reshape method; this operation is called reshaping the array.
Python
import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6])
reshaped = arr.reshape(3, 2)
print(reshaped)
Output:
[[1 2]
 [3 4]
 [5 6]]
Q.11 Write a Python function to calculate the mean, median, and standard deviation of a list of numbers.
We calculate the mean, median, and standard deviation of a list using NumPy's built-in functions. Here is the implementation:
Python
import numpy as np
lst = [10, 20, 30, 40]
mean = np.mean(lst)
median = np.median(lst)
std_dev = np.std(lst)
print(mean)
print(median)
print(std_dev)
Output:
25.0
25.0
11.180339887498949
Q 12. How would you handle missing values in a DataFrame?
fillna() replaces missing values (here with the mean of each column), while dropna() removes rows with any missing values.
df.fillna(df.mean(), inplace=True)
df.dropna(inplace=True)
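A minimal, self-contained sketch of both approaches on a small DataFrame (the column names here are made up for illustration):
Python
import pandas as pd
import numpy as np

df = pd.DataFrame({'age': [25, np.nan, 30], 'salary': [50000, 60000, np.nan]})

# Option 1: fill missing values with each column's mean
filled = df.fillna(df.mean(numeric_only=True))

# Option 2: drop any row that contains a missing value
dropped = df.dropna()

print(filled)
print(dropped)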
Q 13. How do you read and write data from a file in Python?
To read and write data from a file in Python, we use the built-in open() function, which provides a way to open a file and perform operations on it, such as reading or writing.
Python
with open('file.txt', 'r') as file:
    content = file.read()
print(content)
Python
with open('file.txt', 'w') as file:
    file.write("Hello, world!")
Q 14. Write an SQL query to retrieve all columns from a table named employees where the age is greater than 30.
This query selects all columns from the employees table where the age column value is greater than 30.
SELECT * FROM employees
WHERE age > 30;
Q 15. Write an SQL query to join two tables: orders and customers, where the customer_id in orders matches the id in customers.
The JOIN operation merges orders and customers based on the matching customer_id and id columns.
SELECT orders.*, customers.*
FROM orders
JOIN customers ON orders.customer_id = customers.id;
Q 16. Write an SQL query to find the average salary for each department in the employees table, but only for departments with more than 10 employees.
This query calculates the average salary by department, but only for departments that have more than 10 employees, using HAVING to filter the groups.
SELECT department, AVG(salary) AS avg_salary
FROM employees
GROUP BY department
HAVING COUNT(employee_id) > 10;
Q 17. Write an SQL query to find all employees whose salary is greater than the average salary in the employees table.
The subquery (SELECT AVG(salary) FROM employees) calculates the average salary, and the outer query retrieves all employees earning more than that average.
SELECT * FROM employees
WHERE salary > (SELECT AVG(salary) FROM employees);
Q 18. Write an SQL query to find the total sales from the sales table for each product.
This query groups the data by product_id and calculates the total sales (SUM(sales_amount)) for each product.
SELECT product_id, SUM(sales_amount) AS total_sales
FROM sales
GROUP BY product_id;
Q 19. How can you return JSON data in a Flask route?
To return JSON data, you can use the jsonify() function, which converts Python dictionaries to JSON format. This route will return the dictionary as a JSON response.
Python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/data')
def data():
    return jsonify({"name": "John", "age": 30})

if __name__ == '__main__':
    app.run()
Q 20. How do you create a simple “Hello, World!” app in Flask?
To create a simple Flask app, you define a route using the @app.route() decorator and return a response from a view function. This starts a basic Flask app that responds with "Hello, World!" when the root URL is accessed.
Python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello, World!"

if __name__ == '__main__':
    app.run()
Q 21. Can you explain how Flask handles HTTP methods like GET and POST?
Flask allows you to specify which HTTP methods a route should respond to using the methods parameter in the @app.route() decorator. By default, Flask routes respond to GET requests, but you can specify others such as POST, PUT, DELETE, etc.
Python
from flask import Flask, request

app = Flask(__name__)

@app.route('/form', methods=['GET', 'POST'])
def handle_form():
    if request.method == 'POST':
        return 'Form submitted!'
    return 'Form not yet submitted.'
Q 22. Create a Class to Represent a Person with Basic Attributes.
__init__(self, name, age) initializes the Person object with a name and age. birthday(self) increases the person's age by 1. __str__(self) provides a human-readable string representation of the Person object.
Python
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def birthday(self):
        self.age += 1

    def __str__(self):
        return f"Name: {self.name}, Age: {self.age}"

# Example usage:
person1 = Person("Alice", 30)
print(person1)
person1.birthday()
print(person1)
Output:
Name: Alice, Age: 30
Name: Alice, Age: 31
Q 23. Explain how a hash table works. Provide an example.
A hash table is a data structure that stores key-value pairs. It uses a hash function to compute an index into an array of buckets or slots, from which the desired value can be found. Python's built-in dict is an implementation of a hash table.
Python
hash_table = {}
hash_table["name"] = "Rihan"
hash_table["age"] = 22
hash_table["city"] = "America"
print(hash_table["name"])
print(hash_table["age"])
print(hash_table["city"])
Q 24. Find the First Non-Repeated Character in a String
Traverse the string and keep track of the frequency of each character. Check which character appears exactly once and return it.
Python
def first_non_repeated_char(s):
    char_count = {}
    for char in s:
        if char in char_count:
            char_count[char] += 1
        else:
            char_count[char] = 1
    for char in s:
        if char_count[char] == 1:
            return char
    return None

# Example usage:
print(first_non_repeated_char("swiss"))
print(first_non_repeated_char("aabbcc"))
Q 25. Check if Two Strings are Anagrams of Each Other.
Two strings are anagrams if they contain exactly the same characters with the same frequencies. Sorting both strings and comparing the results is a simple way to check this.
Python
def are_anagrams(str1, str2):
    if len(str1) != len(str2):
        return False
    return sorted(str1) == sorted(str2)

print(are_anagrams("listen", "silent"))
print(are_anagrams("hello", "world"))
Q 26. How do you transpose a NumPy array?
Transposing an array means swapping its rows and columns. You can use the transpose method or the .T attribute. Here’s how you can do it:
Python
import numpy as np
array = np.array([[1, 2], [4, 5]])
transposed_array = array.T
print(transposed_array)
Q 27 Find the median of two sorted arrays of different sizes.
We merge both arrays, sort them, and return the median value. If the total number of elements is odd, the middle element is the median. If even, it is the average of the two middle elements.
Python
def find_median(arr1, arr2):
    merged = sorted(arr1 + arr2)
    n = len(merged)
    return (merged[n // 2] + merged[n // 2 - 1]) / 2 if n % 2 == 0 else merged[n // 2]

print(find_median([1, 3], [2, 4, 5]))
Q 28 Implement a sliding window to find the maximum sum of a subarray of a given size k.
We use a sliding window technique to calculate the sum of each subarray of size k. The maximum sum is updated at each step.
Python
def max_sum_subarray(arr, k):
    window_sum = sum(arr[:k])
    max_sum = window_sum
    for i in range(k, len(arr)):
        window_sum += arr[i] - arr[i - k]
        max_sum = max(max_sum, window_sum)
    return max_sum

print(max_sum_subarray([2, 1, 5, 1, 3, 2], 3))
Q 29. Find the kth smallest element in an unsorted array.
We use the heapq library to find the kth smallest element in an unsorted array. Below is the code:
Python
import heapq

def kth_smallest(arr, k):
    return heapq.nsmallest(k, arr)[-1]

print(kth_smallest([7, 10, 4, 3, 20, 15], 3))
Q 30. Generate all possible permutations of a given list of numbers.
We use Python's itertools.permutations to generate all permutations of the given list.
Python
from itertools import permutations

def generate_permutations(arr):
    return list(permutations(arr))

print(generate_permutations([1, 2, 3]))
Output:
[(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]
Q 31. Simulate a biased coin flip given a fair coin function.
We flip the coin twice. If the results differ (heads-tails or tails-heads), we return the first flip; otherwise we repeat. This is the von Neumann trick, which produces an unbiased bit even if the underlying coin is biased.
Python
import random

def biased_coin():
    flip1, flip2 = random.randint(0, 1), random.randint(0, 1)
    if flip1 != flip2:
        return flip1
    return biased_coin()

print(biased_coin())
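The snippet above removes bias rather than introducing it. To simulate a coin with a chosen bias p using only fair flips, one minimal sketch (the function names and the 32-bit precision are illustrative assumptions) builds a uniform random number bit by bit from fair flips and compares it with p:
Python
import random

def fair_coin():
    # stand-in for the given fair coin function
    return random.randint(0, 1)

def biased_flip(p, precision=32):
    # Build a uniform number in [0, 1) from fair bits, then return heads if it falls below p
    u, scale = 0.0, 0.5
    for _ in range(precision):
        u += fair_coin() * scale
        scale /= 2
    return 1 if u < p else 0

# Rough check: the empirical frequency of heads should be close to 0.3
flips = [biased_flip(0.3) for _ in range(10000)]
print(sum(flips) / len(flips))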
Q 32. Calculate the confidence interval for a given dataset (assume normal distribution).
A confidence interval is a range in which we are fairly certain the true value lies; the chosen confidence level determines the probability that the interval contains the true parameter value. We calculate the mean and the margin of error using the standard deviation and the z-score for the chosen confidence level.
Python
import numpy as np
from scipy.stats import norm

def confidence_interval(data, confidence=0.95):
    mean, std = np.mean(data), np.std(data, ddof=1)
    z = norm.ppf((1 + confidence) / 2)
    margin_of_error = z * (std / np.sqrt(len(data)))
    return mean - margin_of_error, mean + margin_of_error

print(confidence_interval([1, 2, 3, 4, 5]))
Q 33. Implement the Chi-squared test for independence on a contingency table.
Calculate the Chi-squared statistic by comparing observed and expected frequencies in the contingency table.
Python
import numpy as np
from scipy.stats import chi2_contingency

def chi_squared_test(contingency_table):
    chi2, p, dof, expected = chi2_contingency(contingency_table)
    return chi2, p

table = [[10, 20], [20, 40]]
print(chi_squared_test(table))
Q 34. Generate random numbers following a given probability distribution.
Use numpy.random.choice to generate random numbers based on a given probability distribution.
Python
import numpy as np

def generate_random_numbers(elements, probabilities, size):
    return np.random.choice(elements, size=size, p=probabilities)

print(generate_random_numbers([1, 2, 3], [0.2, 0.5, 0.3], 10))
Output:
[2 1 2 2 2 3 1 2 2 2]
Q 35. Implement k-nearest neighbors from scratch.
Here we calculate the distance from the test point to every training point, sort the distances, and return the majority label among the k closest points.
Python
import numpy as np
from collections import Counter

def knn(X_train, y_train, X_test, k):
    distances = [np.linalg.norm(x - X_test) for x in X_train]
    k_neighbors = [y_train[i] for i in np.argsort(distances)[:k]]
    return Counter(k_neighbors).most_common(1)[0][0]

X_train = np.array([[1, 2], [2, 3], [3, 4]])
y_train = [0, 1, 1]
X_test = np.array([2.5, 3])
print(knn(X_train, y_train, X_test, 2))
Q 36. Write a function to calculate the silhouette score for clustering results.
We use sklearn's silhouette_score function to evaluate the quality of a clustering. Let's see the implementation:
Python
from sklearn.metrics import silhouette_score
from sklearn.datasets import make_blobs

def calculate_silhouette_score(X, labels):
    return silhouette_score(X, labels)

X, labels = make_blobs(n_samples=10, centers=2, random_state=0)
print(calculate_silhouette_score(X, labels))
Q 37. Perform one-hot encoding of categorical variables in a dataset.
Use pandas.get_dummies to convert categorical columns into one-hot-encoded columns.
Python
import pandas as pd

def one_hot_encode(data, column):
    return pd.get_dummies(data, columns=[column])

df = pd.DataFrame({'Color': ['Red', 'Blue', 'Green']})
print(one_hot_encode(df, 'Color'))
Output:
   Color_Blue  Color_Green  Color_Red
0           0            0          1
1           1            0          0
2           0            1          0
Q 38. Implement Principal Component Analysis (PCA) to reduce dimensionality.
Center the data, compute the covariance matrix and its eigenvalues and eigenvectors, then project the data onto the top eigenvectors to reduce dimensions.
Python
import numpy as np

def pca(X, n_components):
    X_centered = X - np.mean(X, axis=0)
    covariance_matrix = np.cov(X_centered, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eig(covariance_matrix)
    # Sort eigenvectors by eigenvalue in descending order before selecting components
    order = np.argsort(eigenvalues)[::-1]
    principal_components = eigenvectors[:, order[:n_components]]
    return X_centered.dot(principal_components)

X = np.array([[1, 2], [3, 4], [5, 6]])
print(pca(X, 1))
Output:
[[-2.82842712]
 [ 0.        ]
 [ 2.82842712]]
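As a sanity check, the same projection can be compared against scikit-learn's PCA (the sign of a component may be flipped, since the direction of an eigenvector is arbitrary):
Python
import numpy as np
from sklearn.decomposition import PCA

X = np.array([[1, 2], [3, 4], [5, 6]])
pca_model = PCA(n_components=1)
# fit_transform centers the data and projects it onto the first principal component
print(pca_model.fit_transform(X))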
Q 39 Write a function to handle missing data using multiple imputation.
Here we use sklearn's SimpleImputer to replace missing values with the column mean (a single-imputation strategy; full multiple imputation would repeat the imputation several times, for example with IterativeImputer). Below is the code:
Python
from sklearn.impute import SimpleImputer
import numpy as np

def impute_missing_data(data):
    imputer = SimpleImputer(strategy='mean')
    return imputer.fit_transform(data)

data = np.array([[1, 2], [np.nan, 3], [7, 6]])
print(impute_missing_data(data))
Q 40 Group a dataset by a column and calculate the rolling average for another column.
Use pandas.groupby and rolling to calculate rolling averages.
Python
import pandas as pd

def rolling_average(df, group_col, target_col, window):
    return df.groupby(group_col)[target_col].rolling(window=window).mean().reset_index()

data = {'Group': ['A', 'A', 'B', 'B'], 'Value': [10, 20, 30, 40]}
df = pd.DataFrame(data)
print(rolling_average(df, 'Group', 'Value', 2))
Output:
  Group  level_1  Value
0     A        0    NaN
1     A        1   15.0
2     B        2    NaN
3     B        3   35.0
Q 42 Find the most common n-grams in a given text dataset.
Use nltk to extract and count n-grams.
Python
from nltk import ngrams
from collections import Counter

def most_common_ngrams(text, n, top_k):
    words = text.split()
    n_grams = list(ngrams(words, n))
    return Counter(n_grams).most_common(top_k)

text = "data science is fun and data science is interesting"
print(most_common_ngrams(text, 2, 2))
Q 43. Create a pivot table from raw transactional data.
Use pandas.pivot_table to summarize data into a pivot table.
Python
import pandas as pd

def create_pivot_table(df, index, columns, values, aggfunc):
    return pd.pivot_table(df, index=index, columns=columns, values=values, aggfunc=aggfunc)

data = {'Category': ['A', 'A', 'B'], 'Type': ['X', 'Y', 'X'], 'Value': [10, 20, 30]}
df = pd.DataFrame(data)
pivot_table = create_pivot_table(df, index='Category', columns='Type', values='Value', aggfunc='sum')
print(pivot_table)
Output:
Type        X     Y
Category
A        10.0  20.0
B        30.0   NaN
Q 44. Given an integer array, find the largest sum of any contiguous subarray within the array. For example, given the array A = [0,-1,-5,-2,3,14] it should return 17 because of [3,14]. Note that if all the elements are negative it should return zero.
This is Kadane's algorithm: keep a running sum, reset it to zero whenever it drops below zero, and track the maximum sum seen so far.
Python
def max_subarray(arr):
    n = len(arr)
    max_sum = 0  # start at zero so an all-negative array returns 0
    curr_sum = 0
    for i in range(n):
        curr_sum += arr[i]
        max_sum = max(max_sum, curr_sum)
        if curr_sum < 0:
            curr_sum = 0
    return max_sum
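A quick usage check with the array from the question (expected output: 17) and an all-negative array (expected output: 0):
Python
print(max_subarray([0, -1, -5, -2, 3, 14]))  # 17, from the subarray [3, 14]
print(max_subarray([-4, -2, -7]))            # 0, since every element is negative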
Q 45. Extract entities (e.g., person names, locations) from a given text using Python libraries.
Use spacy for named entity recognition (NER).
Python
import spacy

def extract_entities(text):
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

print(extract_entities("Barack Obama was born in Hawaii."))
Q 46. Write a Python function to tokenize a sentence into words (split by spaces, removing punctuation).
Tokenization is a basic text processing step. Here we tokenize the input text into words, ignoring punctuation.
Python
import re

def simple_tokenizer(text):
    return re.findall(r'\b\w+\b', text.lower())

text = "Hello, how are you?"
tokens = simple_tokenizer(text)
print(tokens)
Q 47. Implement a simple NER function using regular expressions to extract names, locations, and dates.
We’ll use regular expressions to extract potential named entities such as names (capitalized words), locations, and dates.
Python
import re

def simple_ner(text):
    names = re.findall(r'\b[A-Z][a-z]*\b', text)
    locations = re.findall(r'\b(?:New York|Paris|London)\b', text)
    dates = re.findall(r'\b\d{1,2}/\d{1,2}/\d{4}\b', text)
    return {'names': names, 'locations': locations, 'dates': dates}

text = "John went to New York on 12/12/2020"
entities = simple_ner(text)
print(entities)
Q 48. Write a function to convert a color image into grayscale.
Grayscale conversion can be done by averaging the RGB channels or using a weighted sum.
Python
import cv2

def to_grayscale(image_path):
    image = cv2.imread(image_path)
    return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
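The explanation mentions a weighted sum of the RGB channels; here is a minimal NumPy sketch of that idea using the standard luminance weights (0.299, 0.587, 0.114), with a small synthetic image so it runs without any input file:
Python
import numpy as np

def rgb_to_grayscale(image):
    # image is expected as an (H, W, 3) array with channels in RGB order
    weights = np.array([0.299, 0.587, 0.114])
    return image @ weights  # weighted sum over the channel axis

# Tiny 2x2 synthetic RGB image for illustration
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=float)
print(rgb_to_grayscale(img))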
Q 49. Write a custom loss function that calculates the mean squared error but with a twist: it penalizes the model more if the predicted values are too large.
A custom loss function can be written to not only calculate the error but also include additional conditions, like penalizing large predictions.
Python
import tensorflow as tf

def custom_loss(y_true, y_pred):
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    penalty = tf.reduce_mean(tf.square(y_pred)) * 0.01  # Penalize large predictions
    return mse + penalty
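A custom loss like this is passed to Keras the same way as a built-in one. A minimal sketch, assuming a toy regression model (the layer sizes and input shape are arbitrary choices for illustration):
Python
import tensorflow as tf

def custom_loss(y_true, y_pred):
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    penalty = tf.reduce_mean(tf.square(y_pred)) * 0.01
    return mse + penalty

# Toy regression model; layer sizes and input shape are placeholders
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss=custom_loss)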
Q 50 Implement a custom activation function that combines ReLU and sigmoid. Use it in a neural network model.
You are required to define a function combining both ReLU and Sigmoid functions. You will then use this function within a Keras model for neural network training.
Python
import tensorflow as tf
from tensorflow.keras.layers import Layer

class CustomActivation(Layer):
    def call(self, inputs):
        return tf.nn.relu(inputs) * tf.sigmoid(inputs)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation=CustomActivation(), input_shape=(784,))
])