Probability and Statistics in FE Electrical Exam
Have you ever wondered how electrical engineers design and analyze complex systems with multiple variables? Have you ever considered the importance of probability and statistics in electrical engineering?
If you are pursuing a career in electrical engineering, you must know the significance of probability and statistics in the FE Electrical Exam.
Electrical engineers need to design and analyze systems with high accuracy and reliability. They need to understand how to model and predict the behavior of electrical systems using mathematical tools.
Probability and statistics provide engineers with the necessary methodologies to analyze data, make data-backed decisions, and assess the performance of electrical systems.
In this blog, we will explore the role of probability and statistics in the FE Electrical Exam. We will cover the fundamental concepts, techniques, and formulas you need to know to excel in the exam. We will also discuss tips and tricks to help you prepare effectively for the exam.
So, whether you are an electrical engineering student or an engineer preparing for the FE Electrical Exam, this blog is for you. Join us as we explore the exciting world of probability and statistics in electrical engineering and go through some critical probability and stats problems, probability distribution tables, and probability formulas.
Importance of Probability and Statistics in Electrical Engineering

Probability and statistics play a crucial role in electrical engineering. Here are five examples of how probability and statistics are used in electrical engineering:
- Reliability Analysis – Probability theory models complex electrical systems with multiple variables, and statistical techniques analyze failure data to predict system reliability. For example, the Weibull distribution is commonly used to model the lifetime of electronic devices.
- Signal Processing – Engineers use statistical techniques, such as Fourier and wavelet analysis, to analyze and transform signals. Probability theory models the random nature of noise and interference. Statistical signal processing removes noise and extracts meaningful information from signals.
- Control Systems – Probability theory models the uncertainty and variability of complex systems. Statistical techniques, such as Kalman filtering and stochastic control, design robust and adaptive control systems for changing conditions.
- Communication Systems – Probability theory models the random nature of interference and noise in communication channels. Statistical techniques, such as error-correcting codes and channel equalization, improve the reliability and efficiency of communication systems.
- Optimization – Probability theory models the uncertainty and variability of electrical systems. Statistical techniques like optimization algorithms and Monte Carlo simulations find the best solutions to complex optimization problems.
Probability

Imagine you’re at a carnival and come across a game booth with a spinning wheel. The operator tells you that you win a prize if the wheel lands on a particular color.
But before you play, you want to know your chances of winning. This is where probability comes in.
Probability is a way of measuring how likely an event is to occur. In this case, the event is winning the prize by landing on the correct color.
To calculate the probability, you need to know the possible outcomes and how many will result in a win.
Let’s say there are 8 colors on the wheel, but only 1 is the winning color. This means there is only one way to win and seven ways to lose.
So the probability of winning is:
P(Win) = 1/8
This means your chance of winning the prize is 1 out of 8, or about 12.5%.
Probability can be used in many situations, from gambling to weather forecasting to evaluating and analyzing electrical systems, circuits, and signals. By understanding probability, you can make more informed decisions and better assess risk.
Probability laws and axioms
| Probability Law / Axiom | Description | Example |
| --- | --- | --- |
| Kolmogorov's Axioms | Three axioms define the properties of probability: non-negativity, additivity, and normalization. | The probability of an event must be non-negative (e.g., the probability of rain tomorrow is not negative); the probabilities of mutually exclusive events add up (e.g., the probability of rolling a 2 or a 4 on a die is 1/6 + 1/6 = 1/3); the probability of the entire sample space is 1 (e.g., the probability of rolling any number on a die is 1). |
| Law of Total Probability | Calculates the probability of an event by considering all possible outcomes of a related event. | The probability of getting wet depends on the probability of a rainy day vs. a sunny day. |
| Bayes' Theorem | Calculates the conditional probability of an event based on prior knowledge or information. | The probability of having a disease, given a positive test result, depends on the disease's prevalence and the test's accuracy. |
| Conditional Probability | Calculates the probability of an event given that another event has occurred. | The probability of getting a job offer after an interview depends on the impression made. |
| Independence | Two events are independent if the occurrence of one does not affect the probability of the other. | The probability of getting heads on a coin flip does not depend on whether it rained that day. |
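The Bayes' theorem entry above can be made concrete with a small numeric sketch. The prevalence, sensitivity, and specificity values below are hypothetical, chosen only for illustration:

```python
# Bayes' theorem sketch. All numbers are hypothetical: a disease with
# 1% prevalence, a test with 95% sensitivity and 90% specificity.
prevalence = 0.01    # P(disease)
sensitivity = 0.95   # P(positive | disease)
specificity = 0.90   # P(negative | no disease)

# Law of total probability: P(positive test)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' theorem: P(disease | positive test)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))  # 0.088
```

Even with a fairly accurate test, the low prevalence keeps the posterior probability under 9% here, which is exactly the kind of counterintuitive result Bayes' theorem exposes.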
Probability distributions
Probability distribution refers to how the possible values of a random variable are distributed across the range of the variable. Many probability distributions are used in statistics, each with its own properties and applications.
Normal Distribution
The normal distribution is a continuous probability distribution widely used in statistics. For example, suppose we want to find the probability that a randomly selected individual from a population has a height between 60 and 70 inches, knowing that the mean height is 65 inches and the standard deviation is 2 inches.
In that case, we can use the normal distribution to calculate this probability. We first standardize the values using the formula:
z = (x – μ) / σ
where z is the standard score, x is the value of the random variable, μ is the mean, and σ is the standard deviation. Then, we use a standard normal distribution table or a calculator to find the probability:
P(60 < x < 70) = P((60 – 65) / 2 < z < (70 – 65) / 2)
= P(-2.5 < z < 2.5)
= 0.9938 – 0.0062
= 0.9876
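The table lookup can also be scripted. A minimal sketch using only Python's standard library, with the standard normal CDF expressed through the error function, Φ(z) = (1 + erf(z/√2))/2:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 65, 2          # mean and standard deviation from the example
z_lo = (60 - mu) / sigma   # -2.5
z_hi = (70 - mu) / sigma   #  2.5
prob = normal_cdf(z_hi) - normal_cdf(z_lo)
print(round(prob, 4))  # 0.9876
```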
Poisson Distribution
The Poisson distribution is a discrete probability distribution that is used to model the number of events that occur in a fixed interval of time or space, given the average rate of occurrence. The probability mass function of the Poisson distribution is given by:
P(k; λ) = (e^-λ * λ^k) / k!
where λ is the mean number of events in the given interval, and k is the number of events.
For example, suppose we want to find the probability of exactly 3 traffic accidents in a given hour, given that accidents occur at an average rate of 2 per hour. We can use the Poisson distribution to calculate this probability by substituting the respective values into the formula above:
P(k = 3; λ = 2) = (e^-2 * 2^3) / 3!
= 0.1804
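The same calculation can be scripted directly from the probability mass function:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Poisson PMF: P(k; lambda) = e^-lambda * lambda^k / k!"""
    return exp(-lam) * lam**k / factorial(k)

# Probability of exactly 3 accidents when the average rate is 2 per hour
print(round(poisson_pmf(3, 2), 4))  # 0.1804
```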
Binomial Distribution
The binomial distribution is a discrete probability distribution that is used to model the number of successes in a fixed number of trials, given the probability of success in each trial. The probability mass function of the binomial distribution is given by:
P(k) = (n choose k) * p^k * (1 – p)^(n – k)
where n is the number of trials, p is the probability of success in each trial, and k is the number of successes.
For example, if we want to find the probability that a coin comes up heads exactly 3 times in 5 tosses, given that the probability of heads is 0.5, we can use the binomial distribution to calculate this probability. We use the formula
P(k) = (n choose k) * p^k * (1 – p)^(n – k)
P(k = 3) = (5 choose 3) * 0.5^3 * (1 – 0.5)^(5 – 3)
= 0.3125
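A quick sketch of this calculation, using Python's built-in `math.comb` for the binomial coefficient:

```python
from math import comb

def binomial_pmf(k, n, p):
    """Binomial PMF: P(k) = C(n, k) * p^k * (1 - p)^(n - k)"""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 3 heads in 5 fair coin tosses
print(binomial_pmf(3, 5, 0.5))  # 0.3125
```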
Expected value and variance
Expected value and variance are two crucial concepts in probability theory and statistics. The expected value of a random variable is a measure of the variable's central tendency, while the variance is a measure of its spread or dispersion.
The expected value of a discrete random variable X is defined as:
E(X) = Σ xi * P(xi)
where xi are the possible values of X and P(xi) is the probability of X taking on the value xi. Essentially, we multiply each possible value of X by its probability and sum up the results. The expected value gives us a rough idea of the average value of X that we might expect to observe over many trials.
For example, let’s say we have a fair six-sided die. The possible values of X (the outcome of a single roll) are 1, 2, 3, 4, 5, and 6, each with probability 1/6. The expected value of X is:
E(X) = 1*(1/6) + 2*(1/6) + 3*(1/6) + 4*(1/6) + 5*(1/6) + 6*(1/6)
= 3.5
So we expect the average outcome of a single roll of the die to be 3.5.
The variance of a random variable X is defined as:
Var(X) = E((X – μ)^2)
where μ is the expected value of X. Essentially, we calculate the difference between each value of X and the expected value, square those differences, weight them by their probabilities, and sum up the results. The variance gives us a rough idea of how much the outcomes of X are likely to differ from the expected value.
Using the same example of the fair six-sided die, the variance of X is:
Var(X) = E((X – μ)^2)
= E((X – 3.5)^2)
= (1-3.5)^2*(1/6) + (2-3.5)^2*(1/6) + … + (6-3.5)^2*(1/6)
= 35/12
≈ 2.92
So we expect the outcomes of the die rolls to be scattered around the expected value of 3.5, with an average squared deviation of about 2.92.
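The two calculations above can be reproduced with a short script:

```python
# Expected value and variance of one roll of a fair six-sided die
outcomes = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6

mean = sum(x * p for x, p in zip(outcomes, probs))             # 3.5
var = sum((x - mean)**2 * p for x, p in zip(outcomes, probs))  # 35/12, about 2.92
```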
Let’s move towards a more technical example combining the concept of expected value, variance, and probability distribution to give a clearer insight into these topics.
Suppose we have a collection of balls containing 3 red balls, 4 green balls, and 9 blue balls. We randomly select a ball from the collection and define the random variable X as follows:
- X = 1 if we select a red ball
- X = 0 if we select a green or blue ball
We can represent the probability distribution of X using a probability mass function as follows:
| X | 0 | 1 |
| --- | --- | --- |
| P(X) | 13/16 (0.8125) | 3/16 (0.1875) |
The expected value of X is
E(X) = 0*(13/16) + 1*(3/16) = 3/16
We expect to select a red ball approximately 3/16 or 18.75% of the time.
The variance of X is:
Var(X) = E((X – μ)^2)
= (0 – 3/16)^2*(13/16) + (1 – 3/16)^2*(3/16)
= 39/256
≈ 0.152
The outcomes of selecting balls from this collection are relatively spread out, with an average squared deviation of about 0.152 from the expected value.
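The same computation for the ball example, where X is a 0/1 (Bernoulli) random variable with p = 3/16:

```python
# X = 1 with probability 3/16 (red ball), X = 0 with probability 13/16
p = 3/16

mean = 0 * (13/16) + 1 * p                          # E(X) = 3/16
var = (0 - mean)**2 * (13/16) + (1 - mean)**2 * p   # 39/256, about 0.152

# Shortcut for a Bernoulli variable: Var(X) = p * (1 - p)
assert abs(var - p * (1 - p)) < 1e-12
```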
Joint and marginal distributions
Let's discuss this key probability and statistics concept in an FE Electrical Exam scenario. Suppose we have an electrical system with two components, component A and component B.
We are interested in analyzing the system’s performance and have data on the time to failure for each component. Let’s say we have the following data:
- Component A: 100 hours, 200 hours, 300 hours, 400 hours
- Component B: 50 hours, 100 hours, 150 hours, 200 hours
We can represent this data using a joint probability distribution, which gives the probability of each combination of component A’s time to failure and component B’s time to failure.
For example, the joint probability of component A failing after 200 hours and component B failing after 100 hours is the probability that both events occur together.
We can represent this joint probability distribution using the following probability distribution table:
| A \ B | 50 hours | 100 hours | 150 hours | 200 hours |
| --- | --- | --- | --- | --- |
| 100 hours | 0 | 1/8 | 0 | 0 |
| 200 hours | 1/8 | 1/8 | 1/8 | 0 |
| 300 hours | 0 | 1/8 | 0 | 1/8 |
| 400 hours | 0 | 0 | 1/8 | 1/8 |

Each entry in the table represents the joint probability of component A failing after a certain number of hours and component B failing after a certain number of hours. Note that the entries sum to 1, as they must for any probability distribution.
Now, let’s say we are only interested in the performance of component A, and we want to calculate the probability distribution of its time to failure regardless of the time to failure of component B. This is called the marginal probability distribution of component A.
We can calculate the marginal distribution of component A by adding up the joint probabilities across all possible values of component B. The resulting probability distribution table is:
| Time to failure (hours) | Probability |
| --- | --- |
| 100 | 1/8 |
| 200 | 3/8 |
| 300 | 2/8 |
| 400 | 2/8 |
This table gives the probability of component A failing after a certain number of hours, regardless of the time to failure of component B.
Simply put, the above practice problem shows that joint probability distributions give the probabilities of combinations of events occurring together, while marginal probability distributions give the probabilities of individual events occurring regardless of the other events.
In the electrical engineering field, joint and marginal distributions are often used to analyze the performance of complex systems with multiple components.
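The marginalization step itself is just a sum over the other variable. A minimal sketch using a small hypothetical joint table (not the one above), with keys of the form (A failure time, B failure time) in hours:

```python
# Marginalizing a joint distribution: sum the joint probabilities over
# the other variable. The joint table here is hypothetical.
joint = {
    (100, 50): 0.1, (100, 100): 0.2,
    (200, 50): 0.3, (200, 100): 0.4,
}

marginal_a = {}
for (a, b), p in joint.items():
    marginal_a[a] = marginal_a.get(a, 0.0) + p
# marginal_a is now {100: 0.3, 200: 0.7} (up to floating-point rounding)
```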
Covariance and correlation
Suppose we have two variables, X and Y, that we are interested in analyzing in an electrical circuit. X represents the current flowing through a component, and Y represents the voltage across the same component. Let’s say we have the following data:
| Current (X) | Voltage (Y) |
| --- | --- |
| 1.2 | 4.3 |
| 2.0 | 6.1 |
| 3.1 | 9.2 |
| 4.0 | 12.3 |
We can calculate the covariance between X and Y to determine their relationship. The covariance measures how much the two variables change together. We can calculate the covariance using the following formula:
cov(X,Y) = (1/n) * ∑(X_i – X̄)(Y_i – Ȳ)
where n is the number of data points, X_i is the i-th data point of X, X̄ is the mean of X, Y_i is the i-th data point of Y, and Ȳ is the mean of Y.
Using the data above, we can calculate the mean of X and Y:
X̄ = (1.2 + 2.0 + 3.1 + 4.0)/4 = 2.575
Ȳ = (4.3 + 6.1 + 9.2 + 12.3)/4 = 7.975
Then, we can calculate the covariance using the formula:
cov(X,Y) = (1/4) * [(1.2-2.575)(4.3-7.975) + (2.0-2.575)(6.1-7.975) + (3.1-2.575)(9.2-7.975) + (4.0-2.575)(12.3-7.975)] ≈ 3.234
The positive covariance indicates that X and Y tend to increase or decrease together, meaning that there is a relationship between the current and voltage in the circuit. However, the covariance value depends on the units of X and Y, making it difficult to compare across different systems.
To overcome this issue, we can use correlation, a standardized measure of the relationship between two variables. Correlation ranges between -1 and 1, where -1 indicates a perfect negative relationship, 0 indicates no relationship, and 1 indicates a perfect positive relationship.
We can calculate the correlation between X and Y using the following formula:
corr(X,Y) = cov(X,Y) / (σ_X * σ_Y)
where σ_X and σ_Y are the standard deviations of X and Y, respectively.
Using the data above, we can calculate the standard deviations of X and Y:
σ_X = sqrt((1/n) * ∑(X_i - X̄)^2) ≈ 1.064
σ_Y = sqrt((1/n) * ∑(Y_i - Ȳ)^2) ≈ 3.051
Then, we can calculate the correlation using the formula:
corr(X,Y) = 3.234 / (1.064 * 3.051) ≈ 0.997
The correlation of about 0.997 indicates a nearly perfect positive linear relationship between X and Y: as the current increases, the voltage also increases. (A correlation can never exceed 1 in magnitude, so a value like 1.03 would signal a calculation error.)
Simply put, covariance measures the degree to which two variables change together, while correlation measures the strength and direction of the relationship between them. In electrical engineering, covariance and correlation are often used to analyze the relationships between different variables in complex systems.
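As a sanity check, the population covariance and correlation can be recomputed directly from the four data points:

```python
from math import sqrt

x = [1.2, 2.0, 3.1, 4.0]   # current samples from the table
y = [4.3, 6.1, 9.2, 12.3]  # voltage samples from the table
n = len(x)

mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n
sd_x = sqrt(sum((xi - mean_x)**2 for xi in x) / n)
sd_y = sqrt(sum((yi - mean_y)**2 for yi in y) / n)
corr = cov / (sd_x * sd_y)  # always between -1 and 1
```

Running this gives a correlation just under 1, consistent with a near-linear current-voltage relationship.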
Statistics

Data representation and visualization
In electrical engineering, data representation and visualization play a critical role in understanding complex systems and making informed decisions.
Here are some vital statistical models for data visualization and their examples in electrical engineering:
- Histograms: Histograms are used to visualize the distribution of a variable. In electrical engineering, histograms can represent the distribution of voltages, currents, and other electrical parameters.
- Box plots: Box plots are used to display the distribution of a dataset. In electrical engineering, box plots can be used to compare the performance of different circuits or devices.
- Scatter plots: Scatter plots visualize the relationship between two variables. In electrical engineering, scatter plots can be used to show the correlation between a system’s input and output voltages.
- Heat maps: Heat maps visualize data in a 2D matrix format. In electrical engineering, heat maps can represent the power distribution across a printed circuit board (PCB).
- Time series plots: Time series plots are used to visualize changes in data over time. In electrical engineering, time series plots can be used to analyze the behavior of electrical signals over time.
These statistical models allow electrical engineers to gain insights into complex systems and make data-driven decisions. Data representation and visualization can help identify trends, outliers, and patterns that may not be visible otherwise. This can lead to more efficient designs, improved performance, and reduced costs.
Descriptive statistics
Descriptive statistics is a branch of statistics that deals with summarizing and describing the important features of a dataset. It provides insight into a dataset's main characteristics, such as its central tendency, variability, and distribution.
Consider a case where electrical engineers use descriptive statistics to evaluate the performance of a circuit. Suppose they want to analyze the power dissipation of a circuit. They can use descriptive statistics to summarize the data, such as the mean, standard deviation, and range of the power consumption.
By analyzing these statistics, they can gain insight into the average power consumption, the degree of variability in power consumption, and the distribution of power consumption.
This information can help the experts identify critical areas for improvement, optimize the circuit or system design, and reduce power consumption.
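A short sketch of such a summary, using Python's `statistics` module on hypothetical power-dissipation readings:

```python
import statistics

# Hypothetical power-dissipation readings for a circuit, in watts
power = [4.8, 5.1, 5.0, 5.3, 4.9, 5.2, 5.1, 4.7]

mean = statistics.mean(power)          # average power consumption
stdev = statistics.stdev(power)        # sample standard deviation (variability)
value_range = max(power) - min(power)  # spread of the readings
```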
Hypothesis testing
Hypothesis testing is a statistical approach to determine whether the available data support a hypothesis about a population parameter. It involves formulating a null and an alternative hypothesis, collecting data, and using statistical tests to determine whether the null hypothesis should be rejected in favor of the alternative.
Consider an electrical engineer who wants to determine whether a new electrical system performs better than the existing one. The engineer can formulate:
- A null hypothesis – there is no difference in performance between the two systems.
- An alternative hypothesis – the performance of the two systems differs.
Mathematically, he defines the null hypothesis as:
H0: μA = μB
This means that there is no difference in the mean performance of the two systems.
The alternative hypothesis is:
Ha: μA ≠ μB
This means that there is a difference in the mean performance of the two systems.
Next, he collects data on the performance of the two systems. Let's say he measures the performance of 10 samples of system A and 10 samples of system B, then calculates the mean and standard deviation of each set:
- Sample mean of system A: x̄A
- Sample mean of system B: x̄B
- Sample standard deviation of system A: sA
- Sample standard deviation of system B: sB
The next step is to calculate the test statistic, which depends on the type of hypothesis test being conducted. For instance, for a two-sample t-test assuming equal variances and equal sample sizes (nA = nB = n), the test statistic is:
t = (x̄A - x̄B) / (sp * sqrt(2/n))
Where sp is the pooled standard deviation:
sp = sqrt(((nA-1)*sA^2 + (nB-1)*sB^2) / (nA + nB – 2))
nA and nB are the sample sizes of systems A and B, respectively.
Once he calculates the test statistic, he can determine the p-value, which is the probability of obtaining a test statistic as extreme or more extreme than the observed one, assuming the null hypothesis is true.
If the p-value is below a predetermined significance level (e.g., 0.05), he will reject the null hypothesis in favor of the alternative hypothesis.
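The full procedure can be sketched in a few lines using only the standard library. The two samples below are hypothetical measurements, used only to illustrate the arithmetic; instead of a p-value (which needs the t-distribution CDF), the sketch compares the t statistic against the tabulated critical value:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical performance measurements for the two systems (nA = nB = 10)
a = [98.1, 99.4, 97.8, 100.2, 98.9, 99.1, 98.4, 99.8, 98.6, 99.0]
b = [100.5, 101.2, 99.9, 102.0, 100.8, 101.5, 100.1, 101.9, 100.6, 101.0]

n_a, n_b = len(a), len(b)
s_a, s_b = stdev(a), stdev(b)

# Pooled standard deviation
sp = sqrt(((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / (n_a + n_b - 2))

# Two-sample t statistic (equal sample sizes, n = 10)
t = (mean(a) - mean(b)) / (sp * sqrt(2 / n_a))

# Compare |t| with the critical value for nA + nB - 2 = 18 degrees of
# freedom (about 2.101 at the 0.05 significance level) to decide on H0
reject_h0 = abs(t) > 2.101
```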
Regression analysis
In statistical analysis, regression analysis is a methodology used to find the relationship between a dependent variable and one or more independent variables. In simpler terms, it helps to understand how the change in one variable affects the change in another variable.
In electrical engineering, regression analysis can be used for various purposes, such as predicting the performance of a device or determining the relationship between different parameters in a system.
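A minimal sketch of simple linear (least-squares) regression, fitting y = m·x + c via the closed-form slope and intercept. The input/output voltage data below is hypothetical, chosen so the arithmetic is easy to follow:

```python
# Least-squares fit of y = m*x + c to hypothetical voltage data
x = [1.0, 2.0, 3.0, 4.0, 5.0]   # input voltage
y = [2.1, 3.9, 6.2, 8.0, 9.8]   # measured output voltage
n = len(x)

mean_x, mean_y = sum(x) / n, sum(y) / n

# Closed-form slope: Sxy / Sxx; intercept follows from the means
m = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
     / sum((xi - mean_x)**2 for xi in x))
c = mean_y - m * mean_x
# Fitted line: y ≈ 1.95 x + 0.15
```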
Conclusion
Probability and statistics are among the most important topics in FE Electrical Exam preparation, and you must not overlook them. From analyzing circuit performance to predicting power system behavior, these concepts are central to engineering decision-making and help engineers draw accurate conclusions.
By mastering probability and statistics principles, you can better understand the underlying data and make more informed decisions. In addition, a strong understanding of these concepts is critical for passing the FE Electrical Exam and obtaining your engineering license.
Whether you are just starting your engineering journey or are a seasoned professional, continuing to refine your knowledge of probability and statistics is essential. Fortunately, there are many resources available to help you do so, including online study platforms like Study for FE.
So, whether you are studying for the exam or simply looking to deepen your understanding of these fundamental concepts, invest the time and effort needed to master probability and stats problems. It will undoubtedly pay off in the long run and help you succeed in your engineering career.