Network Security in FE Electrical

This is the first part of our detailed guide series on network security in the FE Electrical exam. The topic is divided into two portions to cover this domain’s key aspects as per the NCEES® FE Electrical guidelines and exam preparation roadmap.

  1. Security Triad and Port Scanning
  2. Vulnerability Testing (Network, Web, and PEN testing)

Let’s explore this in-depth topic to understand why network security in FE Electrical is important, not just from an exam standpoint but also in the evolving IT sector where network security is paramount.

We will also discuss examples and case studies of modern methodologies like Cloud and DevOps to give you detailed insight from a career perspective.

Network Security Triad


The Network Security Triad, commonly known as the CIA Triad, is a model designed to guide information security policies within an organization.

The three components of the CIA Triad are Confidentiality, Integrity, and Availability. Each of these components represents a fundamental objective of security.


Components of Network Security Triad (CIA)


Confidentiality

This aspect of the triad seeks to prevent the unauthorized disclosure of information. Confidentiality is critical for maintaining data privacy and ensuring that information is accessible only to authorized users.

Technological Frameworks for Confidentiality
1. Encryption

Encryption is the process of converting plain text into a scrambled format known as ciphertext, which is unreadable to anyone except those possessing special knowledge, usually referred to as a key.

Symmetric Encryption (AES)

In symmetric-key encryption, the same key is used to encrypt and decrypt the data. When a message is encrypted using a symmetric key, the same key must be used for decryption.

*AES is a widely used symmetric encryption standard. It operates on fixed-size 128-bit data blocks and uses 128-, 192-, or 256-bit keys. Its security is based on the difficulty of discovering the key through brute-force attacks.
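The shared-key property can be illustrated with a short sketch. The toy cipher below is NOT AES and is not secure; it only shows that one key both encrypts and decrypts. In practice, use a vetted AES implementation from a maintained cryptography library.

```python
import hashlib

# Toy stream cipher illustrating the shared-key property of symmetric
# encryption. This is NOT AES -- it only shows that the same key both
# encrypts and decrypts.
def keystream(key: bytes):
    """Derive an endless byte stream from the key by chained hashing."""
    block = hashlib.sha256(key).digest()
    while True:
        yield from block
        block = hashlib.sha256(block).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice restores the data."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

key = b"shared-secret-key"
ciphertext = xor_cipher(key, b"meter reading: 42 kWh")
plaintext = xor_cipher(key, ciphertext)   # the same key decrypts
```

Because XOR is its own inverse, encrypting the ciphertext with the same key recovers the original message, which is exactly the symmetric-key property AES provides (with a far stronger construction).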

Asymmetric Encryption (RSA, ECC)

Asymmetric encryption uses a pair of keys – a public key and a private key. The public key is shared openly and is used to encrypt data. The corresponding private key is kept secret and is used to decrypt the data.

*One of the first public-key cryptosystems, RSA uses a pair of keys, one for encryption and the other for decryption. RSA’s security is based on the practical difficulty of factoring the product of two large prime numbers.

*ECC is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. It offers higher security with smaller key sizes compared to RSA.
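The public/private key split can be demonstrated with the classic textbook RSA example using tiny primes (p = 61, q = 53). Real RSA keys are thousands of bits long; these numbers are for illustration only and offer no security.

```python
# Textbook RSA with tiny primes -- illustration only, no security.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient of n (3120)
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent (modular inverse; Python 3.8+)

message = 42                       # must be an integer smaller than n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert recovered == message
```

Anyone can encrypt with the public pair (e, n), but only the holder of d can decrypt; recovering d from (e, n) requires factoring n, which is infeasible at realistic key sizes.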

2. Access Control

Access control is a method used to regulate who or what can view or use resources in a computing environment. Key frameworks used for this purpose include:

  • Access Control Lists (ACLs): ACLs are a list of permissions attached to an object. They specify which users or system processes are granted access to objects and what operations are allowed on given objects.
  • Role-Based Access Control (RBAC): In RBAC, access decisions are based on individual users’ roles as part of an organization. Users are assigned roles, and those roles are assigned permissions.
  • Identity and Access Management (IAM): IAM systems are designed to identify, authenticate, and authorize individuals or groups of people to access applications, systems, or networks by associating user rights and restrictions with established identities.
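A minimal RBAC sketch, with made-up users, roles, and permissions, might look like this:

```python
# Minimal RBAC sketch: roles map to permissions, users map to roles.
# All names and permissions here are made up for illustration.
ROLE_PERMISSIONS = {
    "engineer": {"read_schematics", "edit_schematics"},
    "operator": {"read_schematics", "view_alarms"},
    "auditor":  {"read_schematics"},
}

USER_ROLES = {
    "alice": {"engineer"},
    "bob":   {"operator", "auditor"},
}

def has_permission(user: str, permission: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

Because permissions attach to roles rather than individuals, revoking or granting access for a whole class of users is a single table change, which is the core operational advantage of RBAC over per-user ACL entries.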

3. SSL/TLS Certificates

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols designed to provide communications security over a computer network. When a server and client communicate, SSL/TLS ensures that the data transmitted is encrypted and secure.

It involves using an SSL/TLS certificate, a small data file that digitally binds a cryptographic key to an organization’s details.

4. Data Masking

This is the process of hiding original data with random characters or data. The primary purpose of data masking is to protect the data that is considered sensitive while providing a functional alternative when actual data is not necessary.

For instance, in a test database, original data can be replaced with fictional but realistic entries.
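As a sketch of that idea, the hypothetical helper below (not a standard API) masks all but the last four digits of a value while preserving its layout:

```python
# Data-masking sketch: replace sensitive digits with 'X' while keeping the
# last four visible so the value stays recognizable in a test database.
# The function name and format are illustrative, not a standard API.
def mask_digits(value: str, visible: int = 4) -> str:
    """Mask every digit except the trailing `visible` ones, keeping layout."""
    out = []
    digits_left = sum(ch.isdigit() for ch in value)
    for ch in value:
        if ch.isdigit():
            out.append(ch if digits_left <= visible else "X")
            digits_left -= 1
        else:
            out.append(ch)   # separators pass through unchanged
    return "".join(out)

masked = mask_digits("4111-1111-1111-1234")   # "XXXX-XXXX-XXXX-1234"
```

The masked value keeps the original format, so application code and reports still behave realistically even though the sensitive digits are gone.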

5. Zero Knowledge Proofs

Zero Knowledge Proofs (ZKP) are a method by which one party (the prover) can demonstrate to another (the verifier) that they possess certain knowledge without revealing what that knowledge is, or conveying any information beyond the fact that they know it.

This is done through a process where the prover repeatedly provides evidence to the verifier so that the verifier becomes convinced of the prover’s knowledge.

Each of these technologies plays a critical role in information security, offering mechanisms to protect data integrity, confidentiality, and availability.

In the context of cryptography and cybersecurity, they are fundamental to developing a secure and resilient digital infrastructure.


Integrity

This component assures that information is trustworthy and accurate. Integrity involves maintaining data’s consistency, accuracy, and trustworthiness over its entire lifecycle.

Technological Frameworks for Integrity
  1. Checksums and Hashing

Checksums and hashing algorithms ensure data integrity by creating a unique digital fingerprint of a file’s contents. If the data changes, even slightly, the resulting hash will change significantly.

Checksum: A checksum is a simple type of redundancy check used to detect data errors. It involves calculating a short fixed-size value (the checksum) from a data block. If the data changes, the checksum will likely change, indicating an error or alteration.

Hashing (SHA-256, MD5, SHA-1): Hash functions like SHA-256 (Secure Hash Algorithm 256-bit), MD5 (Message Digest 5), and SHA-1 (Secure Hash Algorithm 1) process data into a fixed-size string of characters that acts as the data’s fingerprint. SHA-256, for example, generates a 256-bit hash that is, for practical purposes, unique to each input. It is highly sensitive to changes in the input data, making it an effective tool for ensuring data integrity. Note that MD5 and SHA-1 are now considered cryptographically broken and should not be used where collision resistance matters.
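Python’s standard hashlib module makes this sensitivity easy to observe: changing a single character of the input produces a completely different digest.

```python
import hashlib

# SHA-256 fingerprints from the standard library: a one-character change
# in the input yields an entirely different digest.
h1 = hashlib.sha256(b"transfer $100 to account 12345").hexdigest()
h2 = hashlib.sha256(b"transfer $900 to account 12345").hexdigest()

assert len(h1) == 64   # 256 bits rendered as 64 hex characters
assert h1 != h2        # one changed byte alters the entire hash
```

This avalanche behavior is why a stored hash can detect even the smallest alteration to a file or message.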

  2. Digital Signatures

Digital signatures verify the authenticity and integrity of a message, software, or digital document. They confirm that the message or document was not altered after being signed.

They utilize asymmetric cryptography. The signer generates a hash of the message and then encrypts it with their private key. This encrypted hash and the message are sent to the recipient.

Upon receiving, the recipient decrypts the hash with the signer’s public key, generates a hash of the received message, and compares it to the decrypted hash. If they match, it proves the message’s integrity and the sender’s identity.
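The hash-then-encrypt flow above can be sketched with the same tiny textbook RSA keypair (p = 61, q = 53). Real signature schemes use large keys and padding such as RSA-PSS; this is purely illustrative.

```python
import hashlib

# Toy RSA signature: hash the message, then "encrypt" the hash with the
# private key. Tiny primes -- illustration only, no security.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                  # public exponent
d = pow(e, -1, phi)     # private exponent

def sign(message: bytes) -> int:
    """Reduce the SHA-256 digest mod n, then apply the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Recover the digest with the public key and compare to a fresh hash."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"release valve 7")
ok = verify(b"release valve 7", sig)   # True for the untampered message
```

Verification needs only the public pair (e, n), so any recipient can check the signature, while only the private-key holder could have produced it.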

  3. Version Control

Version control systems like Git repositories are used to manage changes to documents, computer programs, large websites, and other collections of information.

Git is a distributed version control system. Each user’s working copy of the code is also a repository that can contain the full history of all changes.

In Git, every time you commit changes, it creates a unique SHA-1 hash. This allows tracking of history and changes over time, including who made changes and what was changed. It’s possible to revert to previous versions if necessary.
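A blob’s identifier can even be reproduced by hand: Git hashes a header of the form `blob <size>\0` followed by the file’s content, which is what `git hash-object` computes.

```python
import hashlib

# Reproducing Git's blob identifier: SHA-1 over "blob <size>\0" plus the
# file content. The result matches `git hash-object <file>`.
def git_blob_sha1(content: bytes) -> str:
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

blob_id = git_blob_sha1(b"hello\n")   # a 40-character hex identifier
```

Because the identifier is derived from the content itself, any change to a tracked file produces a new object ID, which is how Git detects history tampering.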


Availability

This element of the triad ensures that information is available when needed. Maintaining high availability involves ensuring timely and reliable access to data and resources.


Technological Frameworks for Availability
  1. Redundancy

In IT, the term redundancy refers to the duplication of critical components or functions of a system with the intention of increasing its reliability.

Redundancy can be implemented in various forms, such as having multiple hard drives (RAID – Redundant Array of Independent Disks), duplicate network connections, or even entire systems that mirror the operations of primary systems.

The idea is to have backup systems ready to take over without interruption in the event of a failure of the primary system.

In a server environment, for example, redundant servers might run in parallel. If one server fails, the others can seamlessly take over its workload without disrupting the service.

  2. Backup and Disaster Recovery

Backup and disaster recovery are critical components of an organization’s data protection strategy.

  • Backup: This involves making copies of data so that these additional copies may be used to restore the original data in case of loss. Backups can be done in various ways, including full backups, incremental backups, or differential backups. They are typically stored in separate physical or cloud locations.
  • Disaster Recovery: Disaster recovery focuses on the IT infrastructures that support enterprise-grade business functions. This involves having a set of data protection and information security policies, tools, and procedures to enable the recovery or continuation of vital technology infrastructure (clouds or in-house servers) following a natural or human-induced (intended) disaster.
  3. Network and Hardware Resilience

Network and hardware resilience refers to the ability of a computer network or hardware system to continue operating effectively in the event of one or more components failing. This involves designing systems with fail-safes, such as redundant hardware, fault tolerance, and high-availability clusters.

To ensure resilience, load balancers distribute workloads across multiple computing resources, such as servers, network links, or CPUs. This optimizes resource use, maximizes throughput, and prevents any one server from becoming a single point of failure.

*Load balancing is distributing network or application traffic across multiple servers to ensure no single server bears too much demand. By balancing application requests or network load efficiently across multiple servers, load balancing improves responsiveness and increases the availability of applications or websites.

Types of Load Balancing

  • Round Robin: Distributing requests sequentially across the group of servers.
  • Least Connections: Direct traffic to the server with the fewest active connections.
  • IP Hash: Allocating requests based on the IP address of the client.
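Minimal sketches of the three strategies, with made-up server names and connection counts:

```python
import hashlib
import itertools

# Made-up backend pool for illustration.
servers = ["app-1", "app-2", "app-3"]

# Round Robin: hand out servers in a repeating cycle.
_cycle = itertools.cycle(servers)
def round_robin() -> str:
    return next(_cycle)

# Least Connections: pick the server with the fewest active connections.
active_connections = {"app-1": 12, "app-2": 3, "app-3": 7}
def least_connections() -> str:
    return min(active_connections, key=active_connections.get)

# IP Hash: the same client IP always lands on the same server,
# which gives simple session stickiness.
def ip_hash(client_ip: str) -> str:
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Round robin is simplest when servers are interchangeable; least connections adapts to uneven request durations; IP hash trades even distribution for client-to-server affinity.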

If you are looking for a one-stop resource for your FE Electrical exam study, take a look at our FE Electrical Exam Prep resource.

We have helped thousands of FE exam students pass their exam with our proven, on-demand content and live training.

CIA Application – Use-case of Cryptocurrency and Blockchain


Implementing the CIA (Confidentiality, Integrity, and Availability) Triad in the context of cryptocurrency and blockchain involves unique considerations and techniques, as these technologies inherently embody certain aspects of the CIA Triad.

Let’s explore each component with examples:

1. Confidentiality in Cryptocurrency and Blockchain

In blockchain and cryptocurrencies, confidentiality means ensuring that sensitive transaction details remain private and are only accessible to authorized parties.

How it is Implemented

Private Keys: Each user has a private key that secures their wallet and transactions. This key is a secret code allowing them to access and send their cryptocurrency. This key must remain confidential to prevent unauthorized access to the user’s funds.

Privacy Coins: Cryptocurrencies like Monero or Zcash implement advanced cryptographic techniques (like ring signatures or zk-SNARKs) to enhance transaction privacy, thereby ensuring the confidentiality of transaction details.


Example

In Bitcoin, although the transaction details are public on the blockchain, the identity of the people transacting is not. The transactions are tied to a wallet address, not an individual’s identity.

However, more advanced privacy-focused coins like Monero encrypt the sender and receiver’s addresses and the transaction amount.


2. Integrity in Cryptocurrency and Blockchain

Integrity in blockchain and cryptocurrency refers to the assurance that the information (transaction data) is trustworthy and has not been altered or tampered with.

How it is Implemented

Immutability of Blockchain: Once data (a block of transactions) has been added to the blockchain, it cannot be changed without altering all subsequent blocks and the network consensus, which is practically impossible in a large, decentralized network.

Cryptographic Hash Functions: Blockchains use hash functions like SHA-256 in Bitcoin. Each block contains the hash of the previous block, creating a chain. Changing a single block would change its hash, breaking the chain and signaling tampering.
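The chaining idea can be demonstrated with a toy hash chain (made-up transaction data): each block stores the previous block’s hash, so editing any block breaks every link after it.

```python
import hashlib
import json

# Toy hash chain showing why blockchain tampering is detectable.
def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64                                   # genesis placeholder
for tx in ["A pays B 1 BTC", "B pays C 2 BTC"]:   # made-up transactions
    block = {"tx": tx, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

def chain_valid(chain: list) -> bool:
    """Recompute each block's hash and check every link."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

assert chain_valid(chain)
chain[0]["tx"] = "A pays B 100 BTC"   # tamper with history
assert not chain_valid(chain)         # the broken link exposes the edit
```

Real blockchains add consensus (such as proof of work) on top of this structure, but the tamper-evidence itself comes from nothing more than chained hashes.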


Example

In Bitcoin, the blockchain’s integrity is maintained through the proof-of-work consensus mechanism. When a block is added to the chain, changing it would require re-mining it and all subsequent blocks, which is computationally infeasible.

3. Availability in Cryptocurrency and Blockchain

Availability in the context of blockchain and cryptocurrencies involves ensuring that the network is up and running and that users can access and transact their cryptocurrencies when needed.

How it is Implemented

Decentralized Network: Cryptocurrencies operate on a decentralized network of nodes (computers). This means there is no central point of failure, and the system resists attacks that could take down a centralized network.

Redundancy and Distribution: The blockchain is stored on multiple nodes worldwide, ensuring redundancy. If one or more nodes go down, the rest of the network continues to function seamlessly.


Example

In Ethereum, the blockchain is maintained across thousands of nodes. Even if some nodes fail or are taken offline, the network remains operational, ensuring the availability of data and the ongoing processing of transactions.

Simply put, the CIA Triad is woven into the very fabric of cryptocurrency and blockchain technology. Blockchain’s decentralized, transparent, and immutable nature addresses these security aspects by design, making it a robust platform for secure and reliable digital transactions.

*The degree to which each aspect is emphasized can vary between cryptocurrencies and blockchain implementations.

Port Scanning

Port scanning is a technique to identify open ports and services on a network host. It is a critical tool for network administrators for security and network troubleshooting.

Attackers can also use it to identify potential vulnerabilities, often while attempting to evade security assessment tools and detection checks.

How Port Scanning Works

Fundamental Process

Port scanning involves sending packets to specific ports on a host and analyzing the responses to determine the port’s status. 

*Ports are logical communication endpoints. A port number identifies a specific process to which an Internet or other network message is to be forwarded when it arrives at a server.

Response Analysis
  • Scenario 01 – If the port is open, the host will respond with a packet indicating it has received the request.
  • Scenario 02 – If the port is closed, the host will respond with a different type of packet indicating that the connection is refused.
  • Scenario 03 – If there is no response or the response is an error, it can be inferred that the port is filtered and inaccessible, likely by a firewall.
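Scenarios 01 and 02 can be demonstrated with a minimal TCP connect check using Python’s standard socket module: connect_ex() returns 0 when the handshake succeeds (port open) and an error code when the connection is refused or times out. Only scan hosts you are authorized to test.

```python
import socket

# Minimal TCP connect check mapping to the scenarios above. Real scanners
# such as Nmap do this far more efficiently and support many scan types.
def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a full TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0
```

Because this completes the full handshake, it corresponds to the TCP connect (full open) scan described below, and is the most detectable approach.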


Types of Scans

Let’s uncover the details of each port scanning methodology used in the industry:

1. TCP Connect/Full Open Scan

This is the most basic form of TCP scanning. The scanner attempts to establish a complete TCP connection with the target device: it sends a SYN packet (initiating a TCP connection) to the target port, waits for a SYN-ACK packet in response (acknowledging the request), and then sends a final ACK packet to complete the three-way handshake.

  • The scanner sends a SYN packet to the target port.
  • If the port is open, the target responds with a SYN-ACK packet.
  • The scanner completes the handshake by sending an ACK packet.
  • Once the connection is established, the scanner knows the port is open and typically closes the connection.

*This method is easily detectable because it establishes a full connection, and the target system logs the complete handshake process.

2. SYN Scan (Half-Open Scan)

Also known as a “stealth scan,” the SYN scan is more subtle than a full open scan. It sends a SYN packet as if it will open a connection but stops the process as soon as it receives a SYN-ACK response, never completing the handshake.

  • The scanner sends a SYN packet to the target port.
  • If the port is open, the target responds with a SYN-ACK packet.
  • Instead of completing the handshake with an ACK packet, the scanner sends a RST (reset) packet to abort the connection.

*This type of scan is less detectable as it does not establish a full TCP connection, making it harder for intrusion detection systems to spot.

3. UDP Scan

UDP scans identify open UDP ports on the target. Since UDP is a connectionless protocol, the scanner sends UDP packets to various ports and waits for a response.

  • The scanner sends a UDP packet to a target port.
  • If the port is open, there may be no response or a protocol-specific response.
  • If the port is closed, the target usually responds with an ICMP port unreachable error.

*UDP scans can be slow and less reliable than TCP scans. They are also more difficult to detect because a lack of response (which might indicate an open port) can also be due to packet filtering.

4. ACK Scan

This scan is used primarily to understand firewall rules. It doesn’t tell whether a port is open or closed; instead, it provides information about how the firewall filters packets.

  • The scanner sends an ACK packet to the target port.
  • If a firewall statefully filters the port, there may be no response, or the firewall may respond with an RST packet.
  • Based on the response or lack thereof, the scanner can infer the presence and rules of a firewall.

*ACK scans are more about mapping firewall configurations than identifying open ports. They are detectable by firewalls and intrusion detection systems.

Each type of scan serves a different purpose in network security assessment. These scans are integral tools in the toolkit of network administrators and security professionals for assessing and strengthening network security.

Real-World Examples of Network Security

Understanding how these principles translate into real-world applications is crucial. Here’s how we bridge the gap between theory and practice.

While firewalls and encryption algorithms are essential, the FE Electrical Exam focuses on a practical understanding of how electrical engineers implement these security measures, including:

Protecting Critical Infrastructure: Electrical engineers are at the forefront of safeguarding power grids from cyberattacks. Secure network design principles are crucial. This might involve segmenting the grid network into isolated zones, each with its level of access control. Intrusion detection systems continuously monitor network activity for suspicious behavior, acting as an early warning system for potential threats.

Securing Industrial Automation Systems: Modern factories and industrial facilities rely heavily on interconnected automation systems. Network segmentation plays a vital role here. By creating separate networks for critical control systems and less sensitive operations, engineers can limit the potential damage caused by a security breach. Secure communication protocols like HTTPS encrypt data transmission between machines, safeguarding sensitive information and ensuring smooth operation. 

Relatable Examples

Understanding the “why” behind security measures is as important as the “how.” Here’s how relatable examples can enhance learning:

Scenario 1: Imagine a power plant network. A firewall acts as a security checkpoint, filtering incoming and outgoing traffic. It allows authorized communication between control systems and designated monitoring stations while blocking unauthorized access attempts from the internet. This prevents potential hackers from disrupting critical power generation and distribution operations.

Scenario 2: An industrial assembly line utilizes a segmented network. The control system responsible for operating robots and machinery resides on a separate network from the system used for employee workstations. This network segmentation limits the potential impact of a security breach on the production line, ensuring continued operation and preventing product quality issues.



Now you have a solid understanding of network security in FE Electrical. We recommend you complete this study session by reading the second part of this guide to understand how different security assessment tools and vulnerability testing methodologies work.

For an efficient FE Electrical exam preparation, check out our wealth of resources, guides, and FE electrical courses at Study for FE – your first point of contact for all things FE.


Licensed Professional Engineer in Texas (PE), Florida (PE) and Ontario (P. Eng) with consulting experience in design, commissioning and plant engineering for clients in Energy, Mining and Infrastructure.