What is Cryptography? The word literally means “hidden writing”. Cryptography is the field concerned with encoding and decoding information and, more generally, with communications security. The ability to hide information from unauthorized viewers is vital to communications security and modern computing. This chapter will provide an overview of the concepts of modern cryptography and communications security, as well as many common methods and practices of encrypting and decrypting information. The core Security+ exam objectives covered in this chapter are as follows:
- Summarize general cryptography concepts
- Use and apply appropriate cryptographic tools and products
- Explain the core concepts of public key infrastructure
- Implement PKI, certificate management, and associated components
General Cryptography Concepts
We employ cryptography to achieve four main goals: confidentiality, integrity, authentication, and non-repudiation.
Confidentiality is the assurance that only the intended recipient can read the contents of the message. Integrity ensures that the message cannot be changed between the source and the destination. Authentication validates the identity of the sender of any information. Non-repudiation is closely tied to authentication and integrity, in that it establishes that the authenticated person is the only person that could have sent the message and that the message is, in fact, what the sender originally sent.
Confidentiality, integrity, and authentication can be implemented individually, though generally, confidentiality strongly implies integrity. Non-repudiation is the establishment of integrity and authentication together. This section will cover the following topics:
- Symmetric versus asymmetric
- Fundamental differences and encryption methods
- Transport encryption
- Key escrow
- Digital signatures
- Use of proven technologies
- Elliptic curve and quantum cryptography
Symmetric versus Asymmetric
The two general types of encryption in use in computing today are symmetric (or secret key) encryption and asymmetric (or public key) encryption.
In symmetric encryption, the same key is used to both encode and decode a message. This key must be kept secret, so the encryption must be between two, and only two, parties.
In asymmetric encryption, there are two keys in use. One, the public key, is known to everybody. This key can be used to encode information, but it cannot decode information. Anyone can encode information using the public key, but only the holder of the second key, the private key, can decode that information. Each public key has a corresponding private key, which can decrypt data that has been encrypted with the public key. This allows any public user to send data that can only be read by one person, the holder of the private key. The private key must be kept secret.
Fundamental Differences and Encryption Methods
In symmetric encryption, there are two basic classes of ciphers, or encrypting methods. One is to take a block of data of predetermined size and apply the cipher to that data. This is known as a block cipher. The other method is to apply the encryption to the data one digit at a time, known as a stream cipher.
Stream ciphers tend to encrypt and decrypt more quickly, but they also tend to be more vulnerable to unauthorized decryption. For instance, the WEP vulnerability discussed in the wireless section in Chapter 3 is due to the reuse of initialization vectors in the implementation of the RC4 stream cipher.
Block ciphering, on the other hand, tends to be computationally expensive. With a stream cipher, the encrypting host simply combines each unit of plaintext with the corresponding unit of a keystream, typically with an XOR operation. With a block cipher, a mass of data of a specific size is put through a series of complex mathematical processes to output ciphertext that is the same size as the input but not easily related to the input. Because the output of a block cipher is not easily relatable to its input, a block cipher does not carry the same risk of being broken through keystream reuse.
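The keystream idea behind a stream cipher can be sketched in a few lines of Python. This is a toy illustration only, with an assumed seed value standing in for the secret key: the keystream here comes from an ordinary seeded pseudo-random generator, which is not cryptographically secure.

```python
import random

def xor_stream(data: bytes, seed: int) -> bytes:
    """Toy stream cipher: XOR each byte with a pseudo-random keystream.

    NOT secure -- random.Random is not a cryptographic generator.
    Encryption and decryption are the same operation, because
    (x ^ k) ^ k == x.
    """
    keystream = random.Random(seed)
    return bytes(b ^ keystream.randrange(256) for b in data)

ciphertext = xor_stream(b"attack at dawn", seed=42)
plaintext = xor_stream(ciphertext, seed=42)  # same key recovers the message
```

Note that reusing the same seed (key) for two messages lets an eavesdropper XOR the two ciphertexts together and cancel the keystream entirely; this is exactly the reuse weakness that broke WEP.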
Transport encryption is an application of cryptography to protect network communications. In transport encryption, the entirety of the content between two end points is encrypted for the duration of a connection. One common application of transport encryption is a VPN. The traffic between both points is encrypted before it is sent. When transport encryption is in use, the connection that is made is referred to as an encrypted connection.
As mentioned earlier, non-repudiation exists when a message is guaranteed to be from a certain source, and guaranteed to be unchanged from the message the sender originally sent. With a guarantee of non-repudiation, it is impossible for the original sender to deny being the true sender.
One method of guaranteeing the source and integrity of a file is through the use of a digital signature. This is the addition of a cryptographically generated file created with the private key of a certain user. This signature file can only be created by the owner of the private key, and the signature will only verify successfully against a completely unchanged file. This firmly establishes both authentication of the sender and integrity of the sent information.
Hashing is the application of a one-way cryptographic function to a block of data to create a short string of text, or a hash, of a defined length. This hash will not be strictly unique, as the size of the hash will not be as large as all possible inputs. Though a collision (matching hash from differing inputs) is possible, it is highly unlikely.
Because a hash function is one-way, and because many possible inputs map to any individual hash value, it is impossible to determine the original input used to create the hash. Because the chance of the hashes of two different blocks of data matching by chance is so low, a hash can be used to verify the integrity of a file.
Hashing can provide a great deal of confidence that any data received has not been changed in transmission. For example, when a user wishes to download a file and verify its authenticity, the software company can post the file for download, along with a hash generated from the file. The user can download the file and use well-known processes to create a hash for the file. The user can then compare the hash he or she generated to the hash the software company posted. Even if the user acquired the file from a source other than the software company, the hash can assure the user that the file remains unchanged.
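The download-verification scenario above can be sketched with Python's standard hashlib module. The chunked read is there so that even very large downloads can be hashed without loading them into memory; the temporary file simply stands in for a downloaded file.

```python
import hashlib
import os
import tempfile

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks
    so large downloads need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate the downloaded file and the user's verification step.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name
digest = sha256_of_file(path)
os.unlink(path)
```

The user would compare `digest` against the hash the software company posted; any difference means the file was altered in transit.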
Hashing can also be used to store passwords securely and to reduce the risk of a password database being compromised. When authenticating a password against the database, the authentication service will make a hash of the submitted password and compare that hash to the hash stored in the database.
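A minimal sketch of that flow follows, using the standard library. Note that modern practice goes beyond a plain hash: a random salt and a deliberately slow derivation function (PBKDF2 here) are used so that a stolen database resists precomputed and brute-force attacks. The iteration count is an illustrative choice, not a recommendation.

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Derive a salted password hash with PBKDF2 (stdlib).
    The database stores (salt, digest), never the plaintext."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-hash the submitted password and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse")
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through comparison timing.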
The generation of a hash relies only on well-known processes, not any secret cryptographic information. While hashing verifies integrity, it does not authenticate the identity of the sender. MD5 and SHA-1 are common cryptographic hash functions, though MD5 is now considered insecure.
In asymmetric encryption, what happens when the digital copy of a private key is lost? If there is only one copy of the private key, any files that have been encrypted with the public key become permanently irretrievable.
One method that can be employed to protect against the loss of data is the use of a key escrow service. The key escrow service must be a trusted third party. If an unauthorized user gains access to the private key, he or she will be able to read data encrypted with the public key.
For symmetric encryption, the same principles apply. Because symmetric encryption uses the same key to both encode and decode the information, access to the key is required to read any data protected by symmetric encryption. No matter which method is used to encrypt your data, access to the secret key will allow the reading of the data.
Retrieving a key from key escrow will grant access to encrypted data. It is vital that any process to retrieve a key from escrow meet strict security requirements, such as multifactor authentication and multiple person authorization.
Steganography is a form of secret communication that conceals the very existence of a message. Information is often embedded in existing files, or injected into otherwise normal streams of data.
Steganography allows a user to hide a message in plain sight. Information can be hidden in images, sound files, or even imprinted on physical objects. Though steganography does not necessarily rely on encryption, it can allow communication without detection.
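The classic least-significant-bit (LSB) technique for hiding data in an image can be sketched over raw bytes. This is a toy illustration: the byte string here merely stands in for decoded pixel data, and a real tool would work on an actual image format.

```python
def embed(cover: bytes, message: bytes) -> bytes:
    """Hide `message` in the least-significant bits of `cover`.
    Each cover byte carries one message bit, so the cover must be
    at least 8x the message length. Changing only the low bit of a
    pixel value is visually imperceptible."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, then set it to the bit
    return bytes(out)

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of hidden message from the LSBs."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            byte |= (stego[i * 8 + j] & 1) << j
        out.append(byte)
    return bytes(out)

cover = bytes(range(256)) * 2  # stands in for image pixel data
stego = embed(cover, b"hidden")
```

An observer who does not know to look at the low-order bits sees only an ordinary-looking cover file.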
The addition of a digital signature to a file functions similarly to a hash of that file. A digital signature verifies the integrity of a file, and that nothing has been changed since the file was signed. A digital signature also includes an identity authorization component.
Unlike a simple hash function, digital signature creation requires a private key as an input. This creates a special digital signature that can be read, if accompanied by the signed file. The digital signature verification process requires comparing the contents of the digital signature file to the signed file. If the signature and file match, both the origin and the integrity of the file can be assured.
Use of Proven Technologies
Many specific technologies exist to implement the different types of cryptography discussed so far. For symmetric cryptography, we might use 3DES, AES, or Blowfish. For public key cryptography we might use PGP/GPG with RSA.
When implementing cryptography, it is important to use cryptography that has been open to public scrutiny and is still considered secure. New technologies may represent a theoretical improvement in security, but a proven technology with no known practical attacks is a safe bet for strong protection in at least the short term.
When commonly used technologies are found to be insecure, they should be phased out. WEP, SSH-1, and DES have well-known vulnerabilities; though they may deter casual prying eyes, they will not prevent a determined or sophisticated attacker from gaining access to your data.
Elliptic Curve and Quantum Cryptography
Advances in cryptography are constantly being proposed, tested, and refined. Currently, asymmetric encryption takes a large amount of processing power. This is because current public key encryption performs a number of mathematical functions on the product of two large prime numbers.
Elliptic curve cryptography uses mathematically defined curves to generate key pairs. This process is less computationally expensive and may prove an effective encryption method.
Another innovation is the use of quantum cryptography. Quantum Key Distribution is the most commonly known method of using the properties of quantum mechanics to communicate securely. Eavesdropping on quantum communications introduces changes to the communication itself. Quantum Key Distribution relies on this property of quantum mechanics, rather than on the mathematical difficulty of factoring the products of large numbers, to ensure secure key distribution.
These methods provide theoretical advantages in cost of computation and communications security. Given time, they may well grow to become standards in our cryptographic toolkit.
Appropriate Cryptographic Tools and Products
When applying cryptography and cryptographic principles to data, it is vital to understand the tools and products being used. This section will cover everything from wireless to data to connection encryption standards, including the following topics:
- WEP versus WPA/WPA2 and pre-shared key (PSK)
- One-time pads
- Whole disk encryption
- Comparative strengths of algorithms
- Use of algorithms with transport encryption
When it comes to wireless communications, interception is even more trivial than with wired communications. Anyone in range of the transmitters can see the traffic, so encryption is essential.
As wireless technology has matured, so have the encryption standards wireless technology uses. One of the common early wireless encryption technologies is WEP (Wired Equivalent Privacy). WEP uses either a 64- or 128-bit key and an RC4 stream cipher to encrypt communications. Unfortunately, WEP encryption has some important weaknesses, such as reuse of initialization vectors, which can allow an attacker to recover the WEP key or decrypt WEP-encrypted data.
WEP is now considered insecure, and WPA (Wi-Fi Protected Access) is a newer option, which addresses the shortcomings of WEP. WPA implements TKIP (Temporal Key Integrity Protocol), a different method of handling initialization vectors, one that mixes the secret key with the initialization vector, where WEP simply appends them. WPA still uses RC4 but no longer sends the initialization vector in the clear, preventing the most common avenue of attack against WEP.
WPA is fundamentally the same technology as WEP, but it is implemented in a more secure manner. WPA2 is the standard for the current generation of wireless encryption. Rather than relying on the RC4 stream cipher, it makes use of the more secure AES (Advanced Encryption Standard) block cipher. In addition, WPA2 requires the use of CCMP (Counter Mode with Cipher Block Chaining Message Authentication Code Protocol) to ensure the integrity and confidentiality of the data.
Variants of WPA2, such as WPA2 Enterprise and WPA2 with RADIUS, do not require the exchange of a secret key, but instead use RADIUS to authenticate users, while still providing secure point-to-point communication.
MD5 (Message Digest 5) is not an encryption protocol, but a one-way cryptographic hash function. As with any hash function, MD5 hashes are commonly used to ensure data integrity, since the hash of any given data will vary based on even small changes to the data. MD5, however, has been shown to be vulnerable to collision attacks, including chosen-prefix collisions, meaning an attacker can construct two different inputs that produce the same hash. Thus, for applications that rely on hashed data being collision-free, MD5 is no longer sufficient. Because of these collision attacks, MD5 is no longer considered secure, and it has largely been replaced by newer hashing functions, such as SHA and its variants.
SHA (Secure Hash Algorithm) is another hashing algorithm. There are several versions of SHA, each given a different identifier – SHA-0, SHA-1, SHA-2, and the proposed SHA-3. The first version of SHA to gain widespread acceptance was SHA-1, which creates a 160-bit hash for a given block of data. SHA-0 was proposed but not widely implemented due to perceived security flaws. Since the widespread implementation of SHA-1, security flaws have also been discovered in it, and collision attacks have been published. SHA-1 is a former FIPS (Federal Information Processing Standard), recommended for use in U.S. government systems. Though SHA-1 is still in wide use, it should ideally no longer be used for applications that require collision resistance.
SHA-2 comes in four variants, with digest sizes of 224, 256, 384, or 512 bits. The larger output, as well as a slightly more complex method of generating the digest, has kept SHA-2 considered secure. SHA-2 is the preferred current SHA standard. SHA-1 has been superseded by SHA-2 as the current FIPS standard for use in U.S. government systems.
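The four SHA-2 digest sizes can be confirmed directly with Python's hashlib, which exposes each variant by name:

```python
import hashlib

# Map each SHA-2 variant to its digest size in bits.
# digest_size is reported in bytes, so multiply by 8.
sizes = {
    name: hashlib.new(name).digest_size * 8
    for name in ("sha224", "sha256", "sha384", "sha512")
}
```

The larger variants (SHA-384, SHA-512) trade a longer digest for greater collision resistance, which is why the appropriate hash size depends on the application.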
The National Institute of Standards and Technology is reviewing proposed SHA-3 standards. Though they have not yet announced a new algorithm, they are expected to select from a list of finalists before the end of 2012.
RIPEMD (RACE Integrity Primitives Evaluation Message Digest, where RACE stands for Research and Development in Advanced Communication Technologies in Europe) is another hashing function, whose original version outputs a 128-bit digest. The variants of RIPEMD include 160-, 256-, and 320-bit versions. Though many well-known hashing algorithms were developed by the NSA and released to the public for examination after development, RIPEMD was designed as a European Union project. Collisions have been found for the original 128-bit RIPEMD, but not for RIPEMD-160, -256, or -320.
AES (Advanced Encryption Standard) is a block cipher for use in symmetric, or shared secret, encryption. AES is part of the FIPS for symmetric encryption. AES is a 128-bit block symmetric cipher that supports 128-, 192-, or 256-bit keys. It is considered to be strong encryption and is commonly used in such applications as WPA2 to secure wireless communications.
DES (Data Encryption Standard) is a 64-bit block cipher symmetric encryption algorithm that uses a 56-bit key. This key size is insufficient to protect against brute-force attacks from modern computing systems. To work around this small key size, 3DES is now commonly used in applications where DES may have been used previously.
3DES (Triple DES) is simply the application of DES encryption three times to the data to be encrypted. The data can be encrypted with different keys on each pass or multiple times with the same key. Though encrypting with the same key is supported, it is not recommended. Using the same key each time is cryptographically equivalent to using DES. Data that is 3DES encrypted is much more strongly protected than DES-protected data.
HMAC (Hashed Message Authentication Code) is an algorithm that can be used, like a hash, to verify the integrity of the data. It also uses a shared secret key combined with the hash to authenticate the sender of the data. HMAC can be used with any hash or symmetric encryption algorithm, and the strength of the integrity and authentication is based on the strength of the protocols. It is common to implement HMAC with MD5 or SHA, but any hash function can be used.
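Python's standard hmac module implements exactly this construction. The sketch below shows both sides of the exchange; the secret and message values are, of course, illustrative placeholders.

```python
import hashlib
import hmac

secret = b"shared-secret-key"            # known only to sender and receiver
message = b"transfer $100 to account 42"

# Sender computes the tag and transmits (message, tag).
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag over the received message and
# compares in constant time. A match proves both integrity
# (the message is unchanged) and authentication (the sender
# knew the shared secret).
expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
valid = hmac.compare_digest(tag, expected)
```

An attacker who alters the message cannot produce a matching tag without knowing the secret, which is what distinguishes HMAC from a bare hash.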
RSA is an algorithm with a name that directly references its creators – Ron Rivest, Adi Shamir, and Leonard Adleman – who first publicly described the algorithm. It is based on the difficulty of factoring the products of large prime numbers. The RSA algorithm uses two large prime numbers to create a private key, and the product of those two prime numbers to create a public key. In theory, the difficulty of computing the factors of the public key will keep RSA-encrypted messages secure.
Separately, RSA Security markets SecurID, a hardware token that generates one-time passwords; these passwords change after a set amount of time, usually around 60 seconds. The token is a product of the company, distinct from the RSA algorithm itself. In the RSA algorithm, as the size of the key directly affects the difficulty of factoring the key, larger key sizes are more secure.
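The RSA key-generation and encryption math can be demonstrated end to end with deliberately tiny primes. This is an illustration of the arithmetic only: real keys use primes hundreds of digits long, plus padding schemes that this sketch omits.

```python
# Toy RSA with tiny primes -- illustration only.
p, q = 61, 53
n = p * q                     # public modulus (the product to be factored)
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, chosen coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

message = 65                              # plaintext as an integer < n
ciphertext = pow(message, e, n)           # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)         # decrypt with the private key (d, n)
```

With p and q this small, anyone could factor n = 3233 and recompute d; security comes entirely from making that factoring step infeasible at realistic key sizes.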
A one-time pad is a very special type of shared secret cryptographic system. Instead of applying complex mathematical algorithms and a short shared secret to create strings of ciphertext, a one-time pad applies a very simple mathematical operation and a shared secret to the plaintext to encode it. The operation is often a simple modular addition or XOR of each plaintext unit with the corresponding unit of the random shared secret key. What makes a one-time pad special is the key length necessary to encode or decode the information: the shared secret must be at least as long as the data to be transmitted. Furthermore, each one-time pad, as the name implies, can be used only once.
One-time pads are, given random generation procedures, completely cryptographically secure. The data that is transmitted after being encrypted by a one-time pad does not contain any information about the plaintext, other than the maximum length of the transmission. The primary drawback of one-time pads is the necessity of distributing keys equal to the length of any encrypted transmission prior to the transmission.
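A one-time pad is simple enough to implement in a few lines. The sketch below uses XOR as the combining operation and `os.urandom` as the random source; in practice the pad would have to be generated and exchanged securely before any communication takes place.

```python
import os

def otp(data: bytes, pad: bytes) -> bytes:
    """XOR data with a pad of at least equal length.
    With a truly random, secret, never-reused pad this is
    information-theoretically secure. XOR is self-inverse,
    so the same function both encrypts and decrypts."""
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the data")
    return bytes(d ^ p for d, p in zip(data, pad))

pad = os.urandom(32)                  # must be shared secretly in advance
ciphertext = otp(b"meet at midnight", pad)
plaintext = otp(ciphertext, pad)      # applying the same pad decrypts
```

Reusing a pad destroys the guarantee: XORing two ciphertexts made with the same pad cancels the pad and exposes the relationship between the two plaintexts.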
RC4 was invented by the same Ron named in the RSA algorithm, Ron Rivest. It is a stream cipher that is commonly used to protect SSL and WEP traffic. It is commonly misidentified as a cryptographic hashing function. Though RC4 is not necessarily inherently flawed, the first few bytes of its keystream are strongly non-random, and the cipher can be vulnerable when keys are reused or when related keys are used, as in the WEP implementation. RC4 supports a key size between 40 and 2048 bits.
CHAP (Challenge Handshake Authentication Protocol) is an authentication scheme commonly used to establish remote connections and maintain them, while protecting against session hijacking. CHAP uses a 3-way handshake to verify a user’s identity. A shared secret exists between the two parties but is never transmitted. The authentication (or challenging) server sends a challenge message to the requesting client. The client appends the shared secret to the challenge message, hashes it (such as with MD5), and sends a response. The server also calculates the hash of the challenge message and shared secret. If the server’s hash matches, the server sends an acknowledgement to the client, successfully completing their 3-way handshake.
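The challenge-response exchange can be sketched in Python. This is a simplified model of the handshake (real CHAP also hashes a message identifier along with the secret and challenge, and repeats the challenge periodically during the session); the secret value is a placeholder.

```python
import hashlib
import hmac
import os

shared_secret = b"s3cret"  # provisioned on both sides, never transmitted

# 1. Server sends a random challenge to the client.
challenge = os.urandom(16)

# 2. Client appends the secret to the challenge, hashes it,
#    and sends back only the hash.
client_response = hashlib.md5(challenge + shared_secret).digest()

# 3. Server computes the same hash locally and compares.
server_expected = hashlib.md5(challenge + shared_secret).digest()
authenticated = hmac.compare_digest(client_response, server_expected)
```

Because each challenge is random, a captured response is useless for replay: the next authentication attempt will use a different challenge and therefore require a different hash.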
PAP (Password Authentication Protocol) is an inherently insecure method of network authentication. Unlike secure authentication protocols, PAP requires that the password traverse a network in clear text. PAP is the most basic type of network authentication. It contains no inherent security or encryption.
Microsoft developed a user-based network authentication protocol to be compatible with LANMAN, while providing a higher level of security. Microsoft labeled this new backwards compatible technology NTLM (New Technology LAN Manager).
LANMAN is a legacy authentication protocol that limited passwords to 14 characters, converted them to uppercase, and split each password into two seven-character halves, each encrypted separately with DES. This effectively meant that breaking a password only required attacking a very limited set of possible seven-character values.
NTLM improved the situation by allowing passwords of up to 127 characters, rather than seven-character blocks, and by hashing the password with MD4 instead of encoding it with the weaker DES-based scheme.
NTLMv2 further improved on the NTLM model by implementing stronger technologies and a 3-way handshake to transmit authentication data. Using HMAC-MD5 to issue challenges that include changing data, such as a timestamp, domain name, and random data, along with the same MD4 hashing to store passwords in NTLM, makes the authentication process much more resistant to eavesdropping.
Blowfish is a strong symmetric key encryption algorithm designed by Bruce Schneier. It uses a 64-bit block cipher and accepts variable key lengths, from 32 to 448 bits. Blowfish uses 16 rounds of encryption and operates at a speed comparable to DES, with much stronger security and no known attacks that can overcome all 16 rounds of encryption. One interesting fact about the Blowfish cipher is that it has been released into the public domain and is free for anyone to use.
PGP (Pretty Good Privacy, the commercial implementation) and GPG (GNU Privacy Guard, the open source implementation) are some of the most commonly used encryption application suites. PGP and GPG are often referred to collectively as PGP/GPG, and they interoperate with one another. Both work by combining symmetric and asymmetric encryption.
PGP certificates are not generated by a certificate authority, but, rather, are self-generated. A user will commonly generate his or her own key and have other users verify their identity and sign their key, creating a web of trust. The strength of the level of trust the certificates can be given is directly related to the level of trust one gives to the key signers.
PGP can be configured to use any number of different encryption algorithms, but is most commonly used with asymmetric encryption algorithms such as RSA. Because of the web of trust and support for a variety of independent encryption protocols, PGP/GPG can provide authentication, integrity, and encryption for communication between a wide variety of endpoints. PGP can be used to encrypt e-mails, directories, partitions, and many other types of information and communication. Its typical use is for e-mail encryption.
Whole Disk Encryption
Whole disk encryption is the process of encrypting the entire contents of a drive, including the operating system and any temporary files created during the use of the system. Products such as Microsoft’s BitLocker or the open-source TrueCrypt can be used to implement full-disk encryption.
Access to the encrypted drive can be controlled at a hardware level, such as with a TPM (Trusted Platform Module), or through the use of a passphrase, PIN, or external security FOB. If the authentication method is lost, damaged, or unavailable, the contents of the drive will remain unavailable. As with other encryption methods, it is important to maintain a backup of the authentication mechanism, such as in a key escrow service.
Like Blowfish, Twofish is a symmetric block cipher encryption protocol that supports a key size of 128, 192, or 256 bits. Bruce Schneier, the author of Blowfish, worked with a team to develop Twofish as a strong encryption algorithm and candidate for the AES (Advanced Encryption Standard). Twofish encrypts data at a similar computational overhead to AES and is considered strong, though it is not as widely used as Blowfish.
Like Blowfish, Twofish implements 16 rounds of encryption. Theoretical attacks have been found that may make attacks against six of the 16 rounds of encryption computationally feasible. These attacks, however, are not generalizable to the full 16 rounds of encryption. Twofish is still considered cryptographically secure.
Comparative Strengths of Algorithms
When selecting an encryption algorithm, it may be necessary to weigh the computational cost of encrypting and decrypting the data with access to the secret key against the computational cost of breaking the encryption and gaining unauthorized access to the data. Though computation cost may dictate using a smaller key for encryption, it should never dictate using an inherently insecure algorithm.
Despite their generally lower computational cost, insecure technologies should not be used where security is required. For instance, DES is now considered insecure, but 3DES is still in common use. Both are considered weaker than Blowfish (which functions at a speed similar to DES) and much weaker than AES and Twofish.
Similarly, older hash functions such as MD4 and MD5 should not be used because of their known weaknesses. SHA-2 should be implemented, with a hash size suitable for the application. When it comes to wireless encryption, WEP is now trivially simple to defeat. WPA, and especially WPA2, is a stronger option, particularly on modern hardware.
Use of Algorithms with Transport Encryption
Encryption can be used on a per-session basis to protect communication between two end-nodes, even when the intervening networks are untrusted. When encryption is established on such a per-session basis, we refer to it as transport encryption.
Secure Sockets Layer (SSL) is the predecessor to the more current TLS. SSL uses asymmetric encryption for key exchange, and then protects the contents of the session with symmetric encryption.
TLS is the successor to SSL. It is the protocol suite commonly used to secure e-mail, FTP (as FTPS, or FTP-Secure), and web traffic (HTTPS, or HTTP-Secure).
IPSec (Internet Protocol Security) is an application-agnostic method of securing network communications. When IPSec is in use, all packets passed between the secured hosts are encrypted and encapsulated in a new packet to be decrypted by the other host. This technology creates a VPN (Virtual Private Network), which secures communication between any two points. When one of the endpoints of the IPSec connection is not the final destination of the packet, such as a connection to a VPN concentrator, traffic beyond the decryption point travels unprotected, and any hosts there can see the communication as if they were on the same network.
IPSec supports hashing and digital signatures for authentication and data integrity, as well as a collection of encryption algorithms such as DES, 3DES, and AES for confidentiality.
SSH (Secure Shell) is a protocol for protecting data in transit, ensuring both its integrity and confidentiality. SSH is primarily used for creating encrypted terminal sessions, but it can also be used with the associated SSH File Transfer Protocol (SFTP) and SCP (Secure Copy) protocol. Unlike SSL and TLS, SSH does not add security to an existing protocol; rather, it is a new protocol that performs similar tasks in a secure manner.
SSH has undergone numerous revisions. A number of early SSH versions are now considered to provide inadequate security. The current revision of SSH is SSH-2.
HTTPS (Hypertext Transfer Protocol Secure) is a standard HTTP session protected with TLS. HTTPS communications use this protocol suite to authenticate the identity of the remote host, as well as the integrity and confidentiality of the data.
Though legacy applications will refer to SSL-protected HTTP sessions as HTTPS, current implementations of HTTPS should support only TLS.
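Enforcing a TLS-only policy is straightforward in application code. As a sketch, Python's ssl module lets a client context set a floor on the negotiated protocol version, rejecting legacy SSL and early TLS outright; the exact minimum version is a policy choice.

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# so legacy SSL-protected sessions can never be negotiated.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# The default context also enables certificate and hostname checks,
# which authenticate the identity of the remote host.
```

A socket wrapped with this context (for example via `context.wrap_socket`) would fail the handshake against any server offering only SSL 3.0 or TLS 1.0/1.1.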
Core Concepts of Public Key Infrastructure
In any organization, ensuring encryption of data is a primary concern. However, understanding the underlying concepts of that encryption is important when designing and deploying a public-key infrastructure (PKI), as well as when troubleshooting the system and recovering data. This section will cover the following topics:
- Certificate authorities and digital certificates
- Recovery agent
- Public key
- Private key
- Key escrow
- Trust models
Certificate Authorities and Digital Certificates
When employing PKI in a company, it is vital to verify the authenticity of the certificates being used to authenticate owners of public keys in a key pair. This allows secure, trusted communication among discrete parties. Certificate authorities (CAs) are trusted third-party issuers of digital certificates, allowing web browsers and server applications to create secure communications, as well as enabling PKI. Both government entities and large corporations may operate their own CAs. Browsers ship with a store of trusted root CA certificates, and any certificate that chains back to one of these roots is trusted. You can view these root certificates by going to the certificates section of your browser.
Certificate revocation lists (CRLs) are important components of CAs, as they contain public lists of certificates that are no longer valid and thus no longer “trusted”. A certificate may appear on a CRL because it has expired, because its issuing CA cannot be identified, or because the corresponding server key has been compromised and should no longer be trusted.
PKI, or Public-key Infrastructure, is the mechanism used to manage digital certificates. PKI, with respect to this section, involves the use of asymmetric encryption through the employment of public and private key pairs.
Public keys are available for anyone to see and are used for encryption. Private keys are kept secret and are used for decrypting anything that was encrypted with the corresponding public key. The CA, usually a system housed on a server in a corporate network, issues certificates binding public keys to individual users and is responsible for their maintenance, including revocation. Because a higher authority ensures users are bound to the appropriate public keys, these keys can be trusted (assuming the CA itself is trusted).
A recovery agent usually exists in a system only as an account that holds multiple users’ private keys, for retrieval when users lose access to their accounts and cannot decrypt information that was encrypted with their public keys. Attempting to encrypt data using a recovery agent account will merely encrypt the data with the recovery agent’s own certificate, rather than with the certificates maintained within its trust.
As mentioned in the PKI section, a public key is the half of the PKI key pair that is completely public, and it allows anyone to encrypt information using that key. The only way to decrypt that information, given the nature of PKI (asymmetric encryption), is with the corresponding private key. The reverse direction is used for digital signatures: a signature attached to an e-mail is created with the sender’s private key and verified (decrypted) with the sender’s public key, which ensures authenticity and non-repudiation of the e-mail (the sender proves, and cannot later deny, his or her identity).
The private key is the second key in the PKI key-pair and is used to decrypt public-key encrypted information. This key must be kept secret and should usually be backed up (copied offsite) or maintained in a recovery agent account. Two examples of private key usage are the encryption of EFS data and the encryption of a digital signature or e-mail hash in a digitally signed e-mail.
The Registration Authority (RA) is the second major authority next to the CA. The RA ensures public keys are attached to user accounts appropriately through a process that includes user registration and verification.
Key escrow is closely related to recovery agents in that backed-up copies of private keys may be held in some sort of account, such as an Active Directory account, to which only administrators have access.
The main trust model is the one widely used: the CA issuing certificates is trusted by all parties involved in the PKI process – the recipient of the certificate and those who must rely on the certificate’s authenticity. This use of a trusted CA is very common. There are approximately 36 to 37 trusted root CAs, and all certificates eventually trace their origins back to these, regardless of whether they are government or commercial. Within a corporation, an enterprise CA will commonly be used to issue certificates to users for internal server and remote access.