Learn about cryptographic algorithms interactively
An interactive tool for exploring how modern cryptographic algorithms work. Experiment with real cryptographic operations, compare algorithms, and build intuition for the building blocks that secure the internet.
Cryptographic algorithms are used to construct security protocols that provide:
Only authorized parties can read the data. Achieved through encryption (symmetric and asymmetric).
Data has not been altered or corrupted. Achieved through hashing and MACs (HMAC, authenticated encryption).
The data and communicating parties are genuine. Achieved through digital signatures and certificates (PKI).
The sender cannot deny having sent the data. Achieved through digital signatures — only the private key holder can sign.
Each tab focuses on a different category of cryptographic operation:
Generate cryptographic keys for various algorithms. Understand security levels, entropy, and key size equivalence across algorithm families.
Encrypt and decrypt data with a shared secret key. Compare AES-GCM, AES-CTR, AES-CBC, and ChaCha20-Poly1305.
Encrypt with a public key, decrypt with a private key. Explore RSA-OAEP and X25519 + ChaCha20-Poly1305.
Compute cryptographic hashes and HMACs. See the avalanche effect, compare SHA-256, SHA-512, and BLAKE2b.
Understand why password hashing is deliberately slow. Compare PBKDF2, bcrypt, scrypt, and Argon2id with adjustable parameters.
Sign messages and verify signatures. Explore RSA-PSS, ECDSA, and Ed25519. See how tampering is detected.
Walk through a Diffie-Hellman key exchange step-by-step with Alice and Bob. See how a shared secret is established over an insecure channel.
Simulate a certificate authority: create a root CA, issue certificates, verify trust chains, and revoke certificates.
Simulate a TLS 1.3 handshake — see how key exchange, signatures, hashing, and symmetric encryption combine to secure every HTTPS connection.
Explore X3DH key agreement and the Double Ratchet — how Signal, WhatsApp, and others achieve forward secrecy for every message.
See how ECDSA signatures, double SHA-256, and Merkle trees secure transactions, addresses, and proof-of-work mining.
Understand the quantum threat, NIST's new ML-KEM and ML-DSA standards, lattice-based cryptography, and the transition timeline.
Split secrets into shares with a threshold — any k shares reconstruct the secret, but k-1 reveal nothing. Used in Vault/OpenBao unseal.
The bridge between raw key material and usable keys. Extract entropy, expand with domain separation.
Generate the same one-time passwords as your authenticator app. Built entirely on HMAC-SHA1.
Create, decode, and verify JWTs. Understand the header.payload.signature structure and common attacks.
Build hash trees interactively. See how changing one leaf changes the root. Used in Git, Bitcoin, and CT logs.
Prove you know a secret without revealing it. Hash-based commitment scheme with the Ali Baba cave analogy.
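As a taste of how compact these primitives are, the behavior described in the Merkle tree tab above (change one leaf, the root changes) can be sketched in a few lines of Python; this is an illustrative stand-in, not the workbench's own code:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash pairs of nodes upward until a single root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd levels (as Bitcoin does)
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

leaves = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(leaves)
tampered = merkle_root([b"tx1", b"tx2", b"txX", b"tx4"])
print(root.hex() != tampered.hex())  # True: one changed leaf changes the root
```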
Cryptography is among the oldest information sciences — and one of the most consequential.
The Caesar cipher (c. 50 BC) shifted each letter by a fixed amount — one of the first known substitution ciphers. The Vigenère cipher (1553) used a repeating keyword to vary the shift, resisting simple frequency analysis for centuries.
These ciphers relied on the algorithm being secret. Modern cryptography inverts this: the algorithm is public, and only the key is secret (Kerckhoffs's principle, 1883).
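Both classical schemes are simple enough to sketch in a few lines. A toy Python rendering, for illustration only (both ciphers are trivially breakable today):

```python
# Toy historical ciphers: illustrative only, trivially breakable.

def caesar(text: str, shift: int) -> str:
    """Shift each letter by a fixed amount (classic Caesar cipher)."""
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A")) if c.isalpha() else c
        for c in text.upper()
    )

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter by an amount taken from a repeating keyword."""
    out, key = [], key.upper()
    for i, c in enumerate(text.upper()):
        k = ord(key[i % len(key)]) - ord("A")
        out.append(chr((ord(c) - ord("A") + (-k if decrypt else k)) % 26 + ord("A")))
    return "".join(out)

print(caesar("ATTACKATDAWN", 3))          # DWWDFNDWGDZQ
print(vigenere("ATTACKATDAWN", "LEMON"))  # LXFOPVEFRNHR
```

The repeating keyword is also the Vigenère cipher's weakness: once the key length is guessed, each position reduces to an independent Caesar cipher.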
The German Enigma machine used rotors and plugboards to create polyalphabetic substitution ciphers with astronomical key spaces. Breaking it at Bletchley Park — by Alan Turing, Gordon Welchman, and others — shortened the war by an estimated two years and laid the foundations of computer science.
The lesson: even seemingly unbreakable systems fall to mathematical insight combined with computational power.
1976 — Diffie-Hellman: Whitfield Diffie and Martin Hellman published "New Directions in Cryptography", introducing public-key cryptography and key exchange — the foundation of the Key Exchange tab.
1977 — RSA: Rivest, Shamir, and Adleman created the first practical public-key encryption system, still widely used today.
2001 — AES: After a public competition, NIST selected Rijndael as the Advanced Encryption Standard, replacing DES. AES remains the dominant symmetric cipher.
2005-present: Elliptic curve cryptography (ECC) matured, offering equivalent security with smaller keys. Daniel Bernstein's Curve25519 (2005) and Ed25519 (2011) became preferred for their simplicity and resistance to implementation errors. TLS 1.3 (2018) mandated forward secrecy and removed legacy algorithms.
Modern cryptography secures nearly every digital interaction:
The next frontier is post-quantum cryptography — new algorithms (like ML-KEM / Kyber, ML-DSA / Dilithium) designed to resist quantum computers running Shor's algorithm. NIST finalized the first PQC standards in 2024.
Most cryptographic vulnerabilities come from misuse of correct algorithms, not from breaking the math:
- Weak randomness: always generate keys and nonces with crypto.getRandomValues() or an equivalent CSPRNG.
- Disabled certificate verification: turning off TLS certificate checks (e.g., verify=False) defeats the entire purpose of TLS.

A common misconception: encoding (Base64, hex, URL encoding) transforms data into a different representation but provides zero confidentiality. Anyone can decode it — no key is needed.
| | Encoding | Encryption |
|---|---|---|
| Purpose | Format conversion | Confidentiality |
| Key required? | No | Yes |
| Reversible by anyone? | Yes | No (key holder only) |
| Example | btoa("secret") = "c2VjcmV0" | AES-GCM(key, "secret") = random-looking bytes |
Storing passwords as Base64 (CWE-261) or transmitting secrets as hex in URLs provides no protection. If you can decode it without a key, so can an attacker.
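A quick Python sketch of why Base64 offers no protection: the transformation is fully reversible with no key involved.

```python
import base64

token = base64.b64encode(b"secret").decode()
print(token)                       # c2VjcmV0

# Anyone can reverse it; no key is involved:
print(base64.b64decode(token))     # b'secret'
```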
Cryptographic keys must be generated from a cryptographically secure pseudorandom number generator (CSPRNG). Browser environments use crypto.getRandomValues(), which draws entropy from the OS kernel.
Keys are the foundation of every operation in this workbench. Generate a key here, then use it in the other tabs to encrypt, sign, or exchange data.
Different algorithm families need different key sizes to achieve the same security level. "128-bit security" means an attacker needs ~2^128 operations to break it.
| Security | Symmetric | RSA | ECC |
|---|---|---|---|
| 80 bits | — | 1024 | 160 |
| 112 bits | 3DES (112) | 2048 | 224 |
| 128 bits | AES-128 | 3072 | 256 (P-256) |
| 192 bits | AES-192 | 7680 | 384 (P-384) |
| 256 bits | AES-256 | 15360 | 521 (P-521) |
Source: keylength.com (NIST recommendations)
Entropy measures unpredictability. A 256-bit key generated from a CSPRNG has 256 bits of entropy. A key derived from the password "password123" has far less — perhaps 20 bits.
Never use Math.random() for cryptography — it is not cryptographically secure. Always use crypto.getRandomValues() or libsodium's randombytes_buf().
Common mistake: Using a timestamp or counter as a "random" value. These are predictable and provide zero cryptographic security.
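In Python, the analogue of drawing key material from the OS CSPRNG (what crypto.getRandomValues() does in the browser) is the secrets module; a minimal sketch:

```python
import secrets

# Draw 256 bits of key material from the OS CSPRNG.
key = secrets.token_bytes(32)
print(len(key) * 8, "bits")  # 256 bits

# By contrast, random.random() (like Math.random() in JS) is seeded
# deterministically and its outputs are predictable: never use it for keys.
```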
Keys generated in this demo exist only in browser memory and are lost on page refresh. In production:
Keys must be managed through their entire lifecycle (based on NIST SP 800-57):
Cross-cutting concerns that apply throughout: Access Control (restrict who can use or manage keys based on roles), Audit Logging (track all key usage for compliance and forensics).
Key wrapping encrypts keys with other keys, creating a hierarchy:
This separation means compromising the database (where wrapped DEKs are stored) doesn't expose the keys — the attacker also needs the KEK from the HSM.
Key isolation — keeping cryptographic operations separate from application logic:
Symmetric encryption uses the same secret key to both encrypt and decrypt data. It is fast, efficient, and the workhorse of modern cryptography — used to protect data at rest and in transit.
Operate on fixed-length chunks (blocks) of data, typically 128 bits (16 bytes).
Block ciphers require a mode of operation (ECB, CBC, CTR, GCM) to handle data larger than one block. The mode determines how blocks are chained together — see "Modes of Operation" below.
Operate on data one bit (or byte) at a time by XORing with a pseudorandom keystream.
Note: AES in CTR or GCM mode effectively turns the block cipher into a stream cipher by encrypting sequential counter values to generate a keystream.
GCM (Galois/Counter Mode) is an authenticated mode: it encrypts data and produces an authentication tag that detects tampering. This is the recommended default.
CTR (Counter Mode) provides confidentiality only — no integrity check. An attacker can flip bits in the ciphertext without detection.
CBC (Cipher Block Chaining) is a legacy mode vulnerable to padding oracle attacks if not combined with a MAC.
ECB (Electronic Codebook) encrypts each block independently with the same key. Identical plaintext blocks produce identical ciphertext blocks, leaking patterns:
CBC fixes this by XORing each plaintext block with the previous ciphertext block before encryption. The IV randomizes the first block:
CBC decryption reverses the process — each decrypted block is XORed with the previous ciphertext block:
Rule of thumb: Always prefer authenticated encryption (GCM or ChaCha20-Poly1305). CBC requires a separate MAC (e.g., HMAC) to provide integrity, and is vulnerable to padding oracle attacks if implemented incorrectly.
An Initialization Vector (IV) or nonce ensures that encrypting the same plaintext with the same key produces different ciphertext each time. Reusing an IV with the same key can catastrophically break security:
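A toy sketch of the failure mode with a stream-style cipher: the keystream here is simulated with SHA-256 (not a real cipher), but the XOR structure, and therefore the leak, is the same.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Simulated keystream: hash key||nonce||counter. Toy only."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 32, b"n" * 12
p1, p2 = b"attack at dawn!!", b"retreat at dusk!"
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))  # SAME nonce reused

# The keystream cancels out: c1 XOR c2 == p1 XOR p2,
# leaking the XOR of the two plaintexts without the key.
print(xor(c1, c2) == xor(p1, p2))  # True
```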
AES operates on fixed 128-bit (16-byte) blocks. In GCM mode:
The result is ciphertext (same length as plaintext) + a 16-byte authentication tag. The tag is checked during decryption — if it doesn't match, decryption is rejected entirely.
Key sizes: AES-128 uses 10 rounds, AES-192 uses 12 rounds, AES-256 uses 14 rounds. The performance difference is minimal (~15%), so AES-256 is commonly preferred for its larger security margin.
Paste PEM-formatted keys (from OpenSSL, etc.). For RSA, paste the public key to encrypt and private key to decrypt.
Asymmetric (public-key) encryption uses a pair of keys: a public key anyone can use to encrypt, and a private key only the owner can use to decrypt. This solves the key distribution problem — you can publish your public key openly.
RSA can only encrypt messages shorter than its key size (e.g., ~190 bytes for RSA-2048 with OAEP + SHA-256). For larger data, real-world systems use hybrid encryption:
Hybrid Encryption:
Hybrid Decryption:
TLS, PGP, and Signal all use this pattern. X25519 + ChaCha20-Poly1305 (libsodium's "box" construction) does this automatically.
In practice, a secure message from Alice to Bob combines hashing, signing, symmetric encryption, and asymmetric encryption together:
Encrypt + Sign (Alice sends to Bob):
Alice generates a random symmetric key, encrypts the plaintext, encrypts the symmetric key with Bob's public key, and signs the hash of the plaintext with her private key. She sends the ciphertext, encrypted key, and signature to Bob.
Decrypt + Verify (Bob receives from Alice):
Bob uses his private key to recover the symmetric key, decrypts the ciphertext, hashes the recovered plaintext, and verifies the signature using Alice's public key. If the hashes match, both authenticity and integrity are confirmed.
OAEP (Optimal Asymmetric Encryption Padding) adds randomized padding before RSA encryption. Without it, RSA is deterministic — the same plaintext always produces the same ciphertext, enabling chosen-plaintext attacks.
PKCS#1 v1.5 padding (the older scheme) is vulnerable to Bleichenbacher's attack. Always use OAEP.
| | RSA-2048 | X25519 |
|---|---|---|
| Security level | ~112 bits | ~128 bits |
| Public key size | 256 bytes | 32 bytes |
| Key generation | Slow (primes) | Fast |
| Encryption speed | Moderate | Fast |
A hash function maps arbitrary-length input to a fixed-length output (the "digest"). It is one-way (cannot be reversed), deterministic (same input = same output), and collision-resistant (hard to find two inputs with the same hash).
Preimage resistance: Given a hash value h, it should be computationally infeasible to find any input x such that Hash(x) = h. An ideal 256-bit hash has 256-bit security against preimage attacks.
Second preimage resistance: Given an input x1, it should be infeasible to find a different input x2 such that Hash(x1) = Hash(x2). This ensures you can't substitute a different document with the same hash.
Collision resistance: It should be infeasible to find any two distinct inputs that hash to the same value. Due to the birthday problem, a 256-bit hash offers only ~128-bit collision security — a collision becomes likely (>50%) after ~2^128 hashes, not 2^256.
The birthday bound is why hash output size matters: SHA-256 (256 bits) provides ~128-bit collision security, SHA-512 provides ~256-bit. The pigeonhole principle guarantees collisions must exist (infinite inputs, finite outputs), but finding one should be computationally impractical.
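The birthday effect is easy to observe on a deliberately truncated hash. This sketch searches for a collision on the first 24 bits of SHA-256; the birthday bound predicts one after roughly 2^12 = 4096 attempts, not 2^24:

```python
import hashlib

def h24(data: bytes) -> bytes:
    """SHA-256 truncated to 24 bits, to make collisions findable."""
    return hashlib.sha256(data).digest()[:3]

seen = {}
i = 0
while True:
    d = h24(i.to_bytes(8, "big"))
    if d in seen:
        break  # two different inputs, same truncated digest
    seen[d] = i
    i += 1

print(f"collision after {i} hashes: inputs {seen[d]} and {i}")
```

Running it typically reports a collision after a few thousand hashes, in line with the square-root birthday bound.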
Restricted preimage space: Note that preimage resistance assumes the input space is large. An 8-character password from 70 possible characters has only ~576 trillion possibilities (~49 bits of entropy) — easily brute-forced even at SHA-256's slower speeds. This is why password hashing uses slow algorithms, not fast hashes.
Changing even a single bit of the input should change roughly half the bits of the output. This property makes it impossible to learn anything about the input from the hash.
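The avalanche effect is easy to measure. This sketch flips a single input bit and counts how many of SHA-256's 256 output bits change (on average, about half):

```python
import hashlib

def as_int(b: bytes) -> int:
    return int.from_bytes(b, "big")

a = b"The quick brown fox"
b_ = bytearray(a)
b_[0] ^= 0x01  # flip one input bit

ha = hashlib.sha256(a).digest()
hb = hashlib.sha256(bytes(b_)).digest()
diff = bin(as_int(ha) ^ as_int(hb)).count("1")
print(f"{diff} of 256 output bits changed")  # ~128 on average
```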
Plain hash: anyone can compute. Salted hash: prevents rainbow tables. HMAC: requires a secret key, proving both integrity and authenticity.
Hash (e.g., SHA-256): Anyone can compute it. Used for integrity checks, fingerprints, and content addressing.
HMAC (Hash-based Message Authentication Code): Requires a secret key. Proves both integrity and authenticity — only someone with the key can produce or verify the tag.
Use cases: API request signing (HMAC-SHA256), JWT tokens, webhook verification (e.g., GitHub, Stripe).
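A minimal sketch of HMAC verification in the style of webhook signing; the secret and payload here are hypothetical, not any specific provider's format:

```python
import hmac, hashlib

SECRET = b"shared-webhook-secret"  # hypothetical shared key

def sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(sign(payload), tag)

body = b'{"event": "push"}'
tag = sign(body)
print(verify(body, tag))                   # True
print(verify(b'{"event": "hack"}', tag))   # False: payload was tampered
```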
In 2017, Google's SHAttered project demonstrated the first practical SHA-1 collision — two different PDFs with the same SHA-1 hash. Generating the collision cost ~$110,000 in compute, and the cost has only dropped since.
SHA-1 should not be used for security purposes. It remains available here for educational comparison only.
SHA-256 uses the Merkle-Damgård construction:
SHA-512 works identically but uses 1024-bit blocks, 80 rounds, and 64-bit words — producing a 512-bit digest. On 64-bit CPUs, SHA-512 can actually be faster than SHA-256 because it uses native 64-bit arithmetic.
Password hashing is deliberately slow. Unlike regular hashing (fast by design), password hashing functions are tuned to take hundreds of milliseconds per hash — making brute-force attacks against stolen password databases impractical.
A salt is random data mixed into the hash input. Without it, identical passwords produce identical hashes, enabling rainbow table attacks — precomputed lookup tables of common password hashes.
With a unique salt per user, an attacker must brute-force each password individually, even if two users chose the same password.
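A sketch of the effect of salting. The single fast SHA-256 here is for illustration only; real password storage uses a slow KDF (Argon2id, bcrypt, PBKDF2):

```python
import hashlib, secrets

def hash_password(password: str, salt: bytes) -> bytes:
    # Illustration of salting only; a real system would use a slow KDF.
    return hashlib.sha256(salt + password.encode()).digest()

salt_a, salt_b = secrets.token_bytes(16), secrets.token_bytes(16)
h_a = hash_password("hunter2", salt_a)
h_b = hash_password("hunter2", salt_b)

# Same password, different salts, unrelated hashes: a rainbow table
# precomputed without the salt is useless.
print(h_a != h_b)  # True
```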
PBKDF2 is CPU-hard only — it can be massively parallelized on GPUs (thousands of cores). An attacker with a GPU cluster can test billions of passwords per second.
scrypt and Argon2id are memory-hard: each hash requires a large block of RAM (64MB-1GB). GPUs have limited memory per core, so parallelism is constrained by memory bandwidth, not compute.
Recommendation: Use Argon2id if available, PBKDF2 with 600,000+ iterations as a fallback (OWASP 2023).
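PBKDF2 at the OWASP-recommended iteration count is available in the Python standard library; a sketch showing the deliberate slowness (password and salt are illustrative):

```python
import hashlib, os, time

salt = os.urandom(16)
start = time.perf_counter()
dk = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 600_000)
elapsed = time.perf_counter() - start

# Hundreds of milliseconds per guess is the point: it barely affects a
# legitimate login but cripples offline brute force.
print(len(dk), "bytes derived in", round(elapsed, 2), "s")
```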
Argon2id is a hybrid of two modes:
Argon2id combines both: the first half of each pass uses Argon2i (side-channel safe), the second half uses Argon2d (GPU-resistant). This gives the best of both worlds.
Parameters explained:
Real-world hashcat benchmarks on 2x NVIDIA RTX 3090 GPUs show why algorithm choice matters:
| Algorithm | Speed | 8-char password |
|---|---|---|
| MD5 | 138 GH/s | ~7 hours |
| NTLM | 249 GH/s | ~4 hours |
| SHA-256 | 18.7 GH/s | ~2 days |
| SHA-512 | 5.5 GH/s | ~1 week |
| WPA2 (PBKDF2, 4096 iter) | 2.2 MH/s | ~48 years |
| bcrypt (cost 10) | ~5 kH/s | ~21,000 years |
| Argon2id | ~1 kH/s | ~105,000 years |
Times assume an 8-character random password drawn from 95 printable ASCII characters (95^8 ≈ 6.6 × 10^15 combinations, searching 50% of the keyspace on average). An RTX 4090 is roughly 2x faster. An attacker with a GPU cluster can try billions of fast hashes per second — but only thousands of bcrypt/Argon2id hashes per second. This is why password-specific algorithms exist.
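The keyspace arithmetic behind these estimates can be checked directly:

```python
import math

keyspace = 95 ** 8                     # 8 characters, 95 printable ASCII choices each
print(f"{keyspace:.3e} combinations")  # ~6.6e15
print(round(math.log2(keyspace), 1), "bits of entropy")  # ~52.6

# At ~1 kH/s (Argon2id) even this modest keyspace takes millennia to search;
# at ~138 GH/s (MD5) it falls in hours.
```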
A defense-in-depth approach to password storage:
Step 3 (encrypting the hash) adds a layer of defense: even if an attacker steals the password hashes (via SQL injection for example), they still need the encryption key (stored separately, e.g., in an HSM or environment variable) to attempt cracking.
Do not:
- Roll your own scheme like sha256(password + salt) — use a proper KDF

Consider: using a trusted Identity Provider (SAML2, OpenID Connect SSO) instead of storing credential hashes directly. This delegates password management to a system designed for it.
Paste PEM-formatted keys. Private key is needed to sign, public key to verify.
A digital signature proves three things: authentication (who sent it), integrity (it wasn't altered), and non-repudiation (the signer can't deny signing). Only the private key holder can sign, but anyone with the public key can verify.
| | Encryption | Signing |
|---|---|---|
| Purpose | Confidentiality | Authenticity |
| Encrypt/Sign with | Public key | Private key |
| Decrypt/Verify with | Private key | Public key |
| Message visible? | No (encrypted) | Yes (plaintext + signature) |
A signature is bound to the exact message content. Changing even a single character makes the signature invalid.
Try it: Use the main panel to sign a message, then change one character in the message text and click "Verify Signature" — it will fail.
Ed25519 is deterministic: signing the same message with the same key always produces the same signature. This eliminates an entire class of bugs — a bad random number generator during ECDSA signing leaked Sony's PS3 private key in 2010.
RSA-PSS and ECDSA are randomized: each signature differs even for the same message. This is safe when the RNG works correctly, but fragile if it doesn't.
RSA signing is the "reverse" of encryption. Given key pair (e, d, N):
PSS padding (Probabilistic Signature Scheme) adds a random salt before signing. This means the same message produces different signatures each time, which provides a stronger security proof than the older PKCS#1 v1.5 scheme. Real RSA implementations require a padding scheme (OAEP for encryption, PSS for signatures) to be secure (CWE-780).
ECDSA works differently: it uses a random nonce k to compute a point on the elliptic curve, then derives the signature (r, s) from that point and the private key. If k is ever reused or predictable, the private key can be recovered — this is exactly what happened in the 2010 PS3 key leak. Ed25519 avoids this by deriving the nonce deterministically from the private key and message.
ECDSA and EdDSA provide digital signatures using elliptic curves.
ECDSA signing:
- Compute h = SHA-256(plaintext)
- Choose a random nonce k and compute the curve point R = k * G; take r = R.x mod n
- Compute s = k^-1 * (h + r * privateKey) (mod n)
- The signature is the pair (r, s)

ECDSA verification:
- Compute h = SHA-256(plaintext)
- Compute s1 = s^-1 (mod n)
- Compute R' = (h * s1) * G + (r * s1) * publicKey
- Accept if R'.x mod n equals r

Use case: git commit -S creates GPG/SSH-signed commits.

Key exchange (or key agreement) lets two parties establish a shared secret over an insecure channel, without ever transmitting the secret itself. This is the foundation of secure communication on the internet.
The mathematical structure (discrete logarithm or elliptic curve) ensures that computing the shared secret from two public keys alone is computationally infeasible — this is the Diffie-Hellman problem.
In this metaphor, "colors" represent numbers in the real Diffie-Hellman algorithm, and "mixing" is analogous to the mathematical operations. Just as mixing specific colors produces a result that can't easily be reverse-engineered without knowing the original components, DH uses mathematical properties to ensure only the two participating parties can calculate the shared secret.
Diffie-Hellman, illustrated here with colors as a metaphor. In the real algorithm, the "colors" are very large numbers and the "mixing" is modular exponentiation — an operation that is easy to compute forward but computationally infeasible to reverse.
Key exchange security relies on a math problem that's easy in one direction but hard to reverse:
Classic Diffie-Hellman: Given a generator g and a prime p, computing g^x mod p is fast. But given g^x mod p, finding x is computationally infeasible for large p. This is the discrete logarithm problem.
Elliptic Curve (X25519): Instead of modular exponentiation, we use scalar multiplication on an elliptic curve. Given a base point G and a scalar x, computing x·G (adding G to itself x times on the curve) is fast. But given x·G, finding x is hard. Curve25519 was designed by Daniel J. Bernstein to be fast, safe, and resistant to implementation mistakes.
Analogy: Mixing paint colors. It's easy to mix yellow + blue to get green, but given the green, it's nearly impossible to separate it back into the original yellow and blue.
Diffie-Hellman is shown here using very small numbers for illustration. In reality, the prime (P) should be a 2048-bit or longer value, and the private keys should be 256-bit or longer random values.
Reversing the exponentiation modulo P operation (computing the private key from the public key) is a hard problem with no known efficient solution. This is known as the discrete logarithm problem.
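The whole exchange fits in a few lines. A toy sketch using Python's built-in modular exponentiation and a deliberately small (127-bit Mersenne) prime, far below the 2048-bit minimum for real use:

```python
import secrets

p = 2**127 - 1   # a Mersenne prime; still far too small for real deployments
g = 5

a = secrets.randbelow(p - 2) + 2   # Alice's private key
b = secrets.randbelow(p - 2) + 2   # Bob's private key

A = pow(g, a, p)   # Alice sends A over the insecure channel
B = pow(g, b, p)   # Bob sends B over the insecure channel

# Each side combines its own private key with the other's public value:
s_alice = pow(B, a, p)   # (g^b)^a mod p
s_bob   = pow(A, b, p)   # (g^a)^b mod p
print(s_alice == s_bob)  # True: same shared secret, never transmitted
```

An eavesdropper sees g, p, A, and B, but recovering a or b from them is the discrete logarithm problem.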
Elliptic curves are defined by an equation of the form y^2 = x^3 + ax + b, where a and b are constants that determine the curve's shape. These curves have useful properties for cryptography:
Interactive visualizations (MIT licensed, by Andrea Corbellini):
Point addition on EC over reals
ECClines-3.svg, CC BY-SA 3.0, Emmanuel Boutet
EC over finite field F61
Elliptic_curve_on_Z61.svg, CC0 Public Domain
ECC over a finite field Fp:
Given P and G, it is computationally infeasible to determine k (compute k = P / G). This is the Elliptic Curve Discrete Logarithm Problem (ECDLP).
ECC allows much smaller keys and fewer computations to achieve equivalent security to RSA, making it particularly useful for mobile and low-power devices. ECC can be used directly for key agreement (ECDH) and signatures (ECDSA/EdDSA), but must be combined with a symmetric algorithm for encryption (hybrid scheme).
ECDH using a "toy" curve (Curve61, F61, generator P = (5, 7)) for illustration. In real-world use, Curve25519 or NIST P-256 curves with 256-bit primes are used.
Run the demo on the left, then check what an eavesdropper on the channel can see:
What Eve sees and why it doesn't help:
| What Eve captures | Why it's useless |
|---|---|
| Public parameters (G, p, curve) | These are public by design — knowing them is like knowing the rules of the game. They don't reveal any secret. |
| Alice's public key (A = ka * G) | This is the result of scalar multiplication. Recovering ka from A and G requires solving the discrete logarithm problem — computationally infeasible for 256-bit curves. |
| Bob's public key (B = kb * G) | Same problem. Eve would need kb to compute the shared secret, but extracting it from B is as hard as breaking the entire scheme. |
The core asymmetry: Computing A = ka * G is fast (polynomial time). Reversing it — finding ka given A and G — has no known efficient algorithm. The shared secret S = ka * B = kb * A = ka * kb * G requires at least one private key to compute. Eve has neither.
Even with both public keys, Eve cannot compute ka * kb * G. This is the Computational Diffie-Hellman (CDH) assumption: given G, ka*G, and kb*G, computing ka*kb*G is hard. No shortcut is known that avoids solving the discrete log first.
Basic Diffie-Hellman is vulnerable to Man-in-the-Middle attacks: an attacker who can intercept and modify messages can substitute their own public keys and establish separate shared secrets with each party.
Solution: Authenticate the public keys using digital signatures, certificates (TLS), or out-of-band verification (Signal's safety numbers). This is why key exchange is always combined with authentication in practice.
If you use ephemeral key pairs (generated fresh for each session and discarded afterward), compromising a long-term key does not reveal past session keys. This property is called forward secrecy.
TLS 1.3 requires ephemeral ECDH for every connection. Signal uses the Double Ratchet algorithm, generating new keys for every message.
This demo simulates a simplified certificate authority chain. Create a root CA, issue certificates, and verify the trust chain — the same process that makes HTTPS work.
PKI is the system of roles, policies, and procedures that binds public keys to identities. It's how your browser knows that the server at google.com is actually operated by Google.
Trust flows from root CAs (pre-installed in your OS/browser) through intermediate CAs to end-entity certificates (your server's cert).
Your OS ships with ~150 trusted root certificates. Every HTTPS connection is verified against this trust store.
Certificates serve different purposes, controlled by Key Usage and Extended Key Usage extensions:
| Type | Purpose | Example |
|---|---|---|
| TLS Server (DV) | Proves a server controls a domain | Let's Encrypt cert for your website |
| TLS Server (OV/EV) | Also verifies the organization's legal identity | Bank or government website |
| TLS Client | Authenticates a client to a server (mutual TLS) | API authentication, zero-trust networks |
| Code Signing | Signs software binaries | macOS notarization, Windows Authenticode |
| S/MIME | Email encryption and signing | Digitally signed corporate email |
| CA Certificate | Signs other certificates | Root and intermediate CAs |
Domain Validation (DV) certificates only prove domain control and are the most common type (Let's Encrypt). Organization Validation (OV) and Extended Validation (EV) require additional vetting of the organization's identity, but browsers no longer display EV indicators differently from DV, reducing their perceived value.
Certificate lifetimes have shortened dramatically over the past decade, driven by security concerns:
| Year | Maximum Lifetime | Driver |
|---|---|---|
| Before 2015 | 5 years | Industry practice |
| 2015 | 3 years (39 months) | CA/Browser Forum Ballot 193 |
| 2018 | 2 years (825 days) | CA/Browser Forum Ballot 193 |
| 2020 | 398 days (~13 months) | Apple unilaterally enforced, others followed |
| 2029 | 47 days | CA/B Forum SC-081 (voted 2025, phased reduction through 2029) |
Why shorter? Shorter lifetimes reduce the window of exposure if a private key is compromised or certificate details become stale. They also force automation — manual certificate renewal doesn't scale at 47-day intervals, making ACME (Let's Encrypt) effectively mandatory.
Let's Encrypt already issues 90-day certificates and is preparing for even shorter lifetimes. Their short-lived certificates initiative explores 6-day certificates that wouldn't need revocation at all — they'd simply expire before most attacks could exploit them.
An X.509 certificate contains:
- Subject: the identity being certified (e.g., CN=www.example.com)
- Issuer: the CA that signed the certificate
- Validity period: notBefore and notAfter timestamps
- Subject public key: the key being bound to the identity
- Extensions: Key Usage, Subject Alternative Names, and so on
- Signature: the CA's signature over all of the above

The signature is the critical piece: it proves the CA vouches for the binding between the subject and the public key.
If a private key is compromised, the certificate must be revoked. Two mechanisms exist:
In practice, browsers like Chrome use CRLSets — a curated, compressed list pushed via browser updates — rather than checking CRL/OCSP for every connection.
Let's Encrypt (launched 2015) revolutionized PKI by offering free, automated certificates via the ACME protocol (RFC 8555). Before it, certificates cost $50-300/year and required manual setup.
Let's Encrypt issues ~4 million certificates per day and serves over 300 million websites. Certificates are valid for 90 days and are auto-renewed, which limits exposure from key compromise.
Certificate Transparency (CT) is a system of public, append-only logs that record all issued certificates. If a CA issues a fraudulent certificate (e.g., for google.com), it will appear in CT logs and be detected.
Since 2018, Chrome requires all new certificates to include CT log entries (SCTs). You can search CT logs at crt.sh.
This demo simulates a simplified TLS 1.3 handshake, showing how the cryptographic primitives from the other tabs combine to establish a secure connection. Each step uses real cryptographic operations.
The client generates an ephemeral ECDH key pair and sends the public key along with a list of supported cipher suites and a random nonce.
The server generates its own ephemeral ECDH key pair, selects a cipher suite, and sends its public key and server random back.
The server proves its identity by sending a certificate (long-term signing key) and signing the handshake transcript with it. The client verifies the signature.
Both parties independently derive the same shared secret from ECDH, then use HKDF to derive traffic keys. Neither the shared secret nor the traffic keys are ever transmitted.
The handshake is complete. Both parties now use the traffic keys for AES-GCM encrypted communication. Try sending messages between client and server.
TLS (Transport Layer Security) is the protocol that makes HTTPS work. Every time you see the padlock in your browser, a TLS handshake has occurred. TLS 1.3 (2018) simplified the handshake from 2 round-trips to just 1 round-trip.
TLS 1.3 combines four primitives you've explored in the other tabs:
| Purpose | Primitive | Tab |
|---|---|---|
| Key exchange | ECDH (P-256 or X25519) | Key Exchange |
| Server identity | ECDSA / RSA-PSS / Ed25519 | Signatures |
| Bulk encryption | AES-128-GCM or ChaCha20-Poly1305 | Symmetric |
| Key derivation | HKDF-SHA256 / SHA384 | Hashing |
| | TLS 1.2 | TLS 1.3 |
|---|---|---|
| Round trips | 2 | 1 |
| Key exchange | RSA or ECDH | ECDH only |
| Forward secrecy | Optional | Required |
| Ciphers | Many (some weak) | 5 strong suites |
| 0-RTT resumption | No | Yes |
TLS 1.3 removed RSA key exchange entirely — ensuring forward secrecy for all connections. If a server's long-term key is compromised, past sessions remain secure because ephemeral ECDH keys were used and discarded.
TLS 1.2 with DHE (2 round trips):
TLS 1.3 with ECDHE (1 round trip):
TLS 1.3 combines the key share into the ClientHello, eliminating a full round trip. Both sides independently derive handshake keys, traffic secrets, and IVs via HKDF using the shared secret and transcript hash.
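HKDF itself (the RFC 5869 extract-and-expand construction) is a short recipe over HMAC. A sketch with illustrative inputs and labels; the real TLS 1.3 key schedule uses a multi-stage derivation with a specific HkdfLabel format, which this simplifies away:

```python
import hmac, hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """Concentrate input keying material into a pseudorandom key."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """Expand the PRK into output keys, domain-separated by 'info'."""
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

shared_secret = b"\x01" * 32   # stand-in for the ECDH output
transcript = hashlib.sha256(b"hello messages").digest()

prk = hkdf_extract(transcript, shared_secret)
client_key = hkdf_expand(prk, b"client traffic", 16)
server_key = hkdf_expand(prk, b"server traffic", 16)
print(client_key != server_key)  # distinct keys via domain separation
```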
In this demo, both parties generate ephemeral ECDH keys — used once and discarded. Even if the server's long-term signing key is stolen tomorrow, an attacker cannot:
The signing key only proves identity — it is never used for encryption. This separation of concerns is a core design principle of TLS 1.3.
TLS 1.3 supports only 5 cipher suites (compared to dozens in TLS 1.2):
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
- TLS_AES_128_CCM_SHA256
- TLS_AES_128_CCM_8_SHA256

This demo simulates TLS_AES_128_GCM_SHA256 with ECDH P-256 — the most widely used combination.
Real TLS 1.3 includes additional details this demo omits for clarity:
The Signal Protocol (used in Signal, WhatsApp, and Google Messages) provides end-to-end encrypted messaging with forward secrecy and post-compromise security. This simulation uses real ECDH and HKDF operations.
Designed by Moxie Marlinspike and Trevor Perrin, the Signal Protocol is widely considered the gold standard for end-to-end encrypted messaging.
Bitcoin uses a small set of well-understood cryptographic primitives. Every transaction, address, and block relies on the same building blocks you've explored in the other tabs.
| Component | Primitive | Purpose |
|---|---|---|
| Addresses | SHA-256 + RIPEMD-160 | Hash public key to create a short address |
| Transactions | ECDSA (secp256k1) | Sign transactions to prove ownership of funds |
| Blocks | Double SHA-256 | Block header hash for proof-of-work |
| Merkle tree | SHA-256 | Efficiently verify transaction inclusion |
| Key derivation | HMAC-SHA512 | HD wallet key derivation (BIP-32) |
| Taproot (2021) | Schnorr signatures (BIP-340) | More efficient and private multi-sig |
When Alice sends Bitcoin to Bob, she signs the transaction with her private key, and every node verifies that signature against her public key before relaying it or including it in a block.
A Bitcoin address is derived from a public key through multiple hash steps: SHA-256 first, then RIPEMD-160 (producing the 20-byte "hash160"), followed by a version byte, a double-SHA-256 checksum, and Base58Check encoding.
The one-way nature of SHA-256 and RIPEMD-160 means you cannot derive the public key from an address, and you cannot derive the private key from the public key (elliptic curve discrete log problem).
Mining is a brute-force search for a nonce that makes the block header hash start with a required number of zero bits:
This works because SHA-256 behaves like a random oracle — the only way to find a hash below the target is to try many nonces. The difficulty adjusts every 2016 blocks (~2 weeks) so blocks are found approximately every 10 minutes.
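The search loop can be reproduced with Python's hashlib. The header bytes and 16-bit toy difficulty below are illustrative (Bitcoin headers have a fixed 80-byte layout and a far higher difficulty); only the double-SHA-256-below-target test matches the real scheme.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int, max_tries: int = 2_000_000):
    """Brute-force a nonce so that SHA-256(SHA-256(header || nonce))
    is below a target with `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_tries):
        data = header + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None, None

# Toy difficulty: 16 zero bits, ~65,536 expected tries.
found_nonce, block_hash = mine(b"toy-block-header", 16)
print(found_nonce, block_hash)  # block_hash starts with four hex zeros
```

Doubling the difficulty by one bit doubles the expected number of tries — which is exactly how the network absorbs more hash power while keeping the 10-minute block interval.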
Bitcoin's security model relies on standard, well-tested cryptographic primitives — not novel cryptography. Satoshi Nakamoto's innovation was combining them into a trustless consensus system.
Bitcoin uses the secp256k1 elliptic curve (y² = x³ + 7 over a 256-bit prime field), chosen by Satoshi because it was an underused curve with no suspicious constants — reducing the risk of a backdoor.
The ECDSA tab in this workbench uses P-256 (NIST's standard curve). secp256k1 provides equivalent security but with different performance characteristics. Note: this workbench cannot demonstrate secp256k1 directly because Web Crypto API only supports NIST curves.
The Taproot upgrade (2021) introduced Schnorr signatures (BIP-340) alongside ECDSA: Schnorr signatures are linear, so multiple signers can aggregate keys and signatures, making a multi-party spend look identical on-chain to an ordinary single-signer payment.
A sufficiently powerful quantum computer running Shor's algorithm could break secp256k1 ECDSA and recover private keys from public keys. However, no such machine exists today, and an address that has never spent funds exposes only hashes of the public key — which Shor's algorithm does not break.
Quantum computers threaten current public-key cryptography. NIST finalized the first post-quantum standards in 2024. This page explains the threat, the new algorithms, and the transition timeline.
Shor's algorithm (1994) can efficiently factor large integers and compute discrete logarithms on a quantum computer. This breaks:
| Algorithm | Used In | Broken By |
|---|---|---|
| RSA | TLS, S/MIME, code signing | Integer factorization → Shor's |
| ECDSA / ECDH | TLS, Bitcoin, Signal | Discrete log on EC → Shor's |
| DH / DSA | Legacy protocols | Discrete log → Shor's |
| AES-256 | Symmetric encryption | Grover's halves security to 128 bits — still safe |
| SHA-256 | Hashing | Grover's halves preimage security — still safe |
Key point: Only asymmetric algorithms are threatened. Symmetric encryption and hashing remain secure with doubled key/output sizes. The concern is "harvest now, decrypt later" — adversaries recording encrypted traffic today to decrypt once quantum computers arrive.
After an 8-year public competition, NIST standardized three algorithms in August 2024:
Formerly "Kyber." Based on the Module Learning With Errors (MLWE) problem on lattices. Replaces ECDH key exchange.
| Parameter Set | Security | Public Key | Ciphertext |
|---|---|---|---|
| ML-KEM-512 | 128-bit (NIST Level 1) | 800 bytes | 768 bytes |
| ML-KEM-768 | 192-bit (NIST Level 3) | 1,184 bytes | 1,088 bytes |
| ML-KEM-1024 | 256-bit (NIST Level 5) | 1,568 bytes | 1,568 bytes |
Formerly "Dilithium." Also lattice-based (MLWE + MSIS). Replaces ECDSA/Ed25519 signatures.
| Parameter Set | Security | Public Key | Signature |
|---|---|---|---|
| ML-DSA-44 | 128-bit (Level 2) | 1,312 bytes | 2,420 bytes |
| ML-DSA-65 | 192-bit (Level 3) | 1,952 bytes | 3,309 bytes |
| ML-DSA-87 | 256-bit (Level 5) | 2,592 bytes | 4,627 bytes |
Formerly "SPHINCS+." Based entirely on hash functions (no lattices). Larger signatures but relies on minimal assumptions — only the security of the hash function.
Public keys: 32-64 bytes. Signatures: 7,856-49,856 bytes depending on parameter set. Intended as a conservative fallback if lattice-based schemes are broken.
The Learning With Errors (LWE) problem: given a system of approximate linear equations over a finite field, recover the secret vector. The "errors" (small random noise) make this computationally hard — even for quantum computers.
Without the errors, this is ordinary linear algebra (easy). With small errors, it becomes a hard lattice problem. ML-KEM works over modules of polynomial rings for efficiency — hence "Module-LWE."
Intuition: Imagine trying to solve a system of equations where every answer is slightly wrong. A few equations, you can brute-force. Thousands of equations with noise in a high-dimensional lattice — no known efficient algorithm, classical or quantum.
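The intuition above can be made concrete with a toy LWE instance. The parameters here are deliberately tiny and insecure (real ML-KEM uses q = 3329 and polynomial rings of degree 256); the point is only to show that each published equation is off by a small hidden error.

```python
import random

q, n, m = 97, 4, 8                 # toy parameters: modulus, dimension, equations
random.seed(1)                     # deterministic demo
s = [random.randrange(q) for _ in range(n)]   # the secret vector

A, b = [], []
for _ in range(m):
    row = [random.randrange(q) for _ in range(n)]
    e = random.choice([-2, -1, 0, 1, 2])      # small random error
    A.append(row)
    b.append((sum(a * x for a, x in zip(row, s)) + e) % q)

# Without the errors, Gaussian elimination on (A, b) recovers s easily.
# With errors, each equation is only approximately right:
residuals = [(sum(a * x for a, x in zip(row, s)) - bi) % q
             for row, bi in zip(A, b)]
print(residuals)  # each equals (-e) mod q: 0, 1, 2, 95, or 96
```

An attacker who sees only A and b must somehow separate the small noise from the large modular sums — the problem that, at cryptographic dimensions, has no known efficient classical or quantum algorithm.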
| Operation | Classical | Size | Post-Quantum | Size |
|---|---|---|---|---|
| Key exchange pub key | X25519 | 32 B | ML-KEM-768 | 1,184 B |
| Key exchange ciphertext | X25519 | 32 B | ML-KEM-768 | 1,088 B |
| Signature pub key | Ed25519 | 32 B | ML-DSA-65 | 1,952 B |
| Signature | Ed25519 | 64 B | ML-DSA-65 | 3,309 B |
PQC keys and signatures are 30-50x larger than classical equivalents. This has real implications for TLS handshake sizes, certificate chains, and bandwidth-constrained protocols like IoT.
Generate real classical keys, then compare their sizes to PQC equivalents.
Simulate a hybrid key exchange like Chrome's TLS 1.3 implementation: combine a real X25519 exchange with a simulated ML-KEM-768 encapsulation, then derive the final shared secret from both via HKDF.
The security of ML-KEM relies on hard lattice problems. Below is a 2D lattice — click the canvas to place a target point, and try to find the closest lattice point. In 2D this is easy; in 500+ dimensions, it's computationally infeasible even for quantum computers.
An adversary records encrypted traffic today, hoping to decrypt it once quantum computers arrive. How urgent is the PQC transition for your data?
PQC algorithms are designed to resist both classical and quantum attacks. The transition is happening now — Chrome already uses hybrid PQC key exchange for most HTTPS connections.
Current deployments use hybrid key exchange: combine a classical algorithm (X25519) with a PQC algorithm (ML-KEM-768). The shared secret is derived from both.
This ensures that if either algorithm is broken (ML-KEM by new cryptanalysis, or X25519 by quantum computers), the connection remains secure. Chrome initially shipped this in TLS 1.3 as X25519Kyber768Draft00 and has since moved to the standardized X25519MLKEM768.
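The combination step can be sketched with HMAC-based HKDF over the concatenated secrets. The input bytes and labels below are hypothetical placeholders, not the real TLS 1.3 key-schedule values; the real protocol feeds the concatenated secret into its normal HKDF chain.

```python
import hashlib, hmac

# Hypothetical stand-ins for the two real exchanges:
x25519_shared = b"\xaa" * 32   # output of a classical X25519 exchange
mlkem_shared = b"\xbb" * 32    # output of an ML-KEM-768 decapsulation

# HKDF-Extract over the concatenation: the PRK is unpredictable
# as long as EITHER input remains secret.
prk = hmac.new(b"hybrid-salt", x25519_shared + mlkem_shared,
               hashlib.sha256).digest()

# One HKDF-Expand block (counter byte 0x01) yields the session key.
session_key = hmac.new(prk, b"hybrid-session" + b"\x01",
                       hashlib.sha256).digest()[:16]
print(session_key.hex())
```

Because HKDF mixes both inputs, recovering the session key requires breaking both component exchanges, not just one.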
The trade-off: TLS ClientHello grows from ~256 bytes to ~1,400 bytes. Measurable but acceptable for most connections.
Estimates vary widely. Breaking RSA-2048 requires ~4,000 error-corrected logical qubits. Current quantum computers have ~1,000-1,500 noisy physical qubits.
Most experts estimate 2030-2040 for cryptographically relevant quantum computers. The "harvest now, decrypt later" threat means we need to deploy PQC for key exchange before quantum computers arrive, even if that's a decade away.
Split a secret into multiple shares so that only a threshold number of shares can reconstruct it. No single share reveals any information about the secret.
Paste shares below (one per line, in hex or base64 format). You need at least k shares to recover the secret.
After splitting, try reconstructing with fewer than the threshold number of shares. The reconstruction will fail or produce garbage — demonstrating that k-1 shares reveal nothing about the secret.
Invented by Adi Shamir in 1979, this scheme splits a secret into n shares such that any k shares can reconstruct the secret, but k-1 shares reveal nothing — not even a single bit.
The secret is encoded as the constant term of a random polynomial of degree k−1 over a finite field: f(x) = secret + a₁x + a₂x² + … + aₖ₋₁xᵏ⁻¹ (mod p).
Each share is a point (xᵢ, f(xᵢ)) on this polynomial. Given k points, Lagrange interpolation uniquely determines the polynomial and recovers f(0) = secret. With fewer than k points, infinitely many polynomials fit — so no information about the secret leaks.
Example (k=2): Two points define a line. If you know one point, the line could have any slope — the y-intercept (secret) could be anything. With two points, there's exactly one line.
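The whole scheme fits in a short Python sketch over a prime field. The field size and secret here are illustrative; a production implementation would also encode arbitrary byte strings and use a vetted library.

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field must be larger than the secret

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares such that any k reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation evaluated at x = 0 recovers f(0) = secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=42, k=3, n=5)
assert reconstruct(shares[:3]) == 42    # any 3 of the 5 shares work
assert reconstruct(shares[2:5]) == 42
assert reconstruct(shares[:2]) != 42    # 2 shares yield garbage (w.h.p.)
```

Note that the last assertion is the whole point: with fewer than k shares the interpolated value is essentially a random field element, not an approximation of the secret.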
| | Shamir SSS | Multi-Sig |
|---|---|---|
| Secret exists | Only during split/reconstruct | Never assembled |
| Trust model | Reconstructor sees secret | No single party sees key |
| Flexibility | Any k-of-n threshold | Algorithm-specific |
| Example | Vault unseal | Bitcoin multi-sig |
KDFs are the bridge between raw key material (a DH shared secret, a password, or an existing key) and usable cryptographic keys. They extract entropy and expand it into keys of the required length.
Derive multiple keys from the same input by changing the "info" parameter. This is how TLS derives separate client/server keys from one shared secret.
A KDF takes input key material (which may have uneven entropy distribution) and produces uniformly random output suitable for use as cryptographic keys.
HKDF (RFC 5869) operates in two stages:
1. Extract: PRK = HMAC-Hash(salt, IKM) — compresses the input key material into a fixed-length pseudorandom key, even if the input has non-uniform entropy.
2. Expand: OKM = HMAC-Hash(PRK, info || counter) — expands the PRK into as many bytes as needed, using the "info" parameter for domain separation.

The info parameter is critical: it ensures that keys derived for different purposes (encryption vs authentication) are cryptographically independent, even from the same input.
| KDF | Input | Used In |
|---|---|---|
| HKDF | High-entropy (DH output, random key) | TLS 1.3, Signal, WireGuard |
| PBKDF2 | Low-entropy (passwords) | Wi-Fi WPA2, disk encryption |
| Argon2 | Low-entropy (passwords) | Password storage |
| scrypt | Low-entropy (passwords) | Cryptocurrency wallets |
Generate time-based and counter-based one-time passwords — the same codes your authenticator app produces. Built entirely on HMAC.
TOTP and HOTP turn HMAC into one-time passwords. Every authenticator app (Google Authenticator, Authy, 1Password) uses this exact algorithm.
1. hash = HMAC-SHA1(secret, counter)
2. Dynamic truncation: read 4 bytes of the hash at an offset given by its last nibble
3. code = truncated_value mod 10^digits

HOTP (RFC 4226) uses a monotonically increasing counter. TOTP (RFC 6238) builds on HOTP by using the current time — typically floor(unix_time / 30) — as the counter.
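These steps translate directly into Python using only the standard library; the assertions check against the HOTP test vectors published in RFC 4226 Appendix D.

```python
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # Step 1: HMAC-SHA1 over the 8-byte big-endian counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Step 2: dynamic truncation (RFC 4226 §5.3)
    offset = mac[-1] & 0x0F
    value = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    # Step 3: reduce to the desired number of decimal digits
    return str(value % 10**digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 Appendix D test vectors:
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

Your authenticator app runs exactly this HOTP computation; the only secret it shares with the server is the key scanned from the QR code.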
Create, inspect, and verify JWTs — the token format used in OAuth, OpenID Connect, and API authentication. See how the signature prevents tampering.
A JWT is three Base64url-encoded parts separated by dots: header.payload.signature. The signature prevents tampering but the payload is not encrypted — anyone can decode it.
Header: {"alg": "HS256", "typ": "JWT"} — specifies the signing algorithm.
Payload: Contains claims (sub, iat, exp, custom data). Visible to anyone — never put secrets here.
Signature: HMAC-SHA256(base64url(header) + "." + base64url(payload), secret) — proves the token hasn't been modified and was issued by someone with the secret.
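A minimal HS256 signer and verifier can be written with the standard library alone. This sketch skips claim validation (exp, aud, etc.), which a real library such as PyJWT would enforce; the secret and claims below are illustrative.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # Base64url without padding, as JWT requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(),
                        hashlib.sha256).digest()
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "alice", "exp": 1900000000}, b"demo-secret")
assert verify_jwt(token, b"demo-secret")
assert not verify_jwt(token, b"wrong-secret")   # forged or tampered token fails
```

Decoding the payload needs no secret at all — base64url is an encoding, not encryption — which is why sensitive data never belongs in a JWT body.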
Tokens without an exp claim live forever if stolen.

Build a hash tree interactively and see how changing one leaf changes the root. Merkle trees efficiently verify data integrity — used in Git, Bitcoin, certificate transparency, and IPFS.
After building, change one character in any data block and rebuild. Only the hashes on the path from that leaf to the root change — but the root hash is completely different.
A Merkle tree (hash tree) is a binary tree where each leaf is a hash of a data block and each internal node is a hash of its two children. The root hash is a fingerprint of all the data.
Each leaf stores H(block); each internal node stores H(H₁ + H₂), the hash of its two children concatenated.

Merkle proofs: To prove a block is in the tree, you only need log₂(n) hashes (the sibling at each level). For 1 million blocks, that's just 20 hashes — not all million.
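Computing the root is a few lines of Python. The odd-leaf handling here (duplicate the last hash) follows Bitcoin's convention; other systems pad differently.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(block) for block in leaves]       # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                       # odd count: duplicate last
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])      # hash each sibling pair
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(blocks)
tampered = merkle_root([b"tx1", b"tx2", b"tx3", b"tx5"])
assert root != tampered    # changing one leaf changes the root completely
```

Only the hashes on the path from the altered leaf to the root are recomputed, yet the root itself changes unrecognizably — the avalanche effect propagated up the tree.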
Prove you know a secret without revealing it. This simulation demonstrates the concept using a hash-based commitment scheme.
The prover commits to a secret by publishing its hash. The verifier challenges the prover to demonstrate knowledge without seeing the secret itself.
Imagine a cave with a ring-shaped tunnel and a locked door in the middle. Alice (prover) claims she knows the password to the door. Bob (verifier) wants proof without learning the password.
After 20 rounds, a faker has only a 1-in-million chance of succeeding. Bob is convinced Alice knows the password — but he never learned it, and a video of the interaction proves nothing to a third party (Alice could have staged it by agreeing with the cameraman).
A zero-knowledge proof lets a prover convince a verifier that a statement is true without revealing any information beyond the validity of the statement itself.
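The cave analogy above can be simulated in a few lines of Python. The round structure and trial counts are illustrative, not part of any real protocol: each round, a prover without the password passes only by luckily matching the verifier's random challenge.

```python
import secrets

def simulate(knows_password: bool, rounds: int = 20) -> bool:
    """Cave analogy: Alice enters tunnel A or B at random; Bob then
    demands she exit from a randomly chosen side. With the password she
    can always comply; without it, only if she guessed Bob's challenge."""
    for _ in range(rounds):
        alice_side = secrets.choice("AB")
        bob_challenge = secrets.choice("AB")
        if knows_password:
            continue                 # door opens: she can exit either side
        if alice_side != bob_challenge:
            return False             # caught: cannot cross without password
    return True

assert simulate(knows_password=True)          # honest prover always passes
fakes = sum(simulate(False) for _ in range(1000))
print(fakes)  # expected ~0: each fake run survives with probability 2^-20
```

Each extra round halves a faker's odds, which is why 20 rounds drive the cheating probability down to roughly one in a million while revealing nothing about the password itself.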