Why HTTPS dies right after connect: DPI, SNI, and the ClientHello trap
A deep engineering breakdown of where filtering happens, and why it looks like “magic”.
You send a SYN, you get a SYN-ACK, the TCP handshake is perfect. Port 443 is open, routing works, and the server IP is reachable. Everything looks normal until the exact moment your browser sends the first “real” data: a packet arrives, the connection instantly resets, and the session is dead. If you’ve ever wondered why the network lets you knock on the door but “shoots” you the moment you introduce yourself, this is where protocol architecture meets middleboxes.
Q: What happens on the wire — TCP → TLS → HTTP?
Think of modern web access as a strict chain. If the chain breaks, it almost always breaks at a specific link — and that link is visible in packet captures.
1. TCP 3-way handshake: SYN → SYN-ACK → ACK. You now have a transport channel; nothing is encrypted yet.
2. TLS handshake: the browser sends a TLS ClientHello to negotiate versions, ciphers, and parameters for encryption (a minimal sketch of steps 1–2 follows this list).
3. HTTP request: only after TLS is established does the browser send the encrypted HTTP request (e.g., GET /…).
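To make the chain concrete, here is a minimal Python sketch (the hostname is purely illustrative) that performs steps 1 and 2 as separate, observable actions. On a filtered path, step 1 succeeds and step 2 dies:

```python
import socket
import ssl

HOST, PORT = "example.com", 443  # illustrative target, not a specific claim

# Step 1: the TCP 3-way handshake (SYN / SYN-ACK / ACK) happens inside connect().
raw = socket.create_connection((HOST, PORT), timeout=5)
print("TCP handshake OK:", raw.getpeername())

# Step 2: the TLS handshake. wrap_socket() emits the ClientHello (carrying SNI).
# On a filtered path, this is typically where ConnectionResetError surfaces.
ctx = ssl.create_default_context()
try:
    tls = ctx.wrap_socket(raw, server_hostname=HOST)
    print("TLS established:", tls.version())
except ConnectionResetError:
    print("Reset arrived right after the ClientHello")
```

If the step-1 print fires and the exception follows immediately, you are looking at exactly the failure pattern this article dissects.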

Q: If HTTPS is encrypted, how does DPI know where I’m going?
DPI (Deep Packet Inspection) is not “just routing.” It tries to recognize destinations by inspecting payload, not only IP headers. The key detail: the earliest TLS handshake data is partially visible to the network.
Historically, the crucial field is SNI (Server Name Indication) inside the ClientHello. SNI carries the hostname the client wants (e.g., example.com) and, in classic deployments, it travels in plaintext, visible to any on-path observer. That single design choice became the Achilles’ heel of the “encrypted web”: it is precisely what makes hostname-based filtering possible.
And here’s a subtle but important TLS 1.3 detail: a TLS 1.3 ClientHello always contains extensions (at minimum supported_versions). That makes the ClientHello a stable, predictable “observation point” for DPI logic.
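To appreciate how cheap this observation is, here is a hedged sketch of the kind of parsing a DPI engine performs: walking a raw ClientHello record to the server_name extension. It assumes a single well-formed record and skips the validation a production engine would still need:

```python
def extract_sni(client_hello: bytes):
    """Return the plaintext SNI hostname from a raw TLS ClientHello, or None.

    Minimal sketch: assumes one well-formed TLS record, no bounds validation.
    """
    pos = 5 + 4                              # record header + handshake header
    pos += 2 + 32                            # legacy_version + random
    pos += 1 + client_hello[pos]             # session_id
    pos += 2 + int.from_bytes(client_hello[pos:pos + 2], "big")  # cipher_suites
    pos += 1 + client_hello[pos]             # compression_methods
    end = pos + 2 + int.from_bytes(client_hello[pos:pos + 2], "big")
    pos += 2
    while pos + 4 <= end:                    # walk the extensions
        ext_type = int.from_bytes(client_hello[pos:pos + 2], "big")
        ext_len = int.from_bytes(client_hello[pos + 2:pos + 4], "big")
        pos += 4
        if ext_type == 0:                    # server_name (RFC 6066)
            # list_len(2) + name_type(1) + name_len(2) + hostname
            name_len = int.from_bytes(client_hello[pos + 3:pos + 5], "big")
            return client_hello[pos + 5:pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None
```

A dozen lines of byte arithmetic recovers the hostname; that is all the “magic” there is.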
Q: Why does the connection die right after ClientHello?
There are two broad filtering models:
- Passive DPI + injection: the device monitors traffic and, upon matching a signature, injects a forged TCP control packet (often RST or FIN) pretending to come from the server (sketched below).
- Active (inline) DPI: the device sits on-path and can drop packets directly, preventing the handshake from completing.
In both cases, the “tell” is timing: TCP succeeds, then TLS ClientHello triggers the action. From the DPI perspective, that’s the first moment it can reliably classify the destination at scale.
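For intuition about the passive model, this is roughly the shape of the injected packet when forged with scapy. Everything here (addresses, ports, sequence number) is a placeholder, and this belongs only in a lab network you control:

```python
from scapy.all import IP, TCP, send

# Lab-only sketch: forge a TCP RST that claims to come from the server.
# A passive injector tracks the flow and fills in an in-window `seq`;
# for many stacks that is enough to tear the connection down.
def inject_rst(server_ip, client_ip, client_port, seq):
    rst = IP(src=server_ip, dst=client_ip, ttl=64) / TCP(
        sport=443, dport=client_port, flags="R", seq=seq)
    send(rst, verbose=False)
```

Note the `ttl=64` above: the injector sits only a few hops from you, so its packets arrive far less decremented than the real server’s would. That asymmetry is exactly what the next question exploits.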
Q: How do I tell “server reset” from “middlebox reset”?
You can’t prove attribution with a single metric, but you can build a strong case from a few fingerprints. A popular one is TTL inconsistency: injected packets often look “too close” (too few hops away) compared to what you’d expect from a distant origin.
- Compare the TTL of normal server packets with that of the killer packet. If the killer arrives with a TTL pattern that suggests only a few hops, it’s a strong hint of on-path injection (see the pcap sketch after this list).
- Compare TCP options (window scaling, SACK permitted), window-size patterns, and sequencing behavior. Forged packets frequently mismatch what the real server would emit.
- Watch for races: in passive injection, the forged RST has to be created and delivered fast enough to “win” against the genuine server flow. Timing artifacts are common.
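A hedged scapy sketch of the TTL comparison, assuming you saved the failing session to a pcap (the filename is an assumption):

```python
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("failing_session.pcap")    # assumed capture of one failing flow

server_ttls, rst_ttls = [], []
for p in packets:
    if IP not in p or TCP not in p or p[TCP].sport != 443:
        continue                            # keep only packets claiming to be the server
    if p[TCP].flags & 0x04:                 # RST bit set
        rst_ttls.append(p[IP].ttl)
    else:
        server_ttls.append(p[IP].ttl)

print("normal server TTLs:", sorted(set(server_ttls)))
print("killer RST TTLs:   ", sorted(set(rst_ttls)))
# An RST TTL far from the server's usual value (say 64 vs 52) suggests
# the reset was injected by a much closer, on-path middlebox.
```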
Q: Why do “tiny segments”, reordering, or overlaps confuse DPI?
This is the non-magic explanation: DPI is a middlebox, not an endpoint. Endpoints reassemble streams according to OS TCP/IP stack rules; DPI engines often implement a simplified (or performance-constrained) model.
Researchers have described a classic weakness: if the middlebox reconstructs a different byte stream than the endpoint (because of out-of-order delivery, overlap ambiguity, or tiny segmentation), then signature matching can fail — even though the server still reconstructs the “real” content.
In practice, this aligns perfectly with the “gatekeeper with short memory” metaphor: DPI wants deterministic, cheap parsing at line rate. Stateful, perfect reassembly for every flow is expensive — and that’s where gaps appear.
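A hedged illustration of the “tiny segments” idea in pure Python: generate a normal ClientHello with ssl’s memory BIOs, then dribble it onto the wire in small chunks. The hostname and chunk size are arbitrary assumptions, and the kernel or NIC (e.g., segmentation offload) may still merge the writes, which is why real tools work at the packet layer instead:

```python
import socket
import ssl

HOST = "example.com"   # illustrative target
CHUNK = 8              # tiny segment size; an arbitrary choice

raw = socket.create_connection((HOST, 443), timeout=5)
raw.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # flush each write

# Build a standard ClientHello without sending it: TLS over memory BIOs.
ctx = ssl.create_default_context()
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname=HOST)
try:
    tls.do_handshake()
except ssl.SSLWantReadError:
    pass                                    # expected: no server reply yet

hello = outgoing.read()                     # the raw ClientHello record
for i in range(0, len(hello), CHUNK):       # send it in tiny pieces
    raw.send(hello[i:i + CHUNK])
# The server's TCP stack reassembles the identical byte stream; a DPI
# engine with a simplified reassembler may never see "example.com" whole.
```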
Q: Why do classic TCP tricks break on HTTP/3 and QUIC?
HTTP/3 is HTTP semantics mapped over QUIC. QUIC is a secure transport protocol running over UDP, with its own state machine and cryptographic protection.
That matters because many “TCP-era” manipulations depend on TCP mechanics (segment boundaries, sequence behavior, window hints). With QUIC, the visibility and controls are different — and many transport details are protected by design. In short: if your method relies on tweaking TCP fields, it simply doesn’t apply to QUIC flows.
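A quick way to confirm which transport a failing flow actually used, sketched with scapy over an assumed capture file:

```python
from scapy.all import rdpcap, IP, TCP, UDP

tcp443 = udp443 = 0
for p in rdpcap("capture.pcap"):            # assumed filename
    if IP in p and TCP in p and 443 in (p[TCP].sport, p[TCP].dport):
        tcp443 += 1                         # HTTP/1.1 or HTTP/2 over TCP+TLS
    elif IP in p and UDP in p and 443 in (p[UDP].sport, p[UDP].dport):
        udp443 += 1                         # almost certainly QUIC (HTTP/3)

print(f"TCP/443 packets: {tcp443}, UDP/443 (likely QUIC): {udp443}")
```

Browsers also fall back from HTTP/3 to HTTP/2 silently, so this check tells you which side of the split any TCP-era observation actually applies to.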
Q: Is there a future where DPI can’t read SNI at all? (ECH)
Yes — and it’s already partially deployed. ECH (Encrypted Client Hello) encrypts the sensitive part of ClientHello, including what used to expose SNI-like information. In plain terms: the network still sees “this is TLS,” but loses easy hostname visibility.
Practical reality check:
- Firefox enables ECH by default (when the target and infrastructure support it).
- Large CDNs (e.g., Cloudflare) document ECH as a way to hide SNI from intermediaries.
- ECH needs configuration delivery, commonly via SVCB/HTTPS DNS records; otherwise there’s nothing to encrypt “to” (see the lookup sketch below).
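You can check whether a domain publishes ECH material by querying its HTTPS (type 65) record. A sketch using dnspython (assumed installed; the domain is illustrative):

```python
import dns.resolver  # dnspython, assumed installed: pip install dnspython

# ECH configuration rides in SVCB/HTTPS DNS records; look for an
# "ech=..." SvcParam in the answer. The domain below is illustrative.
for rr in dns.resolver.resolve("example.com", "HTTPS"):
    print(rr.to_text())
```

If the record carries no ech parameter, there is no key for the client to encrypt the inner ClientHello to, and the browser falls back to plaintext SNI.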

Q: How do real tools implement packet-level logic? (WinDivert vs NFQUEUE)
At the implementation level, the difference is mostly about where interception happens and how verdicts are applied:
- Windows: tools commonly rely on WinDivert, a packet-interception driver that allows user-mode capture, filtering, modification, and reinjection in real time. It’s a practical way to build “smart middlebox behavior” on the client device.
- Linux: many router-grade setups rely on NFQUEUE. Netfilter queues packets to a userspace program, which inspects or modifies them and then returns a verdict (accept, drop, etc.) to the kernel networking stack (see the sketch after this list).
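As a flavor of the Linux side, here is a hedged NFQUEUE sketch using the python-netfilterqueue binding (assumed installed), paired with an iptables rule such as `iptables -I OUTPUT -p tcp --dport 443 -j NFQUEUE --queue-num 1`:

```python
from netfilterqueue import NetfilterQueue  # pip install netfilterqueue (assumed)

def verdict(pkt):
    data = pkt.get_payload()               # raw IP packet bytes
    # Inspect/modify here. A crude ClientHello heuristic: TLS handshake
    # records start with 0x16 0x03 at the TCP payload offset (real code
    # would parse the IP/TCP headers to find that offset first).
    pkt.accept()                           # or pkt.drop() to emulate active DPI

nfq = NetfilterQueue()
nfq.bind(1, verdict)                       # queue number must match the rule
try:
    nfq.run()                              # blocks, passing each packet to verdict()
except KeyboardInterrupt:
    nfq.unbind()
```

WinDivert follows the same capture-verdict-reinject shape on Windows, just with its own filter language and a driver instead of Netfilter.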
A quick “Wireshark checklist”
If you want a fast, non-hand-wavy diagnosis, capture a failing session and check:
1. Do you see a clean SYN/SYN-ACK/ACK, followed by a ClientHello?
2. Which packet kills the session: an RST, a FIN, or silent drops/timeouts?
3. Do the TTL and TCP options of the killer packet match the real server’s behavior?
4. Is the failure pinned to the ClientHello moment (not to later HTTP data)?
5. Does the behavior change between HTTP/2 (over TCP) and HTTP/3 (over QUIC)?
Note: This article is an engineering explanation of how the protocol stack and middleboxes interact. If you’re testing, do it on networks you own or are authorized to assess, and prefer legitimate privacy and security measures.