Power Efficient IP Lookup with Supernode Caching

Lu Peng, Wencheng Lu*, and Lide Duan
Dept. of Electrical & Computer Engineering, Louisiana State University
*Dept. of Computer & Information Science & Engineering, University of Florida
IEEE GLOBECOM 2007


TRANSCRIPT

Page 1: Power Efficient IP Lookup with Supernode Caching

Power Efficient IP Lookup with Supernode Caching

Lu Peng, Wencheng Lu*, and Lide Duan
Dept. of Electrical & Computer Engineering, Louisiana State University
*Dept. of Computer & Information Science & Engineering, University of Florida
IEEE GLOBECOM 2007

Page 2: Power Efficient IP Lookup with Supernode Caching

Related work

Page 3: Power Efficient IP Lookup with Supernode Caching

Proposed supernode caching

The authors propose a supernode-based caching scheme to efficiently reduce IP lookup latency in network processors. They add a small supernode cache between the processor and the low-level memory that holds the IP routing table in a tree structure.

A supernode is a tree bitmap node. A 32-level binary tree can be represented as an 8-level supernode tree by compressing every 4-level subtree whose root lies at a level that is a multiple of 4 (levels 0, 4, …, 28) into a supernode.

The supernode cache stores recently visited supernodes of the longest matched prefixes in the IP routing tree.

A supernode hitting in the cache reduces the number of accesses to the low level memory, leading to a fast IP lookup.
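The stride-4 compression described above can be sketched numerically. This is an illustrative sketch, not the paper's code; the helper names are invented here, but the arithmetic follows the slides: supernode roots sit at trie levels 0, 4, …, 28, so a prefix of length p needs ceil(p/4) supernode visits instead of p binary-trie visits.

```python
# Sketch of the stride-4 supernode compression (helper names are
# illustrative assumptions, not from the paper).
STRIDE = 4

def supernode_root_levels(tree_depth=32, stride=STRIDE):
    """Trie levels that become supernode roots: 0, 4, ..., 28."""
    return list(range(0, tree_depth, stride))

def accesses_saved(prefix_len, stride=STRIDE):
    """Binary-trie node visits vs. supernode visits for one lookup."""
    trie_visits = prefix_len
    supernode_visits = -(-prefix_len // stride)  # ceiling division
    return trie_visits, supernode_visits

print(supernode_root_levels())  # [0, 4, 8, 12, 16, 20, 24, 28] -> 8 levels
print(accesses_saved(24))       # (24, 6)
```

With 8 supernode levels in place of 32 trie levels, a cache hit on a deep supernode skips most of the walk, which is the source of the memory-access savings reported later.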

Page 4: Power Efficient IP Lookup with Supernode Caching

Proposed supernode caching (cont.)

Page 5: Power Efficient IP Lookup with Supernode Caching

Proposed supernode caching (cont.)

For a destination IP address:

Step 1: We search the corresponding cache set using its leftmost 12 bits.

Step 2: Bits 12 to 23 and bits 12 to 15 are used as tags for 24-bit and 16-bit IP prefixes, respectively.

Step 3: If both tags match, we select the longer prefix.

Step 4: After tag comparison, we select the corresponding entry in the cache's data array, which holds the address of the supernode in memory. A cache miss triggers a cache update and replacement; the authors employ an LRU replacement policy. The search continues from the matched supernode in memory and proceeds downward until the longest matching prefix is found. If a 4-level supernode is found in the cache, only one memory access is needed for the next hop.
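The four steps above can be sketched as a small set-associative cache model. The field widths (12-bit set index, tags from bits 12–23 and 12–15, LRU replacement) follow the slides, but the class and method names here are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the slides' set-associative supernode cache lookup.
# Bit numbering counts from the most significant bit of a 32-bit address.
from collections import OrderedDict

class SupernodeCache:
    def __init__(self, ways=4):
        self.ways = ways
        # One LRU-ordered dict per 12-bit set: (tag, prefix_len) -> supernode address
        self.sets = [OrderedDict() for _ in range(1 << 12)]

    def lookup(self, ip):
        idx = ip >> 20                         # Step 1: leftmost 12 bits pick the set
        tag24 = (ip >> 8) & 0xFFF              # Step 2: bits 12..23 tag a 24-bit prefix
        tag16 = (ip >> 16) & 0xF               #         bits 12..15 tag a 16-bit prefix
        s = self.sets[idx]
        for tag, plen in ((tag24, 24), (tag16, 16)):  # Step 3: prefer the longer prefix
            if (tag, plen) in s:
                s.move_to_end((tag, plen))     # LRU touch on hit
                return s[(tag, plen)]          # Step 4: supernode address in memory
        return None                            # miss: fetch from memory, then insert()

    def insert(self, ip, plen, addr):
        idx = ip >> 20
        tag = (ip >> 8) & 0xFFF if plen == 24 else (ip >> 16) & 0xF
        s = self.sets[idx]
        if len(s) >= self.ways:
            s.popitem(last=False)              # evict the least recently used entry
        s[(tag, plen)] = addr
```

On a hit, the returned address lets the search resume directly at the cached supernode, so the trie levels above it are never touched.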

Page 6: Power Efficient IP Lookup with Supernode Caching

Proposed supernode caching (cont.)

The authors store the addresses of the root (level-1) and second-level supernodes, which cover prefixes of up to 8 bits, in a fully associative cache. This is practical because there are at most 2^8 + 1 = 257 such addresses.

For the third-level and fourth-level supernodes, the search latency is reduced by introducing a set-associative cache.
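The capacity bound for the fully associative tier follows from simple counting, sketched below under the slides' own arithmetic (one root supernode plus at most one second-level supernode per 8-bit prefix; the variable names are illustrative).

```python
# Upper bound on entries in the fully associative tier, per the slides.
root_supernodes = 1            # the single level-1 (root) supernode
second_level_max = 2 ** 8      # at most one supernode per 8-bit prefix
fully_associative_max = root_supernodes + second_level_max
print(fully_associative_max)   # 257
```

Because 257 entries fit comfortably in a fully associative structure, the shallow tiers get fast exact matching, while the far more numerous third- and fourth-level supernodes go into the cheaper set-associative tier.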

Page 7: Power Efficient IP Lookup with Supernode Caching

Experiment results

Routing tables from http://bgp.potaroo.net
Trace files from ftp://pma.nlanr.net/trace/

Four schemes:

Without cache
With a set-associative IP address cache
With a TCAM
With a supernode cache

Page 8: Power Efficient IP Lookup with Supernode Caching

Experiment results (cont.)

Page 9: Power Efficient IP Lookup with Supernode Caching

Experiment results (cont.)

Page 10: Power Efficient IP Lookup with Supernode Caching

Experiment results (cont.)

Page 11: Power Efficient IP Lookup with Supernode Caching

Experiment results (cont.)

Page 12: Power Efficient IP Lookup with Supernode Caching

Experiment results (cont.)

Page 13: Power Efficient IP Lookup with Supernode Caching

Conclusion

According to the experimental results:

An average of 69% (and up to 72%) of total memory accesses can be avoided by using a small 128KB supernode cache.

A 128KB supernode cache outperforms a set-associative IP address cache of the same size by 34% in the average number of memory accesses.

Compared to a TCAM of the same size, the proposed supernode cache saves 77% of the energy consumption.