Also published in Engineering Progress, Fall 2014
Qun (Q.) Jane Gu is quite concerned about the “Interconnect Bottleneck.”
Gu, an assistant professor in the Department of Electrical and Computer Engineering, understands the biggest challenge that faces the World Wide Web. In a word: space.
This seems counter-intuitive, since users generally imagine the Web’s parameters to be boundless. But we’re talking about bandwidth space, which definitely has its limits.
The Internet’s exponential growth, fueled by its increasing importance to world communications and commerce, has placed ever-expanding demands on inter- and intra-chip communication bandwidth. But the electromagnetic spectrum, in terms of the ranges currently exploited, has only so much “space.” The cellular and wireless spectrums already are quite crowded, due to the explosive growth of both users and the increasingly varied — and sizable — types of data being transmitted. Hence, the interconnect bottleneck: the limits on integrated circuit performance that result from connections between components (as opposed to their internal speed).
The communications industry needs to achieve a superior interconnect solution that delivers high energy efficiency, high bandwidth density, high reliability and low cost … not to mention fast adaptability and scaling capabilities.
She pursued her Ph.D. at UCLA, embracing research that reflected her true passion: using silicon semiconductor processing to design circuits that meet the ever-increasing demands for the universal capabilities users expect from their gadgets. This quickly led to her work with interconnect: the high-speed links that provide connectivity within an array of state-of-the-art products.
“Back then, establishing chip-to-chip connections in vertically stacked chips, or three-dimensional ICs, was challenging,” she recalls. “The goal was a high data rate, low power, small size and high reliability. We successfully developed a capacitive-coupling-based, short-distance interconnect scheme that achieved an extremely high data rate with very low power and high efficiency, without any process modification, for high reliability.” She was only two years into her Ph.D. work when this research was published at the 2004 Institute of Electrical and Electronics Engineers (IEEE) International Symposium on Circuits and Systems (ISCAS). Further improved work was published in 2007 at the flagship conference of the IEEE Solid-State Circuits Society, the International Solid-State Circuits Conference (ISSCC).
Gu completed her doctorate in 2007, and worked in research and teaching positions in California and Florida. She joined the UC Davis College of Engineering in August 2012.
“The environment at UC Davis and in my department is marvelous,” she says. “I’m indebted to all my colleagues here; they’re approachable, supportive and collaborative in all sorts of ways, large and small. Our department staff members also are extremely helpful; when I need something, they always respond as quickly as possible. I feel very welcome here.”
She made herself noticed almost immediately. In December 2013, the National Science Foundation’s Division of Electrical Communications and Cyber Systems awarded her a five-year grant of $400,000, for a research project titled “Terahertz Interconnect, the Last Centimeter Data Link.”
Which brings us back to space.
“Interconnect faces two challenges. The first is the need to keep increasing the data transmission rate from chip to chip, or within a single chip. The data transmission rate never stops climbing, because demand keeps going up. So the density of data being transmitted, the bandwidth density, must be allowed to increase without increasing the size of the device.
“Cell phones can’t be allowed to get physically larger,” she laughs, “or they won’t fit into our pockets. That means our desire to transmit more and more data must be squeezed into the same physical space.”
But that’s only part of the problem.
“We also must increase power efficiency, or else power consumption will increase in tandem with the rising data rate. That would be bad: Within a few years, the power consumption related to data transmission alone would hit intolerable levels, and all the chips would burn up. So, power must be reduced by orders of magnitude.”
Gu believes she has a solution, and it relates to the terahertz (THz) region within the electromagnetic spectrum, which covers a frequency range roughly 100 times that currently occupied by all radio, television, cellular radio, Wi-Fi, radar and other users.
In other words, plenty of space.
“The terahertz region hasn’t been studied or developed much yet, because its frequency has been regarded as too high to interface with conventional electronics, and too low for the optical spectrum. Despite this, its position ‘in the middle’ can be used to leverage the advantages of both the electronic and optical sides.”
It’s essentially a hardware problem, which is right up Gu’s alley.
“We must design an electronic device that can generate and receive such high-frequency signals; that’s the electronics side. And we also need to design the ‘channel’ that can provide the low-loss/high-bandwidth capabilities; that’s the optical side. Then we need to integrate these two achievements, to produce the desired terahertz interconnect.”
Gu expects to develop the first prototype within two years. “It won’t have very high throughput and very low power consumption immediately,” she admits, “but it will demonstrate proof of concept. After that, we’ll work to improve performance for higher bandwidth density and lower power consumption.”
All of which raises a question: Given the relentless growth of data movement (not merely phone calls, but also video streaming, cloud storage and the exchange of sensitive data, to cite just a few examples), will bandwidth technology itself be replaced by something entirely new and different that we can’t even imagine yet?
“Not any time soon,” Gu smiles, “but technology does always evolve!”
— Derrick Bang