Posted On: 11/07/2018 6:41:19 AM
It is clear the industry is over a barrel as it looks ahead to the 400G and 800G nodes. None of the legacy technologies can scale down in cost to $1/Gbps or down toward 1 V drive voltages, and the primary reason is that all of them are capped at standalone symbol rates in the 35-50 Gbaud range. To reach the next node(s) they are all forced into more lanes and more complex modulation schemes, which adds considerable cost plus the error-correction headaches the industry doesn't like.
Only LWLG has a technology proven to scale to standalone speeds already three times faster than all the legacy technologies, 150 Gbps standalone, and the mathematics says there is even more standalone speed available beyond what has been proven to date.
So once the 50 Gbps device is optimized and out to customers, it becomes a no-brainer for them, knowing that future standalone speeds of 100 Gbps and 150 Gbps can then be optimized as well, and the industry can breathe a sigh of relief for the next couple of nodes using simple NRZ/OOK.
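A rough arithmetic sketch of the lane/speed trade-off behind the argument above (a sketch only; the lane counts, baud rates, and bits-per-symbol values are the round numbers used in this post, and the function name is mine, not anything from LWLG or the white paper):

# Aggregate rate = lanes x symbol rate x bits per symbol.
# Legacy parts capped near 50 Gbaud need more lanes and/or PAM4 to reach
# 400G/800G; a faster standalone NRZ lane gets there with simple OOK.
def aggregate_gbps(lanes, gbaud, bits_per_symbol):
    """Aggregate line rate in Gb/s for a parallel optical link."""
    return lanes * gbaud * bits_per_symbol

# NRZ/OOK carries 1 bit per symbol, PAM4 carries 2 bits per symbol.
print(aggregate_gbps(lanes=8, gbaud=50, bits_per_symbol=1))   # 400G as 8 x 50G NRZ
print(aggregate_gbps(lanes=4, gbaud=50, bits_per_symbol=2))   # 400G as 4 x 100G PAM4
print(aggregate_gbps(lanes=8, gbaud=50, bits_per_symbol=2))   # 800G as 8 x 100G PAM4
print(aggregate_gbps(lanes=4, gbaud=100, bits_per_symbol=1))  # 400G as 4 x 100G NRZ
print(aggregate_gbps(lanes=8, gbaud=100, bits_per_symbol=1))  # 800G as 8 x 100G NRZ

The last two lines are the point of this post: if the standalone lane itself gets faster, the higher nodes fall out of plain NRZ with modest lane counts instead of PAM4.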
White Paper: How Fiber Optics Painted Itself into a Corner
Link
http://lightwavelogic.com/Profiles/Investor/I...Validate=3
Excerpt
Recall that two different solutions for 400G are shown in Figure 2. One reason that multiple solutions exist is discomfort with the use of 8 lanes. Parallelism beyond 4 lanes has not proved economical or small enough previously. But the industry is even more leery of increasing the modulation complexity. Complex modulation schemes are more sensitive to noise and are more error-prone. Notice that in Figure 3 the difference in PAM4 levels is only 1/3 as much as for NRZ. As a result, the optoelectronics on both transmit and receive ends must distinguish between these smaller differences in signal level. They require power-hungry signal processing electronics not only to encode and decode the data but to offset these errors. The industry already has experience with very complex modulation schemes (e.g. the gray and black circles in Figure 1) and deems them too large, too power-hungry, and way too expensive—in other words, practical only for national backbone networks. Moreover, they are approaching fundamental limits.
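A minimal numeric check of the Figure 3 point (the helper function and the normalized swing are my own illustration, not from the white paper): for the same peak-to-peak swing the gap between adjacent levels is swing / (levels - 1), so PAM4's gap is one third of NRZ's.

def level_spacing(swing, levels):
    # Gap between adjacent signal levels for a given peak-to-peak swing.
    return swing / (levels - 1)

swing = 1.0                               # normalized peak-to-peak amplitude
nrz = level_spacing(swing, levels=2)      # 1.0
pam4 = level_spacing(swing, levels=4)     # 0.333...
print(pam4 / nrz)                         # ~0.333, i.e. 1/3 as much as NRZ

That smaller spacing is why PAM4 receivers need the extra signal processing and error correction the excerpt describes.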