Wherein Lafayette gets favorable press:
The New York Times, in a story unfavorably comparing US broadband to world standards, spares a paragraph to laud Lafayette and other muni-built challengers to the turgid American broadband scene.
The big But:

"Some American cities have such superfast broadband that if they were ranked against foreign countries, several, like Bristol, Va., Chattanooga, Tenn., and Lafayette, La., would rank in the top 10. Those three cities built municipal fiber-optic networks, and those networks can operate just as fast as the swiftest connections in Hong Kong, Seoul and Tokyo."
But...and there is a but:
"But those speeds can come at a very high price. In Chattanooga, Internet service of 1 gigabit a second costs a consumer $70. But in Lafayette, the same speed costs nearly $1,000 a month. In Seoul, it’s about $31 — a result of government subsidies to encourage Internet use."

Here's the thing. Having world-class broadband is a good thing. But, really, a thousand dollars for a residential gig is just plain, flat too much. Why? Because no one uses a gig continuously except a large-scale commercial concern. Such (rare) businesses need a full gig continuously, and they need that gig plus quality assurance at a scale that a best-effort residential network cannot provide. For a dedicated commercial line with a full gig and quality assurance, a thousand bucks would be a great price. But that is not what is being offered for 70 dollars in Chattanooga, nor in the Kansas Cities for the same 70 dollars, nor, for that matter, for the 31 subsidized dollars in Seoul. LUS Fiber is also offering a residential, best-effort gig. But it is offering it at a thousand dollars, and that's not reasonable. Chattanooga and Google both demonstrate that you can do a best-effort gig for less, much less.
These other guys are able to sell a gig for only 70 bucks because, frankly, real users never, ever use anything like that much capacity for more than the smallest fraction of a moment. A gig user will typically put no more strain on the network's paid-for capacity than a 100 or a 50 meg user. That's why Chattanooga doesn't see a reason to charge ten times its 100 meg price, and Google doesn't see a reason to offer anything but a simple 1 gig to all.
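A quick back-of-envelope sketch makes the point. All the numbers below are illustrative assumptions, not LUS or Chattanooga figures: the key idea is that the load a customer puts on the network depends on how much data they actually move, not on their tier's peak speed.

```python
# Illustrative: average network load depends on data actually moved,
# not on the tier's advertised peak speed. Assumed: a heavy user moves
# about 3 GB in a busy evening hour regardless of which tier they buy.
GB_PER_HOUR = 3.0  # assumed data consumed per busy hour (illustrative)

def avg_load_mbps(tier_mbps):
    # time-averaged demand in megabits/second (1 GB = 8000 megabits)
    avg = GB_PER_HOUR * 8000 / 3600
    return min(avg, tier_mbps)  # demand can't exceed the tier's ceiling

def busy_seconds(tier_mbps):
    # how long the user actually runs at full tier speed each hour
    return GB_PER_HOUR * 8000 / tier_mbps

for tier in (50, 100, 1000):
    print(f"{tier:>5} Mb/s tier: ~{avg_load_mbps(tier):.1f} Mb/s average load, "
          f"bursting only {busy_seconds(tier):.0f} s of the hour")
```

Every tier lands on the same average load; the gig customer just finishes each burst sooner. That is why a tenfold price multiple for the gig tier is hard to justify on cost.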
Given all that, LUS Fiber should follow suit and charge the going 70 dollars for a gig, if only to maintain bragging rights. And if we aren't going to do so, LUS Fiber should tell Lafayette just exactly why not. Now there is good reason to think that declining to sell a gig of network "speed" at a reasonable price is fundamentally the honest thing to do. (But selling that gig at so high a price that it is not a remotely rational purchase is a bad way of evading telling people just why you don't want to sell them a gig of "speed.")
Why the mania for selling a gig to residents is pretty much dishonest:
To be frank, LUS Fiber might be telling itself that it cannot reliably provide a gig of "speed"* (see the note below)—and that it won't participate in pretending that it can. It is certainly true that it cannot. And that's true of Google and especially Chattanooga as well. That's because most sites cannot serve out data in a gig stream, and they aren't serving data in large enough chunks that a gig of capacity would benefit an end-user anyway. Google can locate its enormous caches inside its own network, and for those hits it can provide surprising speed. But even mighty Google cannot make a third-party server faster or make the fragments of an internet page large enough to benefit from a true gig. What a gig residential connection typically buys a residential consumer is something effectively south of 100 megs anytime they want it. That will feel instantaneous in most uses. And that is the glorious grail of transport: perceptually instantaneous response—which at higher speeds becomes more a matter of the number of requests for information and the length of time each round-trip request takes.
The bottom line is that the speed we experience, which is the speed we care about, is outside of any ISP's control. Folks who are selling a gig of capacity as if it were a decent proxy for speed are being, at best, disingenuous.
*speed: OK, let's at least try to understand what speed really is and why nobody can guarantee "a gig" of speed. A gig is basically a capacity measure. It's the size of the pipe: the amount of data it will transmit when things are optimal. When we talk about speed we are usually trying (poorly) to talk about responsiveness. If we have to wait "too long" our speed is slow. If we can live with the wait we are usually willing to say our speed is "ok." If we don't have to wait at all we say "that's fast!" So you can see that when capacity is the largest limiting factor, when the pipe is always full, it is reasonable to focus on capacity and getting more of it—lack of capacity is what's keeping you waiting.
But capacity hasn't been the limiting factor for most of us in a long time, if it ever was. Capacity matters most when we want to download something big and use it right away. That's pretty much exclusively videos/movies these days. Because capacity is limited we've learned how to stream big video files; streaming lets us start watching when just the first little bits have been downloaded. A stream is deliberately designed to not saturate the connection if at all possible, to not use the full capacity available. Examples are YouTube and Netflix. Netflix's Super HD 1080p service at its best setting only uses 7 Mb/s. Only. 7. Megs. For the most extreme activity most of us will engage in during any month. Capacity is not the bottleneck any more. Thinking a gig of capacity will enhance most of our experience is illusory.
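The arithmetic is stark. Using the 7 Mb/s figure from above against a few connection sizes (the connection sizes themselves are just illustrative tiers):

```python
# Back-of-envelope: what fraction of a connection does a top-quality
# Netflix "Super HD" stream (about 7 Mb/s, per the text above) use?
STREAM_MBPS = 7  # Netflix Super HD 1080p at its best setting

for pipe_mbps in (50, 100, 1000):  # illustrative connection tiers
    utilization = STREAM_MBPS / pipe_mbps * 100
    print(f"{pipe_mbps:>5} Mb/s pipe: stream uses {utilization:.1f}% of capacity")
```

On a gig connection, the heaviest thing most households do touches well under one percent of the pipe.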
No, most of the problem with getting to perceptually instantaneous responses lies with aggregate response times, not capacity. And experienced response times are, chiefly, a product of the number of round trip requests for information that a web page/app makes of a multitude of servers across the web and the round trip time for each request (aka latency). Many of these requests for information must be made serially; that is, they have to wait for a response to the first request before making a second, better-informed request. So if the current averages are something like 16 different domains addressed by each page, and something like 39 requests made of those domains, it gets pretty clear that the aggregate round trip time for the requests gets large fast—even before you consider the serial nature of a large fraction of those requests. Very quickly network latency and the design of the requests of the web page/app become, together, the limiting factor. That's where we are today. [For a less simplified version of the larger problem see Bandwidth IS NOT Speed]
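A toy model shows why the pipe size stops mattering. Everything here except the request count is an assumed illustration: 50 ms per round trip, a chain of 8 requests that must wait on one another, and a 1.6 MB page.

```python
# Toy model: page load time as (serial latency) + (transfer time).
# Assumed numbers: 50 ms round trips, a serial chain 8 deep
# (HTML -> CSS -> fonts, etc.), and a 1.6 MB page. The 39-requests
# figure in the text is what makes chains like this common.
LATENCY_S = 0.050   # assumed round trip time per request
SERIAL_DEPTH = 8    # assumed requests that must happen one after another
PAGE_MB = 1.6       # assumed total page weight in megabytes

def load_time(bandwidth_mbps):
    transfer = PAGE_MB * 8 / bandwidth_mbps  # seconds spent moving bits
    waiting = SERIAL_DEPTH * LATENCY_S       # seconds spent on round trips
    return transfer + waiting

for mbps in (50, 100, 1000):
    print(f"{mbps:>5} Mb/s: {load_time(mbps):.2f} s total "
          f"({SERIAL_DEPTH * LATENCY_S:.2f} s of that is pure latency)")
```

Under these assumptions, jumping from 100 Mb/s to a full gig shaves barely a tenth of a second, because the latency term never shrinks. That is the sense in which the ISP's "speed" claim runs out of road.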
Unfortunately no ISP, not LUS Fiber, not Google, not Chattanooga, has much control over overall responsiveness (latency × requests). Now it's generally true that all-fiber networks have nicely lower in-network latency. But an ISP can't control latency beyond its network borders. Further, no ISP has anything at all to do with common web design practices regarding requests. All of that means they can't really make honest claims about "speed." But marketing demands differentiators, and capacity is something that ISPs can actually offer. Hence the ads about bandwidth.