Posted by: Rich Clarke on June 10, 2010 at 12:23 pm
The FCC’s annual report on competition in mobile wireless is 308 pages chock-a-block with detailed information and analysis about developments in this quickly evolving business. We read how customers are continuing to flock to mobile services – especially broadband data. We are informed that TracFone, a subsidiary of global giant América Móvil, has become the fifth-largest carrier in the U.S. with over 14.4 million subscriptions. We learn that U.S. wireless customers pay only a third as much per minute as their international peers and consume three times as many minutes.
We read that over the past three years text messaging per user has octupled as per-message prices have dropped by 70%. We are informed that our smartphone take-up leads the world and is continuing to explode – driven by multiple new app stores and mobile device operating systems. We are told that customer churn between carriers remains significant but stable at one out of every four customers each year, indicating that lock-in is weak but service quality is not declining. We learn that even more capable 4th generation wireless technologies are on the cusp of being deployed. And we read that no other major developed country has as many similarly sized large competitors or is less concentrated.
Faced with all of this left-brain information and analysis indicating the tremendous pro-consumer performance of the U.S. mobile wireless industry, it is astounding that certain commenters have had a right-brain reaction, fixating on one concentration statistic as the report’s headline conclusion. This scarlet statistic is 2848, a weighted average of the Herfindahl-Hirschman Indexes (HHIs) that the FCC has calculated for each of 172 geographical areas (known as EAs) covering the country. While some are anxious because, by the FCC’s calculation, this statistic now exceeds 2800 and has risen moderately over the past five years, economists know this is of no great concern. There are several reasons.
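For readers unfamiliar with the statistic itself, the arithmetic is simple: a market’s HHI is the sum of the squared market shares of its competitors, and the report’s headline figure is a population-weighted average of the per-EA HHIs. The sketch below uses made-up shares and populations purely to illustrate the calculation – these are not the FCC’s data.

```python
# Illustrative HHI arithmetic with hypothetical numbers (not the FCC's data).

def hhi(shares):
    """Sum of squared market shares; shares are percentages summing to ~100."""
    return sum(s ** 2 for s in shares)

# Two hypothetical EAs: (population, carrier market shares in percent).
eas = [
    (5_000_000, [35, 30, 20, 10, 5]),   # per-EA HHI = 2650
    (2_000_000, [40, 35, 15, 10]),      # per-EA HHI = 3150
]

# Population-weighted average across the EAs.
weighted = sum(pop * hhi(shares) for pop, shares in eas) / sum(pop for pop, _ in eas)
print(round(weighted))  # → 2793
```

Note how one moderately concentrated large market and one more concentrated small market blend into a single headline number – which is precisely why a lone weighted average says little by itself.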
Posted by: Brent Olson on June 9, 2010 at 10:33 am
ICYMI, the Broadband Internet Technical Advisory Group (TAG) was announced this morning. TAG is a group of leading broadband and high-tech companies, spearheaded by Professor Dale Hatfield, a former Federal Communications Commission (FCC) Chief Technologist and currently the Executive Director of the Silicon Flatirons Center.
What does TAG intend to do? This is from its news release, and sums up the group and its mission nicely:
“The TAG’s mission is to bring together engineers and other similar technical experts to develop consensus on broadband network management practices or other related technical issues that can affect users’ Internet experience, including the impact to and from applications, content and devices that utilize the Internet.”
Posted by: Joan Marsh on June 4, 2010 at 12:14 pm
One of the many things that caught my eye in the FCC’s most recent Wireless Competition Report is a detailed analysis of carrier spectrum holdings. Not surprising – spectrum has been called the lifeblood of the industry.
In the report, the Commission takes an in-depth look at who holds what and where. But it draws a line at 1 GHz, giving spectrum holdings below that level their own set of stats. As a wise man once said, “Statistics are like a bikini. What they reveal is suggestive, but what they conceal is vital.”
So let’s look first at what the 1 GHz analysis reveals. The Commission concludes that lower frequency bands, such as the 700 MHz and Cellular bands, “possess more favorable intrinsic spectrum propagation characteristics than spectrum in high bands.” True, but not particularly vital. From a historical perspective, the introduction of 120 MHz of PCS spectrum at 1.9 GHz revolutionized the industry, clearly demonstrating that higher-band spectrum can play, and has played, a significant role in fostering competition.
Backing the notion of “favorable” low band propagation characteristics, the report cites a propagation model developed to estimate coverage requirements in different bands. The model concludes that lower frequency spectrum requires fewer sites. We agree. But while such models have abstract validity, they say little about the capacity-centric deployments that network providers are designing today to support 3G and 4G services.
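The report’s model is far more detailed, but the underlying coverage arithmetic can be sketched with a simple free-space path-loss calculation. Everything here is illustrative – the 110 dB link budget is an assumed number, real deployments use terrain-aware models, and, as noted above, coverage-driven site counts say nothing about capacity-driven builds.

```python
import math

def fspl_db(d_km, f_mhz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 32.44."""
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

def cell_radius_km(max_loss_db, f_mhz):
    """Largest distance at which free-space loss stays within the link budget."""
    return 10 ** ((max_loss_db - 20 * math.log10(f_mhz) - 32.44) / 20)

# Assumed 110 dB link budget; compare a low band (700 MHz) to PCS (1900 MHz).
r700 = cell_radius_km(110, 700)
r1900 = cell_radius_km(110, 1900)

# Sites needed to cover a fixed area scale as 1/radius^2, so under free-space
# assumptions the site-count ratio is simply (1900/700)^2.
site_ratio = (r700 / r1900) ** 2
print(round(site_ratio, 2))  # → 7.37
```

The sketch shows why low-band spectrum is “favorable” for coverage: roughly seven times fewer sites for the same footprint under these assumptions. But once a network is sized by traffic rather than footprint, that advantage largely washes out, which is the capacity-centric point above.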
Posted by: Carl Povelites on May 27, 2010 at 2:13 pm
I have now been in the wireless industry for more than 20 years. With the release of the FCC’s 14th Annual Wireless Competition Report, I’ve begun to wonder whether I have been sleeping the last couple of years or whether I am on the show Lost (maybe living in an alternative universe?). For those who know me: I am no beauty and I certainly don’t get enough sleep (although I do enjoy my naps). So I have come to conclude that it is the latter, and the smoke monster is very disconcerting.
Having been around awhile, I lived through the early days when the states and the FCC were grappling with how they were going to regulate this thing called wireless (actually, we called it cellular then – yes, I’m that old). Remember, the wireless sector was conceived as competitive with two licenses being awarded in each market. In these early days, we were busy running from state to state, filing Certificates of Public Convenience and Necessity (CPCNs) in order to get approval to build facilities and start offering services to customers. In many states, we filed tariffs and even had to file tariff changes 30 days in advance for any promotion that we wanted to offer (this made it so much simpler to find out what your competitor was up to ahead of time – now all we have are rumors in the blogosphere). Eventually, the industry was deregulated in many states.
With the passage of the Omnibus Budget Reconciliation Act in 1993, Congress authorized spectrum auctions and set forth a national framework for the regulation of wireless, preempting states on entry and rate regulation. It also set forth the requirement for the FCC to produce its annual competition report. With the PCS spectrum auctions, the competitive nature of wireless was cemented.
Posted by: Hank Hultquist on May 26, 2010 at 1:06 pm
When a Red Sox fan hears the words “separation” and “freeze” in close proximity, he or she might think of Ted Williams. Unless, of course, that Red Sox fan also happens to be a telecom policy wonk (like your humble blogger). In which case, he or she might also think of “Part 36” and the regulatory alchemy by which “costs” are “separated” between the “interstate jurisdiction” and “intrastate jurisdiction.” I put these words in quotes because their peculiar usage in this arcane context should always be kept “separated” (at least mentally) from their normal meanings. In this context, we’re not talking about costs but expenses; they’re not separated like an egg, but divided like the Solomonic child; and the distinct jurisdictions arise only as a consequence of the formalities associated with the process. At its heart, “separations” is an artifice created to enable bi-jurisdictional regulation.
The rate-making process for regulated utilities always involved somewhat complicated formulas for translating capital expenditures, overhead and administrative costs (SG&A), and other expenses into prices. But only in America did regulators further “refine” this process by subdividing these things into their “interstate” and “intrastate” subparts. (I like to think of them as telecom equivalents to Leibniz’s monads, but that’s not quite right since I have complete confidence in the ability of ingenious regulators to divide them even further.)