Broadband and Governance: Empowerment or Illusion?
Proponents of ICT4D, roaming the corridors of power restlessly, find that reasoned arguments for supporting the rapid dissemination of broadband connectivity in India seem to bounce endlessly off the walls. In the meantime, the doors of decision makers seem ever more open to the blandishments of commercial technology providers, whose bulging balance sheets reflect their seductive views on where the demand really lies: in the ready pockets of the arrivistes.
Do alternate technologies exist in reality, and can they really provide meaningful leverage for development? Here’s a quick look at the choices for India.
Smart connectivity, a sea change from the analogue technologies ubiquitously deployed in the developed economies of the 20th century, appears to be a powerful argument for the spread of equitable governance. Proponents of these technologies argue persuasively that a “knowledge society” is one armed with more information (and by corollary, better information): better information enables better choices.
Paradoxically, despite the apparently liberalised economy now prevailing, the overwhelming thrust of technology deployments in fact continues to be ‘dumb’ solutions. Some admittedly wear a digital guise: GSM and CDMA, and the newer, digitally enhanced, landline switched-circuit technologies of the last decade. However, like wolves in sheep’s clothing, these solutions conceal their capacity to take more than they give: the illusion of information transformation, when the reality is barely more than a mere conveyance of ephemeral data.
All data transfers take place through centralised ‘switches’, currently powerful microprocessor-controlled devices demanding enormous supporting infrastructure, in buildings and electricity, in order to function. Any interruption in the ability of these switches to function results in total breakdown of service. This data exchange therefore comes at a huge, yet mostly hidden, cost.
The picture of the Indian ship of state racing through the ocean of economic development, with the skyline stippled by the awesome beauty of icebergs, comes irresistibly to mind. The dependence of ordinary people on faceless and occasionally unresponsive commercial entities for basic telecommunications, representing a paradigm shift from an earlier, exceedingly inadequate but state-supported, system is clear. It was necessitated by an unfortunate belief that communication was a luxury: this fallacy is completely discredited today, with telecommunications the very backbone of grassroots-driven development.
Traditional telecommunication solutions, both landline (wired or fiber, the concept is the same) and mobile, use the principle of circuit switching. In this model, an exclusive circuit is reserved for each conversation, or exchange of data. In effect, a portion of the entire connectivity infrastructure is devoted entirely to this particular dialogue. Digital enhancements to this model enable such sophisticated features as the sharing of multiple conversations in the same space, in the form of conference calls, but each additional participant actually occupies an additional circuit.
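The exclusivity described above can be sketched in a few lines of code. The toy model below (all names are illustrative, not drawn from any real switching system) reserves one circuit per call for the call's whole duration, so a third caller on a two-circuit exchange simply gets a busy tone:

```python
# Toy model of circuit switching: each call reserves one end-to-end
# circuit for its entire duration, regardless of how much is said.
# Class and method names are illustrative only.

class CircuitExchange:
    def __init__(self, circuits):
        self.circuits = circuits   # total circuits the switch provides
        self.in_use = 0

    def place_call(self):
        """Reserve a circuit; return False (busy tone) if none is free."""
        if self.in_use >= self.circuits:
            return False
        self.in_use += 1
        return True

    def hang_up(self):
        """Release a circuit only when the call ends."""
        self.in_use -= 1

exchange = CircuitExchange(circuits=2)
print(exchange.place_call())   # True  - first circuit reserved
print(exchange.place_call())   # True  - second circuit reserved
print(exchange.place_call())   # False - all circuits busy
```

A conference call in this model is no cheaper: each additional participant occupies a further circuit, exactly as the text notes.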
At this point, I think it is important to note that the apparent exclusivity of the circuit by no means assures personal privacy: the nature of the solution in fact leaves immense scope for the surreptitious transfer of the content of such exchanges to third parties, with or without the knowledge or consent of the conversationalists.
This is equally true of GSM and CDMA, although being digital, these technologies are inherently capable of powerful encryption. There is little doubt that both protocols are compromised by design, provided with backdoor approaches to decryption.
Quite apart from these characteristics, this kind of communication network also features the need to reserve a complete end-to-end circuit for each call. GSM and CDMA technology networks also ‘poll’ devices and switches frequently, using reserved frequencies in order to achieve this activity. These frequencies are also used for ‘handshaking’, the mutual exchange of identification needed to correctly place the call, prior to allocating the dedicated network resource needed to maintain it until conclusion.
To summarise, circuit-switched telecommunications have a history of over a century of existence, and have progressed from electromechanical, completely analogue servo-mechanised switchgear, to electronic, largely digitally operated solid-state mechanisms. They are characterised by dependence on intermediate expensive and resource-hungry devices for proper functioning of the system.
The logistics of delivering call fulfilment also demands enormous resource allocation from the network, with complete end-to-end circuits locked up exclusively for the entire duration of the call, and additional spectrum reserved for ‘handshaking’ and ‘polling’.
Interestingly, the lengthy history of this exclusively dedicated resource paradigm makes it difficult for many users in India to even conceive of alternates. However, they do exist, and have come about from the diametrically opposite direction of digital computer technology development.
While microprocessors are heavily used in modern switched circuit telecommunications, they are mainly used to control the switching function, and play little role in subsequent activity. Microprocessors have, on the other hand, been central to the development of low-cost, so-called ‘personal’ computing, systems built on relatively inexpensive general purpose computers that enable a variety of applications from games to heavy-duty scientific calculations.
At the very early stages of the development of microcomputers, it became an obvious advantage to be able to link them together in digital networks, harnessing the power of both devices and human users to work together collaboratively. Until 1997, such networks were largely connected physically, using various sophisticated cabling techniques to enhance the quality and throughput of data interchange.
However, that year, IEEE, the Institute of Electrical and Electronics Engineers, the standards body for electronics, issued the 802.11 standard for wireless data networking. It had been under discussion for several years, and finally all the participating manufacturers agreed to settle on a specification that all could meet. The new standard soon became known as Wi-Fi, a play on the high-fidelity (‘Hi-Fi’) standards for home audio quality.
The new standard allowed suitably equipped computers to exchange data wirelessly, using tiny RF transceivers built on circuit boards with the necessary computer serial communication interfaces. Current interfaces include fast USB; this opens innovative possibilities that are the subject of enthusiastic research and development, about which more follows.
A fundamental difference between ‘mobile’ telephony (actually, ‘cellular’ telephony is a more accurate description) and the 802.11x standards is the fact that the latter operate on the exchange of self-addressed ‘packets’ of data, rather than the exclusive switching of entire end-to-end circuits. Essentially, any slice of spectrum in any physical geographical segment of the network is only used for the time it takes to transfer a single packet from one node to the next.
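A minimal sketch of this difference, with purely illustrative names: each packet carries its own source and destination address, so two unrelated conversations can interleave on the same channel and still be reassembled correctly at their respective receivers:

```python
# Toy packet switching: every packet is self-addressed, so many
# conversations share one medium, each slice of airtime occupied only
# for as long as one packet takes to hop to the next node.

def make_packets(src, dst, message, payload_size=4):
    """Split a message into small self-addressed packets."""
    return [{"src": src, "dst": dst, "seq": i, "data": message[i:i + payload_size]}
            for i in range(0, len(message), payload_size)]

def shared_medium(streams):
    """Interleave packets from several conversations onto one channel
    (streams assumed equal length, for brevity)."""
    channel = []
    for group in zip(*streams):
        channel.extend(group)
    return channel

def reassemble(channel, dst):
    """Each receiver picks out only the packets addressed to it."""
    mine = sorted((p for p in channel if p["dst"] == dst), key=lambda p: p["seq"])
    return "".join(p["data"] for p in mine)

a = make_packets("A", "C", "namaste!")
b = make_packets("B", "D", "hello!!!")
wire = shared_medium([a, b])      # both conversations share one channel
print(reassemble(wire, "C"))      # namaste!
print(reassemble(wire, "D"))      # hello!!!
```

No circuit is ever reserved: between packets, the same spectrum is free for anyone else's traffic.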
The specifications were designed to enable wireless connectivity at relatively close range, mimicking LAN standards that use UTP cable, and using industry standard Internet Protocol (IP). For this reason, such wireless networks are nicknamed WLAN (wireless local area network), and offer data throughput rates that parallel those available in wired/cabled networks.
Very quickly, do-it-yourself enthusiasts found that by tweaking the hardware with improved antennae, it was very easy (and with home-built antennae, very cheap) to extend the distance between wireless points, from the original 100 meters to hundreds, then thousands of meters. While effective communication needs line of sight between points (nodes), this can (and has been shown to) extend to hundreds of kilometers. Recent devices such as the USB Wi-Fi dongle have been adapted to build even more sophisticated and reliable high-gain antennae, almost literally in kitchens, using cheap and convenient kitchen gear.
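A rough free-space link budget shows why higher-gain antennae stretch the range so dramatically. The transmit power and receiver sensitivity figures below are typical assumed values, not taken from any particular device, and real links (terrain, rain, interference) fall well short of the free-space ideal:

```python
# Back-of-envelope link budget at 2.4 GHz: free-space path loss grows
# with the logarithm of distance, so every extra dB of antenna gain
# multiplies the achievable range. All figures are assumed typical values.
import math

C = 3e8          # speed of light, m/s
FREQ = 2.4e9     # nominal 2.4 GHz band

def fspl_db(distance_m, freq_hz=FREQ):
    """Free-space path loss in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def max_range_m(tx_dbm=15, tx_gain_dbi=2, rx_gain_dbi=2, sensitivity_dbm=-85):
    """Largest line-of-sight distance at which the link budget closes."""
    budget = tx_dbm + tx_gain_dbi + rx_gain_dbi - sensitivity_dbm
    # Invert fspl_db: solve fspl_db(d) = budget for d.
    return 10 ** (budget / 20) * C / (4 * math.pi * FREQ)

stock = max_range_m(tx_gain_dbi=2, rx_gain_dbi=2)     # stub antennae
dish = max_range_m(tx_gain_dbi=24, rx_gain_dbi=24)    # parabolic grids
print(round(stock), round(dish))  # roughly 1.6 km vs roughly 250 km, free-space ideal
```

Swapping 2 dBi stubs for 24 dBi dishes at both ends adds 44 dB to the budget, which is why kitchen-built high-gain antennae turn a neighbourhood technology into a district-scale one.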
Commercial hardware manufacturers also began producing devices and antennae that exploited this feature, thus adding public credibility to the development. Since the devices are commonly sold for domestic use by a multiplicity of vendors, they benefit from competitive market forces, especially with regard to costs: for example, the street price of a USB 802.11 b/g mini-device (external) has dropped from about Rs 1,000 a year ago to Rs 200 currently.
An important factor is that spectrum regulators across the world (including in India) have allowed unlicensed outdoor use of the frequency band for this purpose. Actually, the original spectrum (nominally 2.4 GHz) was unlicensed to begin with, under international agreement, as ‘junk’, or unreserved, spectrum available for domestic use in microwave ovens, cordless phones and so on, but it is important to specifically allow its unrestricted outdoor use.
Modern variations of the standard (labeled ‘a’, ‘b’, ‘g’ and the latest ‘n’, still in draft as of September 2007) use other frequencies, but the unlicensed use of such frequencies is not universal (in India, one such band, nominally called 5.1 GHz, is restricted to indoor and campus use only).
As pointed out above, the development of this ‘industry’ was shared between the corporate sector and do-it-yourself enthusiasts, with much of the fruits of research being available in the public domain. This allowed the growth of public ‘free’ networks (ie, free of proprietary access): importantly from the point of view of this article, such networks have been very crucial to the provision and sustenance of rural networks.
Perhaps the most impressive of these ‘community’ networks is in Djursland, a rural district of Denmark. To date, some 20,000 rural homes are connected across several hundred square kilometers. This was a dying rural farming community, where modern societal services such as telecom, health and transport were being discontinued. This situation prevailed until 2003, when the network was initiated; some 35 commercial telecom providers had either outright refused service or proposed nonviable pricing plans at that point. The Djursland network, in contrast, is maintained and physically grown by its own community members.
In India, several scientific and technological institutions have demonstrated the practical utility of such networks, including IIT Kanpur. However, the only very large network in existence locally is the 2,000 plus nodes of the AirJaldi network run by the Tibetan Technology Center in and around Dharamsala, in Himachal. There are many other smaller networks, run by NGOs and local communities, scattered across the country.
Following the development of WLAN, commercial companies have been researching other ‘business models’ using wireless. The emerging standard, called WiMax, promises to deliver broadband across medium distances using a cellular distribution. It is currently under commercial testing in several regions, including India.
While this protocol also involves packet-switched data exchanges, all packets must be transacted through central servers rather than being self-addressed. Obviously, it is possible to rationalise somewhat the split of each packet between address and information components, and this accounts for the increased data throughput capability. However, the increased capacity of the new 802.11 ‘n’ standard makes some of this advantage moot. Cost (total cost of installation) comparisons between Wi-Fi and WiMax indicate a twenty-fold increase in the case of the latter.
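The header-versus-payload trade-off mentioned above comes down to a line of arithmetic: the bigger the payload carried per packet, the smaller the fraction of capacity consumed by addressing overhead. The 40-byte header figure below is a typical IP-stack assumption, not a WiMax or Wi-Fi specification:

```python
# Share of transmitted bytes that is user data rather than headers,
# for a few payload sizes. The 40-byte header is an assumed typical
# figure for an IP-style stack, used purely for illustration.

def goodput_fraction(payload_bytes, header_bytes=40):
    """Fraction of each packet's bytes that carry user data."""
    return payload_bytes / (payload_bytes + header_bytes)

for payload in (100, 500, 1460):
    print(payload, round(goodput_fraction(payload), 3))
# → 100 0.714 / 500 0.926 / 1460 0.973
```

Rationalising the address-to-payload ratio thus buys at most a modest throughput gain, which is part of why the raw capacity increase of 802.11 ‘n’ erodes WiMax's advantage.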
To summarise, it is economically and technologically possible for communities to set up and run their own very wide area data networks, primarily using industry-standard devices sold for domestic use and thereby taking advantage of economies of scale in their manufacture and marketing. It is also possible that new commercial wireless distribution of broadband data services will become commonplace in the future.
Since the exchange of data packets is entirely digital, digitally processed functions such as audio, video and multimedia simply represent resource allocations in the total data packet interchange, and given sufficient bandwidth can be served effortlessly within and through the network. Network applications such as VoIP, videoconferencing and so on are ubiquitous, and innovative variations in education, healthcare and other socially desirable applications are daily transforming service delivery in these sectors.
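As a worked example of a media stream as a resource allocation in the total packet interchange, the sketch below sizes one VoIP stream using textbook G.711 figures (64 kbit/s voice, 20 ms packets) and an assumed 40-byte IP/UDP/RTP header:

```python
# Bandwidth of a single one-way VoIP stream: the codec's bitrate plus
# per-packet header overhead. G.711 at 64 kbit/s with 20 ms packets is
# a standard textbook case; the 40-byte header is an assumed IP/UDP/RTP total.

def voip_bandwidth_kbps(codec_kbps=64, packet_ms=20, header_bytes=40):
    payload_bytes = codec_kbps * 1000 / 8 * (packet_ms / 1000)  # per packet
    packets_per_s = 1000 / packet_ms
    total_bps = (payload_bytes + header_bytes) * 8 * packets_per_s
    return total_bps / 1000

print(round(voip_bandwidth_kbps()))  # 80 - i.e. ~80 kbit/s per one-way stream
```

At roughly 80 kbit/s per direction, even a modest Wi-Fi link has room for many simultaneous calls alongside other data, which is exactly the "resource allocation" point made above.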
Importantly, the technology inherently allows users to move relatively seamlessly between common data streams and dedicated content streams such as telephony and television. This means that it is possible, in theory, to substitute traditional content access technologies with wireless data networks. Of course, in the interest of efficiency and minimising gross network utilisation, some applications, such as access to archives, are better run from servers that are deliberately kept as local as possible.
As it happens, such access is interestingly different, conceptually, from its parallels in the historical telecommunication paradigm. Continuous audio and video, for instance, arise from ‘streaming’ content delivered from storage servers, just as audio and television are delivered from storage media. The fundamental difference lies in the way the storage can be accessed: by deliberately selecting the preferred choice, which can be as particular as a single ‘track’, or file.
In traditional media, radio and television, this is done by ‘tuning’ the receiver to a particular station, and further ‘drill-down’ is not possible. Interactive multimedia isn’t even possible. And telephony is a completely different arena from the push media.
Naturally, this dynamic has a significant (and, not unexpectedly, positive) impact on the efficiency of network resource allocation and utilisation.
It is not the intention here, given limited space, to exhaustively explain these philosophical differences, which arise primarily from the technological underpinnings. The interest here is to understand the ‘business models’ that dictate how such technologies are actually deployed.
Since the traditional media are conceptually end-to-end, the infrastructure for delivery must necessarily be created in detail, from the point of content creation to the ‘last mile’ delivery to the end-user. This is not required in the case of IP based digital data transactional systems, where individual ‘lakes’ of information resources are ‘pooled’ together through interconnection.
Extraordinarily, the ‘lakes’ are actually created by the users themselves, thus transforming the ‘last mile’ into the ‘first mile’. From the developmental viewpoint (and of course, a society that is not developing is stagnating), the difference is staggering.
Do the interconnects (regional, national, international) still remain as part of the infrastructure resources that society needs to externally (ie, through complexes of public, private or joint sector services) provide?
Until the development of IP-based wireless, this was the case, but is no longer so. To a large extent, IP based (cable) networks grew out of shared infrastructure, although there has been a stream of propaganda that a single US armed forces research program was responsible for the growth of the Internet. As the popular expression goes, such statements are rather economical with the truth.
While not pretending to argue that this kind of community exercise can be repeated at the global level, at the fine-grained – and even national – level, the situation is different. Many metropolitan areas around the world are choosing to set up their own, public, networks today. The ability to attract the sort of intelligent and hardworking people who typically need access to interconnected networks far outweighs the cost of setting up and maintaining free to access wireless interconnectivity. Urban conglomerations need high value, high revenue generating citizenry, in order to offset their costs and remain good places to work and live.
Rural areas are no different, although nearly everywhere in the world, the population density needed for society to exist is far lower, and the revenue generating potential even more so. This low economic density actually discourages providers of commercial services from investing in the level of switched network resources that assure high-quality connectivity. Poor connectivity, in turn, discourages high-value citizenry from staying rural. The problem of people emigrating from rural to urban centers is huge: both areas suffer almost insoluble difficulties as a result.
As far as rural telecom goes in India, the performance of the corporate sector is almost as dismal as that of the public sector, which once held Indian telecom service as a monopoly. Recent astounding growth in total telecom density, due almost entirely to the mobile sector (and quite possibly more than offsetting the dropout rate of the landline segment; figures on true telecom penetration are unfortunately badly skewed, for reasons well known to many though not germane to this article), is sadly confined to the urban sector.
The digital enhancements of the traditional (if that is the right word) interpersonal telecommunication media – text messaging, caller identification, etc – have opened up new possibilities for increasing their relevance to economically poor areas. These are characterised by extraordinarily low teledensity (even by Indian standards – overall density is in the low double digits, but rural density is still in single digits).
There has therefore been a frisson of recent interest in maximising the usage of such media.
Unfortunately, it is difficult to see this as anything more than a chimera. While it is entirely true that modern smartphones, the end-user device of choice, are quite open to the development of specialised software applications, there are issues.
For instance, the operating systems used for these devices are supposed to hew to a standard. In reality, the implementations of individual manufacturers are sufficiently different that each application needs to be individually tweaked. Thus a user organisation (such as a micro-bank) is forced to standardise on exactly the same telephone model to realise a unified technology deployment. Compared to parallel devices emerging from the computer sector, this is a major limitation.
The problem of connectivity is far more serious. At this point in time, only the public sector company (BSNL) has a presence in most rural areas, and it has a policy of refusing service (roaming) to other service providers. The company is subject to external ministerial supervision, and has been in the public eye for its two-year delay in the purchase of new switchgear compatible with enhanced data services (so-called 3G equipment). The decision was finally made in September 2007 – and it was for a money-saving investment in more 2G equipment, thus effectively blocking several categories of data services for the foreseeable future.
Frankly, this would be a good thing, had the government actually followed a practice of technology neutral decision making. This turns out not to be the case. Whether it is spectrum availability, or hardware, or governance, decisions are nearly always skewed towards favouring particular technologies or vendors. In the case of telecommunications, the two are often synonymous, because commercial vendors overwhelmingly bank on technology differentiation in a complex and competitive global market.
To some extent, the situation in the personal computing sector is quite different. There are only three major varieties of operating systems, and only two vendor-specific hardware platforms on which they are deployed. Application compilation for each system is also largely a done deal.
It is true that device development in the handheld segment is not quite as far along as in the telephone segment. The proprietary environment surrounding telephony is largely responsible for this situation. Handhelds acquired acceptability simultaneously with cellular deployment. However, wireless ‘desktops’, devices that mimic the look and feel of older landline telephones, also exist, and are deployed in India within the designated Fixed Wireless Local Loop telephony license.
This year, the introduction of the Apple (a major US IT company) iPhone signals the first serious salvo in the Cold War for the handheld telecommunications device space. It also uses the proprietary Mac OS X operating system developed for personal desktop computers, but since this is built on a Unix foundation (the Mach kernel together with BSD code, whose functioning and enhanced features draw heavily on the Open Source and Free Software movements), it is fairly easy for ordinary people to program special applications.
The device is intended to break existing paradigms in the telecommunications space, and has already sold over 1 million pieces since its introduction in June. The company has also dropped its introductory price by $200, a staggering reduction of about a third, making it exceedingly competitive with comparable devices from the telecommunication sector.
Since it is primarily a computer cum media device, with GSM telephony as a special feature, it inherently uses Wi-Fi (and Bluetooth, a very short distance wireless protocol) for connectivity.
Alternates from other vendors in the computer sector include mini-laptops, devices with screen sizes of under 15 cm (diagonally measured, the usual nomenclature) and keyboards with largish buttons. These are much lighter than typical laptops, and offer much longer battery operation, the critical factor for handheld devices. The standard laptop-sized ‘notepad’ could also become a significant device in this space, being a thin, touch-screen format, with no inconvenient bulky keyboard. However, it has not found major market acceptance so far.
To summarise, therefore, in the Indian context, rural telecommunication choices are at a cusp. Intriguingly, perhaps for the first time in the nation’s history, the choice is not between particular technologies, but rather between particular technologies and a completely hands-off, technology-neutral approach.
On the one hand, the government can continue to support individual telecom players, at least three of whom are already financial behemoths, having benefited enormously from the present licensing regime. On the other, the telecommunications sector can be opened up to community-led, grassroots driven growth, with the connectivity paradigm shifting from the view that spectrum is a scarce resource, to one where radio frequency spectrum is regarded as a true public resource, a commons, with no special reservations or allocations to vendors of either technology or devices.
The government needs to take a hard, realistic look at the development paradigm. Technology choices may continue to be driven from the top, but decisions that take years of deliberation (necessary partly because of their long-term implications, inherent because of the overweening responsibility, but where accountability is not a hallmark, at least not in our recent history) tend to fall short of the need.
The alternate is to trust citizens to make the best choices for themselves. Given the oft-expressed desire to create a Knowledge Society, this might be a good place to start.