The announcement in Australia by the minister of an AUD60 million ‘Digital Regions Initiative’ to improve NGN-based health, emergency and education services highlights a worldwide issue of growing urgency: how to ensure effective telecommunication services in times of natural disaster. In short, the problem should be greatly eased if and when access to the Internet becomes ubiquitous, and in this regard the evolution of all-IP ‘next generation networks’ will play a vital role by widening the range of channels and services through which emergency messages can be relayed and vital information accessed.
But major hurdles remain even once the investment problems have been overcome. The key problem is how to prioritize emergency calls and emergency access to databases and information sources so that the emergency services (police, rescue forces, medics, NGOs, military, government, and so on) can do their jobs fully informed.
Prioritization has never been much of a problem across the (essentially voice) PSTN, because those networks are designed to prioritize at moments when traffic greatly exceeds normal ‘busy hour’ volumes. They prioritize access, they prioritize routing, and if necessary they can send messages explaining the situation to end users to deter them from serial redialing. All-IP networks have yet to meet these challenges, partly because such networks, where they exist, are immature, and partly because they combine elements of the PSTN's quality-control functions with the IP world's ‘best effort’ approach. They contain elements directly related to telecoms and elements directly related to IT, which traditionally work to very different standards of performance. And because the control and information functions – directories, billing and accounts, subscriber service levels, and so forth – are distributed across a host of different servers within an NGN (an all-IP network), any new protocols and algorithms need to work with them all, or at least with a critical subset of them.
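At the signalling level, one existing building block for this kind of prioritization is SIP's Resource-Priority header (RFC 4412), whose ‘ets’ namespace lets an emergency caller request preferential treatment. The sketch below, in Python, shows the idea of ranking call setups by that header; the queueing logic and the sample call records are illustrative assumptions, not any standardized ETS implementation.

```python
import heapq

def ets_priority(sip_headers: dict) -> int:
    """Map a SIP Resource-Priority header to a sortable rank.

    RFC 4412's 'ets' namespace defines priority values 0 (highest)
    through 4 (lowest); calls without the header rank below all of them.
    """
    rp = sip_headers.get("Resource-Priority", "")
    if rp.startswith("ets."):
        return int(rp.split(".", 1)[1])
    return 5  # ordinary call: below every ETS level

# Hypothetical pending call setups awaiting resources.
calls = [
    {"Call-ID": "a", "Resource-Priority": "ets.4"},
    {"Call-ID": "b"},  # ordinary call, no priority header
    {"Call-ID": "c", "Resource-Priority": "ets.0"},
]

# Admit calls in priority order: ETS calls before ordinary ones.
queue = [(ets_priority(c), c["Call-ID"]) for c in calls]
heapq.heapify(queue)
print([heapq.heappop(queue)[1] for _ in range(len(queue))])  # ['c', 'a', 'b']
```

The point of the sketch is that the priority travels inside the call signalling itself, so every server in the chain can honour it without consulting a central authority.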
In fact the telecoms world (the ‘Bellheads’) and the Internet/IT world (the ‘Netheads’) often seem to take diametrically opposed approaches to solving the problem. The ‘Bellheads’ look towards standards and integrated functions. The ‘Netheads’ look towards a DiffServ (differentiated services) network architecture, which in some cases they argue could simply be outsourced to a service provider.
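The DiffServ idea can be made concrete in a few lines: an endpoint marks its own packets with a DSCP code point, and DiffServ-aware routers along the path queue marked traffic preferentially. The Python sketch below sets the Expedited Forwarding code point (DSCP 46, RFC 3246) on a UDP socket; using EF for emergency traffic here is an illustrative choice, not a mandated ETS marking.

```python
import socket

DSCP_EF = 46              # Expedited Forwarding code point (RFC 3246)
TOS_VALUE = DSCP_EF << 2  # DSCP occupies the top 6 bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark every datagram sent on this socket with the EF code point,
# so DiffServ-aware routers can place it in a priority queue.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 on Linux
sock.close()
```

Note what this does and does not achieve: the marking is only a request, and it is honoured only where network operators have configured per-hop behaviours for it, which is precisely why the ‘Netheads’ see it as something that could be contracted out to a service provider.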
A myriad of other problems also need to be solved. For example, how to locate the address of an emergency caller who is using VoIP from an unregistered location, with a number that may have been assigned from overseas? A US Working Group identified 14 challenges of this nature to be dealt with by any Emergency Telecommunications Service (ETS). If Australia comes up with workable solutions, there will be a world market for them. Standards bodies from both the telecoms and the Internet/IT worlds have been working on different approaches to ETS for a number of years, but as the technologies evolve, so do the challenges, and so do the possible answers. The past year has seen enormous loss of life and property, from bush and forest fires to earthquakes, from torrential storms, flooding and landslides to tsunamis and inundations. Often these natural disasters are exacerbated by human actions, such as deforestation and illegal logging, but human action can also mitigate them. The building of an ETS capacity into future telecommunications networks is an issue that deserves the attention of policy makers, operators and vendors alike.