A quick look at the case studies below gives potential clients a good feel for the value the Teletraffic Research Centre (TRC) can provide.
- Timetable Experimentation for Optimising Use of Facilities
Large educational institutions generally struggle to manage their timetables, where tens of thousands of classes may need to be allocated to many hundreds of facilities.
Each allocation may be subject to room and room-facility preferences, instructor and student preferences, and student break-time needs.
Commercial timetabling software is usually not suited to conducting what-if analyses, such as whether a timetable remains viable if preferences are changed or facilities are closed.
Conducting these analyses can be crucial to managing facilities efficiently, and can yield significant savings in an institution's operating expenses.
The TRC has developed a timetabling consultancy portfolio together with a suite of software called Suntex that aids in the process of timetable experimentation. Using Suntex, the TRC can replicate an institution's timetable in an experimental laboratory and answer many of the questions that arise in managing facilities efficiently.
The TRC has consulted in various projects such as:
- Analysing the effects on room utilisation of, for example, closing facilities: is a global timetable still viable?
- Analysing the effects on student conflicts of, for example, changing room preferences or class days and times.
- Modelling clinical timetables to determine the required type and extent of new facilities, such as specialty clinical facilities and instructional rooms.
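The flavour of such a what-if check can be sketched in a few lines: given class sizes, room capacities, and a set of rooms hypothetically closed, test whether every class in each timeslot can still be matched to a large-enough room. This is a minimal illustration using bipartite matching, with invented names and data; Suntex itself handles far richer constraints (preferences, break times, and so on).

```python
def max_matching(adj):
    """Simple augmenting-path bipartite matching.
    adj maps each class index to the rooms that can host it."""
    match = {}  # room -> class index

    def try_assign(u, seen):
        for room in adj[u]:
            if room in seen:
                continue
            seen.add(room)
            if room not in match or try_assign(match[room], seen):
                match[room] = u
                return True
        return False

    return sum(try_assign(u, set()) for u in adj)


def timetable_viable(classes, rooms, closed=()):
    """classes: list of (name, timeslot, size) tuples;
    rooms: dict of room name -> capacity;
    closed: rooms treated as unavailable in the what-if scenario."""
    open_rooms = {r: cap for r, cap in rooms.items() if r not in closed}
    by_slot = {}
    for name, slot, size in classes:
        by_slot.setdefault(slot, []).append(size)
    for slot, sizes in by_slot.items():
        # each class in the slot needs its own room of sufficient capacity
        adj = {i: [r for r, cap in open_rooms.items() if cap >= size]
               for i, size in enumerate(sizes)}
        if max_matching(adj) < len(sizes):
            return False
    return True
```

A what-if analysis is then just a second call with a different `closed` set, for instance `timetable_viable(classes, rooms, closed=("LT1",))`.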
- Telstra: DiaMOND Network Analysis: Squeezing the Most from High Speed Networks
Telstra's Frame Relay and ATM networks have supported a wide variety of customers and applications.
Over time, these networks evolve and expand to accommodate growth in demand. Verifying that changes to network topology and capacity meet detailed performance and reliability requirements is a complex task, especially when details of network-layer routing rules need to be taken into account.
Telstra needed techniques and tools to assist in the process of change verification so as to ensure that such requirements continue to be met.
We undertook the research and development required to meet that need. The project involved:
- Interfacing to databases containing information on Telstra's ATM and Frame Relay network topology and configuration.
- Implementing network routing protocols (such as Private Network Node Interface, PNNI).
- Implementing vendor-specific fault recovery algorithms.
- Developing algorithms to determine whether Telstra-specific design guidelines would be met for a given network topology and utilisation.
The final system developed, known as DiaMOND, has played an integral role in optimising Telstra's network infrastructure investment.
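The design-guideline check in the last bullet can be sketched as follows: route each traffic demand on its shortest path, with and without a hypothetically failed link, and verify that no link's utilisation exceeds its engineered limit. The names and the simple two-value link records below are our own simplifications for illustration, not DiaMOND's actual interfaces or Telstra's guidelines.

```python
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over an undirected graph.
    links: dict (u, v) -> (weight, capacity)."""
    adj = {}
    for (u, v), (w, _) in links.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in prev and src != dst:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def worst_utilisation(links, demands, failed=None):
    """Max link utilisation after routing all demands on shortest
    paths, with an optional failed link removed from the topology."""
    live = {k: v for k, v in links.items() if k != failed}
    load = dict.fromkeys(live, 0.0)
    for s, t, vol in demands:
        path = shortest_path(live, s, t)
        if path is None:
            return float("inf")  # demand cannot be carried at all
        for u, v in zip(path, path[1:]):
            key = (u, v) if (u, v) in live else (v, u)
            load[key] += vol
    return max(load[k] / live[k][1] for k in live)
```

A design guideline such as "utilisation must stay below 80% under any single link failure" then becomes a loop over `worst_utilisation(links, demands, failed=link)` for every link.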
- Australian Government: Optimising Call Centre Performance
A major government organisation wanted to investigate the potential of applying new business rules in their call centre operations in an effort to improve their overall service quality. Their call centres, which handle over 2,000,000 calls per year, serve professional agents and the general public, answering questions across a wide range of topics.
The organisation needed to know whether the goals of the new business rules could be achieved through smart configuration of the call centre systems and, if so, whether those rules could be implemented in the existing call centre environment.
The organisation wanted to achieve different service level targets for the professional agents and the general public simultaneously. Standard approaches to providing differentiated service levels usually result in either (a) neither group achieving its desired target, or (b) the lower-priority group receiving excessively poor performance. Neither outcome was considered acceptable.
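The underlying trade-off can be illustrated with the standard Erlang C model, which relates team size, offered traffic, and the probability that a call is answered within a target wait. This is textbook queueing theory sketched as an illustration only (all parameter names are ours); it is not the routing design the TRC developed for the client.

```python
from math import factorial, exp

def erlang_c(agents, offered):
    """Probability an arriving call has to wait (M/M/c queue);
    `offered` is the traffic in erlangs, assumed < agents."""
    idle = sum(offered ** k / factorial(k) for k in range(agents))
    wait = offered ** agents / factorial(agents) * agents / (agents - offered)
    return wait / (idle + wait)

def service_level(agents, calls_per_min, mean_handle_min, target_wait_min):
    """Fraction of calls answered within the target wait."""
    offered = calls_per_min * mean_handle_min  # erlangs
    if offered >= agents:
        return 0.0  # unstable: the queue grows without bound
    p_wait = erlang_c(agents, offered)
    return 1 - p_wait * exp(-(agents - offered) * target_wait_min / mean_handle_min)
```

Evaluating the model for each calling group separately shows why a shared team with naive priorities struggles to hit two different targets at once, and how sensitive each target is to team size and traffic load.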
We designed and analysed new call centre routing options that achieve specified target service levels for each group, across a wide range of traffic load profiles, over measurement periods ranging from an hour to a day. We then quantified the robustness of the solution to variability in traffic demand, team sizes, and the mix of calls coming into the call centres.
We showed that the call centre routing options we designed could cut costs in terms of the number of call centre staff, increase robustness to traffic variability, and simultaneously achieve different target service levels for each calling group.
By working with the call centre system and network equipment suppliers to the organisation, we were able to help determine that the new solution was not only good on paper, but implementable in practice.
- Foursticks: Improving Quality of Service in Enterprise Networks
Foursticks NP Gateway brings enhanced Quality of Service and improved bandwidth efficiency to enterprise networks by enabling network administrators to control network resource allocation at a user and application level of granularity. This enables, for example, the prioritisation of mission critical application traffic over non-business traffic.
The interactions between network protocols, quality of service management, and user behaviour are complex at the best of times, and change significantly from one customer environment to the next. Foursticks needed the capability to model those complex interactions in order to quantify the benefits of NP Gateway to clients while also evaluating the efficiency of their underlying implementation.
We developed a network simulator that includes a variety of Quality of Service management implementations, including patented Foursticks algorithms. The simulator supports a wide range of network protocols, applications and user behaviour models and enables detailed investigation of the dynamic interaction of the NP Gateway and the underlying communications protocols.
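As a flavour of the scheduling behaviour such a simulator has to model, here is a minimal deficit round-robin scheduler, a well-known queueing discipline that shares bandwidth between traffic classes in proportion to configured quanta. It is a generic stand-in for illustration only, not one of the patented Foursticks algorithms.

```python
from collections import deque

class DeficitRoundRobin:
    """Minimal deficit round-robin: each class accrues a byte quantum
    per round and may send packets up to its accumulated deficit."""

    def __init__(self, quanta):
        self.quanta = quanta                        # class -> bytes per round
        self.queues = {c: deque() for c in quanta}  # class -> packet lengths
        self.deficit = {c: 0 for c in quanta}

    def enqueue(self, cls, packet_len):
        self.queues[cls].append(packet_len)

    def dequeue_round(self):
        """One scheduling round; returns the (class, length) pairs sent."""
        sent = []
        for cls, q in self.queues.items():
            if not q:
                self.deficit[cls] = 0  # idle classes do not bank credit
                continue
            self.deficit[cls] += self.quanta[cls]
            while q and q[0] <= self.deficit[cls]:
                pkt = q.popleft()
                self.deficit[cls] -= pkt
                sent.append((cls, pkt))
        return sent
```

With quanta of 1500 and 500 bytes, a saturated "critical" class receives roughly three times the bandwidth of a "bulk" class, which is the kind of user- and application-level resource allocation described above.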
Foursticks can now quickly and efficiently model proposed deployment options for their customers and evaluate the efficiency of their underlying implementation. The simulator has led to quick identification of issues in existing deployments and to recommendations on addressing those issues. Further, we have been able to examine enhanced Quality of Service mechanisms and identify those providing the greatest benefits to Foursticks' clients.
- Smart Internet CRC: Control of Multiple Audio Streams
A growing area of technical importance is that of distributed virtual environments for work and play. For the audio component of such environments to be useful, great emphasis must be placed on the delivery of high-quality audio scenes in which participants may change their relative positions. Maintaining synchronisation gives the end-user a more stable audio environment, which helps maximise the usability of the application, especially when the application's characteristics are highly dynamic.
The Smart Internet CRC was interested in designing a multimedia application focussed on an immersive virtual environment. It was apparent that an integral part of constructing such an environment was the realism of the audio component. Adding to the complexity, there was a desire to enable a large number of users to communicate with and overhear other users in the environment, as would occur in a cafe in the real world. The Teletraffic Research Centre's contribution to this project was the design of a synchronisation algorithm to assist in the delivery of high-quality interactive audio to the end-users.
The delivered solution was an efficient algorithm that can achieve and maintain relative synchronisation between audio streams in a real-time audio mixing environment.
The algorithm does not attempt to maintain absolute synchronisation between audio streams, as this would require global timing. Rather, it attempts to maintain consistency within the mixing process, so that the alignment of audio samples from each stream remains as constant as possible given the random elements of network delay.
The algorithm is able to adapt quickly to gross changes in the underlying delays of each stream, as might result from network link failures. At the same time, the proposed algorithm is robust to short term variations in delay resulting from audio stream packets being queued at routers.
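The adaptive behaviour described above, slow smoothing in the steady state combined with fast re-convergence on gross delay changes, can be sketched with a standard exponentially weighted playout-delay estimator. This is a generic illustration of the idea only, not the TRC's actual algorithm, and all parameter values are invented.

```python
class PlayoutEstimator:
    """Per-stream playout-delay estimate: smooth slowly over normal
    jitter, but jump immediately on a gross delay change (e.g. after
    a network reroute)."""

    def __init__(self, alpha=0.998, spike=3.0):
        self.alpha = alpha    # smoothing weight for normal jitter
        self.spike = spike    # threshold (in deviations) for a gross change
        self.d = self.v = None

    def update(self, network_delay):
        if self.d is None:
            self.d, self.v = network_delay, network_delay / 2
        elif network_delay > self.d + self.spike * self.v:
            self.d = network_delay  # gross change: re-converge at once
        else:
            self.d = self.alpha * self.d + (1 - self.alpha) * network_delay
            self.v = self.alpha * self.v + (1 - self.alpha) * abs(network_delay - self.d)
        return self.d + 4 * self.v  # playout offset for this stream
```

Running one estimator per incoming stream and aligning each stream's samples to its returned offset keeps the mix internally consistent without any global clock, which is the relative-synchronisation property described above.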
- VoIP Service Provider: Robustness and Control of VoIP Control Systems
Our client was deploying equipment in support of Voice over IP (VoIP) call delivery to major corporate customers.
Facilities were not available to test the system at high traffic loads, at or near the advertised capacity of the systems. How, then, was our client going to predict the performance of the system in such situations?
We developed a detailed model of the proposed VoIP delivery platform and analysed its robustness, with particular focus on the reliability mechanisms built into the platform.
We uncovered a fundamental shortcoming in the design of the system: the call throughput of the platform would drop dramatically if it was offered calls at a rate marginally above its engineered capacity. Customers would then be unable to make or receive calls, which could be disastrous in certain environments, for example in call centre based operations.
Our client consequently did not put the platform into service in high value, high traffic environments.
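This failure mode is easy to reproduce in a toy model: a signalling server that spends full processing effort even on requests whose callers have already timed out (and will retry) wastes ever more of its capacity as load rises, so a few percent of extra offered load collapses useful throughput. The sketch below is our own illustrative model with invented parameters, not a model of the client's platform.

```python
import random

def goodput(arrival_rate, service_rate, timeout, n=100000, seed=1):
    """Single FIFO server: every request consumes full service time,
    but a request that starts service after the caller's timeout
    delivers no value (the caller has abandoned it).
    Returns successfully completed calls per unit time."""
    rng = random.Random(seed)
    t = 0.0           # time of the current arrival
    server_free = 0.0
    useful = 0
    for _ in range(n):
        t += rng.expovariate(arrival_rate)
        start = max(t, server_free)
        server_free = start + rng.expovariate(service_rate)
        if start - t <= timeout:  # served while the caller still waits
            useful += 1
    return useful / t
```

Below capacity almost every call succeeds; a few percent above capacity the queue grows without bound, callers time out faster than the server can clear stale work, and goodput collapses towards zero, which is exactly the behaviour the model of the platform exposed.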
- Network Topology Inference: Figuring Out Network Topology from Partial Information
Internet providers are generally unwilling to share information about their networks. The information, such as topology, routing policies and so on, would be very useful to network researchers, but also of great interest to their competitors and intelligence agencies.
Researchers undertaking network analysis studies need reasonable representations of real networks in order to validate their research, especially in areas such as network management, reliability and resilience. Algorithm performance depends on the type of network and traffic to which it is applied, and so realistic experimental environments are needed.
Reverse engineering of the Internet has been seen as a valuable activity for researchers and various projects internationally have provided datasets that are invaluable to network researchers. It is therefore somewhat surprising that few efforts have been made to validate the methods and results of such projects.
We have developed and validated a new network topology inference methodology that can be used to obtain shortest path link weights on interior links of networks, in other words, for determining the internal structure of major sub-networks of the Internet.
The difficulty in assessing the accuracy of such inferences is that many different sets of link weights can produce the same routing, and so simple measures of accuracy, even where ground-truth data are available, do not capture the usefulness of a set of inferred weights. We developed a new measure, predictive power, to assess the quality of a specific inference process, and found that the process is reasonably accurate, particularly for networks with low average node degree.
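The idea behind a predictive-power measure can be sketched as follows: infer link weights from one set of observed routes, then score the fraction of held-out routes that shortest-path routing under those weights reproduces. The code below is our simplified illustration of the scoring step, on an invented toy topology; the published measure may differ in detail.

```python
import heapq

def shortest_path(weights, src, dst):
    """Dijkstra over an undirected graph given as {(u, v): weight}."""
    adj = {}
    for (u, v), w in weights.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def predictive_power(weights, held_out_paths):
    """Fraction of held-out observed routes that the inferred
    weights reproduce exactly."""
    hits = sum(shortest_path(weights, s, d) == p
               for (s, d), p in held_out_paths.items())
    return hits / len(held_out_paths)
```

Scoring against routes the inference never saw, rather than against the (non-unique) weights themselves, is what makes the measure robust to the many weight assignments that induce identical routing.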