- More Granularity
Effective bandwidth management is about having good information. The more important a link, the more granular our information needs to be. Five-minute polling may be adequate for much of the infrastructure, but it is totally inadequate for a critical link carrying latency-sensitive data. A 10-second spike to 100% utilisation, completely obliterating the SLAs of every critical application, is reduced to a bump of 3% or less in a five-minute polling cycle; even a 30-second event is reduced to only 10%. Polling critical links at high frequencies of 10 seconds and below gives you the information you need to catch these transient events.
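The dilution effect above follows directly from averaging over the polling interval. A minimal sketch, with illustrative numbers:

```python
# Sketch: how a short saturation spike is diluted by the polling interval.
# Assumes a simple averaged-utilisation counter model; numbers are illustrative.

def observed_utilisation(spike_seconds: float, spike_util: float,
                         baseline_util: float, poll_interval: float) -> float:
    """Average utilisation reported for one polling cycle containing the spike."""
    spike_part = spike_util * spike_seconds
    rest_part = baseline_util * (poll_interval - spike_seconds)
    return (spike_part + rest_part) / poll_interval

# A 10 s spike at 100% on an otherwise idle link, seen by 5-minute polling:
print(observed_utilisation(10, 1.0, 0.0, 300))   # ~0.033, i.e. a 3% bump
# The same spike seen by 10-second polling:
print(observed_utilisation(10, 1.0, 0.0, 10))    # 1.0, the full event
```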
- Latency Measures
Regardless of bandwidth, what matters for the majority of today's applications are the latency and jitter characteristics of the link. With modern queuing technologies, even high bandwidth utilisation may not adversely affect the quality of the services the link provides. These very important KPIs should be monitored to build good baselines and spot deviations immediately.
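As a sketch of how these KPIs might be derived from active probes, the following computes mean latency and the RFC 3550 smoothed jitter estimate from a series of RTT samples (the sample values are illustrative):

```python
# Sketch: latency and jitter from probe round-trip times (milliseconds).
# Jitter uses the RFC 3550 smoothed estimator: J = J + (|D| - J) / 16.

def latency_stats(rtts_ms):
    """Return (mean latency, smoothed jitter) for a list of RTT samples."""
    mean_latency = sum(rtts_ms) / len(rtts_ms)
    jitter = 0.0
    for prev, cur in zip(rtts_ms, rtts_ms[1:]):
        delta = abs(cur - prev)          # inter-sample delay variation
        jitter += (delta - jitter) / 16  # exponentially smoothed
    return mean_latency, jitter

# A mostly stable link with one latency excursion:
mean_ms, jitter_ms = latency_stats([20.1, 19.8, 20.3, 35.0, 20.2])
print(mean_ms, jitter_ms)
```

The excursion barely moves the mean but shows up clearly in the jitter estimate, which is why both KPIs belong in the baseline.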
- Flow Analysis
NetFlow, jFlow, sFlow and all of their kind help you understand what makes up the traffic behind your bandwidth utilisation. Without the need for probes or sniffers, you can quickly and continuously monitor the makeup of the traffic traversing your most critical links. You can then identify high-bandwidth users and wasteful traffic, and gain an understanding of the applications using your network. This is essential for deciding how best to prioritise traffic and for creating policies that eliminate wasteful traffic.
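A minimal sketch of this kind of analysis, assuming flow records have already been exported by a collector (the record fields and values here are hypothetical):

```python
# Sketch: summarising exported flow records into per-application totals
# to find top talkers. Each record: (source, destination, application, bytes).

from collections import Counter

flows = [
    ("10.0.0.5",  "10.0.1.9", "backup", 8_000_000),
    ("10.0.0.7",  "8.8.8.8",  "dns",        4_000),
    ("10.0.0.5",  "10.0.1.9", "backup", 6_000_000),
    ("10.0.0.12", "10.0.2.3", "voip",     900_000),
]

by_app = Counter()
for src, dst, app, byte_count in flows:
    by_app[app] += byte_count

# Applications ranked by bandwidth consumed:
for app, total in by_app.most_common():
    print(f"{app}: {total} bytes")
```

The same aggregation keyed on source address instead of application would surface high-bandwidth users rather than applications.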
- Classification and Prioritisation
Once you understand the traffic on your network, the next step is to classify and prioritise it. This allows truly important traffic to travel unhindered even under high bandwidth utilisation, while the rest waits for best-effort service. Monitoring queue utilisation is key both for verifying your configuration, which can often be complex, and for continuous optimisation. Baselining the performance of each queue and alerting on anomalies before the entire circuit is affected enables you to be more proactive. Comparing the configuration of QoS queues with the information derived from NetFlow ensures that your enterprise applications always receive the level of service they require.
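A per-queue anomaly check might look like the following sketch. The queue names, baseline figures and tolerance are all illustrative assumptions, not vendor values:

```python
# Sketch: flagging QoS queues whose utilisation deviates from baseline,
# so a single queue can raise an alert before the whole circuit suffers.

baseline = {"voice": 0.10, "business-critical": 0.35, "best-effort": 0.50}

def anomalous_queues(current: dict, baseline: dict,
                     tolerance: float = 0.15) -> list:
    """Return queues whose utilisation exceeds baseline + tolerance."""
    return [queue for queue, util in current.items()
            if util > baseline.get(queue, 0.0) + tolerance]

current = {"voice": 0.12, "business-critical": 0.60, "best-effort": 0.48}
print(anomalous_queues(current, baseline))  # only business-critical is over
```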
- Trending and Baselining
Continuous monitoring plays a critical role in bandwidth management. Trending and baselining bring the automation and proactive prediction that most other techniques miss. Highly granular baselines of 15 minutes or less, taken over predictable business intervals, let you quickly spot deviations from the norm before they affect application performance and the quality of the user experience. Trending, on the other hand, lets us predict what may lie ahead based on granular historical data. Generally, a meaningful prediction requires a significant amount of historical data - about six times the length of the interval you are trying to predict. This data must be granular so that the statistical analysis can be as sophisticated as possible, capturing peaks, valleys and repeating patterns.
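A simple form of this baselining can be sketched as follows: keep a history of samples for each recurring interval (for example, Monday 09:00-09:15) and flag new samples that fall outside the normal band. The sample values and the three-sigma threshold are illustrative assumptions:

```python
# Sketch: per-interval baseline over a repeating business cycle,
# flagging deviations from the norm. Values are illustrative.

from statistics import mean, stdev

# Six historical samples for one 15-minute interval - roughly six times
# the interval being predicted, as suggested above.
history = [0.42, 0.45, 0.40, 0.44, 0.43, 0.46]

def is_deviation(sample: float, history: list, n_sigma: float = 3.0) -> bool:
    """Flag a sample more than n_sigma standard deviations from the baseline."""
    return abs(sample - mean(history)) > n_sigma * stdev(history)

print(is_deviation(0.44, history))  # within the normal band: False
print(is_deviation(0.90, history))  # well outside the band: True
```

With granular history per interval, the same check runs independently for each slot of the business week, so a normal Monday-morning peak is not mistaken for a Sunday-night anomaly.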