Jeff Davey, CIO, GE Oil & Gas, Digital and Digital Solutions
In today’s complex computing environment, many different technologies are used to conduct business. A typical business runs many types of mobile software applications, voice solutions, e-Commerce products, collaboration tools, and Internet of Things (IoT) and Industrial Internet of Things (IIoT) applications, all on the corporate network and devices. The competition for network bandwidth and compute resources requires new tools and processes to help ensure a consistent and exceptional end-user experience. Many of today’s applications require intense computing resources. Here are some examples of technologies that we should take note of:
- Software-as-a-Service (SaaS) applications
- Cloud-based applications leveraged from globally remote sites
- Hybrid applications (traditional on-premises apps integrated with SaaS apps)
- Media-rich applications
- Engineering Product Lifecycle Management (PLM) applications
- Real-time collaboration tools
- Social media
- Data Lake data ingestion and data retrieval
- Internet of Things (IoT) or Industrial Internet of Things (IIoT) solutions that drive new data-intensity challenges
  - Data collection points: sensors
  - Edge devices, appliances, controllers, and gateways that collect, store, and forward real-time data for IIoT applications
As the need for IoT/IIoT data increases, we need to pay particular attention to these Big Data-generating solutions. IIoT solutions typically collect massive amounts of data, and the storage and retrieval of that data must be considered when planning the computing environment. Unless managed well, all of these applications and technologies will drive variability in the network and compute environment and cause substantial variation in end-user experience.
Global end-user feedback – “My application runs slow!”
Most IT professionals have heard this comment from their internal end-users or external customers. Without the correct tools, processes, and management rigor in place, we won’t know where the “slow application” problem exists. Is the slow responsiveness caused by a poorly written application in need of optimization? Is it the data store or database technology that needs to be tuned? Are dynamic compute events driving spikes in network usage? Without baseline data, we don’t know where to start investigating the problem(s).
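To make “baseline data” concrete, a baseline can start as something very simple: a summary of response-time samples gathered over a monitoring window. The Python sketch below (with hypothetical sample values) reduces such a window to median, 95th-percentile, and worst-case figures that later measurements can be compared against:

```python
import statistics

def baseline_stats(samples_ms):
    """Summarize response-time samples (in ms) into a simple baseline."""
    ordered = sorted(samples_ms)
    # Index of the ~95th-percentile sample (clamped for very small windows)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "median_ms": statistics.median(ordered),
        "p95_ms": ordered[p95_index],
        "max_ms": ordered[-1],
    }

# Hypothetical samples gathered over a monitoring window
samples = [120, 135, 110, 500, 125, 118, 130, 122, 140, 115]
print(baseline_stats(samples))
```

With a baseline like this recorded per application, a “my application runs slow” report can be checked against numbers rather than impressions.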
To get the needed visibility, we need to leverage Application Performance Monitoring (APM) tools. With APM tools in place, we can begin to understand our computing environment and its effect on the actual end-user experience. APM tools typically provide visibility into the following areas:
- Application topology
- Historical benchmarking and trending of application performance
- Service Level Agreement measurements (SaaS & network providers)
- End-user experience and usage data
- DevOps tools – application performance benchmarking
In addition, some applications have built-in transaction monitors that can measure keystrokes and response times as the end-user works in the software. This type of data is very useful for finding slow transactions. When you combine transaction monitoring with the output of APM tools, you have good visibility into what is occurring in the environment.
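As an illustration of the transaction-monitoring idea, the hypothetical Python sketch below wraps a business transaction in a timer and flags any call that exceeds a response-time threshold. Real transaction monitors work on the same principle, though with far richer instrumentation:

```python
import functools
import time

def timed_transaction(name, slow_threshold_ms=200.0):
    """Decorator that records how long a transaction takes and flags slow ones."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000.0
                status = "SLOW" if elapsed_ms >= slow_threshold_ms else "ok"
                print(f"[{status}] {name}: {elapsed_ms:.1f} ms")
        return wrapper
    return decorator

@timed_transaction("save_order")  # "save_order" is a hypothetical transaction name
def save_order(order_id):
    time.sleep(0.01)  # stand-in for real work (database write, API call, etc.)
    return order_id

save_order(42)
```

Per-transaction timings like these, correlated with APM network and infrastructure data, narrow down where the slowness actually lives.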
Helpful considerations and solutions
Gaining visibility into today’s global computing environment variables is critically important. Only then can we pinpoint the issues that may affect the user population. It is very helpful to have as much correlated management data as possible, and today’s APM vendors are increasing their wing-to-wing views of compute environments. Correlating APM data provides insights that lead us to appropriate performance solutions, and in some cases it allows for better, more meaningful discussions with Cloud-based application, network, and IoT/IIoT solution providers. Some examples of how APM data can provide insights:
- Cloud-based application performance variability in global hosting locations
- Identifying the need for increased bandwidth to appropriate levels
- Stress testing networks and applications to show performance vulnerabilities
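A stress test along these lines can be sketched in a few lines of Python. The example below is hypothetical: it fires requests concurrently across a thread pool and reports the latency spread, with a stub standing in for the real network call so the sketch stays self-contained:

```python
import concurrent.futures
import random
import time

def fake_request():
    """Stand-in for a real HTTP call; replace with your client of choice."""
    delay = random.uniform(0.005, 0.02)
    time.sleep(delay)
    return delay * 1000.0  # latency in ms

def stress_test(workers=8, requests=40):
    """Fire `requests` calls across `workers` threads and report the latency spread."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(lambda _: fake_request(), range(requests)))
    latencies.sort()
    return {
        "requests": len(latencies),
        "fastest_ms": round(latencies[0], 1),
        "slowest_ms": round(latencies[-1], 1),
    }

print(stress_test())
```

Comparing the latency spread under load against the quiet-hours baseline is what exposes the performance vulnerabilities mentioned above.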
Future trends in APM
I would expect to see more machine learning and analytics in future versions of APM tools. The use of machine learning should help identify “normal” compute environment reactions versus something that is out of the ordinary. This will enable better insights into the computing environments and their operations.
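One simple form of “normal versus out of the ordinary” detection is a statistical baseline check. The hypothetical Python sketch below flags response-time samples that fall more than a few standard deviations away from a historical baseline; production APM tools use far more sophisticated models, but the principle is the same:

```python
import statistics

def flag_anomalies(history, new_samples, z_threshold=3.0):
    """Flag samples that deviate from the historical baseline by > z_threshold sigmas."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return [s for s in new_samples if abs(s - mean) / stdev > z_threshold]

# Hypothetical response times (ms): a stable history, then a spike
history = [118, 122, 120, 119, 121, 117, 123, 120, 118, 122]
print(flag_anomalies(history, [121, 119, 450]))  # the 450 ms spike stands out
```

Machine learning extends this idea by learning what “normal” looks like across many correlated signals at once, rather than one metric at a time.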
Conclusion and final thoughts
APM is critically important in today’s business environment. Media-rich applications and IIoT data collection processes, analytics, and applications are all taxing our network and computing resources. In today’s globally diverse network environment, we have less direct control of the underlying networks. The combination of internal business networks, the Internet, mobile devices, and SaaS architecture is becoming the new norm. We must have visibility into, and true management of, the end-user experience. This is true for application providers that must know how their products and services are being delivered to their customers. It is also true for businesses with customer-facing applications that must ensure a superb customer experience. Internal company users also need and expect a robust user experience from the applications they use to conduct business. APM gives us the necessary tools, processes, and capabilities to manage the end-user experience and the performance of today’s business-critical applications.