Asset Management: How to optimize spend, minimize breaches, and shrink downtime
Define asset management, understand its challenges, and identify the gaps in your strategy, and the payoff for your overall enterprise IT environment can be substantial.
There are as many definitions of asset management as there are organizations that use it. For the purposes of this blog, let's define asset management as understanding what assets you've purchased and deployed into your IT environment, and monitoring them all the way to disposition, the point at which all of the sensitive data on the assets is completely erased. There are challenges to implementing this concept (and to making it a proactive effort rather than a reactive one) that can keep it from being fully effective. When done properly, however, asset management can be the key to realizing significant benefits for your organization: optimizing spend, minimizing costly breaches and downtime, and maximizing productivity.
Challenges: Comprehensive Communication and Control
Think of the overall goal this way: implementing an effective, sustainable IT asset management solution over the full lifecycle of your assets (purchasing, deployment, monitoring and decommissioning), all the while maintaining financial, operational and ownership control of the process.
The greatest challenge to that goal? The process of asset management affects a number of departments in your organization, each with its own IT systems and each with its own primary objectives and needs. For example, the purchasing department is concerned with requisitioning and timely, cost-effective procurement of assets; accounting is concerned with initial costs and tracking the usable life of assets to maintain the highest cost efficiencies; IT is charged with validating all assets on the network at any given time: tracking which units are active, where they are deployed, what IP addresses they use, image setup and management, and so on.
The challenge arises when these somewhat disparate systems, each with its own goals, fail to communicate seamlessly. The result is a breakdown in that all-important comprehensive control of the process, which prevents optimum effectiveness and keeps you from achieving the benefits described above.
Begin by Identifying the Gaps
So how can you start implementing asset management more effectively to eliminate/minimize the typical pitfalls? The first step is to identify “gaps” in communication or effectiveness in the above integrated processes. Typically, the dysfunction falls into one of two categories: 1) the tracking of assets, or 2) the monitoring and managing of the assets.
Optimally, asset tracking occurs for the full accounting lifespan of the assets, typically 3 years plus 3-6 months after purchase. Once these assets are deployed, they must be tracked closely in order to make the wisest financial decisions regarding repair, replacement and optimal utilization. This is especially crucial as departments require more or less computing power or scalability as business circumstances change. Thus, meticulous asset tracking leads to not only smarter financial decisions, but also wiser operational decisions for the entire time the assets live within an organization's IT environment.
When it comes to the monitoring and managing of each asset (ideally, down to the serial number, MAC address, owner and physical location), the optimal approach is definitely a proactive one. From the time of procurement, each unit should be operationally identified and cataloged, then monitored and managed for image requirements, deployment location, network connection, OS level, patch compliance, licensing status, and anti-virus/malware/spyware requirements and activities. This type of proactive monitoring leads to the best visibility and control of that asset throughout its entire organizational lifespan, and helps identify problems before they become major issues that spread throughout the network.
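To picture what such a per-asset catalog entry might look like, here is a minimal sketch in Python. The record fields mirror the attributes mentioned above, but the class, field names and patch baseline are hypothetical, not taken from any particular asset-management product:

```python
from dataclasses import dataclass, field

# Hypothetical per-asset record: serial number, MAC address, owner,
# physical location, OS level and applied patches, as described above.
@dataclass
class Asset:
    serial_number: str
    mac_address: str
    owner: str
    location: str
    os_version: str
    patches_applied: set = field(default_factory=set)

# Example patch baseline; a real baseline comes from your patch policy.
REQUIRED_PATCHES = {"KB500123", "KB500456"}

def missing_patches(asset: Asset) -> set:
    """Return baseline patches this asset has not yet applied."""
    return REQUIRED_PATCHES - asset.patches_applied

laptop = Asset("SN-0001", "aa:bb:cc:dd:ee:ff", "jdoe",
               "HQ, floor 3", "10.0.19045", {"KB500123"})
print(missing_patches(laptop))  # {'KB500456'}
```

A simple compliance check like `missing_patches` is the kind of query that becomes possible only once every unit is cataloged down to this level of detail.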
Address the Problems: Manual or Self-Healing?
When problems do arise, one possible corrective action is a manual fix, which is time-consuming and costly, not scalable, and dependent on the skill set and availability of your internal staff.
Alternatively, the industry is moving toward a self-healing approach to network issues. With self-healing, a "ping" (a regular "health check") identifies a problem at the unit/device level, then applies an automated in-place protocol to immediately fix that problem. Self-healing can address such issues as over-consumption of memory by applications, detection of known or unknown viruses, outdated anti-virus software, an outdated OS, and so on. A self-healing approach to these issues adds up to greater focus for users on their primary job responsibilities; they are no longer sidetracked by computer problems. The overall organization therefore enjoys greater productivity resulting from that focus, and better network security due to the prevention of small problems becoming bigger issues.
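The self-healing idea described above can be sketched as a table of detector/remediation pairs that a periodic health check walks through. Everything here is simulated for illustration: the device state, version strings and check names are invented, not drawn from any vendor's tooling:

```python
# Simulated "current" anti-virus version published by the vendor.
CURRENT_AV_VERSION = "4.2"

# Simulated state of one managed device.
device = {"av_version": "4.0", "quarantined_files": ["bad.exe"]}

def av_outdated(dev):
    return dev["av_version"] < CURRENT_AV_VERSION

def update_av(dev):
    dev["av_version"] = CURRENT_AV_VERSION

def has_quarantined_files(dev):
    return bool(dev["quarantined_files"])

def purge_quarantine(dev):
    dev["quarantined_files"].clear()

# The "protocol" table: each health check pairs a detector with its
# automated in-place remediation.
HEALTH_CHECKS = [
    (av_outdated, update_av),
    (has_quarantined_files, purge_quarantine),
]

def heal(dev):
    """One health-check pass: detect each issue and fix it in place."""
    applied = []
    for detect, fix in HEALTH_CHECKS:
        if detect(dev):
            fix(dev)
            applied.append(fix.__name__)
    return applied

print(heal(device))  # ['update_av', 'purge_quarantine']
```

Running `heal` on a schedule is the loop that turns a reactive ticket queue into the proactive posture the self-healing model promises: a second pass over the same device finds nothing left to fix.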
Endpoint Visibility, Control, and Stability
How does an organization ensure this kind of prevention and security in an environment where there may be hundreds of thousands of units/devices deployed on the network, including everything from switches, routers and wireless access points to printers, scanners, laptops and desktops?
A sound endpoint protection strategy today includes endpoint security tools that provide constant visibility, pinging every unit on the network on a regular, sustainable basis. Immediate action can then be taken when an irregularity, issue or suspected problem arises. These corrective actions include automatically upgrading an anti-virus program when it falls out of date, quarantining a unit when an issue has been identified, alerting system administrators to suspected intrusion attempts such as DDoS attacks, shutting down a single IP address to protect the entire network, and more.
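As a toy illustration of mapping detected irregularities to the corrective actions listed above, consider the dispatch function below. The event names and response strings are hypothetical, not any product's API:

```python
# Hypothetical dispatch from a detected irregularity to the
# corresponding corrective action; names are illustrative only.
def corrective_action(event: str, ip: str) -> str:
    actions = {
        "av_downlevel": f"upgrade anti-virus on {ip}",
        "intrusion_suspected": f"alert admins: possible DDoS involving {ip}",
        "unit_compromised": f"quarantine {ip} to protect the network",
    }
    # Unknown events are logged for review rather than acted on blindly.
    return actions.get(event, f"log '{event}' for {ip} and monitor")

print(corrective_action("unit_compromised", "10.0.0.42"))
# quarantine 10.0.0.42 to protect the network
```

The point of the table is policy, not code: the security team decides once which class of irregularity triggers which response, and the tooling applies that decision uniformly across every endpoint.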
The Big Payoff
This heightened, complete and automated endpoint visibility and control—due to proactive monitoring combined with automated self-healing protocols—is the true key to realizing the payoffs of a well-implemented asset management strategy: actionable insights, reduced risk to the IT environment, lowered need for IT staff additions to handle issues, increased productivity, and significantly reduced costs.
Imagine saving 5-10% of your procurement budget for new hardware because sound asset management has helped you enhance and prolong the life of your devices. Imagine saving another 5-10% in productivity loss because your IT department was alerted and able to take corrective action on an issue before the user even became aware of it. What could you do with that flexibility in your budget?
In broader terms, imagine not being part of this statistic: according to Forbes, the average cost of a data breach globally amounts to $3.86 million, or $148 per affected personal, health, financial or other electronic record. In the U.S. last year, the cost was $7.9 million per breach.
Enjoying savings like the former and staying out of statistics like the latter is the big payoff of successful asset management. Begin now, and your organization will be closer to realizing those payoffs, and to gaining the marketplace advantages they bring.
To learn more about how ITS can help your organization’s asset management strategy be more successful, check out our client computing offerings or contact us today for more information.
Jesse Alexander is the President of Innovative Technology Solutions.