MASTER PLAN 2017
Information Technology & Systems
Ian Fitzgerald
Manager Information Technology & Systems
Long-range plan of technologies and systems supporting the District’s electric and water services
Table of Contents
Table of Figures
Executive Summary
  History and Background
    Network Infrastructure
    Remote Site Communication
    Applications & Databases
      Geographic Information Systems & Computer Aided Drafting
      Supervisory Control and Data Acquisition
      Business Intelligence Dashboards
      Automated Meter Infrastructure
      Customer Information System & Portals
  Technology Acceptance
    Establish an IT Governance structure
    Enhance technology leadership roles
    Make technology training mandatory and routine
  Getting Started with the Plan
Efficiency Metrics
  Network Infrastructure Upgrades
  Business Process Improvements
System Planning Criteria
  Method Evaluation
    Maintenance and Support
    Network Build-Out
      Bandwidth & Performance
      Network Clients
      Campus Extension
    Hardware Refresh Cycle
      Key Challenges
      Useful Life Guidelines
      Factors That Determine Useful Life
      Operating Cost
      Recommendations
Existing System
  Network Infrastructure
    Network Switching
      Core Switches
      Data Center Switches
      Edge Switches
      LTE Routers
    Wireless Access
    Fiber Optic Cables
      Cable Build-Out
      Abandoned Pipe Re-Use
    Security Infrastructure
      Firewalls
      Access Control & Authentication
      Intrusion Detection / Prevention
      Video Surveillance
  Data Center Infrastructure
    Server Technology
    Virtual Desktops
    Phone Services
    Two-Way Radio
  SCADA Infrastructure
Financial Impact
  Capital Improvement
    New Purchases and Build-Out
    Hardware Refresh
  Maintenance and Support
Final Overview
Table of Figures
Tables
  Table 1: Cost Savings from Server Technology Upgrade
  Table 2: Savings in Power Outage Response – Jan & Feb 2017
  Table 3: Estimated Yearly Savings Automating Leak Notifications
  Table 4: Ever-increasing resource needs
  Table 5: Detailed Build-Out Cost Estimate
  Table 6: Detailed Hardware Refresh Cost Estimate
  Table 7: Detailed Maintenance & Support Cost Estimate
Maps
  Map 1: Internet of Things (IoT) Build-Out
  Map 2: Fiber Optic Cable Full Build-Out Design
  Map 3: Abandoned Water Pipe Repurposed for Communication
Figures
  Figure 1: Cost Savings from Server Technology Upgrade
  Figure 2: Process Improvements for Power Outage Response
  Figure 3: Process Improvements for Water Leak Notifications
  Figure 4: Bandwidth Growth with Radio Addition
  Figure 5: Bandwidth Growth with Camera Addition
  Figure 6: Ericsson Report - Explosion of IoT
  Figure 7: Equipment Required at Stations
  Figure 8: Network Hardware Useful Life Expectation
  Figure 9: Basic Network Architecture Design
  Figure 10: HDC Server Hardware Capacity
  Figure 11: HDC Storage Utilization
  Figure 12: CRP Hardware Capacity
  Figure 13: CRP Storage Utilization
  Figure 14: VDI Hardware & Storage Capacity
  Figure 15: District Unified Communication Diagram
  Figure 16: DMR Radio Architecture Design
  Figure 17: Typical Water SCADA RTU Layout
Executive Summary
The following document presents the Truckee Donner Public Utility
District (District) – Information Technology (IT) Master Plan. This Plan
is the culmination of a comprehensive technology assessment and
planning process which has included input from all executive and
department stakeholders.
This Plan is intended to be adaptive and flexible in order to balance the diverse technology needs of the District. It does this by creating new processes for managing technology that will improve services and systems for our customers, both external and internal. The Plan seeks to create operational synergy through these new processes and by supporting the people who deliver technology services at the District. This Plan also establishes a foundation for sustainable technology planning.
History and Background
In 2010, the District made a conscious decision: in order to provide more reliable and efficient electric and water services to the District’s rate payers, all while maintaining steady employment levels and costs, technology needed to play a much larger role, accentuating the automation of tasks, the management of data, and the improvement of security for District facilities, staff, and rate payers.
Network Infrastructure
Since then, the District has made major upgrades and renovations to its existing IT network infrastructure. In 2011, the District revamped its entire data network, adding a new data center and replacing outdated switches with state-of-the-art Layer 2/3 network switches (core, data center, and edge). This work integrated both physical and wireless network on-ramps with the added security of a new firewall and network authentication system.
The following year, the District migrated all of its physical servers to an advanced blade server chassis and virtual environment software. This server environment has allowed the District to significantly reduce the labor hours required to maintain approximately seventy servers, while providing high availability and improved performance.
In 2013, the virtual server environment was extended to include a disaster recovery environment, ensuring essential District servers and data were maintained at an offsite location. Later that year, the District replaced all desktop computers with a Virtual Desktop Infrastructure (VDI), allowing staff to connect to their personal desktops from any location on any device; this revolutionized how business can be conducted at the District, all while reducing the District’s attack surface.
Continuing the fast pace of IT infrastructure replacement, all of the District’s communication systems were then upgraded to improve reception, reliability, and device-agnostic choice. Old analog phones were replaced with an IP phone environment in 2014, and a badly outdated two-way radio system was replaced in 2015 with ground-breaking new two-way DMR radio technology.
Capping off the IT network upgrades, the District’s new disaster recovery data center, situated at the corporation yard, went online in January 2016, providing true redundancy of servers and data between the main office and a backup site.
Current network upgrade projects, to be completed by summer 2017, include:
·Adding 69 new HD security cameras to 16 existing cameras,
providing video security for 22 of the District’s 60 facility
properties
·Upgrading the District’s Layer 3 core switches, increasing capacity, redundancy, and reliability
·Adding secure LTE network capabilities to remote sites
without fiber communications
Remote Site Communication
While the network was being brought up to a modern level of technology, the District was also improving communications to many of its remote sites: pump stations, tanks, wells, and substations.
Communication in the Truckee, California region is notoriously difficult, traversing elevation changes of 3,000 feet, snowfall in the tens of feet, and hundred-foot pine trees with needle lengths comparable to the wavelength of a 900 MHz signal. Traditional and even the newest wireless/radio technologies are unable to bridge many of the communication gaps between the outlying District properties and the main office building.
In 2011, the District began the design of their SCADA Reliability
Improvement Project. This project is intended to connect every one
of the District’s remote facility infrastructure buildings, water and
electric, with 68.6 miles of redundant, secure, highly reliable fiber
optic cable. Due to the complexity, time, and limited resources, the
project is set to be completed in stages. Currently four stages have
been completed.
The first phase, consisting of 6.93 miles and 5 stations (Martis Valley
Substation, Truckee Substation, Glenshire Drive Well, Old
Greenwood Well, and District Headquarters) was completed in 2012.
Phase two went west to the Donner Lake geographic area in 2013,
entailing 7.25 miles of cable, one microwave link, and 6 stations
(Donner Lake Substation, Donner Lake Tank, Red Mountain Booster,
Richards Pump Station, West Reed Control Valve, and Wolfe Estates
Pump Station & Tank).
In 2014, the third stage was completed with 8.07 miles of cable, 3
microwave links, and another 8 stations coming online (6170 Tank,
China Camp Pump Station, Corp Yard Disaster Recovery, Donner View
Pump Station & Tank, Fibreboard Well, Palisades Pump Station &
Tank, Prosser Heights Well, and Prosser Village Well).
A fourth phase was constructed in 2017, bringing the design build-out close to 50% completion, adding another 10.04 miles of cable
and 9 more stations (College Control Valve, Gateway Control Valve,
Glenshire Control Valve, Glenshire Distribution Station, Hirschdale
Well & Tank, Northside Well, Sanders Well, Strand Pump Station, and
Well 20). Future phases will bring an additional 23 more stations
online with an added 36.31 miles of cable.
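The “close to 50%” figure follows directly from the phase mileages above. A minimal arithmetic check (a Python sketch using only the mileages already cited):

```python
# Completed fiber phases (miles), from the build-out history above
phases = {"2012 phase 1": 6.93, "2013 phase 2": 7.25,
          "2014 phase 3": 8.07, "2017 phase 4": 10.04}

built = sum(phases.values())        # 32.29 miles in the ground
remaining = 36.31                   # future phases still to construct
total_design = built + remaining    # the 68.6-mile full design

print(f"{built:.2f} of {total_design:.1f} miles = {built / total_design:.0%} complete")
# -> 32.29 of 68.6 miles = 47% complete
```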
Last link communication (short communication hops not impeded by
terrain) will further extend the IP Network to an additional 20
locations required for SCADA and AMI collection via microwave, Wi-
Fi, LTE, and radio technologies. Currently, four microwave links and
one wireless hop exist: Donner Lake Tank to Red Mountain, Old
Greenwood to Glenshire Distribution Station, 6170 Tank to
Ponderosa Palisades, Ponderosa Palisades to Donner View, and
Martis Valley Substation to Beacon Hill. As fiber optic cable access extends, these links will move to new locations, bringing more stations online more quickly than the SCADA Reliability Improvement Project alone could sustain.
Applications & Databases
Improved network infrastructure and communications have laid the backbone for the driving forces behind these upgrades: the need to improve efficiency, reliability, automation, government transparency, and customer service. The application level of the Information Technology Department aims to provide, integrate, and maintain software and databases that allow District staff to manage, model, and design assets (GIS & CAD), automate commodity flow (SCADA), analyze data (BI Dashboards), collect revenue (AMI), and serve our customers (CIS & Customer Portals).
Geographic Information Systems & Computer Aided Drafting
The District maintains two automated mapping and design software platforms: AutoCAD and ArcGIS. The GIS has, since 2000, maintained a single point of truth for the District’s assets and formed the basis for very advanced modeling of the District’s infrastructure. Flagship add-on software to
the GIS include: Designer (electric infrastructure design), Responder
(outage management), InfoWater (hydraulic modeling), Fiber
Manager (fiber optic management), GoSync (field mapping &
inspection), and ArcFM Web (web map portal).
Supervisory Control and Data Acquisition
Both the electric and water departments implemented SCADA in
1995 to operate and maintain their respective network
infrastructure. The Electric SCADA monitors voltage, kVA, and load
for every circuit. The product went through a significant upgrade in
2014, migrating from an OpenVMS physical server to a Windows
based virtual server. Advanced revenue metering systems were
deployed in 2015, providing better power purchase accounting.
Water SCADA was originally home-grown software developed at the University of Nevada, Reno. It served as an adequate system, automating the movement of water through the Truckee region, but has lately begun to fail as the system has grown larger while running on antiquated software. Replacement of the water SCADA began in 2013, encompassing new RTU and PLC hardware as well as new software. The project is designed and programmed in-house,
systematically replacing the old system in stages. Donner Lake’s five
stations went online in early 2015 and Glenshire’s five stations came
online in the fall of 2016. Project replacement is scheduled to finish
in 2021.
Business Intelligence Dashboards
Providing transparency and insight from multiple databases and sources, to both internal staff and external customers, is a struggle for many agencies. Business Intelligence (BI) dashboards provide a way for staff and customers to view large amounts of data in a form that is easy to read and understand.
Many of these dashboards, developed in-house with Logi Analytics, are in use today, with the external site MyH20 a prime example of how this technology can provide customers valuable information for understanding their water usage, whether they are experiencing leaks, and whether they are meeting California’s drought requirements.
BI dashboards will continue to be developed to ensure that masses of data are in front of the right people at the right time, aiding a better decision process.
Automated Meter Infrastructure
In early 2000, the District moved from manually reading meters to AMR (Automated Meter Reading) technology, which required District vehicles to drive by meters. This technology saved countless hours of labor, reducing meter reading staff to a half position today.
With the water department laying the groundwork for AMI (Automated Meter Infrastructure) technology in 2007, the District built a radio-based network infrastructure, providing true systematic reading automatically from meter to MDM (Meter Data Management) database without any staff labor involved.
Using the same infrastructure already in place, the electric department
results, it is expected the District IT department will be able to
provide one unified AMI network to serve both the electric and water
departments. This merger not only saves money for equipment,
servers and databases, but will reduce IT staff hours to manage and
maintain the system. Improved meter technology also provides customer service benefits: speeding up connects and disconnects, providing pre-paid options for consumers, and allowing our customers to gain better insight into their commodity use.
Customer Information System & Portals
All the development and improvements to technology here at the
District lead to one absolute purpose: the ability to provide the best
customer experience possible.
Since 1998, the District has maintained a Customer Information
System (CIS) and Accounting and Billing System (ABS). This system
has been the basis for all customer interactions from paying a bill to
creating work orders. The system was extended in 2013 with the addition of the SmartHub customer portal, enabling our rate payers to view and pay bills, observe their usage patterns, and sign up for notification of water leaks, via either an internet webpage or a smartphone app.
Customers also have other tools at their disposal, including a
conservation rebate portal, and the District website. The District
website was upgraded in 2011, and is due for another upgrade in 2017.
Technology Acceptance
IT resources and expenditures are likely to remain steady for the
foreseeable future while the demands for technology services will
continue to grow. In concert with establishing District-wide
principles for guiding IT decision-making and emphasizing increased
communication and coordination of IT efforts, this Plan seeks to
optimize the District’s resources and leverage its strengths to meet
growing demands.
A common theme has presented itself in years past regarding the acceptance and use of technology. Every organization should expect to face luddites, people who aren’t naturally tech-savvy, and naysayers whose knee-jerk reaction is to oppose new things. There are always some people who have their routines, and they just don’t want to change. This attitude persists as long as the organization permits it.
Although all the initiatives presented in the Plan are important, to
encourage the use and ensure the success of any technology at the
District, the following three must be addressed first as they will serve
as the foundation for future progress:
Establish an IT Governance structure
IT governance describes the process by which stakeholders have
input into priority setting, risk assessment, policy setting, and
decision making processes. IT governance is distinct from day to day
information technology management. It is imperative that IT
governance effectively encompass management, administrative, and
operational areas of the District. The Plan includes a matrix of
accountability and emphasizes transparency of process.
This Plan puts forth a new model for IT governance that enhances
communication, places technology customers at the center of
decision-making, sets technical standards, and aligns decisions with
the District’s strategic direction through the creation of two new
committees that will serve distinct roles. First, the IT Steering
Committee (ITSC) will be responsible for the overall direction of IT at
the District. Second, the Technical Review Board (TRB) will be
charged with the creation and maintenance of consistent technology
standards for the entire District community. The new model will also
establish “Participate IT.” This open forum concept will allow for the
entire District community to have input into the decision-making
process.
Enhance technology leadership roles
The creation of the IT support team in 2015 was a good start in providing IT leadership throughout the District. This “group of evangelists” mirrors the organization and includes the District’s star performers. Employees assigned to these roles help provide support on technology, and are responsible for maintaining communication and collaboration with all District and IT staff.
However, the real goal of the leadership role is to help people cross the bridge: to get them comfortable with the technology, to get them using it, and to help them understand how it makes their lives better. In other words, coach others on how to use the tools to their benefit.
Make technology training mandatory and routine
Bringing new technology and tools into the District can increase
productivity, and help you make better, faster decisions. But getting
every employee on board is often a challenge.
Training should provide a path to institutionalize the new technology and show employees that the District is transitioning from the old way of working to the new one. In addition to kick-off training for new technology, training should continue incrementally as users need it to become comfortable, until the technology becomes part of the workplace routine. Training can come in the form of group or one-on-one sessions, and even weekly tips & tricks emails.
Getting Started with the Plan
Successfully implementing the IT Master Plan will require thoughtful
execution of a collaborative process that targets outcomes supported
by the entire District. Gaining and maintaining the support of the
employees will require clear, consistent, and accurate
communication on behalf of District leadership throughout the
implementation process.
Following these recommendations, a framework is built for input
from management and employees into the decision making process
around technology at the District. Implementing the recommended
IT Governance initiative should begin immediately and be a priority
goal of the District’s executive management. IT Governance, in
particular, should not rely upon individuals. This will provide the
framework to support many of the other IT changes that have been
identified in the Plan.
Efficiency Metrics
Given today’s economy and the need to do more with reduced
resources, budgets, etc., organizations are looking to improve
efficiencies across departments and business units. Companies that
do not leverage information technology as a key part of their business
strategy to cut costs and increase productivity may ultimately cease
to function in today’s technology dependent world. Simply put, the
proper positioning of IT and its associated processes is now vital to
keeping and growing your business.
The deployment and continued upgrade of technology at the District
has provided three main benefit impacts: Cost Avoidance, Cost
Reduction, and Process Improvements.
Cost avoidance is the calculated value of the difference between
what we actually spend on projects or processes and what we would
have spent had we maintained our old habits and methods of
performing work.
Cost reduction is the process of looking for, finding, and removing
unwarranted expenses from a business to increase profits without
having a negative impact on product quality.
Process improvement involves reviewing current processes and improving them, or putting processes in place where nothing formal currently exists. Often, intangible benefits fit within the process improvement realm, like improved customer satisfaction or increased compliance.
By ensuring that the District continues to deploy the latest technology in a reasonable timeframe, and continues to improve business processes with technology advantages in mind, the District will continue to see more work product, with improved efficiencies throughout the company.
Going forward, projects like the ones highlighted below will continue to be evaluated against the metrics of Cost Avoidance, Cost Reduction, and Process Improvements, to ensure they are worthy of the District’s investment.
Network Infrastructure Upgrades
The initial purchase price for network infrastructure upgrades can be daunting. Technology upgrades, whether software or hardware, often look expensive, particularly when the life span of such upgrades may be only a few years. However, it must be noted that not doing these upgrades will cost the District much more, in the long run, than the initial upgrade price tag.
Table 1: Cost Savings from Server Technology Upgrade

                                                   3-Tier Legacy      Nutanix     Delta (Legacy vs. Nutanix)
Capital Expense
  Compute Layer (Blades, Rackmount Servers)           $675,000       $487,303       $187,697
  Data Storage Services                               $216,667             $0       $216,667
  Storage Area Network & Ethernet Switches                  $0             $0             $0
  SAN Ports & Cables                                  $240,000             $0       $240,000
  Fabric Interconnects                                  $3,750         $1,950         $1,800
  Server Virtualization Software/Hypervisor           $105,600        $98,400         $7,200
  Capitalized Professional Services/Installation            $0        $41,867       -$41,867
Total Capital Expense                               $1,241,017       $629,521       $611,496
Operating Expense
  Post Warranty Support                                $57,857        $23,762        $34,095
  Server Virtualization Software Support              $143,744        $42,874       $100,869
  Administration FTE                                  $224,333        $67,679       $156,654
  Prism Pro Licensing Subscription*                         $0        $60,827       -$60,827
  Data Center Rack Space                               $92,640        $59,040        $33,600
  Power & Cooling                                     $625,000       $243,750       $381,250
Total Operating Expense                             $1,143,574       $497,932       $645,642
Total CapEx & OpEx                                  $2,384,591     $1,127,452     $1,257,138
* Year 1 Prism Pro Subscription included with initial purchase
Take, for example, the new server upgrades. The upfront cost for upgrading the District’s server infrastructure (compute, storage, and network access) is approximately $480,000. This is a large purchase for a product that may become obsolete in year six. There are, nevertheless, large savings to the District with the purchase of new server technology. (See Table 1: Cost Savings from Server Technology Upgrade)
Had the District continued to use the older servers and server technology, there would have been a significant increase in both operating expenses (OpEx) and capital expenses (CapEx) to the District over the next 5 years. Due to the new servers’ smaller footprint, unified converged architecture, and minimized hardware parts, the District will benefit from a cost avoidance of approximately $611,496 and a cost reduction of $645,642, for a total real savings of $1,257,138 over the 5-year life of the servers.
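For readers tracing the headline figures, they are simply the capital and operating deltas from Table 1; a minimal check in Python:

```python
# Five-year cost deltas from Table 1 (3-tier legacy minus Nutanix)
capex_avoided = 1_241_017 - 629_521   # capital expense delta (cost avoidance)
opex_reduced  = 1_143_574 - 497_932   # operating expense delta (cost reduction)

print(f"Cost avoidance:       ${capex_avoided:,}")                  # $611,496
print(f"Cost reduction:       ${opex_reduced:,}")                   # $645,642
print(f"Total 5-year savings: ${capex_avoided + opex_reduced:,}")   # $1,257,138
```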
Figure 1: Cost Savings from Server Technology Upgrade
Network infrastructure upgrades provide intangible benefits as well. Old technology perpetuates a complex, difficult-to-manage silo infrastructure; new servers, on the other hand, increase ease of use and decrease hardware maintenance. Old technology also creates technical debt: being unable to upgrade software on unsupported hardware, for example, means new technology enhancements and security improvements are not being taken advantage of. New technology provides improvements to both security and performance as storage and processing demands on the system increase daily.
Although there is a cost to every network infrastructure upgrade, performance and process improvements are created, allowing District staff to continue to increase work product and services to employees and rate payers, all without increasing District labor costs.
Business Process Improvements
Process metrics evaluate business processes, establish process
improvement goals, and measure progress against those goals. Without
process metrics in place, you have no visibility into the effectiveness of
the changes you make to business processes. Process metrics evaluate
the progress of District components, discover areas of the agency that
need more attention or resources, and recognize components that are
effective or show impressive improvements over time.
Two of the more effective business processes changed at the District through the implementation of technology have been electric outage response and water leak identification and notification. Beyond the savings in labor costs, and the change from slow synchronous execution to instantaneous, real-time asynchronous execution, the real value of these process improvements lies in the increased transparency and timeliness of providing critical information to our customers.
When considering power outage response, the process goal was to
decrease both response and restoration times, reduce customer service
call volumes, and increase customer awareness and access to outage
location and restoration time variables.
Looking at ‘Figure 2: Process Improvements for Power Outage Response’, it is easy to see how going from a synchronous, labor-heavy transaction for reporting, dispatching, and resolving power outages to an asynchronous, heavily automated process has not only decreased dispatch and restoration times; call volumes have also greatly decreased, and customer access to outage information is fully transparent and in real time.
Figure 2: Process Improvements for Power Outage Response
A few technology projects, when combined, have allowed this business process to advance. AMI technology now allows meters to instantaneously send notifications upon a loss-of-power event. An Outage Management System implemented on top of the District’s GIS applications can then, within seconds, predict electrical fault locations and instantaneously notify crews of outage locations and extent. At the same time, the same information feeds a public-facing business intelligence dashboard, providing real-time outage and restoration information to our customers.
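To make the asynchronous flow concrete, the sketch below groups hypothetical meter “last gasp” power-fail events by their upstream transformer, which is the general shape of what an OMS does with AMI data. The message fields and grouping rule are illustrative assumptions, not the District’s actual OMS logic.

```python
from collections import defaultdict

# Hypothetical "last gasp" messages from AMI meters; in the real system
# these arrive via the AMI head end, and the OMS predicts the fault
# location from the GIS connectivity model.
events = [
    {"meter": "M1001", "transformer": "T-17", "feeder": "F-3"},
    {"meter": "M1002", "transformer": "T-17", "feeder": "F-3"},
    {"meter": "M2040", "transformer": "T-52", "feeder": "F-3"},
]

# Group outage reports by upstream device to estimate the fault extent
by_transformer = defaultdict(list)
for event in events:
    by_transformer[event["transformer"]].append(event["meter"])

for xfmr, meters in by_transformer.items():
    print(f"{xfmr}: {len(meters)} meters out -> notify crew, update dashboard")
```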
In the past, customers would have had to call into the District, often waiting in a queue for a CSR to answer; each connected call then cost the District 5 minutes of labor to report an outage or request an outage update. During the period of January 4, 2017 – February 28, 2017 there were 6,800 calls from customers, which may seem like a lot until you recognize that over 15,000 customers went directly to the outage website.
Table 2: Savings in Power Outage Response – January & February 2017

Customer calls avoided (CSR labor at $0.96/minute):
  Calls redirected to the outage website      15,000
  Call minutes saved                          225,000
  Labor $ saved                               $216,000

Faster restoration (3-man line crew at $4.80/minute):
  Major incidents                             299
  Minutes saved to restore                    8,970
  Labor $ saved                               $43,056
Coupling the value saved by not answering calls ($216,000) with a 30-minute reduction per major incident to identify an electrical fault and set up restoration ($43,056), there is an argument to be made that not only did the District save approximately $259,000 in labor costs (see Table 2) during the storms of 2017, but customers had power restored more quickly and were more quickly informed as to when their power would be back on.
Water leak notification processes have also improved greatly through the advent of technology.
Consider this: prior to the implementation of AMI (Automated Meter Infrastructure), utilities had to either manually walk or drive to each and every meter to collect a reading. This often meant meters were read only once per month. Much of the information collected then required an employee to manually update or insert the reads into the customer billing system (CIS). From there, an intensive analysis was done on the data to pull out the customers experiencing leaks. Due to the amount of time involved, often only the largest leaks were identified, and those notifications were sent in monthly bills. Customers, if they were identified at all, found out about leaks 6-8 weeks after the meter was read.
Figure 3: Process Improvements for Water Leak Notifications
Not only was there a significant amount of labor to read the meters, upload the data, analyze the data, and mail the notifications, but data entry errors were the norm, not the exception.
Today’s District business process for leak notifications not only entirely eliminates the human labor aspect of the process, but virtually eliminates data errors, all while collecting, analyzing, and notifying customers mere hours after a leak is first detected. As a result of the notifications, customers are able to view actual water usage, which identifies when a leak began and how much water may be leaking. This self-help usage portal has significantly reduced District customer service call volumes as well.
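As an illustration of the kind of rule an AMI system can run against interval reads, the sketch below flags continuous around-the-clock flow. The 24-hour window and flow threshold are illustrative assumptions, not the District’s actual leak criteria.

```python
def continuous_flow_leak(hourly_gallons, window_hours=24, threshold=0.5):
    """Flag a leak when usage never drops below `threshold` gallons/hour
    for `window_hours` consecutive reads (i.e., water never stops running)."""
    run = 0
    for gallons in hourly_gallons:
        run = run + 1 if gallons >= threshold else 0
        if run >= window_hours:
            return True
    return False

# A meter showing constant low flow around the clock suggests a leak
reads = [0.8] * 30                    # 30 straight hours of ~0.8 gal/hr
print(continuous_flow_leak(reads))    # -> True: notify the customer portal
```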
Table 3: Estimated Yearly Savings Automating Leak Notifications

Savings Type     Result                                                    Savings
Cost Reduction   1% power consumption cost cut from reduced water pumping  ~$7,000/year in power costs saved
Cost Reduction   Meter reader reduced from 2 FTE to 1/2 FTE                $180,000/year saved
Cost Avoidance   8 hours/week less data analysis by Engineering            $60,000/year avoided
Cost Avoidance   Access to MyH20                                           80% reduction in water usage calls
Non-Financial    Automatic meter usage tracking                            Minimal data errors
Every other technical project aimed at improving business processes has similar success stories, in cost avoidance, cost reduction, and other non-financial benefits.
It is important to continue to evaluate the cost avoidance, cost reduction
and process improvements for all information technology projects
moving forward as the base metrics of establishing a return on
investment in technology here at the District.
System Planning Criteria
This section provides a discussion on the system criteria developed
for evaluating master planning scenarios. It also includes the cost-estimating criteria used in developing cost estimates and determining the financial impact of the recommended improvements.
Method Evaluation
There are three distinct factors that determine where costs need to
be allocated, and which capital projects are required to maintain and
improve system performance: Maintenance & Support, Network
Build-Out, and Hardware Refresh Cycle.
Maintenance and Support
The first criterion is the cost to maintain and support the hardware and software infrastructure in which the District has already invested. In the information technology sector, the purchase of hardware or software is only the initial cost; there are continued annual costs for maintenance and support of these products until their end-of-life (EOL), often running 20%-30% of the initial cost per year.
Maintenance in this section means those preventive, diagnostic,
updating, replacement, and repair procedures of hardware or
software that the District has in place. Maintenance is provided by the vendor who makes the product in question. Specific
maintenance might include:
·periodic replacement of parts and renewal of consumable
supplies;
·repair or replacement of faulty components;
·periodic inspection and cleaning of equipment;
·updating or upgrading hardware and software, including
installing new operating system versions;
·installing and removing equipment and applications.
The term support refers to the actions taken on behalf of users rather
than to actions taken on equipment and systems. Support denotes
activities that keep users working or help users improve the ways
they work. Included under support might be such items as:
·help desks and other forms of putting a person in touch with
another person to resolve a problem or provide advice;
·automated information systems, such as searchable
frequently-asked-question (FAQ) databases or newsletters;
·initial training and familiarization tours for equipment and
software, whether automated or conducted by a human;
·instructional and curriculum integration support, usually
through observation and personal interaction between a
teacher and a technology coordinator; and
·technology integration support for administrative
applications, usually conducted through specialized
consultants or software/systems vendors.
It should also be noted that, without continued payment into maintenance and support contracts, the District could be restricted from even using a product due to license restrictions.
Maintenance & support costs per device and/or software type are detailed in Table 7, within the Financial Impact section.
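The 20%-30% rule of thumb adds up quickly over a product’s life; an illustrative calculation (the purchase price here is hypothetical):

```python
initial_cost = 100_000           # hypothetical hardware/software purchase
for rate in (0.20, 0.30):        # annual support at 20%-30% of initial cost
    five_year_support = initial_cost * rate * 5
    print(f"{rate:.0%}/yr -> ${five_year_support:,.0f} of support over 5 years")
# 20%/yr -> $100,000; 30%/yr -> $150,000 (as much as, or more than, the purchase)
```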
Network Build-Out
The District is still in the phase of building out the network to full
capacity. Capacity can be evaluated three-fold: bandwidth &
performance, network clients, and campus extension.
Bandwidth & Performance
As computer systems continue to advance, and new software and/or devices are added to the network, additional resource taxes are placed on the network, continually requiring more bandwidth, memory, and processing power. The following measures are often considered important (a computational sketch follows the list):
·Bandwidth, commonly measured in bits/second, is the maximum rate at which information can be transferred
·Throughput is the actual rate at which information is transferred
·Latency is the delay between the sender transmitting information and the receiver decoding it; this is mainly a function of the signal’s travel time and the processing time at any nodes the information traverses
·Jitter is the variation in packet delay at the receiver of the information
·Error rate is the number of corrupted bits expressed as a percentage or fraction of the total sent
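These measures can be computed from simple packet logs; the sketch below uses made-up timestamps rather than District data:

```python
# Hypothetical per-packet log: (bytes, send_time_s, receive_time_s)
packets = [(1500, 0.000, 0.012), (1500, 0.001, 0.014), (1500, 0.002, 0.015)]

latencies = [rx - tx for _, tx, rx in packets]          # per-packet delay
latency_avg = sum(latencies) / len(latencies)           # seconds
jitter = max(latencies) - min(latencies)                # delay variation
span = packets[-1][2] - packets[0][1]                   # first send to last receive
throughput_bps = sum(size * 8 for size, _, _ in packets) / span

print(f"latency {latency_avg * 1000:.1f} ms, jitter {jitter * 1000:.1f} ms, "
      f"throughput {throughput_bps / 1e6:.1f} Mb/s")
# -> latency 12.7 ms, jitter 1.0 ms, throughput 2.4 Mb/s
```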
Figure 4: Bandwidth Growth with Radio Addition
Figure 5: Bandwidth Growth with Camera Addition
Five years ago, the District never reached 1 Gb/s of bandwidth demand; today 2.5 Gb/s is the norm. Large amounts of storage were also not a need for the District five years ago; however, with the addition of HD security cameras, AMI, and new security technologies, Big Data analytics are becoming the norm. This has created requirements for larger, faster storage and network capacity.
Table 4: Ever-increasing resource needs

Year   Operating System    CPU       Memory   Hard Drive   Graphics
1998   Windows 95/NT       90 MHz    16 MB    80 MB        0 MB
2004   Windows 2000/XP     1.5 GHz   384 MB   2.2 GB       64 MB
2009   Windows 7           2.4 GHz   2 GB     8 GB         256 MB
2015   Windows 8.1         2.5 GHz   4 GB     65 GB        1 GB
As software and devices require more resources, it is imperative that the District keep up with the requirements to ensure the system is working at its fastest capacity, rather than leaving employees waiting for data to process.
Network Clients
As the Internet of Things continues its aggressive expansion, the number of client devices will continue to grow exponentially. The Ericsson Mobility Report (2015) puts Machine-to-Machine (M2M) growth at 25% year over year through 2021. The Goldman Sachs IoT Primer (2014) sees the potential for 10x as many things connected to the internet by 2020.
Figure 6: Ericsson Report - Explosion of IoT
The District has grown from approximately 130 devices on the network in 2010 to close to 1,000 devices today. As AMI technology continues to advance, it is very likely that by 2025 the District will have over 40,000 devices connected.
Continuing to build network and computer resources to support this large number of clients is imperative to ensuring the continued operations of the District.
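The implied growth rate can be sanity-checked with a quick compound-growth sketch. The projection is illustrative only, since the jump past 40,000 devices would come from AMI meter deployment rather than smooth organic growth:

```python
devices_2010, devices_2017 = 130, 1000
years = 2017 - 2010
cagr = (devices_2017 / devices_2010) ** (1 / years) - 1
print(f"historic growth: {cagr:.0%}/year")          # -> ~34%/year

# Even at Ericsson's more conservative 25% M2M rate, counts climb fast
count = devices_2017
for year in range(2018, 2026):
    count *= 1.25
print(f"2025 at 25%/yr: ~{count:,.0f} devices")     # -> ~5,960; AMI pushes past 40,000
```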
Campus Extension
The District’s service territory sprawls across approximately 45 square miles. Beyond the headquarters building located at 11570 Donner Pass Road, the District requires network connectivity to many other sites throughout the territory. These off-site locations are imperative to the operations of the District’s electric and water infrastructure; they have evolved from just reading statistical values 5 years ago, to full remote and automated operation of electric and water commodity flow today, to customer control via pre-pay options in the next 5 years.
To allow for this advanced commodity-flow functionality, equipment needs to be in place throughout the District’s territory to collect and broadcast communications to the devices situated in the field.
Map 1: Internet of Things (IoT) Build-Out
Stations require a significant amount of network and SCADA equipment: fiber, LTE, and/or radio communication devices, network switches, POE injectors, UPS power supplies, wireless access points, AMI collectors, surveillance cameras, and SCADA RTU and PLC equipment. Network hardware costs can run up to $10,000 per station, and SCADA RTU equipment ranges from $15,000-$20,000 per station.
Figure 7: Equipment Required at Stations
Smart Grid and AMI require significantly less network equipment to support, outside of the AMI collectors located at many of the main stations. Most of these devices (potentially over 30,000) have network communications built into the infrastructure device itself: meters, reclosers, switches, and transformers. Managing and communicating with this vast number of devices will, however, put a significant tax on the network in terms of bandwidth, latency, and storage requirements.
Network build-out is harder to put costs to, as it depends heavily upon the speed at which software is developed and deployed at the District, the speed and volume at which new technology such as Smart Grid, AMI, and security cameras is deployed, and new initiatives not yet required of or mandated on the District. The most accurate prediction of build-out involves each of the main stations and their default equipment build-out as described in Figure 7. Currently 22 large stations have been completed, with another 29 large stations and 23 small stations still to build. Using these criteria of large and small stations to complete over the next ten years, a more detailed build-out cost estimate is put forth in Table 5, in the Financial Impact section.
Hardware Refresh Cycle
Key Challenges
§The primary factors that determine the useful life of
enterprise equipment are market innovation, vendor end of
life (EOL) policies, operating life and operating cost
§Limited lifetime warranties, higher mean time between
failures (MTBF) design criterion for critical networking
components and modular platforms are affecting enterprise
useful life assumptions in a positive way
§Two primary inhibitors to extending the useful life of older
network equipment are the vendors' EOL support programs
and the critical role of the equipment in the network
Useful Life Guidelines
Sometimes referred to as the technological life of an asset, the useful
life reflects how long the equipment can be used before the product
becomes functionally obsolete — that is, when the risk associated
with the product becomes too great, or when the operational costs
make a transition to a new product an economic advantage. Useful
life represents the normal time a piece of equipment is expected to
be in place in an average enterprise network. Unanticipated changes
to the operating environment can affect the equipment's useful life.
For example, a significant expansion to the business that puts
increasing demands on a core switch or new application architectures
that change the LAN infrastructure could negatively affect the
anticipated useful life.
During periods of rapid innovation, network infrastructure
components tend to be replaced on a regular and short cycle.
Historically, data-networking equipment was replaced every three or
four years, and it was a fairly common practice to lease equipment
for three years and then "rip and replace" the equipment for a new solution. (See Figure 8: Network Hardware Useful Life Expectation.) Traditional voice equipment was at the other end of the
spectrum, remaining in the infrastructure for seven to 12 years or
more, with few or no hardware upgrades, but these former norms
have changed considerably. Due to the increased standardization and
stable requirements of edge switching, limited lifetime warranties
offered by several vendors and increasing MTBF, the useful life of this
type of equipment has increased to seven to 10 years. As a result of
better quality and reliability when compared with older wireless LAN
(WLAN) standards, IEEE 802.11n equipment useful life stands in the
five- to seven-year range. Enterprises continue to struggle to use the
capacity that is available as part of 802.11n, even without using some
of the scalability functionality that is already available. There will be
a lot of early adopters for 802.11ac in the home market, but no
traction in the enterprise. In most cases, industry recommendation is
that IT organizations use core switches and routers for five to seven
years. Replacement should not be done on a regular schedule, but
should be based on:
§Analysis of new requirements
§The cost of operating the old equipment
§The level of risk associated with operating long-lived network
assets
In some circumstances, it may be possible to extend the useful life
beyond seven years. This type of equipment may be negatively
impacted by capacity increases (for example, LAN backbone traffic or
increasing WAN speeds), which may lower its useful life.
Alternatively, these assets may be redeployed, for example, by moving the core switch to handle aggregation or even edge traffic.
Compared with core switches and routers, some of the newer data
center technologies can have shorter useful lives. These include
fabrics, fabric extenders and input/output (I/O) convergence, whose
useful life ranges from four to seven years. Until these new
technologies and products have a proven track record, we advise a
slightly more conservative approach when setting useful life
expectations. We expect application delivery controllers (ADCs) and
WAN optimization controllers (WOCs) to have a three- to five-year
useful life. There remains significant innovation in these markets,
which may lead to forced software or hardware upgrades and,
consequently, reduced useful life. The useful life of WOCs is still
limited by their use of hard disks. We find that new features, such as
new Secure Sockets Layer (SSL) key size, in the ADC market can lead
to upgrade requirements. Security requirements can be split
between threat-facing and non-threat facing equipment. Threat
facing devices will usually have a shorter life (three to five years).
Unified threat management devices will reduce the overall life,
because of the requirement to expand as one or more particular
functions consume all the resources of the appliance. Longer life
cycles (five to seven years) can be attained by using dedicated
function appliances.
New IP telephony (IPT) equipment has a significantly shorter life cycle
(five to seven years) than the traditional time division multiplexing
(TDM) equipment (seven to 12 years), which IPT has largely replaced.
We expect the call setup hardware to have a life span similar to
general-purpose servers, although the software is likely to be
covered through software support contracts and have a shorter
useful life. After two waves of innovation (the move from Integrated Services Digital Network [ISDN] to Internet Protocol [IP], and from standard-definition [SD] to high-definition [HD] video resolution), videoconferencing equipment's useful life has stabilized between four and six years. Although there are new features, such as 2K line video, 3D video and new codecs, which will be put into place for new installations, they are unlikely to prematurely retire existing installations.
Most clients consider "good enough" video to be adequate for most
purposes.
Factors That Determine Useful Life
Four primary factors determine a product's useful life in an enterprise
network.
Market Innovation
The relative stability of a product is key for determining the useful life
of most products. Markets that are increasingly standardized or have
progressed further down the commoditization curve provide the
impetus to increase or stabilize the useful life of products. Products
with a smaller percentage of software or stable software features are
also good candidates for extended life. Market innovations do not
necessarily require or force an upgrade. For example, there is no
need to upgrade a workgroup LAN to 10GbE. However, a
requirement for Power over Ethernet (PoE or PoE +) for items like
security cameras or some high-end WLAN access points (APs) may
force a technology upgrade. Other new requirements — such as
broad deployments of network access control or WOCs — may be
better handled by overlays, while enabling the switch and router
installation to remain in place to extend their useful lives. Other parts
of the network, such as network security and ADCs, have more
innovation and critical demands for new capabilities. For example,
the migration to 2048-bit or 4096-bit SSL keys has necessitated a move toward ADCs with higher overall performance.
Vendor EOL Policies
Vendor EOL announcements trigger a series of events that lead to the
end of support for a product. Although the lack of a support contract
is an issue for network operations, it does not result in a mandatory
requirement to replace the equipment. In some circumstances, it is
perfectly fine to get support from a third-party vendor. It is important
to understand what an EOS announcement means. Although it
impacts and influences useful life of a product, it doesn't have to
dictate it. In the case of Cisco, an EOS announcement causes a specific
chain of events. The final date that Cisco will accept orders for new
networking equipment is approximately six months after an EOS
announcement. Starting with this EOS date, Cisco will provide full
software and hardware support for the product for a total of five
years, presented as three years for software and five years for
hardware. Software support generally means that bugs will be fixed
and security vulnerabilities will be closed. There may be some feature
upgrades (especially if the product is part of a family where active
developments are still being performed). After the third year, Cisco
will only provide hardware support (basically replacement for failed
components). This can be a competitive differentiator, especially for
products that are Internet-facing and require security patches to
lower risk. Most other vendors have some variations on these five-
year, EOS support options. Some workgroup switches will include
some form of lifetime warranty for the hardware, but may exclude
power supplies and fans in other cases. Enterprises need to carefully
understand the fine print on what is covered on these often-limited
lifetime warranties. A final vendor issue in determining the useful life
of a product may come down to luck and careful buying. Buying a
product near the end of its time in a product portfolio can reduce its
useful life in the network. Although organizations should be aware of
where a product fits in a vendor's life cycle, it's not always easy to
predict when a vendor will update its product portfolio.
Operating Life
Operating life affects useful life and is specifically tied to the
hardware design of the product. It is related to, but not the same as,
the product's MTBF, which is calculated based on a curve that
predicts a level of failure in the product line. Historically, most
network equipment was designed to have MTBF of approximately
100,000 hours (roughly 11 years). Failures often occur in power
supplies and fans, although environmental issues can also affect the
longevity of semiconductor components. Looking at new hardware
design, fixed form-factor switches are being designed with increasing
MTBF — in many cases, 200,000 hours or more. Thus, for some
equipment, the operating life will no longer be part of the equation
to determine the useful life. Switches equipped for PoE+ are likely to
have a shorter operating life than those without PoE+, because of
larger power supplies, more heat and increased air-cooling
requirements.
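To put MTBF figures in perspective, the worked example below converts hours to years and, under a standard constant-failure-rate (exponential) assumption (an assumption of this sketch, not a vendor claim), estimates the chance of a failure over a seven-year service life:

```python
import math

HOURS_PER_YEAR = 24 * 365                # 8,760 powered-on hours per year

for mtbf_hours in (100_000, 200_000):    # legacy vs. newer design points
    years = mtbf_hours / HOURS_PER_YEAR
    # Exponential model: P(failure by time t) = 1 - e^(-t / MTBF)
    p_fail_7yr = 1 - math.exp(-7 * HOURS_PER_YEAR / mtbf_hours)
    print(f"MTBF {mtbf_hours:,} h = {years:.0f} years; "
          f"~{p_fail_7yr:.0%} chance of failure over a 7-year life")
# -> 100,000 h = 11 years (~46%); 200,000 h = 23 years (~26%)
```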
Operating Cost
This is the final consideration when determining useful life. The price
of some equipment — particularly Ethernet workgroup switches —
has declined significantly in the past five to 10 years. In most cases,
software and hardware service contracts are related to the original
equipment costs. When you add in the arrival of new lifetime
warranties and more energy-efficient products that are available on
the market, we have seen cases in which replacing older LAN
switches with new ones — especially those that offer lifetime
warranties — can have an ROI of two years or less.
Recommendations
§Upgrade or replace network equipment only when the risks
become unacceptable or significant new technical
requirements emerge
§Analyze and understand each major product category and
end of sale (EOS) announcements from different vendors to
determine the associated risks and prepare a migration plan
§Do not follow predetermined, regular upgrade cycles for
network equipment, since business, application and
technical requirements can impact useful life positively and
negatively
Using the above recommendations, a detailed hardware refresh cost estimate is presented in Table 6, in the Financial Impact section.
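One way to operationalize this guidance is to keep the useful-life ranges as planning data and flag equipment entering its review window. The sketch below does this with hypothetical purchase years; consistent with the recommendation against fixed upgrade cycles, it marks a review window rather than a mandatory replacement date:

```python
# Useful-life ranges (years) summarized from the guidance in this section
useful_life = {
    "edge switch": (7, 10),        "core switch/router": (5, 7),
    "WLAN 802.11n AP": (5, 7),     "threat-facing security": (3, 5),
    "IP telephony": (5, 7),        "data center fabric": (4, 7),
}

inventory = [("core switch/router", 2016),       # hypothetical purchase years
             ("threat-facing security", 2014)]

for kind, bought in inventory:
    low, high = useful_life[kind]
    print(f"{kind} bought {bought}: review risk from {bought + low}, "
          f"reassess no later than {bought + high}")
```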
Existing System
This section provides a description of the existing IT applications and
infrastructure. The system is broken down into Network, Security,
Communications, Data Center, SCADA, and Applications.
Network Infrastructure
Networks are the “plumbing systems” that convey electronic data from one place to its intended destination. Data may be conveyed through physical cables, including fiber optics, or by wireless means such as radio frequency and cellular networks. They are the backbone upon which all information travels.
Network Switching
Core Switches
A core switch is a high-capacity switch generally positioned within the backbone or physical core of a network. Core switches serve as the gateway to all data center and edge switches, and to the Internet; they provide the final aggregation point for the network and allow multiple aggregation modules to work together.
In 2016, District core switching was upgraded from one core switch with redundant control modules to four core switches working in Hot Standby Router Protocol (HSRP) mode, used for establishing a fault-tolerant default gateway. (See Figure 9: Basic Network Architecture Design) Two core switches are located at the District headquarters data center, and two more at the Corp Yard disaster recovery data center. This design provides high availability for the heart of the District network infrastructure.
Data Center Switches
Data center switches deliver key scalable features that meet the demands of today’s virtualized, multi-vendor cloud environments. Considering that the District network architecture is heavily designed as an internal cloud, where all applications, desktops, and data reside, data center switches are critical devices to the operations of the District.
The District operates two data centers; one at the District
Headquarters – HDC and one at the Corp Yard – CRP. HDC has a
capacity of six cabinets of 48 Rack Units (RU) each. CRP has a capacity
of four cabinets of 48 Rack Units (RU) each, with the ability to add
one more cabinet.
Figure 9: Basic Network Architecture Design (diagram: four HSRP core switches split between the HDC data center and the CRP disaster recovery center; data center switches feeding servers/VDI; edge switches and wireless access points serving offices and field stations; links are single-mode fiber at 1 Gb or 40 Gb, multi-mode fiber at 10 Gb, and Cat cable at 1 Gb)
The data centers are supported by seven top-of-rack and fabric interconnect switches: five located at HDC and two located at CRP. The fabric interconnect switches support 10 Gb bandwidth, whereas the top-of-rack switches support only 2 Gb. Upgrades to these switches will include increasing the top-of-rack switches to 10 Gb service.
Edge Switches
Edge switches are the gateways to the District network, each connecting from a few up to a maximum of 48 endpoint (client) devices: laptops, desktops, security cameras, and PLCs (Programmable Logic Controllers). For this reason, edge switches are generally considered less crucial than core switches to a network’s smooth operation; the loss of one edge switch impacts only a handful of devices.
The District uses two different types of edge switches: one designed with the office in mind, non-ruggedized with 1 Gb POE (Power-over-Ethernet) ports; and one for satellite locations like substations and pump houses, ruggedized with 100 Mb ports and generally without POE. The District currently operates seven edge switches at the headquarters, and one edge switch at each satellite station where communication paths exist. Currently 23 satellite stations are online, with 7 more coming online by spring of 2017. The final build-out projects 64 edge switches.
Because edge switches are located away from the redundancy of the
data center, they rely on UPS (Uninterruptible Power Supply) devices
to ride through power losses and maintain endpoint device
connectivity. Many sites require on-site power generation to ensure
power is never totally lost.
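As a rough illustration of why generation is needed in addition to
battery backup, the sketch below estimates UPS ride-through time at
a remote site. The wattages and battery capacity are assumptions for
illustration, not measured District loads.

```python
# Rough UPS ride-through estimate for a remote edge-switch site. All
# figures are illustrative assumptions, not measured District loads.

def runtime_minutes(battery_wh: float, load_w: float, efficiency: float = 0.9) -> float:
    """Minutes of runtime from usable battery energy, load, and inverter efficiency."""
    return battery_wh * efficiency / load_w * 60

site_load_w = 60 + 40      # assumed: ruggedized switch (~60 W) + PLC/telemetry (~40 W)
usable_battery_wh = 400    # assumed usable energy for a small rack-mount UPS

print(f"~{runtime_minutes(usable_battery_wh, site_load_w):.0f} minutes of ride-through")
# Outages longer than this are why many sites also need on-site generation.
```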
It is critical that the District maintain one spare for every edge-switch
model to ensure we never lose access to any device for more than
one hour, should the switch fail.
LTE Routers
LTE routers allow the District to carry LAN network traffic over the
WAN (Internet) through secure encrypted tunnels. These routers
provide five usable IP addresses per subnet at stations where fiber
options have not been landed. They give the District high-speed
bandwidth, with strong security policies in place, at locations the
District has not previously been able to reach with network packets.
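The five-address figure is consistent with a /29 allocation, which
carries eight addresses: the network and broadcast addresses are
reserved, and the router takes one. A minimal sketch, assuming a /29
and a purely illustrative prefix:

```python
# Why a small station subnet yields five usable device addresses: a /29
# carries eight addresses, network and broadcast are reserved, and the
# LTE router takes one. The 192.168.10.0/29 prefix is purely illustrative.
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/29")
router, *devices = subnet.hosts()   # 6 assignable addresses; router takes one

print(len(devices))                 # -> 5 addresses left for PLCs, cameras, etc.
```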
LTE routers may also be deployed at stations without a true fiber
optic loop, ensuring high-bandwidth redundancy at these locations.
These options are critical in allowing operations staff to properly
operate SCADA controls from the back office.
It is critical that the District maintain one spare to ensure we never
lose access to any device for more than one hour, should the LTE
router fail.
Wireless Access
Like edge switches, wireless access points (WAPs) are the access on-
ramps of the network for endpoint devices. Unlike switches, which
require physical cables to connect endpoint devices, WAPs allow
connection via radio airwaves, usually in the 2.4 and 5 GHz ranges.
Wireless access to computer networks is becoming the industry
standard. Ensuring that many District devices have the option to
connect to the network wirelessly allows staff and vehicles to “float”
between Headquarters, the Corp Yard, and satellite stations without
circumventing security or increasing their network connection times.
For shorter hops between satellite locations that will not have fiber
optic cable landed, WAPs can act as a bridge between locations,
offering speeds of up to 300 Mb/s and providing a more cost-effective
solution at certain locations for the District.
The District currently operates 30 wireless access points, with a
potential build-out of 54 units.
Fiber Optic Cables
Cable Build-Out
A comprehensive design of layer 1 communication between District
headquarters and the 51 satellite facility locations was completed in
2011 (Map 2).
The full build-out encompasses 68.6 miles of fiber optic cable, broken
down into 216-, 144-, 96-, 48-, 24-, and 12-strand-count cable. Each of
the fifty-one (51) stations that will have fiber cable landed will have
two diverse routes back to both District headquarters and the
disaster recovery center located at the Corp Yard, ensuring
redundancy and reliability.
Map 2: Fiber Optic Cable Full Build-Out Design
Abandoned Pipe Re-Use
The District currently owns 9.35 miles of pipe (Map 3) that is no
longer usable by the water utility for conveying water. These
abandoned pipes have the potential to be re-used for network
communication.
This opportunity to re-use existing infrastructure for purposes other
than its original one has the potential to save the District a
considerable amount of money, both during construction and through
improved reliability and reduced maintenance in the future.
For example, re-purposing the abandoned pipe in the Tahoe Donner
subdivision changes the original fiber optic cable design from a 100%
overhead build to a 70% underground, 30% overhead build. Moving
70% of the build from overhead to underground will increase
reliability and reduce maintenance cost considerably, because tree
falls are eliminated as a risk factor in those areas.
Map 3: Abandoned Water Pipe Repurposed for Communication
Security Infrastructure
In 2013, the President of the United States issued an executive order
(Executive Order 13636) to improve critical infrastructure
cybersecurity: “Repeated cyber intrusions into critical infrastructure
demonstrate the need for improved cybersecurity. The cyber threat
to critical infrastructure continues to grow and represents one of the
most serious national security challenges we must confront. The
national and economic security of the United States depends on the
reliable functioning of the Nation’s critical infrastructure in the face
of such threats. It is the policy of the United States to enhance the
security and resilience of the Nation’s critical infrastructure and to
maintain a cyber environment that encourages efficiency, innovation,
and economic prosperity while promoting safety, security, business
confidentiality, privacy, and civil liberties.”
Firewalls
Firewalls are the District’s first line of defense against unauthorized
access, while still permitting outward communication. A firewall is a
network security system that monitors and controls incoming and
outgoing network traffic based on predetermined security rules.
The District is constantly under attack by unauthorized elements,
both directly from the internet and from within via email or
website links.
Currently the District operates one firewall, which is a single point of
failure to outside communication. Adding a second firewall at the
Corp Yard Data Center will ensure reliability and resiliency of the
District’s front line defense.
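As a toy illustration of how rule-based filtering works (the rules and
ports below are invented for illustration, not the District’s actual
policy), a firewall evaluates each packet against an ordered rule list
and applies the first match:

```python
# Toy first-match packet filter; rules and ports are invented, not the
# District's actual policy.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Rule:
    action: str                  # "allow" or "deny"
    direction: str               # "in" or "out"
    dst_port: int | None = None  # None matches any port

RULES = [
    Rule("allow", "out"),               # permit outward communication
    Rule("allow", "in", dst_port=443),  # permit inbound HTTPS, e.g. to a portal
    Rule("deny", "in"),                 # default-deny all other inbound traffic
]

def decide(direction: str, dst_port: int) -> str:
    for r in RULES:
        if r.direction == direction and r.dst_port in (None, dst_port):
            return r.action
    return "deny"

print(decide("in", 3389))   # -> "deny": unsolicited inbound RDP is dropped
```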
Access Control & Authentication
Access Control Server (ACS) is an access policy control platform that
helps the District comply with growing regulatory and corporate
requirements. By integrating with other access control systems, it
helps improve productivity and contain costs. It supports multiple
scenarios simultaneously, including:
·Device administration: Authenticates administrators, authorizes
commands, and provides an audit trail
·Remote Access: Works with VPN and other remote network
access devices to enforce access policies
·Wireless: Authenticates and authorizes wireless users and hosts
and enforces wireless-specific policies
·Network admission control: Communicates with posture and
audit servers to enforce admission control policies
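A minimal sketch of the device-administration scenario follows. The
user names, roles, and commands are hypothetical; a real ACS
deployment enforces this through protocols such as TACACS+ and
RADIUS rather than application code.

```python
# Hypothetical device-administration check: authenticate, authorize each
# command, and record an audit trail. Users and commands are invented.
AUTHORIZED = {
    "netadmin": {"show running-config", "configure terminal"},
    "operator": {"show running-config"},
}
audit_log: list[str] = []

def run_command(user: str, command: str) -> bool:
    allowed = command in AUTHORIZED.get(user, set())
    audit_log.append(f"{user}: {command} -> {'permitted' if allowed else 'denied'}")
    return allowed

run_command("operator", "configure terminal")   # denied, but still audited
print(audit_log[-1])
```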
Intrusion Detection / Prevention
An Intrusion Detection System (IDS) is a network security technology
originally built for detecting vulnerability exploits against a target
application or computer.
Vulnerability exploits usually come in the form of malicious inputs to
a target application or service that attackers use to interrupt and gain
control of an application or machine. Following a successful exploit,
the attacker can disable the target application (resulting in a denial-
of-service state) or can potentially gain all the rights and permissions
available to the compromised application.
An Intrusion Prevention System (IPS) extends IDS solutions by adding
the ability to block threats in addition to detecting them, and this has
become the dominant deployment option for IDS/IPS technologies.
Multiple solutions work in unison at the District to provide layers of
security. Maintaining hardware and software focused on intrusion
detection and prevention is paramount to the safety of both the
District’s infrastructure and customers’ private data.
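The IDS-versus-IPS distinction can be illustrated with a toy signature
matcher; the patterns below are made-up stand-ins for real
vulnerability signatures.

```python
# Toy signature matcher illustrating the IDS-vs-IPS distinction: both
# inspect traffic against known exploit patterns; only the IPS blocks.
# These signatures are made-up stand-ins for real vulnerability patterns.
SIGNATURES = [b"../../etc/passwd", b"<script>", b"' OR 1=1"]

def inspect(payload: bytes, prevention: bool) -> str:
    for sig in SIGNATURES:
        if sig in payload:
            return "blocked (IPS)" if prevention else "alert raised (IDS)"
    return "forwarded"

print(inspect(b"GET /../../etc/passwd HTTP/1.1", prevention=True))   # blocked (IPS)
```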
Video Surveillance
Closed-circuit television (CCTV), also known as video surveillance, is
the use of video cameras to transmit a signal to a central location for
display on a limited set of monitors. The District’s CCTV system
operates only as required to monitor a particular event. The current
system utilizes network-attached storage devices, providing recording
for weeks at a time, with a variety of quality and performance options
such as motion detection and email alerts.
Video surveillance plays a significant role in protecting the District’s
facilities, employees and customers from harm, theft, malfunctions,
and tampering. Eighty-five surveillance cameras are currently
deployed at twenty-one satellite facility stations as well as the District
headquarters. Future roll-out includes up to 120 additional cameras
at 40 additional satellite facility stations.
With all this high-definition video security comes a large amount of
network traffic. Although the network is currently designed to
handle the existing camera infrastructure, adding cameras in the
future will require re-evaluation of the network.
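A back-of-the-envelope estimate shows why. The 4 Mb/s per-camera
bitrate below is an assumption, and continuous recording is an upper
bound given that the system records on motion.

```python
# Back-of-the-envelope camera traffic and storage estimate. The 4 Mb/s
# per-camera bitrate is an assumption, and continuous recording is an
# upper bound since the system records on motion.
CAMERAS_TODAY = 85
CAMERAS_FUTURE = 85 + 120
MBPS_PER_CAMERA = 4

for label, count in [("today", CAMERAS_TODAY), ("full build-out", CAMERAS_FUTURE)]:
    agg_mbps = count * MBPS_PER_CAMERA
    tb_two_weeks = agg_mbps / 8 * 86_400 * 14 / 1e6   # MB/s x seconds -> TB
    print(f"{label}: ~{agg_mbps} Mb/s aggregate, ~{tb_two_weeks:.0f} TB per two weeks")
```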
Data Center Infrastructure
A data center is a facility used to house computer systems and
associated components, such as telecommunications and storage
systems. It generally includes redundant or backup power supplies,
redundant data communications connections, environmental controls
(e.g., air conditioning, fire suppression) and various security devices.
The District’s goal is to maintain Tier III data centers at both the
District headquarters and the Corp Yard that meet the standards of the
Telecommunications Industry Association and the Uptime Institute. The
minimum qualifications for a Tier III data center are:
·Meets or exceeds all Tier I and Tier II requirements
·Multiple independent distribution paths serving the IT
equipment
·All IT equipment must be dual-powered and fully compatible
with the topology of a site’s architecture
·Concurrently maintainable site infrastructure with expected
availability of 99.982%
Currently these goals are being met.
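The 99.982% figure translates directly into allowable downtime, as
the short calculation below shows.

```python
# Allowable downtime at the Tier III availability threshold.
HOURS_PER_YEAR = 8760
availability = 0.99982

downtime_hours = (1 - availability) * HOURS_PER_YEAR
print(f"~{downtime_hours:.1f} hours (~{downtime_hours * 60:.0f} minutes) per year")
# -> ~1.6 hours/year, versus ~28.8 hours/year at Tier I's 99.671%
```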
Server Technology
The District operates two server clusters: production at District
headquarters and disaster recovery at the Corp Yard.
HDC Data Center
The server cluster located at District headquarters is a two-chassis
setup: each chassis includes one fabric interconnect and one 8-slot
blade server chassis holding two blade servers for server VMs and
one blade server for VDI, plus two storage units with hard drives of
varying sizes and speeds.
Figure 10: HDC Server Hardware Capacity
The HDC Data Center currently runs below 50% capacity, leaving
room to build and add up to 50 new servers at no extra cost.
Figure 11: HDC Storage Utilization
CRP Disaster Recovery Center
The server cluster located at the Corp Yard is a one-chassis setup:
one fabric interconnect and one 8-slot blade server chassis holding
two blade servers for server VMs, plus two storage units with hard
drives of varying sizes and speeds.
Figure 12: CRP Hardware Capacity
Hardware and storage utilization run low at this location because it
operates mostly in cold standby. All but a handful of servers normally
run on the Headquarters cluster and are set up to fail over to the
Corp Yard in the event of a disaster; the Corp Yard Data Center
therefore requires almost as much capacity as HDC to ensure full
operations should the HDC Data Center become unavailable.
Currently, resources are adequate for present needs; however, the
storage hardware is approaching the end of its useful life.
Figure 13: CRP Storage Utilization
Virtual Desktops
Virtual desktop infrastructure (VDI) is the practice of hosting
a desktop operating system within a virtual machine (VM) running
on a centralized server. VDI is a variation on the client/server
computing model, sometimes referred to as server-based computing.
Figure 14: VDI Hardware & Storage Capacity
The District is licensed to operate up to 60 virtual desktops
simultaneously. Hardware capacity is designed to ensure all 60
desktops are capable of running on a single server, should a server
fail. Currently both VDI servers reside within the District
Headquarters cluster, and the District is at maximum capacity for
the VDI servers. A third server will be required in the future to
allow for VDI at the Corp Yard during a disaster.
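The sizing rule is an N+1 check: with two hosts, either one alone must
carry all 60 licensed desktops if its peer fails. A sketch with assumed
per-desktop and per-host figures (not the District’s actual
specifications):

```python
# N+1 check: with two hosts, either one alone must carry all 60 licensed
# desktops if its peer fails. Per-desktop and per-host RAM are assumed.
LICENSED_DESKTOPS = 60
GB_PER_DESKTOP = 4
HOST_RAM_GB = 256

def survives_host_failure(hosts: int) -> bool:
    return (hosts - 1) * HOST_RAM_GB >= LICENSED_DESKTOPS * GB_PER_DESKTOP

print(survives_host_failure(2))   # True: one 256 GB host can hold 60 x 4 GB desktops
```

A third host at the Corp Yard would extend the same guarantee to a
site-level disaster, not just a single server failure.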
Phone Services
Voice over Internet Protocol (VoIP) is a technology that allows the
District to make voice calls using a broadband Internet connection
instead of a regular (analog) phone line.
The District operates two servers within a cluster for performance
and reliability of the phone system. One server resides at HDC and
the other at CRP.
Figure 15: District Unified Communication Diagram
There are currently 73 IP phones, 60 desktop Jabber clients, and
30 iOS Jabber clients in service. Phones can be added to the existing
system as long as new licenses are purchased.
Two-Way Radio
The District operates a DMR Tier III two-way radio system for
field operations. DMR Tier III covers trunking operation in frequency
bands from 66–960 MHz, supporting voice and short-messaging
handling. It also supports packet data service in a variety of formats,
including IPv4 and IPv6. The District was an early adopter of this
technology.
Figure 16: DMR Radio Architecture Design
Radio communications are supported by two base stations, each with
one control channel and three communication channels, located at
the Old Greenwood Well and Donner View Hydro stations. Each
station is equipped with two repeaters, a multi-coupler, two transmit
combiners, and a single multi-frequency antenna.
Both stations communicate with each other through a central
controller at HDC. A fail-over controller resides at CRP.
There are currently 64 mobile radios in service, functioning as the
primary communication device for personnel working in the field.
SCADA Infrastructure
Supervisory control and data acquisition (SCADA) is a system for
remote monitoring and control that operates with coded signals over
communication channels. The District employs SCADA for both the
electric and water systems.
A SCADA system usually consists of the following subsystems:
·Remote terminal units (RTUs) connect to sensors in the
process and convert sensor signals to digital data. They have
telemetry hardware capable of sending digital data to the
supervisory system, as well as receiving digital commands
from it. RTUs often have embedded control capabilities, such
as ladder logic, in order to accomplish Boolean logic
operations.
·Programmable logic controllers (PLCs) connect to sensors in
the process and convert sensor signals to digital data. PLCs
have more sophisticated embedded control capabilities than
RTUs. PLCs do not have telemetry hardware, although this
functionality is typically installed alongside them. PLCs are
sometimes used in place of RTUs as field devices because
they are more economical, versatile, flexible, and
configurable.
·A telemetry system is typically used to connect PLCs and
RTUs with control centers, data warehouses, and the
enterprise. The District plans on using five types of telemetry,
depending on network build-out and telemetry requirements:
fiber optics, microwave, wireless, LTE, and radio.
·A data acquisition server is a software service which uses
industrial protocols to connect software services, via
telemetry, with field devices such as RTUs and PLCs. It allows
clients to access data from these field devices using standard
protocols.
·A human–machine interface (HMI) is the apparatus or
device that presents processed data to a human operator;
through it, the operator monitors and interacts with the
process. The HMI is a client that requests data from a data
acquisition server; in most installations it is the graphical
user interface through which the operator views data
collected from external devices, creates reports, performs
alarming, and sends notifications.
·A historian is a software service that accumulates time-
stamped data, Boolean events, and Boolean alarms in a
database that can be queried or used to populate graphic
trends in the HMI. The historian is a client that requests data
from a data acquisition server.
·A supervisory (computer) system, gathering (acquiring) data
on the process and sending commands (control) to the
process.
·Various processes and analytical instrumentation.
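Of these subsystems, the historian is the easiest to illustrate in
miniature. The sketch below accumulates time-stamped values and
answers range queries to feed an HMI trend; the tag names are
invented, and a production historian is a dedicated service rather
than application code.

```python
# Minimal sketch of the historian concept: accumulate time-stamped tag
# values and query a range to feed an HMI trend. Tag names are invented.
from datetime import datetime

history: list[tuple[datetime, str, float]] = []

def record(tag: str, value: float) -> None:
    history.append((datetime.now(), tag, value))

def trend(tag: str, start: datetime, end: datetime) -> list[tuple[datetime, float]]:
    return [(ts, v) for ts, t, v in history if t == tag and start <= ts <= end]

record("GatewayValve.Pressure_PSI", 62.4)
record("GatewayValve.Pressure_PSI", 61.9)
print(trend("GatewayValve.Pressure_PSI", datetime.min, datetime.max))
```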
The electric SCADA system currently monitors the District’s four
substations, the Glenshire distribution system, and all 16 circuits.
Near-future additions include monitoring devices on reclosers and
capacitor banks.
The water SCADA system is currently being upgraded to a new
environment with all-new hardware and software. The system
backend is complete, with a data acquisition server, historian, and
HMI. New RTU cabinets containing PLC hardware have been
purchased for 27 of the 58 locations, with ten of these fully
commissioned, replacing the old Donner Lake and Glenshire SCADA
systems. It is envisioned that all well buildings, as well as the Gateway
Valve and College Valve, will be commissioned in 2017.
Figure 17: Typical Water SCADA RTU Layout
Purchase of an additional 31 RTU cabinets (approximately $15,000 to
$20,000 each, or roughly $465,000 to $620,000 in total) is anticipated
in phases in 2018, 2020, and 2022.
Financial Impact
Outlined below is a comprehensive cost analysis to build out, refresh,
and maintain the District’s IT hardware and software infrastructure.
Labor and miscellaneous expenses are not accounted for in these
estimates.
Capital Improvement
Two main factors influence capital improvement expenditures for
Information Technology: new purchases to reach full build-out, and
hardware replacement (refresh) costs to ensure continued reliability.
Table 5: Detailed Build-Out Cost Estimate (new-purchase cost per year, $)

Device Type (count @ avg cost)                     2016    2017    2018    2019    2020    2021    2022    2023    2024    2025
LAN Switching (Edge) (32 @ 4,500)                     0       0   31500   31500   31500   27000   27000       0   18000       0
Access Points & Controller (24 @ 3,700)               0       0   18500   18500   18500   18500   18500   18500   18500   14800
Server Hardware (1 @ 130,000)                         0       0       0       0       0   45000       0       0       0       0
VDI Hardware (1 @ 40,000)                             0   40000       0       0       0       0       0       0       0       0
Data Center Switching (4 @ 40,000)                    0       0   40000   40000   80000       0       0       0       0       0
Storage Hardware (1 @ 30,000)                         0   30000       0       0       0   30000       0       0       0       0
Security Hardware (Firewalls, ACS) (2 @ 20,000)       0   40000       0       0       0       0       0       0       0       0
Security Hardware (Cameras) (125 @ 3,500)             0   35000   59500   35000   59500   35000   59500   35000   59500   35000
Mobile Devices (Phones, Tablets) (20 @ 800)           0    3200       0    3200       0    3200       0    3200       0    3200
SCADA RTU (31 @ 16,000)                               0       0  192000       0  192000       0  112000       0       0       0
UPS (26 @ 1,500)                                      0    9000       0   10500       0    9000       0   10500       0       0
POE (23 @ 300)                                        0    1500       0    1800       0    1800       0    1800       0       0
Microwave Dishes (4 @ 12,500)                         0   25000       0       0   25000       0       0       0       0   25000
Total                                                 0  183700  341500  140500  406500  169500  217000   69000   96000   78000
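Each yearly total in Table 5 is simply the sum of the units planned
that year multiplied by each device’s average cost. As a cross-check,
the 2018 column:

```python
# Cross-check of one Table 5 column: the 2018 build-out cost equals the
# units planned that year times each device's average cost.
units_2018 = {
    "LAN Switching (Edge)":        (7, 4_500),
    "Access Points & Controller":  (5, 3_700),
    "Data Center Switching":       (1, 40_000),
    "Security Hardware (Cameras)": (17, 3_500),
    "SCADA RTU":                   (12, 16_000),
}

total = sum(count * cost for count, cost in units_2018.values())
print(f"${total:,}")   # -> $341,500, matching the 2018 total row
```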
New Purchases and Build-Out
To provide full computing capacity at District headquarters for the
current employee count, future employee growth, and future
SCADA-supporting technology, and to bring online an additional 30
sites not connected today, many computing devices will need to be
added over the next 10 years. Table 5 above is a detailed estimate of
which devices, and how many, will be required, with an estimated
cost breakdown over the ten-year period.
Table 6: Detailed Hardware Refresh Cost Estimate

(a) Inventory, value, and refresh cycle

Device Type                          Count (Build-Out)   Avg Cost   Today Value   2025 Value   Installed    Refresh (yrs)
LAN Switching (Edge)                 32 (64)                4,500       126,000      360,000   2011-2016    7
Radio Hardware                       8                     33,750       270,000      270,000   2015         7
Radio Units                          67                     1,000        67,000       67,000   2015         10
Access Points & Controller           30 (54)                3,700       111,000      259,000   2011-2016    6
Core Switching                       4                     50,000       200,000      200,000   2016         5
Server Hardware                      3                    130,000       390,000      390,000   2012/2013    5
VDI Hardware                         2 (3)                 40,000        80,000      120,000   2013         6
IP Phone Server                      2                     20,000        40,000       40,000   2014         6
IP Phones                            140                      500        70,000       70,000   2014         7
Desktops                             70                       450        31,500       31,500   2015         7
Data Center Switching                7                     10,000        70,000       70,000   2011/2012    4
Storage Hardware                     3                     30,000        90,000       90,000   2012/2013    3
Security Hardware (Firewalls, ACS)   2                     20,000        40,000       40,000   2011         3
Security Hardware (Cameras)          85 (210)               3,500       297,500      735,000   2013/2016    8
Mobile Devices (Phones, Tablets)     118                      800        94,400       94,400   2015-2016    3
SCADA RTU                            23 (54)               16,000       368,000    1,120,000   2015/2016    12
UPS                                  30 (56)                1,500        45,000      105,000   2011-2016    5
POE                                  15 (38)                  300         4,500       16,500   2015-2016    5
Stand-Alone Servers                  6                     10,000        60,000       60,000   2011-2016    6
Microwave Dishes                     8 (12)                12,500       100,000      125,000   2013-2014    12
Capital (total)                                                       2,554,900    4,263,400

(b) Refresh cost per year ($)

Device Type                            2016    2017    2018    2019    2020    2021    2022    2023    2024    2025
LAN Switching (Edge)                      0       0   27000   27000   36000   36000   18000   22500   58500   27000
Radio Hardware                            0       0       0       0       0       0  270000       0       0       0
Radio Units                               0       0       0       0       0       0       0       0       0   67000
Access Points & Controller                0       0    7400   11100   22200   14800   29600   70300   29600   59200
Core Switching                       130000       0       0       0       0  150000       0       0       0       0
Server Hardware                           0  310000       0       0       0       0       0  310000       0       0
VDI Hardware                              0       0   40000   80000       0       0       0       0   40000   80000
IP Phone Server                           0       0       0       0   40000       0       0       0       0       0
IP Phones                                 0       0       0       0       0   70000       0       0       0       0
Desktops                                  0       0       0       0       0       0   31500       0       0       0
Data Center Switching                     0       0   70000       0       0   70000       0       0       0   70000
Storage Hardware                          0   90000       0       0   90000       0       0   90000       0       0
Security Hardware (Firewalls, ACS)    40000       0       0   40000   40000       0   40000   40000       0   40000
Security Hardware (Cameras)               0       0       0       0       0       0   24500   31500  241500   28000
Mobile Devices (Phones, Tablets)       8000    8000   28000    8000    8000    8000   31200    8000    8000    8000
SCADA RTU                                 0       0       0       0       0       0       0       0       0       0
UPS                                       0    9000    9000   12000   12000    6000    7500    7500    9000    9000
POE                                       0    1800    1800    2400    2400    1200    1500    1500    1800    1800
Stand-Alone Servers                   20000       0   10000   10000       0   10000   20000       0   10000   10000
Microwave Dishes                          0       0       0       0       0       0       0       0       0   25000
Total                                198000  418800  193200  190500  250600  366000  473800  581300  398400  425000
Hardware Refresh
The District’s current hardware devices have a set useful lifetime,
as outlined in the Hardware Refresh Cycle section above. Table 6
above is a detailed estimate of which devices, how many, and when
they are expected to require replacement. Hardware refresh costs
will continue to rise over the next ten years, as the continued push
toward full build-out adds new devices to the network every year.
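The schedule behind Table 6 follows a simple rule for most rows: a
device installed in a given year comes due for replacement every
refresh-cycle years. A minimal sketch, using rows where the table
matches the rule exactly:

```python
# The scheduling rule behind Table 6: a device installed in a given year
# comes due for replacement every "refresh cycle" years.
def refresh_years(installed: int, cycle: int, horizon: int = 2025) -> list[int]:
    return list(range(installed + cycle, horizon + 1, cycle))

# Rows of Table 6 that follow the rule exactly:
print(refresh_years(2015, 7))    # Radio Hardware -> [2022]
print(refresh_years(2014, 7))    # IP Phones -> [2021]
print(refresh_years(2015, 10))   # Radio Units -> [2025]
```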
Maintenance and Support
The District has set yearly costs covering the use, upgrade, and
failure replacement of hardware and software used throughout the
District, as outlined in the Maintenance and Support section. These
support and maintenance costs are estimated to increase
approximately 2% a year.
Table 7: Detailed Maintenance & Support Cost Estimate
Services                                              Today ($)   2020 ($)   2025 ($)
Backup Software 4,500 4,950 5,445
Board: TV Coverage & Streaming 15,000 16,500 18,150
Business Intelligence Software 16,500 18,150 19,965
Business OS and Production Software 30,000 33,000 36,300
Computer Aided Drafting Software 4,100 4,510 4,961
Customer Service and Accounting 260,000 276,690 307,198
Customer Service: After-hours Answering Service 35,000 38,500 42,350
Database Software 1,000 1,100 1,210
Field GIS Software 9,200 10,120 11,132
GIS Software 30,000 33,000 36,300
Hydraulic Modeling 3,750 4,125 4,538
Intrusion Protection 9,000 9,900 10,890
Large UPS 3,200 3,520 3,872
Microwave Hardware 7,000 7,700 8,470
Mobile Telecommunications 50,000 55,000 60,500
Network and Security Devices 30,000 33,000 36,300
Pole Load Modeling Software 3,000 3,300 3,630
Radio Communication 11,500 12,650 13,915
SAG & Tension Modeling Software 5,500 6,050 6,655
SCADA/GIS Software 45,000 49,500 54,450
Security Cameras 2,000 2,200 2,420
Storage Hardware and Software 11,500 12,650 13,915
Utilities: Telephone and Internet 30,000 33,000 36,300
Virtual Desktop Software 3,000 3,300 3,630
Virtual Server Software 16,000 17,600 19,360
Virus Protection Software 2,600 2,860 3,146
Water AMI 2,500 2,750 3,025
Web Page Hosting 4,300 4,730 5,203
Total Yearly Cost 640,850 695,625 768,026
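For planning purposes, the roughly 2% annual escalation compounds,
as the small sketch below shows. Table 7’s projected values round to
approximately a 10% step per five-year column, which is close to,
though not exactly, 2% compounded annually.

```python
# Compounding the ~2% annual escalation applied to maintenance contracts.
def escalate(cost_today: float, years: int, rate: float = 0.02) -> float:
    return cost_today * (1 + rate) ** years

print(round(escalate(4_500, 5)))     # Backup Software ~5 years out -> 4968 (Table 7: 4,950)
print(round(escalate(640_850, 9)))   # total yearly cost ~9 years out -> 765878 (Table 7: 768,026)
```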
Final Overview
The District’s IT investment is a critical foundation upon which the
District’s entire business structure is built. System design,
system control, revenue generation, asset management, finances,
security, and customer service all rely on the complex system the
District’s IT department has created. It is now one of the District’s
most critical assets. Without a proper build-out, refresh, and
maintenance program in place, all other District business functions
will falter. System stability, reliability, access, and speed are the main
objectives we strive to achieve with this Master Plan.