To: Board of Directors
From: Ian Fitzgerald
Date: September 06, 2017
Subject: Discussion of the 2017 Information Technology Master Plan
1. WHY THIS MATTER IS BEFORE THE BOARD
District staff has prepared a draft of the 2017 Information Technology Master Plan
(the Plan). This workshop will review the draft document and discuss future steps.
2. HISTORY
In 2010, the District made a conscious decision: in order to provide more reliable and
efficient electric and water services to the District's ratepayers, all while maintaining
steady employment levels and costs, technology needed to play a much larger role,
expanding the automation of tasks, the management of data, and the improvement of
security for District facilities, staff, and ratepayers.
Network Infrastructure
Since that time, the District has made major upgrades and renovations to its
existing IT network infrastructure. In 2011, the District revamped its entire data
network, adding a new data center and replacing outdated switches with state-of-the-art
Layer 2/3 network switches (core, data center, and edge). This work integrated
both physical and wireless network onramps with the added security of a new firewall
and network authentication system.
The following year, the District migrated all of its physical servers to an advanced
blade server chassis and virtual environment software. This server environment has
allowed the District to significantly reduce the labor hours required to maintain
approximately seventy servers, while providing high availability and improved
performance.
In 2013, the virtual server environment was extended to include a disaster recovery
environment, ensuring essential District servers and data were maintained at an offsite
location. Later that year, the District replaced all desktop computers with a Virtual
Desktop Infrastructure (VDI), allowing staff to connect to their personal desktops from
any location on any device, revolutionizing how business can be conducted at the
District while reducing the District's cyber attack surface.
Remote Site Communication
During the same time frame in which the network was being modernized, the District
was improving communications to many of its remote sites: pump stations, tanks,
wells, and substations.
Communication in the Truckee area is notoriously difficult, traversing elevation
changes of 3,000 feet, snowfall in the tens of feet, and hundred-foot pine trees with
needle lengths close to the 900 MHz radio wavelength. Traditional and even the newest
wireless/radio technologies are unable to bridge many of the communication gaps
between the outlying District properties and the main office building.
In 2011, the District began the design of their SCADA Reliability Improvement Project.
This project is intended to connect most of the District's remote facility infrastructure
buildings, water and electric, with 68.6 miles of redundant, secure, highly reliable fiber
optic cable. Due to the complexity, time, and limited resources, the project is set to be
completed in stages. Currently four stages have been completed.
3. NEW INFORMATION
This is the District's first Information Technology Master Plan. The main purpose of
the Information Technology Master Plan is to consider the impact technology
purchases have on the District, providing estimates for future build-out, future
replacement, and ongoing maintenance costs. The Plan also outlines potential risks to
the District if the technology is not maintained within specific age and support
thresholds.
This Plan is intended to be adaptive and flexible in order to balance the diverse
technology needs of the District. The Plan achieves this by creating new processes for
managing technology that will improve services and systems for our customers
(external and internal). The Plan seeks to create operational synergy by creating new
processes and supporting the people that deliver technology services at the District.
This Plan also establishes a foundation for sustainable technology planning.
Network Infrastructure
Continuing the fast pace of the IT infrastructure replacement, all of the District's
communication systems were upgraded to improve reception, reliability, and device-
agnostic choices. Old analog phones were replaced with an IP phone environment in
2014, and a very outdated two-way radio system was replaced in 2015 with a ground-
breaking new two-way DMR radio technology.
Capping off the IT network upgrades, the District's new disaster recovery data center,
situated at the corporation yard, went online in January 2016; providing true
redundancy of servers and data between the main office and the backup site.
Current network upgrade projects, to be completed by summer 2017, include:
• Adding 69 new HD security cameras to 16 existing cameras, providing video
security for 22 of the District's 60 facility properties;
• Upgrading the District's Layer 3 core switches, increasing capacity, redundancy,
and reliability;
• Adding secure LTE network capabilities to remote sites without fiber
communications; and
• Adding Electric AMI communication, data management, and presentation
technology.
Remote Site Communication
A fourth phase was constructed in 2017, bringing the design build-out close to 50%
completion and adding another 10.04 miles of cable and 9 more stations (College
Control Valve, Gateway Control Valve, Glenshire Control Valve, Glenshire Distribution
Station, Hirschdale Well & Tank, Northside Well, Sanders Well, Strand Pump Station,
and Well 20). Future phases will bring an additional 23 stations online with an
added 36.31 miles of cable.
Last link communication (short communication hops not impeded by terrain) will
further extend the IP network to an additional 20 locations required for SCADA and
AMI collection via microwave, Wi-Fi, LTE, and radio technologies. Currently, four
microwave links and one wireless hop exist: Donner Lake Tank to Red Mountain, Old
Greenwood to Glenshire Distribution Station, 6170 Tank to Ponderosa Palisades,
Ponderosa Palisades to Donner View, and Martis Valley Substation to Beacon Hill.
Applications & Databases
Improved network infrastructure and communications have laid the backbone for the
driving forces behind these upgrades: the need to improve efficiency, reliability,
automation, and customer service. The application level of the Information
Technology Department aims to provide, integrate, and maintain software and
databases that allow District staff to manage, model, and design assets (GIS &
CAD), automate commodity flow (SCADA), analyze data (BI Dashboards), collect
revenue (AMI), and serve our customers (CIS & Customer Portals).
Geographic Information Systems & Computer Aided Drafting
The District maintains two automated mapping and design software packages:
AutoCAD and ArcGIS. Since 2000, the GIS has served as the single point of truth for
the District's assets and the basis for very advanced modeling of the District's
infrastructure. Flagship add-on software to the GIS includes: Designer (electric
infrastructure design), Responder (outage management), InfoWater (hydraulic
modeling), Fiber Manager (fiber optic management), GoSync (field mapping &
inspection), and ArcFM Web (web map portal).
Supervisory Control and Data Acquisition
Both the electric and water departments implemented SCADA in 1995 to operate and
maintain their respective network infrastructure. The Electric SCADA monitors
voltage, kVA, and load for every circuit. The product went through a significant
upgrade in 2014, migrating from an OpenVMS physical server to a Windows based
virtual server. Advanced revenue metering systems were deployed in 2015, providing
better power purchase accounting.
Water SCADA was originally a home-grown software system developed at the
University of Nevada, Reno. It had served as an adequate system, automating the
movement of water through the Truckee region, but has lately begun to fail as the
system has grown larger while running on antiquated software. Replacement of the
water SCADA began in 2013, encompassing new RTU and PLC hardware, as well as new
software. The project is designed and programmed in-house, systematically replacing
the old system in stages. Donner Lake's five stations went online in early 2015 and
Glenshire's five stations came online in the fall of 2016. Project replacement is
scheduled to finish in 2021.
Business Intelligence Dashboards
Providing transparency and insight from multiple databases and
sources to both internal staff and external customers is a struggle for many
agencies. Business Intelligence (BI) dashboards provide a way for staff and
customers to view large amounts of data in a way that is easy to read and understand.
Many of these dashboards, developed in-house with Logi Analytics, are in
use today, with the external MyH2O site a prime example of how this technology can
provide valuable information to customers about their water usage: whether they are
experiencing leaks, and whether they are meeting California's drought
requirements. BI Dashboards will continue to be developed to ensure the mass
of data is in front of the right people at the right time, aiding better
decision-making.
Automated Meter Infrastructure
The water department laid the ground work for AMI (Automated Meter Infrastructure)
technology in 2007, the District built a radio -based network infrastructure, providing
true systematic reading automatically from meter to MDM (Massive Data Management
Databases) without any staff labor involved.
Using the same infrastructure already in place, the electric department is in the process
of implementing electric AMI meters. Based on early results, it is expected the District
IT department will be able to provide one unified AMI network to serve both the
electric and water departments. This merger not only saves money on equipment,
servers, and databases, but will also reduce the IT staff hours needed to manage and
maintain the system. Improved meter technology also provides a customer service
benefit, speeding up connects/disconnects, providing pre-paid options for consumers,
and allowing our customers to gain better insight into their commodity use.
Customer Information System & Portals
All the development and improvements to technology here at the District lead to one
absolute purpose: the ability to provide the best customer experience possible.
Customers also have other tools at their disposal, including a conservation rebate
portal and the District website. The District website was upgraded in 2011, and is due
for another upgrade in 2017.
There are three distinct factors that determine where costs need to be allocated, and
which capital projects are required to maintain and improve system performance:
Maintenance & Support, Network Build-Out, and Hardware Refresh Cycle. The
Master Plan comprehensively outlines the details of each factor and the potential
costs to the District over the next ten years.
4. FISCAL IMPACT
There is no fiscal impact associated with this workshop.
5. RECOMMENDATION
Receive this report and provide comments.
Stephen Hollabaugh
Assistant General Manager
Michael D. Holley
General Manager
Truckee Donner Public Utility District
Information Technology & Systems
Long-range plan of technologies and systems supporting the District's electric and water services
Draft
Ian Fitzgerald
Manager, Information Technology & Systems
Table of Contents

Table of Figures
    Diagrams
    Figures
    Maps
    Tables
Executive Summary
History and Background
    Network Infrastructure
    Remote Site Communication
    Applications & Databases
        Geographic Information Systems & Computer Aided Drafting
        Supervisory Control and Data Acquisition
        Business Intelligence Dashboards
        Automated Meter Infrastructure
        Customer Information System & Portals
Technology Acceptance
    Establish an IT Governance structure
    Enhance technology leadership roles
    Make technology training mandatory and routine
Getting Started with the Plan
System Planning Criteria
    Method Evaluation
        Maintenance and Support
        Network Build-Out
            Bandwidth & Performance
            Network Clients
            Campus Extension
        Hardware Refresh Cycle
            Key Challenges
            Useful Life Guidelines
            Factors That Determine Useful Life
            Operating Cost
            Recommendations
Existing System
    Network Infrastructure
        Network Switching
            Core Switches
            Data Center Switches
            Edge Switches
            LTE Routers
        Wireless Access
        Fiber Optic Cables
            Cable Build-Out
            Abandoned Pipe Re-Use
    Security Infrastructure
        Firewalls
        Access Control & Authentication
        Intrusion Detection / Prevention
        Video Surveillance
    Data Center Infrastructure
        Server Technology
        Virtual Desktops
        Phone Services
        Two-Way Radio
    SCADA Infrastructure
Financial Impact
    Capital Improvement
        New Purchases and Build-Out
        Hardware Refresh
    Maintenance and Support
    Labor

Table of Figures

Diagrams
    Diagram 1: Equipment Required at Stations
    Diagram 2: Basic Network Architecture Design
    Diagram 3: HDC Server Hardware Capacity
    Diagram 4: HDC Storage Utilization
    Diagram 5: CRP Hardware Capacity
    Diagram 6: CRP Storage Utilization
    Diagram 7: VDI Hardware & Storage Capacity
    Diagram 8: District Unified Communication Diagram
    Diagram 9: DMR Radio Architecture Design
    Diagram 10: Typical Water SCADA RTU Layout
Figures
    Figure 1: Bandwidth Growth with Radio Addition
    Figure 2: Bandwidth Growth with Camera Addition
    Figure 3: Ericsson Report: Explosion of IoT
    Figure 4: Network Hardware Useful Life Expectation
Maps
    Map 1: Internet of Things (IoT) Build-Out
    Map 2: Fiber Optic Cable Full Build-Out Design
    Map 3: Abandoned Water Pipe Repurposed for Communication
Tables
    Table 1: Ever-Increasing Resource Needs
    Table 2: Detailed Build-Out Cost Estimate
    Table 3: Detailed Hardware Refresh Cost Estimate
    Table 4: Detailed Maintenance & Support Cost Estimate
Executive Summary
The following document presents the Truckee Donner Public Utility
District (District) Information Technology (IT) Master Plan. This Plan
is the culmination of a comprehensive technology assessment and
planning process which has included input from all executive and
department stakeholders.
This Plan is intended to be adaptive and flexible in order to balance
the diverse technology needs of the District. The Plan does this by
creating new processes for managing technology that will improve
services and systems for our customers (external and internal). The
Plan seeks to create operational synergy by creating new processes
and supporting the people that deliver technology services at the
District. This Plan also establishes a foundation for sustainable
technology planning.
History and Background
In 2010, the District made a conscious decision; in order to provide
more reliable and efficient electric and water services to the District's
rate payers, all the while maintaining a steady employment level and
costs, technology needed to play a much larger role: accentuating the
automation of tasks, the management of data, and the improvement
A security to District facilities, staff, and rate payers.
Network Infrastructure
Since that time, the District has made major upgrades and
renovations to its existing IT network infrastructure. In 2011, the
District revamped its entire data network, adding a new data center
and replacing outdated switches with state-of-the-art Layer 2/3
network switches (core, data center, and edge). This work integrated
both physical and wireless network onramps with the added security
of a new firewall and network authentication system.
The following year, the District migrated all of its physical servers to
an advanced blade server chassis and virtual environment software.
This server environment has allowed the District to significantly
reduce the labor hours required to maintain approximately seventy
servers, while providing high availability and improved
performance.
In 2013, the virtual server environment was extended to include a
disaster recovery environment, ensuring essential District servers
and data were maintained at an offsite location. Later that year, the
District replaced all desktop computers with a Virtual Desktop
Infrastructure (VDI), allowing staff to connect to their personal
desktops from any location on any device, revolutionizing how
business can be conducted at the District while reducing the
District's attack surface.
Continuing the fast pace of the IT infrastructure replacement, all of
the District's communication systems were then upgraded to improve
reception, reliability, and device-agnostic choices. Old analog phones
were replaced with an IP phone environment in 2014, and a very
outdated two-way radio system was replaced in 2015 with a ground-
breaking new two-way DMR radio technology.
Capping off the IT network upgrades, the District's new disaster
recovery data center, situated at the corporation yard, went online
in January 2016; providing true redundancy of servers and data
between the main office and a backup site.
Current network upgrade projects, to be completed by summer 2017,
include:
• Adding 69 new HD security cameras to 16 existing cameras,
providing video security for 22 of the District's 60 facility
properties
• Upgrading the District's Layer 3 core switches; increasing
capacity, redundancy and reliability
• Adding secure LTE network capabilities to remote sites
without fiber communications
Remote Site Communication
During the same time frame in which the network was being brought
to a modern level of technology, the District was improving
communications to many of its remote sites: pump stations, tanks,
wells, and substations.
Communication in the Truckee, California region is notoriously
difficult, traversing elevation changes of 3,000 feet, snowfall in the
tens of feet, and hundred-foot pine trees with needle lengths close
to the 900 MHz radio wavelength. Traditional and even the newest
wireless/radio technologies are unable to bridge many of the
communication gaps between the outlying District properties and
the main office building.
In 2011, the District began the design of their SCADA Reliability
Improvement Project. This project is intended to connect every one
of the District's remote facility infrastructure buildings, water and
electric, with 68.6 miles of redundant, secure, highly reliable fiber
optic cable. Due to the complexity, time, and limited resources, the
project is set to be completed in stages. Currently four stages have
been completed.
The first phase, consisting of 6.93 miles and 5 stations (Martis Valley
Substation, Truckee Substation, Glenshire Drive Well, Old
Greenwood Well, and District Headquarters), was completed in 2012.
Phase two went west to the Donner Lake geographic area in 2013,
entailing 7.25 miles of cable, one microwave link, and 6 stations
(Donner Lake Substation, Donner Lake Tank, Red Mountain Booster,
Richards Pump Station, West Reed Control Valve, and Wolfe Estates
Pump Station & Tank).
In 2014, the third stage was completed with 8.07 miles of cable, 3
microwave links, and another 8 stations coming online (6170 Tank,
China Camp Pump Station, Corp Yard Disaster Recovery, Donner View
Pump Station & Tank, Fibreboard Well, Palisades Pump Station &
Tank, Prosser Heights Well, and Prosser Village Well).
A fourth phase was constructed in 2017, bringing the design build-out
close to a 50% completion rate and adding another 10.04 miles of cable
and 9 more stations (College Control Valve, Gateway Control Valve,
Glenshire Control Valve, Glenshire Distribution Station, Hirschdale
Well & Tank, Northside Well, Sanders Well, Strand Pump Station, and
Well 20). Future phases will bring an additional 23 stations
online with an added 36.31 miles of cable.
Last link communication (short communication hops not impeded by
terrain) will further extend the IP network to an additional 20
locations required for SCADA and AMI collection via microwave, Wi-
Fi, LTE, and radio technologies. Currently, four microwave links and
one wireless hop exist: Donner Lake Tank to Red Mountain, Old
Greenwood to Glenshire Distribution Station, 6170 Tank to
Ponderosa Palisades, Ponderosa Palisades to Donner View, and
Martis Valley Substation to Beacon Hill. As fiber optic cable access
extends, these links will move to new locations, bringing more
stations online quicker than the SCADA Reliability Improvement
Project alone can sustain.
Applications & Databases
Improved network infrastructure and communications have laid the
backbone for the driving forces behind these upgrades; the need to
improve efficiency, reliability, automation, government
transparency, and customer service. The application level of the
Information Technology Department aims to provide, integrate, and
maintain software and databases that will allow district staff to
manage, model, and design assets (GIS & CAD), automate commodity
flow (SCADA), analyze data (BI Dashboards), collect revenue (AMI),
and serve our customers (CIS & Customer Portals).
Geographic Information Systems & Computer Aided Drafting
The District maintains two automated mapping and design software
packages: AutoCAD and ArcGIS. Since 2000, the GIS has served as the
single point of truth for the District's assets and the basis for very
advanced modeling of the District's infrastructure. Flagship add-on
software to the GIS includes: Designer (electric infrastructure design),
Responder (outage management), InfoWater (hydraulic modeling), Fiber
Manager (fiber optic management), GoSync (field mapping &
inspection), and ArcFM Web (web map portal).
Supervisory Control and Data Acquisition
Both the electric and water departments implemented SCADA in
1995 to operate and maintain their respective network
infrastructure. The Electric SCADA monitors voltage, kVA, and load
for every circuit. The product went through a significant upgrade in
2014, migrating from an OpenVMS physical server to a Windows
based virtual server. Advanced revenue metering systems were
deployed in 2015, providing better power purchase accounting.
Water SCADA was originally a home-grown software system developed
at the University of Nevada, Reno. It had served as an adequate system,
automating the movement of water through the Truckee region, but
has lately begun to fail as the system has grown larger while running
on antiquated software. Replacement of the water SCADA began
in 2013, encompassing new RTU and PLC hardware, as well as new
software. The project is designed and programmed in-house,
systematically replacing the old system in stages. Donner Lake's five
stations went online in early 2015 and Glenshire's five stations came
online in the fall of 2016. Project replacement is scheduled to finish
in 2021.
Business Intelligence Dashboards
Providing transparency and insight from multiple
databases and sources to both internal staff and external customers
is a struggle for many agencies. Business Intelligence (BI)
dashboards provide a way for staff and customers to view large
amounts of data in a way that is easy to read and understand.
Many of these dashboards, developed in-house with Logi Analytics,
are in use today, with the external MyH2O site a prime example
of how this technology can provide valuable information to customers
about their water usage: whether they are experiencing leaks, and
whether they are meeting California's drought requirements.
BI Dashboards will continue to be developed to ensure the mass
of data is in front of the right people at the right time,
aiding in a better decision-making process.
Automated Meter Infrastructure
In early 2000, the District moved from manually reading meters to
AMR (Automated Meter Reading) technology, which required District
vehicles to drive by meters. This technology saved countless hours
of labor, reducing meter reading staff to a half position today.
With the water department laying the groundwork for AMI
(Automated Meter Infrastructure) technology in 2007, the District
built a radio-based network infrastructure, providing true systematic
reading, automatically from meter to MDM (Meter Data
Management) databases, without any staff labor involved.
Using the same infrastructure already in place, the electric department
is in the process of testing electric AMI meters. Based on early
results, it is expected the District IT department will be able to
provide one unified AMI network to serve both the electric and water
departments. This merger not only saves money on equipment,
servers, and databases, but will also reduce the IT staff hours needed
to manage and maintain the system. Improved meter technology also
provides a customer service benefit, speeding up connects/disconnects,
providing pre-paid options for consumers, and allowing our
customers to gain better insight into their commodity use.
Customer Information System & Portals
All the development and improvements to technology here at the
District lead to one absolute purpose: the ability to provide the best
customer experience possible.
Since 1998, the District has maintained a Customer Information
System (CIS) and Accounting and Billing System (ABS). This system
has been the basis for all customer interactions, from paying a bill to
creating work orders. The system was extended in 2013 with the
addition of the SmartHub customer portal, enabling our ratepayers
to view and pay bills, observe their usage patterns, and sign up for
notifications of water leaks, via either an internet webpage or a
smartphone app.
Customers also have other tools at their disposal, including a
conservation rebate portal and the District website. The District
website was upgraded in 2011, and is due for another upgrade in 2017.
Technology Acceptance
IT resources and expenditures are likely to remain steady for the
foreseeable future while the demands for technology services will
continue to grow. In concert with establishing District-wide
principles for guiding IT decision-making and emphasizing increased
communication and coordination of IT efforts, this Plan seeks to
optimize the District's resources and leverage its strengths to meet
growing demands.
A common theme has presented itself in years past regarding the
acceptance and use of technology. Every organization should expect
to face luddites, people who aren't naturally tech-savvy, and
naysayers whose knee-jerk reaction is to oppose new things. There
are always some people who have their routines, and they just don't
want to change. This attitude persists as long as the organization
permits it.
Although all the initiatives presented in the Plan are important, to
encourage the use and ensure the success of any technology at the
District, the following three must be addressed first as they will serve
as the foundation for future progress:
Establish an IT Governance structure
IT governance describes the process by which stakeholders have
input into priority setting, risk assessment, policy setting, and
decision-making processes. IT governance is distinct from day-to-day
information technology management. It is imperative that IT
governance effectively encompass management, administrative, and
operational areas of the District. The Plan includes a matrix of
accountability and emphasizes transparency of process.
This Plan puts forth a new model for IT governance that enhances
communication, places technology customers at the center of
decision-making, sets technical standards, and aligns decisions with
the District's strategic direction through the creation of two new
committees that will serve distinct roles. First, the IT Steering
Committee (ITSC) will be responsible for the overall direction of IT at
the District. Second, the Technical Review Board (TRB) will be
charged with the creation and maintenance of consistent technology
standards for the entire District community. The new model will also
establish "Participate IT." This open forum concept will allow for the
entire District community to have input into the decision -making
process.
Enhance technology leadership roles
The creation of the IT support team in 2015 was a good start in
providing IT leadership throughout the District. This "group of
evangelists" mirrors the organization and includes the District's star
performers. Employees assigned to these roles help provide support
on technology, and are responsible for maintaining communication
and collaboration with all District and IT staff.
However, the real goal of the leadership role is to help people cross
the bridge: to get them comfortable with the technology, to get
them using it, and to help them understand how it makes their lives
better. In other words, to coach others on how to use the tools to their
benefit.
Make technology training mandatory and routine
Bringing new technology and tools into the District can increase
productivity and help staff make better, faster decisions. But getting
every employee on board is often a challenge.
Training should provide a path to institutionalize the new
technology and show employees that the District is transitioning
from the old way of working to the new one. In addition to
kick-off training for new technology, training should continue
incrementally as users need it to become comfortable, and until the
technology becomes part of the workplace routine.
Training can come in the form of group sessions, one-on-one sessions,
and even weekly tips & tricks emails.
Getting Started with the Plan
Successfully implementing the IT Master Plan will require thoughtful
execution of a collaborative process that targets outcomes supported
by the entire District. Gaining and maintaining the support of the
employees will require clear, consistent, and accurate
communication on behalf of District leadership throughout the
implementation process.
Following these recommendations builds a framework for
management and employee input into the decision-making process
around technology at the District. Implementing the recommended
IT Governance initiative should begin immediately and be a priority
goal of the District's executive management. IT Governance, in
particular, should not rely upon individuals. This will provide the
framework to support many of the other IT changes that have been
identified in the Plan.
System Planning Criteria
This section provides a discussion of the system criteria developed
for evaluating master planning scenarios. It also includes the cost
estimating criteria used in developing cost estimates and
determining the financial impact of the recommended
improvements.
Method Evaluation
There are three distinct factors that determine where costs need to
be allocated, and which capital projects are required to maintain and
improve system performance: Maintenance & Support, Network
Build -Out, and Hardware Refresh Cycle.
Maintenance and Support
The first criteria is the cost to maintain and support the hardware and
software infrastructure already invested by the District. In the
information technology sector, the initial purchase of hardware or
software is only the initial cost. Often, there are continued annual
costs for maintenance and support of these products until their end -
of -life (EOL). Often these costs are 20%-30% of initial costs per year.
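As a rough illustration of how these recurring fees compound, the sketch below applies the 20%-30% annual rates cited above to a hypothetical purchase; the dollar figures and seven-year life are assumptions for illustration, not District budget data.

```python
# Illustrative only: total cost of ownership for a hypothetical
# $50,000 purchase with annual maintenance/support fees of 20%-30%
# of the initial price over an assumed seven-year useful life.

def total_cost_of_ownership(purchase_price, annual_rate, years):
    """Purchase price plus flat annual maintenance and support fees."""
    return purchase_price + purchase_price * annual_rate * years

price, life = 50_000, 7
for rate in (0.20, 0.30):
    tco = total_cost_of_ownership(price, rate, life)
    print(f"{rate:.0%} annual maintenance: ${tco:,.0f} over {life} years")
# 20% annual maintenance: $120,000 over 7 years
# 30% annual maintenance: $155,000 over 7 years
```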
Maintenance in this section means those preventive, diagnostic,
updating, replacement, and repair procedures for hardware or
software that the District has in place. Maintenance is provided by
the vendor who makes the product in question. Specific
maintenance might include:
• periodic replacement of parts and renewal of consumable
supplies;
• repair or replacement of faulty components;
• periodic inspection and cleaning of equipment;
• updating or upgrading hardware and software, including
installing new operating system versions;
• installing and removing equipment and applications.
The term support refers to the actions taken on behalf of users rather
than to actions taken on equipment and systems. Support denotes
activities that keep users working or help users improve the ways
they work. Included under support might be such items as:
• help desks and other forms of putting a person in touch with
another person to resolve a problem or provide advice;
• automated information systems, such as searchable
frequently-asked-question (FAQ) databases or newsletters;
• initial training and familiarization tours for equipment and
software, whether automated or conducted by a human;
• instructional and curriculum integration support, usually
through observation and personal interaction between a
teacher and a technology coordinator; and
• technology integration support for administrative
applications, usually conducted through specialized
consultants or software/systems vendors.
It should also be noted that, without continued payment into
maintenance and support contracts, the District could also be
restricted from even using a product due to license restrictions.
Maintenance & support costs per device and/or software type are
detailed further in Table XX, within the Financial Impact section.
Network Build-Out
The District is still in the phase of building out the network to full
capacity. Capacity can be evaluated threefold: bandwidth &
performance, network clients, and campus extension.
Bandwidth & Performance
As computer systems continue to advance, and new software and/or
devices are added to the network, additional resource taxes are often
placed on the network, continually requiring more bandwidth,
memory, and processing power. The following measures are often
considered important:
• Bandwidth, commonly measured in bits/second, is the
maximum rate at which information can be transferred
• Throughput is the actual rate at which information is transferred
• Latency is the delay between the sender transmitting information
and the receiver decoding it; this is mainly a function of the signal's
travel time and processing time at any nodes the information traverses
• Jitter is the variation in packet delay at the receiver of the
information
• Error rate is the number of corrupted bits expressed as a
percentage or fraction of the total sent
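To make these measures concrete, the short sketch below computes latency, jitter, and error rate from a handful of delay samples; the sample values are hypothetical, not District measurements.

```python
import statistics

# Hypothetical packet delay samples (milliseconds) and frame counts,
# made up purely to illustrate the measures defined above.
delays_ms = [12.1, 11.8, 14.9, 12.3, 13.7, 12.0]
frames_sent, frames_corrupted = 10_000, 3

latency = statistics.mean(delays_ms)         # average delay, sender to receiver
jitter = statistics.stdev(delays_ms)         # variation in packet delay
error_rate = frames_corrupted / frames_sent  # fraction of corrupted data

print(f"latency {latency:.1f} ms, jitter {jitter:.2f} ms, "
      f"error rate {error_rate:.2%}")
```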
Figure 1: Bandwidth Growth with Radio Addition
Figure 2: Bandwidth Growth with Camera Addition
Five years ago, the District never reached 1 GB/s bandwidth
requirements; today 2.5 GB/s is the norm. Large amounts of storage
were also not a need for the District five years ago; however, with the
addition of HD security cameras, AMI, and new security technologies,
Big Data analytics are becoming the norm. This has created
requirements for larger, faster storage and networks.
Table 1: Ever-Increasing Resource Needs

Year  Operating System   CPU      Memory   Hard Drive   Graphics
1998  Windows 95/NT      90 MHz   16 MB    80 MB        0 MB
2004  Windows 2000/XP    1.5 GHz  384 MB   2.2 GB       64 MB
2009  Windows 7          2.4 GHz  2 GB     8 GB         256 MB
2015  Windows 8.1        2.5 GHz  4 GB     65 GB        1 GB
As software and devices require more resources, it is imperative that
the District keep up with the requirements to ensure the system is
working at its fastest capacity, rather than leaving employees to wait
for data to process.
Network Clients
As the Internet of Things continues its aggressive expansion, the
adding of client devices will continue to grow exponentially. The
Ericsson Mobility Report (2015) puts Machine to Machine (M2M)
growth at 25%year over year up to 2021. Goldman Sachs IoT Primer
(2014) sees a potential of 10x as many things to the internet by 2020.
Figure 3: Ericsson Report: Explosion of IoT (connected devices, billions)
The District has grown from approximately 130 devices on the
network in 2010 to close to 1,000 devices today. As AMI technology
continues to advance, it is very likely that by 2025, the District will
have over 40,000 devices connected.
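The historical growth rate implied by those counts is simple to derive; the sketch below computes it from the 2010 and 2017 figures cited above, and the closing note reflects this section's point that the 40,000-device projection comes from AMI deployment rather than a simple trend line.

```python
# Compound annual growth rate of networked devices, using the counts
# cited in this section: roughly 130 in 2010 and 1,000 in 2017.
devices_2010, devices_2017 = 130, 1_000
years = 2017 - 2010

cagr = (devices_2017 / devices_2010) ** (1 / years) - 1
print(f"historical growth: {cagr:.1%} per year")  # roughly 34% per year

# The 40,000+ devices projected by 2025 reflect a step change from
# AMI meter deployment rather than a continuation of this trend alone.
```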
Map 1: Internet of Things (IoT) Build-Out
Continuing to build network and computer resources to support this
large number of clients is imperative to ensuring the continued
operations of the District.
Campus Extension
The District's service territory spans approximately 45+ square
miles. Beyond the headquarters building located at 11570 Donner
Pass Road, the District requires network connectivity to many other
sites throughout the territory. These off-site locations are imperative
to the operations of the District's electric and water infrastructure,
evolving from just reading statistical values 5 years ago, to full remote
and automated operation of electric and water commodity flow
today, to customer control via pre-pay options in the next 5 years.
To allow for this advanced commodity flow functionality, equipment
needs to be in place throughout the District's territory to collect and
broadcast communications to devices situated in the field.
Map 1 legend: Stations (74), Smart Grid Devices (2,048), Electric AMI (13,646), Water AMI (12,951).
Stations require a significant amount of network and SCADA
equipment. This includes fiber, LTE, and/or radio communication
devices, network switches, POE injectors, UPS power supplies,
wireless access points, AMI collectors, surveillance cameras, and
SCADA RTU and PLC equipment. For network components,
hardware costs can be up to $10,000, and SCADA RTU equipment
ranges from $15,000 to $20,000 per station.
Diagram 1: Equipment Required at Stations (fiber patch panel, copper patch panel, network switch, POE injector, wireless AP, UPS power supply, IP phone, AMI collector, and LTE or radio)
Smart Grid and AMI require significantly less network equipment to
support, outside of the AMI collectors located at many of the main
stations. Most of these devices, potentially over 30,000 in total, have
network communications built into the infrastructure device itself,
such as meters, reclosers, switches, and transformers. Managing and
communicating with this vast number of devices will, however, put
a significant tax on the network in terms of bandwidth, latency, and
storage requirements.
Network build-out is harder to put costs to, as it heavily depends
upon the speed at which software is developed and deployed at the
District, the speed and volume at which new technology such as Smart
Grid, AMI, and security cameras is deployed, as well as new initiatives
not yet required of or mandated on the District. The most accurate
prediction of build-out involves each of the main stations and their
default equipment build-out as described in Figure XX. Currently 22
large stations have been completed, with another 29 large stations
and 23 small stations still to build. Using this criterion of large and
small stations to complete over the next ten years, a more detailed
build-out cost estimate is put forth in Table XX, in the Financial Impact
section.
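A simplified version of that estimate can be expressed as a short calculation using the station counts and per-station hardware figures cited above; the assumption that a small station costs roughly half as much as a large one is hypothetical, used only to show the shape of the estimate, and is not a figure from the Plan.

```python
# Rough build-out cost envelope for the remaining stations, using the
# per-station costs cited in this section (up to $10,000 for network
# hardware and $15,000-$20,000 for SCADA RTU equipment). The 50%
# scale factor for small stations is an illustrative assumption only.
LARGE_REMAINING, SMALL_REMAINING = 29, 23
NETWORK_COST = 10_000
SCADA_LOW, SCADA_HIGH = 15_000, 20_000

def station_cost(scada_cost, scale=1.0):
    """Per-station equipment cost, optionally scaled for small stations."""
    return (NETWORK_COST + scada_cost) * scale

low = (LARGE_REMAINING * station_cost(SCADA_LOW)
       + SMALL_REMAINING * station_cost(SCADA_LOW, scale=0.5))
high = (LARGE_REMAINING * station_cost(SCADA_HIGH)
        + SMALL_REMAINING * station_cost(SCADA_HIGH, scale=0.5))
print(f"estimated remaining station build-out: ${low:,.0f} to ${high:,.0f}")
# estimated remaining station build-out: $1,012,500 to $1,215,000
```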
Hardware Refresh Cycle
Key Challenges
■ The primary factors that determine the useful life of
enterprise equipment are market innovation, vendor end-of-life
(EOL) policies, operating life, and operating cost
■ Limited lifetime warranties and higher mean time between
failures (MTBF) design criteria for critical networking
components and modular platforms are affecting enterprise
useful life assumptions in a positive way
■ Two primary inhibitors to extending the useful life of older
network equipment are the vendors' EOL support programs
and the critical role of the equipment in the network
Useful Life Guidelines
Sometimes referred to as the technological life of an asset, the useful
life reflects how long the equipment can be used before the product
becomes functionally obsolete — that is, when the risk associated
with the product becomes too great, or when the operational costs
make a transition to a new product an economic advantage. Useful
life represents the normal time a piece of equipment is expected to
be in place in an average enterprise network. Unanticipated changes
to the operating environment can affect the equipment's useful life.
For example, a significant expansion to the business that puts
increasing demands on a core switch or new application architectures
that change the LAN infrastructure could negatively affect the
anticipated useful life.
During periods of rapid innovation, network infrastructure
components tend to be replaced on a regular and short cycle.
Historically, data -networking equipment was replaced every three or
four years, and it was a fairly common practice to lease equipment
for three years and then "rip and replace" the equipment for a new
solution.

Figure 4: Network Hardware Useful Life Expectation (Source: Gartner, 2012)

Traditional voice equipment was at the other end of the
spectrum, remaining in the infrastructure for seven to 12 years or
more, with few or no hardware upgrades, but these former norms
have changed considerably. Due to the increased standardization and
stable requirements of edge switching, limited lifetime warranties
offered by several vendors and increasing MTBF, the useful life of this
type of equipment has increased to seven to 10 years. As a result of
better quality and reliability when compared with older wireless LAN
(WLAN) standards, IEEE 802.11n equipment useful life stands in the
five- to seven-year range. Enterprises continue to struggle to use the
capacity that is available as part of 802.11n, even without using some
of the scalability functionality that is already available. There will be
a lot of early adopters for 802.11ac in the home market, but no
traction in the enterprise. In most cases, industry recommendation is
that IT organizations use core switches and routers for five to seven
years. Replacement should not be done on a regular schedule, but
should be based on:
■ Analysis of new requirements
■ The cost of operating the old equipment
■ The level of risk associated with operating long-lived network
assets
In some circumstances, it may be possible to extend the useful life
beyond seven years. This type of equipment may be negatively
impacted by capacity increases (for example, LAN backbone traffic or
increasing WAN speeds), which may lower its useful life.
Alternatively, these assets may be redeployed, for example, by
moving the core switch to handle aggregated or even edge traffic.
Compared with core switches and routers, some of the newer data
center technologies can have shorter useful lives. These include
fabrics, fabric extenders, and input/output (I/O) convergence, whose
useful life ranges from four to seven years. Until these new
technologies and products have a proven track record, we advise a
slightly more conservative approach when setting useful life
expectations. We expect application delivery controllers (ADCs) and
WAN optimization controllers (WOCs) to have a three- to five-year
useful life. There remains significant innovation in these markets,
which may lead to forced software or hardware upgrades and,
consequently, reduced useful life. The useful life of WOCs is still
limited by their use of hard disks. We find that new features, such as
new Secure Sockets Layer (SSL) key size, in the ADC market can lead
to upgrade requirements. Security requirements can be split
between threat-facing and non-threat-facing equipment. Threat-
facing devices will usually have a shorter life (three to five years).
Unified threat management devices will reduce the overall life,
because of the requirement to expand as one or more particular
functions consume all the resources of the appliance. Longer life
cycles (five to seven years) can be attained by using dedicated
function appliances.
New IP telephony (IPT) equipment has a significantly shorter life cycle
(five to seven years) than the traditional time division multiplexing
(TDM) equipment (seven to 12 years), which IPT has largely replaced.
We expect the call setup hardware to have a life span similar to
general-purpose servers, although the software is likely to be
covered through software support contracts and have a shorter
useful life. After two waves of innovation (the move from Integrated
Services Digital Network [ISDN] to Internet Protocol [IP], and from
standard-definition [SD] to high-definition [HD] video resolution),
videoconferencing equipment's useful life has stabilized at between
four and six years. Although there are new features, such as 2K line
video, 3D video, and new codecs, which will be put into place for new
installations, they are unlikely to prematurely retire existing installations.
Most clients consider "good enough" video to be adequate for most
purposes.
Factors That Determine Useful Life
Four primary factors determine a product's useful life in an enterprise
network.
Market Innovation
The relative stability of a product is key for determining the useful life
of most products. Markets that are increasingly standardized or have
progressed further down the commoditization curve provide the
impetus to increase or stabilize the useful life of products. Products
with a smaller percentage of software or stable software features are
also good candidates for extended life. Market innovations do not
necessarily require or force an upgrade. For example, there is no
need to upgrade a workgroup LAN to 10GbE. However, a
requirement for Power over Ethernet (PoE or PoE+) for items like
security cameras or some high -end WLAN access points (APs) may
force a technology upgrade. Other new requirements — such as
broad deployments of network access control or WOCs — may be
better handled by overlays, while enabling the switch and router
installation to remain in place to extend their useful lives. Other parts
of the network, such as network security and ADCs, have more
innovation and critical demands for new capabilities. For example,
the migration to 2048-bit or 4096-bit SSL keys has necessitated a
move toward ADCs with higher overall performance.
Vendor EOL Policies
Vendor EOL announcements trigger a series of events that lead to the
end of support for a product. Although the lack of a support contract
is an issue for network operations, it does not result in a mandatory
requirement to replace the equipment. In some circumstances, it is
perfectly fine to get support from a third-party vendor. It is important
to understand what an EOS announcement means. Although it
impacts and influences the useful life of a product, it doesn't have to
dictate it. In the case of Cisco, an EOS announcement causes a specific
chain of events. The final date that Cisco will accept orders for new
networking equipment is approximately six months after an EOS
announcement. Starting with this EOS date, Cisco will provide full
software and hardware support for the product for a total of five
years, presented as three years for software and five years for
hardware. Software support generally means that bugs will be fixed
and security vulnerabilities will be closed. There may be some feature
upgrades (especially if the product is part of a family where active
developments are still being performed). After the third year, Cisco
will only provide hardware support (basically replacement for failed
components). This can be a competitive differentiator, especially for
products that are Internet -facing and require security patches to
lower risk. Most other vendors have some variation on these five-
year EOS support options. Some workgroup switches will include
some form of lifetime warranty for the hardware, but may exclude
power supplies and fans in other cases. Enterprises need to carefully
understand the fine print on what is covered on these often -limited
lifetime warranties. A final vendor issue in determining the useful life
of a product may come down to luck and careful buying. Buying a
product near the end of its time in a product portfolio can reduce its
useful life in the network. Although organizations should be aware of
where a product fits in a vendor's life cycle, it's not always easy to
predict when a vendor will update its product portfolio.
Operating Life
Operating life affects useful life and is specifically tied to the
hardware design of the product. It is related to, but not the same as,
the product's MTBF, which is calculated based on a curve that
predicts a level of failure in the product line. Historically, most
network equipment was designed to have MTBF of approximately
100,000 hours (roughly 11 years). Failures often occur in power
supplies and fans, although environmental issues can also affect the
longevity of semiconductor components. Looking at new hardware
design, fixed form -factor switches are being designed with increasing
MTBF — in many cases, 200,000 hours or more. Thus, for some
equipment, the operating life will no longer be part of the equation
to determine the useful life. Switches equipped for PoE+ are likely to
have a shorter operating life than those without PoE+, because of
larger power supplies, more heat and increased air-cooling
requirements.
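The relationship between MTBF and the chance of a failure in any given year can be illustrated with the standard constant-failure-rate (exponential) model; the sketch below is a generic reliability calculation, not a vendor specification.

```python
import math

# Probability that a device with a given MTBF fails within one year,
# under the standard exponential (constant failure rate) model.
HOURS_PER_YEAR = 8_760

def annual_failure_probability(mtbf_hours):
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

for mtbf in (100_000, 200_000):  # historical vs. newer switch designs
    print(f"MTBF {mtbf:,} hours (~{mtbf / HOURS_PER_YEAR:.0f} years): "
          f"{annual_failure_probability(mtbf):.1%} failure chance per year")
# MTBF 100,000 hours (~11 years): 8.4% failure chance per year
# MTBF 200,000 hours (~23 years): 4.3% failure chance per year
```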
Operating Cost
This is the final consideration when determining useful life. The price
of some equipment — particularly Ethernet workgroup switches —
has declined significantly in the past five to 10 years. In most cases,
software and hardware service contracts are related to the original
equipment costs. When you add in the arrival of new lifetime
warranties and more energy-efficient products that are available on
the market, we have seen cases in which replacing older LAN
switches with new ones — especially those that offer lifetime
warranties — can have an ROI of two years or less.
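The payback arithmetic behind that ROI claim is straightforward: divide the replacement cost by the annual savings it produces. The figures in the sketch below are hypothetical, chosen only to show how a two-year payback arises.

```python
# Hypothetical payback calculation for replacing an aging LAN switch.
# All dollar amounts are illustrative, not District or vendor pricing.
new_switch_cost = 4_000       # new switch with a lifetime hardware warranty
avoided_contract = 1_500      # annual maintenance contract no longer needed
energy_savings = 500          # annual savings from more efficient hardware

annual_savings = avoided_contract + energy_savings
payback_years = new_switch_cost / annual_savings
print(f"payback period: {payback_years:.1f} years")  # 2.0 years
```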
Recommendations
■ Upgrade or replace network equipment only when the risks
become unacceptable or significant new technical
requirements emerge
■ Analyze and understand each major product category and
end of sale (EOS) announcements from different vendors to
determine the associated risks and prepare a migration plan
■ Do not follow predetermined, regular upgrade cycles for
network equipment, since business, application and
technical requirements can impact useful life positively and
negatively
Using the above recommendations, a detailed hardware refresh cost
estimate is presented in Table XX, in the Financial Impact section.
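One way to apply these recommendations without reverting to a fixed calendar cycle is to track each asset's age against the useful-life ranges above and flag it for a risk and requirements review once it enters that window. The sketch below is a minimal illustration; the asset entries are hypothetical, while the life ranges are the ones cited in this section.

```python
# Flag equipment for review once it enters its useful-life window,
# rather than replacing it on a fixed schedule. Asset entries are
# hypothetical; life ranges (in years) follow the guidelines above.
USEFUL_LIFE = {
    "core switch/router": (5, 7),
    "edge switch": (7, 10),
    "802.11n WLAN": (5, 7),
    "threat-facing security": (3, 5),
    "IP telephony": (5, 7),
}

assets = [("HDC core switch", "core switch/router", 2016),
          ("station edge switch", "edge switch", 2011)]

def review_status(category, installed, current_year=2017):
    low, high = USEFUL_LIFE[category]
    age = current_year - installed
    if age < low:
        return f"{age} years old: within useful life"
    return f"{age} years old: review risk and requirements ({low}-{high} year window)"

for name, category, installed in assets:
    print(f"{name}: {review_status(category, installed)}")
```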
Existing System
This section provides a description of the existing IT applications and
infrastructure. The system is broken down into Network, Security,
Communications, Data Center, SCADA, and Applications.
Network Infrastructure
Networks are the "plumbing systems" that convey electronic data
from one place to its intended destination. Data may be conveyed
through physical cables, including fiber optics, or by wireless means
such as radio frequency and cellular networks. They are the backbone
upon which all information travels.
Network Switching
Core Switches
A core switch is a high-capacity switch generally positioned within the
backbone or physical core of a network. Core switches serve as the
gateway to all data center and edge switches, and to the Internet,
providing the final aggregation point for the network and allowing
multiple aggregation modules to work together.
In 2016, District core switching was upgraded from one core switch
with redundant control modules to four core switches working in
Hot Standby Router Protocol (HSRP) mode, used for establishing a
fault-tolerant default gateway (see Diagram 2: Basic Network
Architecture Design). Two core switches are located at the District
headquarters data center, and two more at the Corp Yard disaster
recovery data center. This design provides high availability for the
heart of the District network infrastructure.
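HSRP is a first-hop redundancy protocol: the switches in a standby group share one virtual gateway address, and the highest-priority reachable member answers for it while the others stand by. The sketch below simulates only that election logic; the names and priorities are hypothetical, and this is a conceptual illustration rather than the District's actual switch configuration.

```python
# Conceptual simulation of an HSRP-style active/standby election: the
# highest-priority reachable group member owns the virtual gateway.
# Switch names and priority values are hypothetical.

def elect_active(members):
    """Return the name of the highest-priority reachable member, if any."""
    alive = [m for m in members if m["alive"]]
    return max(alive, key=lambda m: m["priority"])["name"] if alive else None

group = [
    {"name": "HDC-core-1", "priority": 110, "alive": True},
    {"name": "HDC-core-2", "priority": 100, "alive": True},
    {"name": "CRP-core-1", "priority": 90,  "alive": True},
    {"name": "CRP-core-2", "priority": 80,  "alive": True},
]

print("active gateway:", elect_active(group))  # HDC-core-1
group[0]["alive"] = False                      # simulate a core switch failure
print("after failover:", elect_active(group))  # HDC-core-2
```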
Diagram 2: Basic Network Architecture Design (HDC data center and CRP disaster recovery center; single-mode fiber at 1 GB or 40 GB, multi-mode fiber at 10 GB, Cat cable at 1 GB, and wireless access points)
Data Center Switches
Data center switches deliver key scalable features that meet the
demands of today's virtualized, cloud-based, multi-vendor
environments. Considering the District network architecture is
heavily designed as an internal cloud, where all applications,
desktops, and data reside, data center switches are critical devices to
the operations of the District.
The District operates two data centers: one at the District
Headquarters (HDC) and one at the Corp Yard (CRP). HDC has a
capacity of six cabinets of 48 Rack Units (RU) each. CRP has a capacity
of four cabinets of 48 Rack Units (RU) each, with the ability to add
one more cabinet.
The data center is supported by seven top-of-rack and fabric
interconnect switches; five located at HDC, and two located at CRP.
The fabric interconnect switches support 10 GB bandwidth, whereas
the top-of-rack switches support only 2 GB. Upgrades to these
switches will increase the top-of-rack switches to 10 GB service.
Edge Switches
Edge switches are the gateways to the District network, connecting a
few to a maximum of 48 endpoint (client) devices: laptops, desktops,
security cameras, and PLCs (Programmable Logic Controllers). For
this reason, edge switches generally are considered less crucial than
core switches to a network's smooth operation; the loss of one edge
switch only impacts a handful of devices.
The District uses two different types of edge switches. One type is
designed with the office in mind: non-ruggedized, with 1 GB ports and
POE (Power-over-Ethernet). The other is for satellite locations like
substations and pump houses: ruggedized, with 100 MB ports,
generally without POE. The District currently operates seven edge
switches at the headquarters, and one edge switch at each satellite
station where communication paths exist. Currently 23 satellite
stations are online, with 7 more coming online by spring of 2017. The
final build-out projects 64 edge switches.
Because edge switches are located away from the redundancy of the
data center, they rely on UPS (Uninterruptible Power Supply) devices
to moderate power losses and maintain end-point device
connectivity. Many sites require power generation to ensure power
is never totally lost.
It is critical that the District maintain one spare for every edge-switch
model to ensure we never lose access to any device for more than
one hour, should the switch fail.
LTE Routers
LTE routers allow the District to port LAN network traffic over the
WAN (internet) through secured, encrypted tunnels. These routers
provide 5 usable IP addresses per subnet at stations where fiber
options have not been landed. This ensures the District has high-speed
bandwidth, with strong security policies in place, at locations where
the District has not been able to communicate with network packets
in the past.
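Five usable addresses per station is consistent with a small subnet once the network, broadcast, and gateway addresses are set aside; the sketch below shows that arithmetic for a /29, which, along with the example prefix, is an assumption for illustration rather than the District's actual addressing plan.

```python
import ipaddress

# A /29 contains 8 addresses; dropping the network and broadcast
# addresses leaves 6 hosts, and reserving one for the LTE router's
# gateway leaves 5 usable addresses for station devices.
subnet = ipaddress.ip_network("10.20.30.0/29")  # hypothetical example prefix

hosts = list(subnet.hosts())  # excludes network and broadcast addresses
gateway, devices = hosts[0], hosts[1:]
print(f"{subnet}: {len(hosts)} host addresses, "
      f"{len(devices)} usable for devices after the gateway")
# 10.20.30.0/29: 6 host addresses, 5 usable for devices after the gateway
```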
LTE routers may also be deployed at stations without a true Fiber
Optic loop, ensuring there is high-bandwidth redundancy at these
locations. These options are critical in allowing operations to
properly operate SCADA controls from the back office.
It is critical that the District maintain one spare to ensure we never
lose access to any device for more than one hour, should the LTE
router fail.
Wireless Access
Like edge switches, wireless access points (WAPS) are the access on -
ramps of the network for endpoint devices. Unlike switches, which
require physical cables to connect endpoint devices, WAPS allow for
connection via radio airwaves; usually in the 2.4 and 5 MHz range.
Wireless access to computer networks are becoming the normal
industry standard. Ensuring many of the District devices have the
option to connect to the network wirelessly, allows staff and vehicles
to "float" between Headquarters, the Corp Yard, and satellite
stations without circumventing security, nor increasing their network
connection times.
For shorter hops between satellite locations that will not have fiber
optic cable landed, WAPs can act as a bridge between locations,
offering speeds up to 300 MB/sec and providing a more cost-effective
solution at certain locations for the District.
The District currently operates 30 wireless access points, with a
potential build out of 54 units.
Fiber Optic Cables
Cable Build-Out
A comprehensive design of layer 1 communication between District
headquarters and the 51 satellite facility locations was completed in
2011 (Map XX).
Map 2: Fiber Optic Cable Full Build-Out Design
The full build-out encompasses 68.6 miles of fiber optic cable, broken
down into 216, 144, 96, 48, 24, and 12 strand count cable. Each of
the fifty-one (51) stations that will have fiber cable landed will
have two diverse routes back to both District headquarters and
the disaster recovery center located at the Corp Yard, ensuring
redundancy and reliability.
Abandoned Pipe Re-Use
The District currently owns 9.35 miles of pipe (Map XX) that is no
longer usable by the water utility for the purpose of conveying water.
These abandoned pipes have the potential to be re-used for the
purpose of network communication.
Map 3: Abandoned Water Pipe Repurposed for Communication
This opportunity to re-use existing infrastructure for purposes other
than its original one has the potential to save the District a
considerable amount of money during the build, while increasing
reliability and reducing maintenance in the future.
As an example, re-purposing the abandoned pipe in the Tahoe Donner
subdivision changes the original design of the fiber optic cable
infrastructure from a 100% overhead build to a 70% underground,
30% overhead build. Moving 70% of the build from overhead to
underground will increase reliability and reduce maintenance costs
considerably, because tree falls are removed entirely as a risk factor
in these areas.
Security Infrastructure
In 2013, the President of the United States issued an executive order
to improve critical infrastructure cybersecurity, finding that:
"Repeated cyber intrusions into critical infrastructure demonstrate
the need for improved cybersecurity. The cyber threat to critical
infrastructure continues to grow and represents one of the most
serious national security challenges we must confront. The national
and economic security of the United States depends on the reliable
functioning of the Nation's critical infrastructure in the face of such
threats. It is the policy of the United States to enhance the security
and resilience of the Nation's critical infrastructure and to maintain a
cyber environment that encourages efficiency, innovation, and
economic prosperity while promoting safety, security, business
confidentiality, privacy, and civil liberties."
Firewalls
Firewalls are the District's first line of defense against unauthorized
access, while permitting outward communication. The firewall is a
network security system that monitors and controls the incoming and
outgoing network traffic based on predetermined security rules.
The District is constantly being attacked by unauthorized elements,
both directly from the internet and from within via email or
website links.
Currently the District operates one firewall, which is a single point of
failure for outside communication. Adding a second firewall at the
Corp Yard Data Center will ensure reliability and resiliency of the
District's front-line defense.
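As an illustration of the rule-driven behavior described above, below is a minimal first-match, default-deny evaluator; the rules, addresses, and ports are illustrative assumptions, not the District's actual policy:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rule:
        action: str               # "allow" or "deny"
        src_prefix: str           # source address prefix, e.g. "10.0.0."
        dst_port: Optional[int]   # destination port; None matches any

    # Illustrative rule set: permit outbound web and DNS from the LAN.
    RULES = [
        Rule("allow", "10.0.0.", 443),  # LAN -> HTTPS
        Rule("allow", "10.0.0.", 53),   # LAN -> DNS
    ]

    def evaluate(src_ip: str, dst_port: int) -> str:
        """First matching rule wins; unmatched traffic is denied by default."""
        for rule in RULES:
            if src_ip.startswith(rule.src_prefix) and rule.dst_port in (None, dst_port):
                return rule.action
        return "deny"

    print(evaluate("10.0.0.15", 443))   # allow (outbound HTTPS)
    print(evaluate("203.0.113.9", 22))  # deny (unsolicited inbound SSH)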
Access Control & Authentication
The Access Control Server (ACS) is an access policy control platform
that helps the District comply with growing regulatory and corporate
requirements. By integrating with other access control systems, it
helps improve productivity and contain costs. It supports multiple
scenarios simultaneously (a simplified policy check is sketched after
the list), including:
• Device administration: Authenticates administrators, authorizes
commands, and provides an audit trail
• Remote Access: Works with VPN and other remote network
access devices to enforce access policies
• Wireless: Authenticates and authorizes wireless users and hosts
and enforces wireless -specific policies
• Network admission control: Communicates with posture and
audit servers to enforce admission control policies
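Below is a simplified sketch of the kind of policy decision and audit trail an ACS provides; the user names, roles, access types, and policy table are illustrative assumptions:

    from typing import NamedTuple

    class Request(NamedTuple):
        user: str
        role: str       # e.g. "admin", "field-tech", "office"
        access: str     # "device-admin", "vpn", "wireless"

    # Illustrative policy: which roles may use which access methods.
    POLICY = {
        "device-admin": {"admin"},
        "vpn": {"admin", "field-tech"},
        "wireless": {"admin", "field-tech", "office"},
    }

    audit_log = []  # every decision is recorded for the audit trail

    def authorize(req: Request) -> bool:
        allowed = req.role in POLICY.get(req.access, set())
        audit_log.append((req.user, req.access, "permit" if allowed else "deny"))
        return allowed

    print(authorize(Request("jsmith", "field-tech", "vpn")))           # True
    print(authorize(Request("jsmith", "field-tech", "device-admin")))  # False
    print(audit_log)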
Intrusion Detection /Prevention
An Intrusion Detection System (IDS) is a network security technology
originally built for detecting vulnerability exploits against a target
application or computer.
Vulnerability exploits usually come in the form of malicious inputs to
a target application or service that attackers use to interrupt and gain
control of an application or machine. Following a successful exploit,
the attacker can disable the target application (resulting in a denial-
of-service state), or can potentially gain all the rights and
permissions available to the compromised application.
Intrusion Prevention Systems (IPS) extend IDS solutions by adding
the ability to block threats in addition to detecting them, and have
become the dominant deployment option for IDS/IPS technologies.
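A toy sketch of the detect (IDS) versus detect-and-block (IPS) distinction using signature matching; the signatures and payloads are illustrative only:

    # Toy signatures: byte patterns associated with known exploits.
    SIGNATURES = {
        b"' OR 1=1 --": "SQL injection attempt",
        b"../../etc/passwd": "path traversal attempt",
    }

    def inspect(payload: bytes) -> str:
        """IDS behavior: report the first matching signature, if any."""
        for pattern, name in SIGNATURES.items():
            if pattern in payload:
                return name
        return ""

    def forward(payload: bytes, prevention: bool) -> bool:
        """IPS behavior: with prevention on, matching traffic is dropped."""
        alert = inspect(payload)
        if alert:
            print(f"ALERT: {alert}")
            if prevention:
                return False   # blocked (IPS)
        return True            # delivered (IDS only detects)

    forward(b"GET /report?id=' OR 1=1 --", prevention=True)  # blocked
    forward(b"GET /index.html", prevention=True)             # delivered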
Multiple solutions work in unison at the District to provide layers of
security. Maintaining hardware and software focused on intrusion
detection and prevention is paramount to both the safety of the
District's infrastructure and customers' private data.
Video Surveillance
Closed-circuit television (CCTV), also known as video surveillance, is
the use of video cameras to transmit a signal to a central location for
viewing on a limited set of monitors. The District's CCTV system
operates only as required to monitor a particular event. The current
system utilizes network-attached storage devices, providing recording
for weeks at a time, with a variety of quality and performance options
such as motion detection and email alerts.
Video surveillance plays a significant role in protecting the District's
facilities, employees and customers from harm, theft, malfunctions,
and tampering. Eighty-five surveillance cameras are currently
deployed at twenty-one satellite facility stations as well as the District
headquarters. Future roll -out includes up to 120 additional cameras
at 40 additional satellite facility stations.
With all this high-definition video security comes a large amount of
network traffic. Although the network is currently designed to
handle the existing camera infrastructure, adding cameras in the
future will require re-evaluation of the network.
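For a rough sense of scale, a back-of-the-envelope estimate of aggregate camera traffic; the per-camera bitrate is an assumption for illustration, and actual rates depend on resolution, codec, and motion:

    # Rough aggregate-bandwidth estimate for the surveillance build-out.
    CAMERAS_TODAY = 85      # currently deployed (from the Plan)
    CAMERAS_FUTURE = 120    # potential additional cameras (from the Plan)
    MBPS_PER_CAMERA = 4.0   # assumed HD stream bitrate; illustrative only

    for label, count in [("today", CAMERAS_TODAY),
                         ("full roll-out", CAMERAS_TODAY + CAMERAS_FUTURE)]:
        total = count * MBPS_PER_CAMERA
        print(f"{label}: {count} cameras x {MBPS_PER_CAMERA} Mb/s "
              f"= {total:.0f} Mb/s aggregate")
    # today: 85 cameras x 4.0 Mb/s = 340 Mb/s aggregate
    # full roll-out: 205 cameras x 4.0 Mb/s = 820 Mb/s aggregate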
Data Center Infrastructure
A data center is a facility used to house computer systems and
associated components, such as telecommunications and storage
systems. It generally includes redundant or backup power supplies,
redundant data communications connections, environmental controls
(e.g., air conditioning, fire suppression) and various security devices.
The District's goals are to maintain a Tier III Data Center at both the
District headquarters and Corp Yard that meets the standards of the
Telecommunications Industry Association and the Uptime Institute. The
minimum qualifications for a Tier III data center are:
• Meets or exceeds all Tier I and Tier II requirements
• Multiple independent distribution paths serving the IT
equipment
• All IT equipment must be dual-powered and fully compatible
with the topology of a site's architecture
• Concurrently maintainable site infrastructure with expected
availability of 99.982%
Currently these goals are being met.
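As a worked example, the 99.982% availability expectation above translates into a small annual downtime budget; a quick calculation:

    # Annual downtime budget implied by Tier III availability.
    availability = 0.99982
    hours_per_year = 24 * 365   # 8,760 hours

    downtime_hours = (1 - availability) * hours_per_year
    print(f"{downtime_hours:.2f} hours/year")          # ~1.58 hours
    print(f"{downtime_hours * 60:.1f} minutes/year")   # ~94.6 minutes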
Server Technology
The District operates two server clusters: production, located at
District headquarters, and recovery, located at the Corp Yard.
HDC Data Center
The server cluster located at District headquarters consists of a two-
chassis setup. Each chassis comprises one fabric interconnect and one
8-slot blade server chassis holding two blade servers for server VMs
and one blade server for VDI, plus two storage units with varying
hard drive sizes and speeds.
Diagram 3: HDC Server Hardware Capacity
The HDC Data Center currently runs below 50% capacity, allowing up
to 50 new servers to be built and added at no additional hardware cost.
Diagram 4: HDC Storage Utilization
CRP Disaster Recovery Center
The server cluster located at the Corp Yard consists of a one-chassis
setup: one fabric interconnect, one 8-slot blade server chassis holding
two blade servers for server VMs, and two storage units with varying
hard drive sizes and speeds.
Diagram 5: CRP Hardware Capacity
Hardware and storage capacity run low at this location because it
operates mostly in a cold standby mode; all but a handful of servers
normally run at HDC. The Corp Yard Data Center nevertheless
requires almost as much capacity as HDC to ensure full operations,
should the HDC Data Center be unavailable.
Diagram 6: CRP Storage Utilization
Production servers run on the Headquarters cluster and are set up to
fail over to the Corp Yard in the event of a disaster. Resources are
adequate for current needs; however, the storage technology is
beginning to age.
Virtual Desktops
Virtual desktop infrastructure (VDI) is the practice of hosting
a desktop operating system within a virtual machine (VM) running
on a centralized server. VDI is a variation on the client/server
computing model, sometimes referred to as server-based computing.
Diagram 7: VDI Hardware & Storage Capacity (2 hosts; 86.37 GHz
total CPU; 383.86 GB total memory; 59 powered-on desktops)
The District is licensed to operate up to 60 virtual desktops
simultaneously. Hardware capacity is designed to ensure all 60
desktops are capable of running on a single server, should a server
fail. Currently both VDI servers reside within the District
Headquarters cluster. The District is currently at maximum capacity
on the VDI servers; a third server will be required in the future to
allow for VDI at the Corp Yard during a disaster.
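A sketch of the failover arithmetic behind that design goal; the per-host and per-desktop memory figures are assumptions for illustration, not the District's actual sizing:

    # N+1 check: can one surviving host run every licensed desktop alone?
    LICENSED_DESKTOPS = 60        # from the Plan
    HOST_MEMORY_GB = 192          # assumed per-host capacity; illustrative
    MEMORY_PER_DESKTOP_GB = 3     # assumed per-desktop allocation; illustrative

    def survives_single_host_failure() -> bool:
        needed = LICENSED_DESKTOPS * MEMORY_PER_DESKTOP_GB
        return needed <= HOST_MEMORY_GB

    print(survives_single_host_failure())  # True: 180 GB needed vs 192 GB available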
Phone Services
Voice over Internet Protocol (VoIP) is a technology that allows the
District to make voice calls using a broadband internet connection
instead of a regular (analog) phone line.
The District operates two servers within a cluster for performance
and reliability of the phone system. One server resides at HDC and
the other at CRP.
Diagram 8: District Unified Communication Diagram (Truckee Donner
PUD Unified Collaboration High Level Design)
There are currently 73 IP phones, 60 desktop Jabber clients, and
30 iOS Jabber clients in service. Phones can be added to the existing
system as long as new licenses are purchased.
Two -Way Radio
The District operates a DMR Tier III two-way radio communication
system for field operations. DMR Tier III covers trunked operation in
frequency bands from 66 to 960 MHz, supporting voice and short
messaging. It also supports packet data service in a variety of formats,
including support for IPv4 and IPv6. The District was an early adopter
of this technology.
Diagram 9: DMR Radio Architecture Design
Radio communications are supported by two base stations, each with
one control channel and three communication channels, located at the
Old Greenwood Well and Donner View Hydro stations. Each station
is equipped with two repeaters, a multi-coupler, two transmit
combiners, and a single multi-frequency antenna.
Both stations communicate with each other through a central
controller at HDC. A fail-over controller resides at CRP.
There are currently 64 mobile radios in service, functioning as the
primary communication device for personnel working in the field.
SCADA Infrastructure
Supervisory control and data acquisition (SCADA) is a system for
remote monitoring and control that operates with coded signals over
communication channels. The District employs SCADA for both
the electric and water systems.
A SCADA system usually consists of the following subsystems (a
simplified sketch of the data flow follows the list):
• Remote terminal units (RTUs) connect to sensors in the
process and convert sensor signals to digital data. They have
telemetry hardware capable of sending digital data to the
supervisory system, as well as receiving digital commands
from the supervisory system. RTUs often have embedded
control capabilities such as ladder logic in order to
accomplish boolean logic operations.
• Programmable logic controllers (PLCs) connect to sensors in
the process and convert sensor signals to digital data. PLCs
have more sophisticated embedded control capabilities than
RTUs. PLCs do not have telemetry hardware, although this
functionality is typically installed alongside them. PLCs are
sometimes used in place of RTUs as field devices because
they are more economical, versatile, flexible, and
configurable.
• A telemetry system is typically used to connect PLCs and
RTUs with control centers, data warehouses, and the
enterprise. The District plans on using five types of telemetry,
depending on network build-out and telemetry
requirements. These include fiber optics, microwave,
wireless, LTE, and radio.
• A data acquisition server is a software service which uses
industrial protocols to connect software services, via
telemetry, with field devices such as RTUs and PLCs. It allows
clients to access data from these field devices using standard
protocols.
• A human-machine interface (HMI) is the apparatus or
device which presents processed data to a human operator;
through it, the human operator monitors and interacts
with the process. The HMI is a client that requests data from
a data acquisition server. In most installations the HMI is
the graphical user interface for the operator: it collects data
from external devices, creates reports, performs alarming,
sends notifications, etc.
• A historian is a software service which accumulates time-
stamped data, boolean events, and boolean alarms in a
database which can be queried or used to populate graphic
trends in the HMI. The historian is a client that requests data
from a data acquisition server.
• A supervisory (computer) system, which gathers (acquires)
data on the process and sends commands (control) to the
SCADA system.
• Various processes and analytical instrumentation.
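To make the data flow concrete, below is a minimal, self-contained sketch of a data acquisition pass: a simulated RTU stands in for a real field device, values are time-stamped into a historian, and a toy threshold raises an alarm. Station names, the threshold, and the simulation itself are illustrative assumptions; a production system would speak an industrial protocol such as Modbus or DNP3 over the telemetry described above.

    import random
    import time
    from datetime import datetime, timezone

    # Simulated field device standing in for an RTU/PLC reachable over telemetry.
    class SimulatedRTU:
        def __init__(self, name: str):
            self.name = name

        def read_tank_level_ft(self) -> float:
            return round(random.uniform(10.0, 12.0), 2)  # fake sensor value

    # Historian: accumulates time-stamped samples for trending and queries.
    historian: list[tuple[str, str, float]] = []

    def poll(rtus: list[SimulatedRTU]) -> None:
        """Data acquisition pass: read each RTU and archive the sample."""
        for rtu in rtus:
            value = rtu.read_tank_level_ft()
            stamp = datetime.now(timezone.utc).isoformat()
            historian.append((stamp, rtu.name, value))
            if value > 11.5:                  # toy alarm threshold
                print(f"ALARM {rtu.name}: level {value} ft")

    rtus = [SimulatedRTU("Donner Lake Tank"), SimulatedRTU("Glenshire Well")]
    for _ in range(3):
        poll(rtus)
        time.sleep(0.1)
    print(f"{len(historian)} samples archived")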
The electric SCADA system currently monitors the District's four
substations, the Glenshire distribution, and all 16 circuits. Near-
future additions include monitoring devices on reclosers and
capacitor banks.
The water SCADA system is currently being upgraded to a new
environment with all new hardware and software. The system
backend is complete with a data acquisition server, historian, and
HMI. New RTU cabinets, encompassing PLC hardware have been
purchased for 27 of the 58 locations, with ten of these fully
commissioned, replacing the old Donner Lake and Glenshire SCADA
systems. It is envisioned that all well buildings, as well as the Gateway
valve and College Valve, will be commissioned in 2017.
Diagram 10: Typical Water SCADA RTU Layout
Purchase of an additional 31 RTU cabinets (approximately $15,000 -
$20,000 each) is anticipated to be completed in stages in 2018, 2020,
and 2022.
Financial Impact
Outlined below is a comprehensive cost analysis to build out, refresh, and maintain the District's IT hardware and software infrastructure. Labor
and miscellaneous expenses are not accounted for in these estimates.
Capital Improvement
Two main factors influence capital improvement expenditures for Information Technology: new purchases to reach full build-out, and hardware
replacement (refresh) costs that ensure continued reliability.
New Purchases and Build-Out
In order to provide full computer capacity at District headquarters
for the current employee count, future employee growth, and future
SCADA-supporting technology at an additional 30 sites not online
today, many computer devices will need to be added over the next
10 years. The table below is a detailed estimate of which devices, and
how many, will be required, with an estimated cost breakdown over
a ten-year period.
Table 2: Detailed Build-Out Cost Estimate

Device Type                         | Count | Avg Cost ($)
LAN Switching (Edge)                | 32    | 4,500
Access Points & Controller          | 24    | 3,700
Server Hardware                     | 1     | 130,000
VDI Hardware                        | 1     | 40,000
Data Center Switching               | 4     | 40,000
Storage Hardware                    | 1     | 30,000
Security Hardware (Firewalls, ACS)  | 2     | 20,000
Security Hardware (Cameras)         | 125   | 3,500
Mobile Devices (Phones, Tablets)    | 20    | 800
SCADA RTU                           | 31    | 16,000
UPS                                 | 26    | 1,500
POE                                 | 23    | 300
Microwave Dishes                    | 4     | 12,500

Estimated build-out cost by year ($): 2016: 0; 2017: 286,200;
2018: 232,000; 2019: 323,000; 2020: 297,000; 2021: 177,000;
2022: 112,000; 2023: 148,500; 2024: 130,000; 2025: 157,500.
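A sketch of how a single year's figure in the table is assembled from unit counts and average costs; the example year's purchase counts (7 edge switches and 5 access points) are illustrative:

    # Average unit costs from Table 2.
    AVG_COST = {
        "LAN Switching (Edge)": 4_500,
        "Access Points & Controller": 3_700,
        "SCADA RTU": 16_000,
    }

    def year_cost(purchases: dict[str, int]) -> int:
        """Total build-out spend for one year's purchase counts."""
        return sum(AVG_COST[device] * count for device, count in purchases.items())

    # Illustrative year: 7 edge switches and 5 access points.
    example = {"LAN Switching (Edge)": 7, "Access Points & Controller": 5}
    print(year_cost(example))  # 7*4500 + 5*3700 = 31,500 + 18,500 = 50,000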
Hardware Refresh
The District's current hardware devices have a set useful lifetime,
as outlined in the Hardware Refresh section above. Below is a
detailed estimate of which devices, how many, and when they are
expected to require replacement. Hardware refresh costs will rise
over the next ten years, because the continued push toward full
build-out adds new devices to the network every year.
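The refresh schedule itself is simple arithmetic: install year plus refresh lifetime, repeated to the planning horizon. A sketch follows (lifetimes and install years are taken from Table 3; actual schedules are staggered because devices were installed in phases):

    # Project replacement years from install year and refresh lifetime.
    def refresh_years(installed: int, lifetime: int, horizon: int = 2025) -> list[int]:
        """Years a device installed in `installed` comes due, through `horizon`."""
        return list(range(installed + lifetime, horizon + 1, lifetime))

    # Examples using lifetimes from Table 3:
    print(refresh_years(2011, 3))  # firewalls/ACS: [2014, 2017, 2020, 2023]
    print(refresh_years(2016, 5))  # core switching: [2021]
    print(refresh_years(2015, 7))  # radio hardware: [2022]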
Table 3: Detailed Hardware Refresh Cost Estimate

Device Type                         | Count (Build-Out) | Avg Cost ($) | Today Value ($) | 2025 Value ($) | Year Installed | Refresh (Years)
LAN Switching (Edge)                | 32 (64)  | 4,500   | 126,000 | 360,000   | 2011-2016 | 7
Radio Hardware                      | 8        | 33,750  | 270,000 | 270,000   | 2015      | 7
Radio Units                         | 67       | 1,000   | 67,000  | 67,000    | 2015      | 10
Access Points & Controller          | 30 (54)  | 3,700   | 111,000 | 259,000   | 2011-2016 | 6
Core Switching                      | 4        | 50,000  | 200,000 | 200,000   | 2016      | 5
Server Hardware                     | 3        | 130,000 | 390,000 | 390,000   | 2012/2013 | 5
VDI Hardware                        | 2 (3)    | 40,000  | 80,000  | 120,000   | 2013      | 6
IP Phone Server                     | 2        | 20,000  | 40,000  | 40,000    | 2014      | 6
IP Phones                           | 140      | 500     | 70,000  | 70,000    | 2014      | 7
Desktops                            | 70       | 450     | 31,500  | 31,500    | 2015      | 7
Data Center Switching               | 7        | 10,000  | 70,000  | 70,000    | 2011/2012 | 4
Storage Hardware                    | 3        | 30,000  | 90,000  | 90,000    | 2012/2013 | 3
Security Hardware (Firewalls, ACS)  | 2        | 20,000  | 40,000  | 40,000    | 2011      | 3
Security Hardware (Cameras)         | 85 (210) | 3,500   | 297,500 | 735,000   | 2013/2016 | 8
Mobile Devices (Phones, Tablets)    | 118      | 800     | 94,400  | 94,400    | 2015-2016 | 3
SCADA RTU                           | 23 (54)  | 16,000  | 368,000 | 1,120,000 | 2015/2016 | 12
UPS                                 | 30 (56)  | 1,500   | 45,000  | 105,000   | 2011-2016 | 5
POE                                 | 15 (38)  | 300     | 4,500   | 16,500    | 2015-2016 | 5
Stand-Alone Servers                 | 6        | 10,000  | 60,000  | 60,000    | 2011-2016 | 6
Microwave Dishes                    | 8 (12)   | 12,500  | 100,000 | 125,000   | 2013-2014 | 12

Estimated refresh cost by year ($): 2016: 198,000; 2017: 418,800;
2018: 193,200; 2019: 190,500; 2020: 250,600; 2021: 366,000;
2022: 473,800; 2023: 581,300; 2024: 398,400; 2025: 425,000.

Total capital: 2,554,900 (today's installed base); 4,263,400 (at full build-out).
Maintenance and Support
The District has yearly set costs to allow for the use, upgrade, and
failure replacement of hardware and software used throughout the
District, as outlined in the Maintenance & Support section. It is
estimated that these support and maintenance costs will increase
approximately 2% a year.
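A quick sketch of that compounding; the small differences from the table's figures reflect rounding in the Plan's estimates:

    # Project a yearly support cost forward at ~2% annual growth.
    def project(cost_today: float, years: int, rate: float = 0.02) -> float:
        return cost_today * (1 + rate) ** years

    # Total yearly cost today (Table 4): $640,850.
    print(f"{project(640_850, 4):,.0f}")   # ~693,700 by 2020 (table: 695,625)
    print(f"{project(640_850, 9):,.0f}")   # ~765,900 by 2025 (table: 768,026)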
Table 4: Detailed Maintenance & Support Cost Estimate

Services                                         | Today ($) | 2020 ($) | 2025 ($)
Backup Software                                  | 4,500   | 4,950   | 5,445
Board: TV Coverage & Streaming                   | 15,000  | 16,500  | 18,150
Business Intelligence Software                   | 16,500  | 18,150  | 19,965
Business OS and Production Software              | 30,000  | 33,000  | 36,300
Computer Aided Drafting Software                 | 4,100   | 4,510   | 4,961
Customer Service and Accounting                  | 260,000 | 276,690 | 307,198
Customer Service: After-hours Answering Service  | 35,000  | 38,500  | 42,350
Database Software                                | 1,000   | 1,100   | 1,210
Field GIS Software                               | 9,200   | 10,120  | 11,132
GIS Software                                     | 30,000  | 33,000  | 36,300
Hydraulic Modeling                               | 3,750   | 4,125   | 4,538
Intrusion Protection                             | 9,000   | 9,900   | 10,890
Large UPS                                        | 3,200   | 3,520   | 3,872
Microwave Hardware                               | 7,000   | 7,700   | 8,470
Mobile Telecommunications                        | 50,000  | 55,000  | 60,500
Network and Security Devices                     | 30,000  | 33,000  | 36,300
Pole Load Modeling Software                      | 3,000   | 3,300   | 3,630
Radio Communication                              | 11,500  | 12,650  | 13,915
SAG & Tension Modeling Software                  | 5,500   | 6,050   | 6,655
SCADA/GIS Software                               | 45,000  | 49,500  | 54,450
Security Cameras                                 | 2,000   | 2,200   | 2,420
Storage Hardware and Software                    | 11,500  | 12,650  | 13,915
Utilities: Telephone and Internet                | 30,000  | 33,000  | 36,300
Virtual Desktop Software                         | 3,000   | 3,300   | 3,630
Virtual Server Software                          | 16,000  | 17,600  | 19,360
Virus Protection Software                        | 2,600   | 2,860   | 3,146
Water AMI                                        | 2,500   | 2,750   | 3,025
Web Page Hosting                                 | 4,300   | 4,730   | 5,203
Total Yearly Cost                                | 640,850 | 695,625 | 768,026
Final Overview
The District's IT investment is a critical foundation upon which the
District's entire business structure is built. System design,
system control, revenue generation, asset management, finances,
security, and customer service all rely on the complex system the
District's IT department has created. It is now one of the District's
most critical assets. Without a proper build-out, refresh, and
maintenance program in place, all other District business functions
will falter. System stability, reliability, access, and speed are the main
objectives we strive to achieve with this Master Plan.