Linda M. Sharp
Dr. Robert E. Culver
Directed Research Project – CIS590
September 5, 2014
Signatures/Approvals
Ben Doe
We Are Big, Incorporated
Chief Information Officer - Sponsor
Date
Natalie Black
Environmental Technologies Program
Vice President of Operations
Date
Ito Smith
Environmental Technologies Program
Program Manager
Date
Kevin Reid
Environmental Technologies Program
Information System Security Officer (ISSO)
Date
Change Description Form
Revision
Change Description
Changed By
Date
Approved By
Table of Contents
1.0 EXECUTIVE SUMMARY
2.0 PROJECT DESCRIPTION
2.1 Scope
2.2 Constraints and Assumptions
2.3 Goals and Objectives
2.4 Critical Success Factors
2.5 Risks
2.6 Schedule
2.7 Budget
2.8 Relationship to Other Systems/Projects
2.9 Project Authority
2.10 Contract Funding Authority
2.11 Project Manager
2.11.1 Systems Development Lead
2.11.2 Project Analyst
2.11.3 Requirements Engineer/Analyst
2.12 Responsibility
2.13 Authority
2.14 Decision Authority Oversight
3.0 DATABASE AND DATA WAREHOUSING DESIGN
3.1 Why do we need Relational Databases and Data Warehouses?
3.2 What Comes Next?
3.3 How will Data Flow?
4.0 CLOUD TECHNOLOGY AND VIRTUALIZATION
4.1 What Exactly is the Cloud?
4.2 How is the Cloud Used?
4.3 What is Virtualization in Relation to the Cloud?
5.0 NETWORK INFRASTRUCTURE AND SECURITY
5.1 Logical and Physical Network Topology
5.2 Network Vulnerabilities
5.3 Security Policy
6.0 FINDINGS
7.0 GLOSSARY
References
Appendix A
Table of Figures
Figure 1 - Example of Analysis done from an RDBMS
Figure 2 - E-R Diagram
Figure 3 - Tables
Figure 4 - Normalized Tables
Figure 5 - Data Flow Diagram for Hiring Process
Figure 6 - Data Flow Diagram for Sale Transactions
Figure 7 - Data Flow Diagram from Operational Database to Data Warehouse
Figure 8 - Practices SMEs Use to Overcome Cloud Challenges
Figure 9 - FIPS Security Objective Matrix
Figure 10 - Network Diagram
Figure 11 - Logical Network Model
Figure 12 - Physical Network Model
Table of Tables
Table 1 - SANS Institute's Seven Layer Security Model
1.0 EXECUTIVE SUMMARY
Green Computing Incorporated (Inc.) acquired Merrifield Enterprises with specific goals in mind. These ambitions include, but are not limited to:
Increasing revenue by joining the digital market place and expanding into worldwide markets;
Offering better customer support;
Introducing new customer services;
Reducing operating costs through the use of cloud and virtualization services;
Improving communications through on-line collaboration capabilities;
Shortening product development cycles through just-in-time inventory management processes; and,
Building partnerships with other companies.
The purpose of this document is to define the scope of the project, assign a Project Manager (PM), define the scope of authority and responsibilities of the PM, identify the project team and its functions and responsibilities, indicate the sources of funding and key stakeholders, and outline oversight and governance organizations. This document also lays the groundwork for decisions and planning regarding project direction, outcomes, and delivery. Upon approval of this project plan, work will be initiated on the project and the necessary resources will be committed to achieve project success.
The objectives that this project is seeking to achieve are:
Integration of all the systems (including databases and infrastructure) for these two organizations; and,
Providing a network topology that will address security, resiliency, and business continuity challenges both day-to-day and in the event of a man-made (e.g., hacker) or natural (e.g., earthquake) disaster.
2.0 PROJECT DESCRIPTION
2.1 Scope
The scope of this project is to examine current computing solutions used by Green Computing, Inc. and Merrifield Enterprises and generate a report recommending how to integrate all systems (including databases and infrastructure).
All project life cycle phases as identified in the Green Computing, Inc.’s Life Cycle Management (LCM), Version 6.2 are in scope. The necessary requirements to meet phase reviews are documented in the Project Tailoring Agreement with the IT Governance Secretariat (ITGS).
In order to meet the target production date, data from Merrifield Enterprises' HR database will be incorporated into Green Computing, Inc.'s CRM suite. As recommended by the research done by Varajão, which bases such decisions on well-defined objectives and requirements for the corporate merger, Merrifield's sales database will not be incorporated.
Consistent with Powner's GAO report recommending the consolidation of data centers, Merrifield Enterprises' data center will be retired in favor of the cloud computing solution used by Green Computing, Inc.
As in Melvin's study, which examined the Social Security Administration's need to evolve its technology to align with current computing solutions, the merger of Merrifield Enterprises into Green Computing, Inc. (which will grow from 25 employees to 65 as a result) will require business process re-engineering.
Green Computing, Inc. will continue to use a turnkey CRM suite of software applications. As Raffaelli examined in great detail, “organizations’ participation in relevant professional networks seems sufficient to foster the adoption of turnkey practices.”
As Merrifield Enterprises' HR data will be ingested into Green Computing, Inc.'s CRM suite, no new interfaces between systems will be needed as a result of the corporate merger (Drelichowski). As discussed in Unterkalmsteiner's research, this data conversion will be thoroughly tested before it is put into production.
Green Computing, Inc. uses a Service-Oriented Architecture supplied by a Cloud Service Provider (CSP) and data warehousing service provider (Keith).
The research done by Phillips highlighted how ensuring your technology has a process improvement focus, uses standards needed for consistency and comparison, has a balanced set of data, and is user friendly with simple steps will improve your return on investment. As part of this corporate merger, Green Computing, Inc. will monitor technology decisions to ensure compliance with these factors.
As conveyed in Liang’s research, once the corporate merger is complete and all employees are co-located in Green Computing, Inc.’s office space, three types of training will be provided: self-oriented, peer-oriented, and instructor-oriented. One topic that will be conveyed via the instructor-oriented courses will be “Interprofessional Education (IPE)” (Robichaud).
2.2 Constraints and Assumptions
The following constraint has been identified:
Green Computing, Inc.'s business hours for its employees are from 8 a.m. to 7 p.m.
The following assumptions have been made in defining the scope, objectives and approach:
As Green Computing receives its orders via on-line sales, its technology must be available 24 hours a day, 7 days a week.
The CSP has redundant power supply and a back-up site to ensure continuation of operations.
Green Computing, Inc. has an Information Technology (IT) staff to administer desktop, application and network infrastructure issues.
2.3 Goals and Objectives
The goals of this project are to conduct research regarding the best computing technologies to address:
Consolidation of data and/or systems
Data warehousing
Virtualization of server resources
Cloud computing
Thin client solutions, and
Outsourcing.
The project team intends to achieve these goals through a comprehensive system requirements elicitation, documentation, and testing regimen of vendor deliverables with users and stakeholder participation from both organizations. Internal technical resources as well as contractor resources will be utilized to research the agreed upon solution.
The objective of this project is to provide a report reflecting modules that can be integrated into a single solution with stringent security controls required to pass Green Computing, Inc.’s Certification and Accreditation (C&A) evaluation and provide the following functionality:
System Administration - Provide features for privileged users to administer the project’s systems. This entails assigning system users’ access, role-based permissions, and user account management.
Document Management - Provide features for the creation and storage of documents representing the organization’s diverse business needs.
Workflow - Provide features for an automated method to route reports in various states of draft for review, comment, and approval.
Reporting - Provide features to allow project administrators to generate reports of system and user auditing, statistical reporting and performance metrics.
Search - Establish a robust search capability for all documents stored in electronic format.
Document Security - Provide features to allow for the storage of documents. Also provide a mechanism to utilize the Green Computing, Inc.’s Enterprise Digital Rights Management (DRM) Service to restrict document access and functions at the reader level.
External Dissemination - Provide a means to digitally share reports with external entities.
2.4 Critical Success Factors
Critical Success Factors (CSFs) are activities essential to project success. They are the activities that must be performed well in order to obtain the objectives or goals of the project. The following CSFs are relevant to this project:
Define and manage requirements accurately and effectively
Understand internal and external project dependencies
Coordinate the activities of shared resources
Maintain stakeholder involvement
Communicate effectively to reduce ambiguity
2.5 Risks
The following risks have been identified as possibly affecting the project during its progression:
The selected technology will not work as expected and is too complicated for the employees to use;
User requirements and expectations were not managed well or clearly defined;
Too many project changes resulted in an overly complex system that was hard to test;
The CSP has delivery problems; and,
Internal management challenges (Lane).
2.6 Schedule
In anticipation of the corporate merger, this project has a schedule of sixty days to produce its report.
2.7 Budget
This project has a base budget of $50,000.
2.8 Relationship to Other Systems/Projects
It is the responsibility of the business unit to inform IT of other business initiatives that may impact the project. The following are known business initiatives:
Cloud computing and data warehousing services are provided by an external service provider
Green Computing, Inc.’s turnkey CRM suite is hosted on this CSP’s cloud
Desktop, application and network support services are provided by Green Computing, Inc.’s internal IT staff.
2.9 Project Authority
Sponsorship of this project is provided by Green Computing, Inc.’s Chief Information Officer (CIO), Ben Doe. As project sponsor, CIO Doe is the primary stakeholder and will work with the PM on matters concerning project funding and scope and in reviewing project changes to ensure benefit to the project.
This project was approved by the Investment Management Board (IMB) to enter Concept Exploration at the 2/1/2014 Gate 0 Investment Review. On 4/1/2014, at the Gate 1 System Concept Review the IMB approved this project concept, sanctioned it as a formal project, and approved its procession to the Planning and Requirements Phase of the Green Computing, Inc.’s Systems Development Lifecycle.
The Sponsor's Vice President of Operations of the Environmental Technologies Program, Natalie Black, will assist in providing clarification of project scope and business processes and will be the project's primary customer interface. The Environmental Technologies Program Manager is Ito Smith.
2.10 Contract Funding Authority
Initial funding for this project has been committed by Green Computing, Inc. CIO and project sponsor, Ben Doe. Requests for additional funding for services, hardware, or software will be reviewed and approved by CIO Doe.
The Finance Division (FD) has the authority to obligate Green Computing, Inc. funds through its contracting and acquisition processes. FD is also responsible for any acquisition services that originate from this project and will review and approve all acquisition requests. The FD Funding Authority point of contact is Luwanna Johnson.
2.11 Project Manager
CIO Doe selected Jane Jameson as the PM for this project. The team will consist of four individuals in addition to the PM: a Systems Development Lead, a Project Analyst, and two Requirements Engineers/Analysts. Ms. Jameson will work with CIO Doe to select these four additional people. These individuals will act in concert with each other to achieve project goals. The PM will facilitate communication amongst all team members, providing leadership and fostering a common vision. These personnel will be matrixed to the project on an as-needed basis.
2.11.1 Systems Development Lead
The System Development Lead (SDL) provides subject matter expertise to the PM regarding issues pertaining to technologies employed in the system; coordinates technical details with other internal organizations and external groups, such as the development contractor’s technical staff and other support vendors. The SDL will be responsible for ensuring that the proposed project design is in line with Green Computing, Inc.’s technical architecture and is technically feasible.
2.11.2 Project Analyst
The Project Analyst provides subject matter expertise in the areas of customer processes concerning IT project management, IT governance and procurement, documentation management, cost reporting, and scheduling. The Project Analyst will be responsible for managing the project's LCM process and analyzing vendor deliverables as well as cost and schedule performance. The Project Analyst also performs Quality Assurance and Risk Management duties according to respective plans.
2.11.3 Requirements Engineer/Analyst
The Requirements Engineers/Analysts will manage the requirements development lifecycle. Through planning, identification, documentation, traceability, change management, and stakeholder agreement the Requirements Engineers/Analysts ensure that project scope is properly elaborated and communicated. The Requirements Engineers/Analysts will also be responsible for ensuring requirements are properly implemented and tested by ensuring bi-directional traceability of requirements to need, test, and test results.
2.12 Responsibility
The PM will be responsible for ensuring all key milestones are met within cost, schedule and scope constraints of the project. The PM will work closely with management in Green Computing, Inc. to ensure that assigned resources are used effectively and efficiently, and the project is properly staffed. Additionally the PM will be responsible for:
All project communications;
Preparing a project plan that is realistic and acceptable to the customer and the organizational entities performing project work;
Controlling changes to project cost, schedule and scope;
Keeping executive management informed as to project status through established management reporting processes;
Ensuring that all project team members are kept informed of their responsibilities and of all changes affecting the project; and,
Measuring project cost and schedule performance and taking corrective actions as necessary.
2.13 Authority
To ensure this project meets its objectives, the PM is authorized:
Direct access to the sponsors and customers on all project matters.
Direct access to executive management on all project matters.
To represent the Green Computing, Inc. to non-Green Computing, Inc. elements as the point of contact for the project. This includes contractors, sub-contractors, vendors and Other Government Agencies.
To direct project personnel, monitor project activity, and request progress status reports.
To revise the project plan as needed with customer approval.
To negotiate with management regarding project personnel assignments.
To manage project funds within the limits established by Green Computing, Inc. guidelines.
To delegate responsibilities and authority to appropriate project personnel.
2.14 Decision Authority Oversight
The following Green Computing, Inc. entities approve project plans, establish concurrence of phase completion, and provide authority to proceed to the next life cycle phase:
The Project Review Board (PRB) is the authority for approving requirements, plans, and schedules at the Green Computing, Inc.’s Lifecycle Management Planning & Requirements Review. The PRB ensures IT programs and projects are properly planned and evaluated, have a comprehensive acquisition strategy, and have the appropriate level of funding and infrastructure support necessary to deliver the capabilities on time and within budget.
The ITGS manages the Green Computing, Inc. IT Governance (ITG) program on behalf of the CIO, with emphasis on the coordination of designated projects through the governance process. The ITGS establishes and enforces ITG processes and procedures, and identifies working groups to assist PMs in obtaining life cycle phase gate approvals and in developing alternatives and recommendations to resolve project performance breaches.
The Enterprise Requirements Assessment Unit (ERAU) performs independent project reviews in support of life cycle control gates. ERAU looks at project organization in terms of cost, schedule, scope, and risk. Using an Earned Value Analysis approach, ERAU makes an independent assessment of project progress. It evaluates variances from the project plan in order to provide management with an objective view of project “health” and makes recommendations for change when appropriate.
The Technology & Development Board (TDDB) is the authority for approving detailed designs, user testing, and transition of the project to the operational environment at the Green Computing, Inc.’s Lifecycle Management Final Design, Test Readiness, and Operational Acceptance Reviews.
The project Sponsor and the Environmental Technologies Program Vice President of Operations have the authority to engage the PM to influence key events or major milestones.
3.0 DATABASE AND DATA WAREHOUSING DESIGN
3.1 Why do we need Relational Databases and Data Warehouses?
As stated by Marcus Kwan in his article titled “Big Data: Manage It, Don’t Drown In It”, “making decisions based on too much information that is not properly managed and classified can be just as dangerous as making decisions based on too little” data (sic). Walmart, for example, has achieved its powerful competitive advantage due to its ability to manage its data effectively and efficiently. This was achieved through a series of nested decisions that defined measurable and achievable deliverables (Chatterjee).
Many organizations prefer to manage their data using the Microsoft Office suite (i.e., Excel and Word). Although Microsoft Office is certainly a staple utility to be used in managing data, it is not sufficient to accommodate all the complexities that come with managing an international organization. These challenges include, but are not limited to:
Scalability needed as the volume of data increases exponentially;
Data access and distribution (security);
Latency vs. real time access via direct connectivity;
Robust analytical and visualization tools that enable customized: customer loyalty programs, advertising campaigns, and supplier relationships.
Proper data management should make it easier and more efficient for users, regardless of their role, to access the data they need to perform their job tasks.
According to Wienclaw, the most popular type of database is the relational database. In a relational database, data are stored in two-dimensional tables comprising rows and columns. The table (also called a relation or file) describes an entity. The rows are its records, and the columns are its fields or attributes. A relational database management system works with two data tables at the same time and relates the data through links (e.g., a common column or field).
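As a concrete illustration of the tables, rows, columns, and links described above, the following Python sketch builds two small hypothetical tables in SQLite and relates them through a shared customer_id column; the table names and contents are invented for illustration, not taken from either company's systems.

```python
import sqlite3

# Two hypothetical relations: Customer (the entity) and Orders,
# linked by the common customer_id column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE Orders (order_id INTEGER PRIMARY KEY, "
             "customer_id INTEGER, amount REAL)")
conn.execute("INSERT INTO Customer VALUES (1, 'Acme Co'), (2, 'Globex')")
conn.execute("INSERT INTO Orders VALUES (10, 1, 250.0), (11, 1, 75.5), (12, 2, 40.0)")

# The RDBMS works with both tables at once, relating rows
# through the shared column rather than duplicating customer data.
rows = conn.execute(
    "SELECT c.name, SUM(o.amount) FROM Customer c "
    "JOIN Orders o ON c.customer_id = o.customer_id "
    "GROUP BY c.name ORDER BY c.name"
).fetchall()
print(rows)  # [('Acme Co', 325.5), ('Globex', 40.0)]
```

Note how the query totals each customer's orders without the spreadsheet-style repetition of customer details in every order row.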
While an organization’s operational data base maintains current data, the data accumulated over time is stored in an ad hoc structure referred to as a data warehouse. Data warehouses help organizations understand and manage their activities (Neil).
Green Computing, Inc. has experienced a growth rate of 20% each year for the past five years, and with its merger with Merrifield Enterprises, similar growth is anticipated for the next five. As such, scalability is an important factor to take into consideration as the company makes these critical Information Technology (IT) decisions. According to Karen Kroll, in her "Catching and Managing New Data" article, "data processing and storage costs continue to decline, while analytical tools have become far more sophisticated." You will be facing a three-fold challenge:
1. Ensuring sufficient application speed and redundancy to prevent critical failures.
2. Applications that enable efficient data capture and storage. Access controls must be in place that enable role based sharing for and between those with the appropriate need to know. Analytical and visualization applications will enable informed decisions to be made proactively.
3. Desktop and mobile user interfaces.
Although these challenges are in many regards of equal importance, the Relational Database Management System (RDBMS) design is key to achieving the desired efficiencies for the proposed new infrastructure. Fundamental improvements you can expect by adopting an RDBMS approach include: improved data standards resulting in more consistent data; control of redundancy; better data security; enriched data integrity; an enriched ability to access and, where appropriate, share data; and an economy of scale achieved through a single system designed to satisfy your organization's requirements rather than many separate files.
These advantages do come at a cost. An RDBMS has higher overhead in comparison to the use of the Microsoft Office suite. The software, hardware, and personnel needed to program, operate, and maintain an RDBMS do not come cheap. However, the Return on Investment (ROI) will help your company achieve its desired growth.
As Solomon Antony conveyed in his paper titled “Database Models for Questionnaire-Based Surveys”,
“Relational database software systems have features that help the user create an easy-to-use interface for data entry, data editing, and data retrieval. They also provide interfaces for exporting portions of the data as text files or as electronic spreadsheet files. The advantages of designing a relational database prior to data collection and using it for recording and querying afterwards may not be apparent to business researchers.”
An example of the type of analysis and visualization that can be done using the entities defined in an RDBMS can be found in Figure 1 below:
Figure 1 - Example of Analysis done from an RDBMS
3.2 What Comes Next?
The first step that needs to be taken to design a RDMS is to gather requirements. Diagramming the current way data is managed into an Entity-Relationship (ER) Diagram will help identify the entities, attributes of those entities, and the relationships between these entities and attributes (see Figure 2). This visual depiction of the data is a useful way to begin to validate it against current business practices. By enabling key stakeholders from all user groups to provide feedback, corrections can be made. These same stakeholders should also be part of the effort to capture improvements to the current workflow that need to be made to achieve the desired new capabilities.
Figure 2 - E-R Diagram
The entities represented in the ER diagram are then converted to tables. Where relationships exist between the tables, their primary keys are complemented by foreign keys (the primary keys from the related tables). See Figure 3 for the tables that correlate with the ER model from Figure 2.
Finally, the tables and their respective attributes need to be normalized. There are three “normal” forms:
1. Any repeating elements or groups of elements are removed from a table and made into their own table. In the context of a nationwide restaurant company, a good example of this would be pulling the personnel information out of the sales table making it its own table.
2. If a table has a concatenated primary key, each column in the table that is not part of the primary key must depend upon the entire concatenated key for its existence. Similar to the first normal rule explained before, the step that needs to be taken to rectify violations of this rule is to move the part of the concatenated key that is not dependent into its own table. An example for this restaurant owner is separating the items sold from the order table and making those attributes their own table.
3. Finally, for the third normal form, there can be no dependencies on non-key attributes. Separating customer information into its own table would illustrate this for the restaurant company.
Figure 4 illustrates the final tables of this ER Diagram after normalization to the third normal form.
Figure 4 - Normalized Tables
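The normalization steps above can be sketched in a few lines of Python using the restaurant example; the employee and menu-item fields below are hypothetical, invented purely to show repeating groups being pulled into their own tables.

```python
# Denormalized sales rows: employee and item details repeat in every row,
# violating the normal forms discussed above.
sales = [
    {"order_id": 1, "employee": "Lee", "employee_phone": "555-0101",
     "item": "Burger", "item_price": 5.00},
    {"order_id": 1, "employee": "Lee", "employee_phone": "555-0101",
     "item": "Fries", "item_price": 2.50},
    {"order_id": 2, "employee": "Pat", "employee_phone": "555-0102",
     "item": "Burger", "item_price": 5.00},
]

# Normalization: pull the repeating personnel and item groups out into
# their own tables, keyed by employee name and item name here.
employees = {r["employee"]: {"phone": r["employee_phone"]} for r in sales}
items = {r["item"]: {"price": r["item_price"]} for r in sales}

# What remains is a slim order-line table holding only keys.
order_lines = [{"order_id": r["order_id"], "employee": r["employee"],
                "item": r["item"]} for r in sales]

print(len(employees), len(items), len(order_lines))  # 2 2 3
```

Each employee's phone number and each item's price now live in exactly one place, so a price change is a single update rather than an edit to every historical sales row.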
According to Mike Chapple, referential integrity is a database concept that ensures that relationships between tables remain consistent. When one table has a foreign key to another table, this concept ensures that you may not add a record to a table that contains the foreign key unless there is a corresponding record in the linked table. This ensures that changes made to the linked table are reflected in the primary table.
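Referential integrity enforcement can be demonstrated with SQLite and two hypothetical tables; note that SQLite only enforces foreign key constraints once the pragma shown below is enabled.

```python
import sqlite3

# Hypothetical parent/child tables to demonstrate referential integrity.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires this opt-in
conn.execute("CREATE TABLE Department (dept_id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE Employee (emp_id INTEGER PRIMARY KEY, "
             "dept_id INTEGER REFERENCES Department(dept_id))")
conn.execute("INSERT INTO Department VALUES (1)")
conn.execute("INSERT INTO Employee VALUES (100, 1)")  # OK: department 1 exists

rejected = False
try:
    # No Department row with dept_id 99, so the database must refuse this.
    conn.execute("INSERT INTO Employee VALUES (101, 99)")
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

The engine, not application code, guarantees that every Employee row points at a real Department row, which is exactly the consistency property described above.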
3.3 How will Data Flow?
Once the merger of Merrifield Enterprises into Green Computing, Inc. is complete, the standard operating procedures for data flow are diagrammed in Figures 5 and 6 below.
Figure 5 - Data Flow Diagram for Hiring Process
Figure 6 - Data Flow Diagram for Sale Transactions
To support some of the advanced analysis highlighted at the beginning of this report, as part of the technical support provided to merge the two organizations, all historical sales data (to include advertising, customer, sales person, product information, and transaction information) will be loaded into the data warehouse. This will be complemented by real-time Extraction, Transformation, and Loading (ETL) of the same scope of sales data into the same data warehouse. This is illustrated in Figure 7 below.
Figure 7 - Data Flow Diagram from Operational Database to Data Warehouse
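An ETL pass from the operational database to the warehouse might be sketched as follows; the record layout and field names are assumptions for illustration only, not Green Computing, Inc.'s actual schema.

```python
# Hypothetical operational records, with prices stored as strings
# the way a transactional front end might capture them.
operational_sales = [
    {"sale_id": 1, "product": "widget", "qty": 2, "unit_price": "9.99"},
    {"sale_id": 2, "product": "widget", "qty": 1, "unit_price": "9.99"},
    {"sale_id": 3, "product": "gadget", "qty": 5, "unit_price": "3.50"},
]

def extract(rows):
    """Extraction: pull raw records from the operational source."""
    return list(rows)

def transform(rows):
    """Transformation: cast types and derive the totals the warehouse expects."""
    return [{"sale_id": r["sale_id"], "product": r["product"],
             "total": r["qty"] * float(r["unit_price"])} for r in rows]

warehouse = []
def load(rows):
    """Loading: append the cleaned records to the warehouse table."""
    warehouse.extend(rows)

load(transform(extract(operational_sales)))
print(len(warehouse))  # 3
```

In a real-time ETL arrangement the same three steps would run continuously on each new transaction rather than as a batch.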
4.0 CLOUD TECHNOLOGY AND VIRTUALIZATION
4.1 What Exactly is the Cloud?
As many of you may have seen in the trailer for Cameron Diaz and Jason Segel's recent movie "Sex Tape", "Nobody understands the cloud—it's a f****** mystery".
In actuality, with the widespread propagation of the Internet, cloud computing provides the ability to access computing resources anytime, anywhere. According to the National Institute of Standards and Technology (NIST), there are four main deployment models:
Public clouds are hosted off-site and owned by a third-party company that sells cloud services to the public in a multi-tenant fashion. Public clouds are usually available to all members of the public or to large groups within an industry.
Community clouds are designed for use by a specific community of users that have shared concerns. They may be owned and operated by one or more organizations in that community.
Private clouds are provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). They may be owned, managed, and operated by the organization, a third party, or some combination of them, and may exist on or off premises.
Hybrid clouds are a combination of clouds of different types. The individual clouds remain unique entities but are bound together by standardization or proprietary technology that allows data and application portability.
As Green Computing, Inc. depends on a CSP to host its digital market place, it needs a hybrid cloud solution. There are facets of its business that need to remain private, and a hybrid solution will provide the security needed to support both of these competing requirements.
According to the research done by Lacity and Reynolds, there are common challenges that small and medium sized organizations like Green Computing, Inc. face when considering using a CSP. They also identified ten practices used by subject matter experts to overcome these challenges. This information can be found in Figure 8 below.
Green Computing, Inc. is seeking stakeholder buy-in by educating stakeholders about security in the cloud. As Green Computing, Inc. used cloud technology previous to the merger, gradual adoption can be used as Merrifield’s operations transition over into its operations.
Green Computing, Inc. has found a cloud computing service provider that meets its needs (hosting its CRM, HR and sales systems). As a result, it has been able to reduce the number of personnel in its IT department to cover desktop support alone. The cost of a CSP is significantly lower for Green Computing, Inc. than the hardware, software and personnel services related to owning and managing its own infrastructure. Service Level Agreements (SLAs) with this service provider have made Green Computing, Inc. comfortable with relinquishing this control.
Lacity and Reynolds also highlighted how the small and mid-sized companies they studied were satisfied with the attention they received from their CSPs. One example echoed an experience similar to one that Green Computing, Inc. had. In particular, when a desire was expressed to upgrade e-mail environments, the provider helped Green Computing, Inc. identify a solution that both met its needs and reduced cost. It was through experiences like this that Green Computing, Inc. established a mutually beneficial rapport with the CSP.
Green Computing, Inc.’s CIO, Ben Doe was hired by the organization because of his previous experience with cloud computing. It was through his leadership that the company chose to outsource its infrastructure. Although Green Computing, Inc.’s IT department is only responsible for desktop support, the CIO deliberately hires personnel with experience and certifications in a wider array of specialties like cloud computing.
According to the research done by Nadjaran Toosi, inter-cloud and cloud federation are new standards in the cloud computing industry that create an interconnected, multiple-provider "cloud of clouds" infrastructure, mimicking the Internet's "network of networks." For Green Computing, Inc.'s requirement to have a viable web site 24/7, these new standards provide additional cost savings when comparing cloud computing to owning and managing its own infrastructure with disaster recovery. Furthermore, these standards enable Green Computing, Inc. to be less dependent upon a single service provider.
4.2 How is the Cloud Used?
Businesses use the cloud to store large amounts of data instead of using expensive or limited server space. The cloud is a great option to back up information that is outdated or not of a sensitive nature, but still needs to be retained. It can be used to store very large files, such as videos, graphics, and photos that would take up a lot of storage space on a PC or server. When users need the files, they simply access the cloud and get the files they need.
Most cloud storage is sold as a measured service. An example of elasticity is if the user suddenly needed a lot more storage and was able to increase their cloud storage space from 15GB to 100GB very quickly. A decrease in the purchased resources, like discontinuing the 100GB plan once the user no longer needs it, is also an example of elasticity.
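The measured-service billing implied by elasticity can be sketched as follows; the per-gigabyte rate and the usage figures are invented for illustration and do not reflect any actual CSP's pricing.

```python
# Assumed metered rate, USD per GB per month (illustrative only).
RATE_PER_GB_MONTH = 0.10

def monthly_cost(gb_by_month):
    """Bill each month for exactly the storage provisioned that month."""
    return [round(gb * RATE_PER_GB_MONTH, 2) for gb in gb_by_month]

# Elasticity: scale from 15 GB up to 100 GB for a burst, then back down,
# paying only for what is provisioned in each period.
usage = [15, 100, 100, 15]
print(monthly_cost(usage))  # [1.5, 10.0, 10.0, 1.5]
```

The point of the sketch is that cost tracks provisioned capacity in both directions, unlike owned hardware, which must be sized for the peak.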
Thousands of computer and network applications reside in the cloud. Instead of installing an entire suite of software, a user only needs an Internet connection to access applications. If a user needs a project management tool for one project, they don’t have to buy it and download it to their PC. Instead, users can connect to a free or paid application and access it from the cloud. Green Computing, Inc. uses a CSP to host its CRM, HR and sales databases.
Enterprises may access the cloud for virtualization to bring in extra processing power to handle peak loads during different times of the day. The use of a virtual service spares these businesses from having to buy new machines. Having its systems hosted in the cloud allows Green Computing, Inc. to apply this extra processing power to data warehousing, mining its seemingly disparate datasets for sales patterns that help it keep its competitive advantage.
4.3 What is Virtualization in Relation to the Cloud?
In the final version of its Full Virtualization Security Guidelines, dated February 2, 2011, NIST explained that "full virtualization" provides a complete simulation of the underlying computer hardware, enabling software to run without any modification. Because it helps maximize the use and flexibility of computing resources (multiple operating systems can run simultaneously on the same hardware), full virtualization is considered a key technology for cloud computing, but it introduces new issues for IT security.
For cloud computing systems in particular, full virtualization can increase operational efficiency because it can optimize computer workloads and adjust the number of servers in use to match demand, thereby conserving energy and information technology resources. The NIST guide describes security concerns associated with full virtualization technologies for server and desktop virtualization and provides recommendations for addressing them. Most existing recommended security practices also apply in virtual environments, and the practices described in the guide build on, and assume the implementation of, practices described in other NIST computer security publications.
Appendix A provides an illustration of how the use of a hybrid cloud solution with virtualization will benefit Green Computing, Inc.
5.0 NETWORK INFRASTRUCTURE AND SECURITY
After the merger, Green Computing, Inc. will employ 65 people. The facility in which these people will work is a three-story office building. Every department has a segmented Local Area Network (LAN) that is connected to the corporate Wide Area Network (WAN) via a private cloud. Figure 10 in Appendix A shows a high-level overview of this connectivity.
To achieve the goals listed above, these employees will need to be able to access the Intranet via the Internet, share digital files and multi-function printers. The Sales Department, which is staffed with 20 people, will need to be able to access the company’s Intranet via the Internet remotely using laptops, tablets and/or smart phones.
5.1 Logical and Physical Network Topology
The diagrams and documentation generated during logical modeling are used to determine whether the requirements of the business have been completely gathered or whether more work is required before physical modeling commences.
See Figure 11 in Appendix A for the Logical Model Diagram for Green Computing, Inc. This illustrates the use of cloud services for private segmented LANs that connect computers, portable devices and all related peripheral equipment. These LANs all connect to the WAN.
Next a physical model is built that describes how the network will be constructed. As seen in Figure 12 in Appendix A, physical modeling involves the actual design of a network according to the requirements that were established during the logical modeling.
The network for the three-story building that Green Computing, Inc. will be occupying will need to be hierarchical in its design.
The core layer should incorporate redundant links that share the load between equal cost paths. It should be prepared to provide immediate response if a link failure occurs and be able to adapt quickly to change.
Although Enhanced Interior Gateway Routing Protocol (EIGRP), Open Shortest Path First (OSPF), and Intermediate System to Intermediate System (IS-IS) all meet these needs, OSPF imposes a strict hierarchical design that requires its areas to map to the Internet Protocol (IP) addressing plan. This is difficult to accomplish, so OSPF would not be an ideal protocol for this network.
Both EIGRP and IS-IS are more flexible in regard to the hierarchical structure and IP addressing design. However, as Green Computing, Inc. uses Cisco networking gear, and EIGRP is one of Cisco’s proprietary protocols, it would be a good choice for the protocol to use for the network.
The distribution layer represents the connection point between the core and the access layers. As the network will need to communicate with business partners that use RIP and OSPF protocols, its distribution layer will have the job of redistributing between these protocols (in the access layer) and the EIGRP protocol used in the core layer.
The access layer will provide these business partners with access to network resources for local and remote users. IS-IS is not appropriate for the access layer because it demands more specialized knowledge to configure, and OSPF has high memory and processing power requirements. As Green Computing, Inc. prefers Cisco, EIGRP is an appropriate protocol choice for this layer as well.
When a network involves more than one routing protocol, redistribution is needed. Network administrators configure redistribution by specifying which protocols should insert routing information into other protocols’ routing tables. Although one-way redistributions are used the most, Green Computing, Inc. has the need for two-way configurations to enable the sharing of information bi-directionally with its business partners. It is important that these redistribution configurations use filters to maintain security, availability and performance.
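The two-way redistribution-with-filtering idea can be sketched in Python. This is a conceptual model, not Cisco IOS configuration: the prefixes, next hops, and deny list are illustrative assumptions, not Green Computing, Inc.'s actual addressing plan.

```python
# Sketch of two-way route redistribution with a deny-list filter: routes
# learned from a partner protocol (e.g., RIP or OSPF) are inserted into
# the EIGRP table only if they pass the filter, and vice versa.
# All prefixes and next hops below are illustrative.
DENY_PREFIXES = {"10.99.0.0/16"}  # e.g., an internal management network

def redistribute(source_routes, target_table, deny=DENY_PREFIXES):
    """Copy routes from one protocol's table into another's, applying a
    deny-list filter to preserve security, availability, and performance."""
    for prefix, next_hop in source_routes.items():
        if prefix not in deny and prefix not in target_table:
            target_table[prefix] = next_hop
    return target_table

ospf_routes = {"192.168.10.0/24": "10.1.1.2", "10.99.0.0/16": "10.1.1.2"}
eigrp_routes = {"172.16.0.0/16": "10.1.2.1"}

# Two-way (mutual) redistribution, as the business-partner links require.
redistribute(ospf_routes, eigrp_routes)
redistribute(eigrp_routes, ospf_routes)
print(sorted(eigrp_routes))  # the denied prefix never crosses over
```

The filter is what keeps the internal prefix from leaking into the partner-facing table during mutual redistribution, which is the security concern the paragraph above raises.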
The network can leverage Cisco’s IOS software. This software has the ability to connect Green Computing, Inc.’s bridged networks to its routed networks such that it can access the Internet.
When Green Computing, Inc. increased in size from 75 employees to 300 as a result of its merger with Merrifield Enterprises, a second core layer server was added, as were two additional distribution layer servers. The access layer followed the ratios defined when the network was initially stood up. The addressing and naming model held up to this level of scalability.
5.2 Network Vulnerabilities
Part of the analysis done when preparing network diagrams is the identification of issues that might hinder scalability, availability, performance, security, manageability, usability, adaptability, and affordability.
Understanding the impact that cabling and wiring will have helps plan for enhancements and identification of other possible problems. Both the type of cabling and the distance between network segments need to be thoroughly documented.
Another important facet to the analysis performed is the identification of architectural and environmental constraints including but not limited to: air conditioning, heating, ventilation, power, electromagnetic interference, doors that can lock, space for cabling conduits, patch panels, equipment racks and work area for technicians to install and troubleshoot equipment.
Developing a baseline of network performance for each segment of the network allows issues to be identified and mitigated. Mean Time Between Failure (MTBF) and Mean Time to Repair (MTTR) statistics for past/current performance are used to analyze network availability. These baselines are also useful metrics to measure performance against (in this case) post-merger for improvements or degradation.
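The standard relationship between these two statistics and availability, assuming steady-state operation, is availability = MTBF / (MTBF + MTTR). A minimal sketch, with hypothetical figures:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability computed from Mean Time Between Failure
    and Mean Time To Repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical network segment: fails on average every 2,000 hours and
# takes 4 hours to repair.
a = availability(2000, 4)
print(f"{a:.4%}")  # 99.8004%
```

Comparing this figure for each segment before and after the merger is one way to quantify the improvement or degradation the paragraph above mentions.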
Network utilization (bandwidth used) statistics are used to identify peaks in traffic so the capacity requirements of the devices and network segments can be accurately evaluated, and adjusted as needed. Network accuracy is measured by monitoring Bit Error Rate (BER).
Analyzing network efficiency determines whether a need exists to adjust the Maximum Transmission Unit (MTU) on router interfaces.
Table 1 below illustrates the SANS Institute's seven-layer security model, which is designed to protect a network infrastructure.
Table 1 - SANS Institute's Seven Layer Security Model

Layer 1 - Physical
Primary reason for layer: Preventing attackers from accessing a facility to gain data on servers, computers, and other devices
Characteristics: Breaking point for the network; easiest layer to secure
Enabled by: Site design, access control devices, alarms, cameras

Layer 2 - VLAN
Primary reason for layer: Used to segment networks, primarily to group common hosts together for security
Characteristics: Used to differentiate public and private networks; used to find exploited hosts
Enabled by: Implementing ACLs

Layer 3 - ACL
Primary reason for layer: Creating and maintaining Access Control Lists
Characteristics: Allow and deny access between hosts on different networks
Enabled by: Routers and firewalls

Layer 4 - Software
Primary reason for layer: Keeping software up to date with patches and upgrades to mitigate vulnerabilities
Characteristics: Reduces the number of exploits and vulnerabilities on specific hosts and applications
Enabled by: Maintaining server-side software (HTTP(S))

Layer 5 - User
Primary reason for layer: User training and knowledge of network security
Characteristics: Train users on which applications should be avoided; train users on how their systems work
Enabled by: Training that prevents compromise of a user account that allows domain access

Layer 6 - Administrative
Primary reason for layer: Administrator training and knowledge of network security, plus the ability to train new employees
Characteristics: Ensures that issues are quickly resolved; if the administrative layer is compromised, it is likely that administrator accounts are compromised

Layer 7 - IT Department
Primary reason for layer: Comprised of network security, technicians, architects, and support professionals
Characteristics: Makes the network operational and maintains the network at all other layers; if the IT layer fails, an attacker will have system-level access to all resources and devices on the network (routers, firewalls, proxies, and VPN)
5.3 Security Policy
Green Computing, Inc. has designated its CIO Ben Doe as the person responsible for ensuring the policies and procedures outlined in the System Security Plan (SSP) are adhered to.
The System Owner (SO) is Natalie Black, the Vice President of Green Computing, Inc’s Operations. The SO is responsible for ensuring the necessary resources (e.g., equipment, personnel, finances, etc.) are allocated to properly maintain the security posture, procedures, and processes outlined in the SSP.
The Information System Security Officer (ISSO) for Green Computing, Inc. is Kevin Reid. The ISSO is responsible for operational security oversight of the network, updating the SSP, and ensuring that the security requirements, procedures and policies for the network are being adhered to.
As the infrastructure component that supports Green Computing, Inc., the network does not process information. Rather, it provides the transport mechanism for data that needs to be shared, transferred, and/or distributed across the organization. The network consists of core components such as routers and switches. It also includes the file servers that hold the data and the domain controllers that manage access to the system.
These network components and servers are housed in Green Computing, Inc.’s server room in its primary facility. Physical security of this room is managed by the company’s Security Department. Administrator access is only through console keyboard (remote administration is not supported).
The network components are configured to failover both internally and externally to the contracted Cloud Service Provider (CSP). This service provider is required to comply with the Security Department’s physical security standard operating procedures. Quarterly audits as well as periodic random on-sites are done at the CSP site.
Both of the data centers are required to have 630 tons of air conditioning (water condenser based). Two-cell cooling towers (with back-ups) must be on the roof to exchange the heat from the water. The water is then pumped to 28 Liebert coolers in the data centers. Each cooler has two compressors and the units are grouped so that if one unit fails there is a back-up in the area.
Under-floor remote water detectors are located throughout the data center. These sensors are tested periodically during routine preventive maintenance sessions. Thermostats and humidistats are strategically located to monitor the air temperature throughout the facility. When temperatures exceed 60 degrees Fahrenheit, a thermal alarm sounds.
A sprinkler system covers the entire data center, including the adjacent UPS and battery rooms. Each sprinkler head is individually controlled and does not release until the sprinkler head reaches 145 degrees Fahrenheit. It will then deactivate when the temperature on the head drops below the activation temperature.
Two substations within Green Computing, Inc.’s building feed electricity to the data center. Each receives power from four feeders, and each substation feeds one of the two Uninterruptible Power Supply (UPS) systems. Each UPS system consists of two 1000 KVA modules, providing full redundancy. The two UPS systems are supported by batteries that will provide power for 20 minutes at full load if utility power goes out. They are further backed by two 1875 KVA generators located on the roof, which are connected to the UPS systems and start automatically if utility power goes out. The generators have the capacity to support the UPS systems, air conditioning, and lighting for the building.
A Technical Configuration Change Board (TCCB) comprised of personnel with strong technical and decision making capabilities oversees all technical changes.
According to Anand’s report, which details key facets of the Sarbanes-Oxley (SOX) Act, Section 404 of the Act requires maintaining the confidentiality of information so that only those who should have access to certain information, be it financial or operational, actually do. Likewise, restricting write/change access to information without supervisory approval protects the integrity of the underlying data.
SOX, like many regulations, stipulates a minimum period of time that organizations and their auditors must retain audit-relevant documents in the event of litigation or future audits (Section 802 mandates seven-year record retention). It is important not only to ensure that these documents have been retained but also that they have not been tampered with.
The core information security tenet at work here is disaster recovery planning, which helps ensure that data and information that may be needed to satisfy the regulatory requirements of SOX are available when required.
Green Computing, Inc. has prepared a Federal Information Processing Standards (FIPS) 199 Categorization Report for its network and information systems. This will ensure they are in compliance with Federal Risk and Authorization Management Program (FedRAMP) requirements. This compliance will enable the company to pursue contracts with Federal Government agencies.
NIST Special Publication 800-60 Volume 2 Revision 1 defines the three security objectives for information and information systems. FIPS 199 defines three levels of potential impact on organizations or individuals should there be a breach of security of these objectives. The potential impact to these objectives is defined in Figure 9:
Figure 9 - FIPS Security Objective Matrix
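Under FIPS 199, a system's overall security category is commonly taken as the high-water mark of the impact levels assigned to the three objectives. A minimal sketch of that rule (the example impact assignments are illustrative, not Green Computing, Inc.'s actual categorization):

```python
# FIPS 199 high-water mark: the overall security category of a system is
# the highest impact level assigned across confidentiality, integrity,
# and availability. Levels follow FIPS 199's low/moderate/high scale.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def security_category(confidentiality, integrity, availability):
    """Return the high-water mark of the three objectives' impact levels."""
    return max(confidentiality, integrity, availability,
               key=lambda lvl: LEVELS[lvl])

# Illustrative assignment consistent with a moderate-level network.
print(security_category("moderate", "moderate", "low"))  # moderate
```

Note that a single high-impact objective is enough to pull the whole system up to the high category, which is why the objectives are assessed individually first.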
In its report, Green Computing, Inc. reflected that its network and the data traversing it are categorized at a moderate level. To prevent significant degradation in mission capability, Green Computing, Inc. has taken the following steps to ensure its network, systems, and data are secure:
Yearly mandatory on-line training covering:
Information Security
Continuity of Operations
Access controls
Account request procedures
Auditing procedures
Active Directory group controllers
Unsuccessful logon attempts
Account lock-out handling
User id and password for network and system access in the office
User id, password and smart cards for network and system access outside the office
Account termination procedures
Network base controls
Well segmented network preventing public access from sensitive data
Screened sub-networks
Demilitarized zone
Controlled wireless access points
Full redundancy
Content of data controls
Malware detection
Inappropriate data movement
Content blocking tools
Network monitoring
Antispam tools
Antivirus tools
URL blacklisting
System hardening
Patch management
Configuration management
Active hardware scanning tools
Inventory of all devices connected to the network
Continual monitoring with alerts
Enterprise License Agreement (ELA)
Identify, group and prioritize all systems on the network
Vulnerability management plan
Identification of vulnerability
Automated vulnerability scans
Vulnerability assessments
Prioritizing remediation based on significance of the system(s) it affects
Remediation testing in non-production environment
Data protection
Inventory organization’s information assets
Categorize each asset to its value and sensitivity based on the organization’s need to protect its confidentiality, integrity, and availability
Classify each asset as being:
Public information
For internal use only
Confidential information
Secret information
Determine whether data is “at rest” or “in motion”
Secure key management
Encryption
Digital certificates
Monitoring/Auditing data in motion to prevent data leakage
As it traverses network
At point of usage
Perimeter
Secure application development
Design phase
Development phase
Source code analysis
Preliminary security assessments
Final security assessment
O&M code change control process
System back-up
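The asset-classification step in the data-protection controls above can be sketched as a simple mapping from a sensitivity score to one of the four handling classes listed. The four class labels come from the list above; the scoring scheme and example assets are hypothetical assumptions.

```python
# Hypothetical sketch of the data-protection classification step: each
# asset is scored for sensitivity (reflecting the organization's need to
# protect its confidentiality, integrity, and availability), and the
# score maps to one of the four classes listed above. The 0-3 scoring
# scheme and example assets are illustrative assumptions.
CLASSES = [
    "Public information",
    "For internal use only",
    "Confidential information",
    "Secret information",
]

def classify(sensitivity_score):
    """Map a 0-3 sensitivity score to a handling class."""
    if not 0 <= sensitivity_score <= 3:
        raise ValueError("sensitivity score must be between 0 and 3")
    return CLASSES[sensitivity_score]

assets = {"press releases": 0, "org charts": 1,
          "customer records": 2, "merger terms": 3}
for name, score in assets.items():
    print(name, "->", classify(score))
```

In practice the score would come from the inventory and categorization steps listed above, and the resulting class would drive the at-rest/in-motion protections that follow.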
6.0 FINDINGS
A thorough analysis of the challenges Green Computing, Inc. faced as a result of its merger with Merrifield Enterprises led to the following findings:
The pre-existing relationships that Green Computing, Inc. has with its CSP and data warehouse service providers are scalable enough to meet the additional number of users and larger volume of transactions. Based on the current contract and SLA, the cost of this expansion will be a recurring monthly charge of less than $5,000. As anticipated growth in monthly sales is ten times that amount (i.e., $50,000), this is more than an acceptable ROI.
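The finding's arithmetic can be checked with a quick sketch, using the monthly figures stated above (the $5,000 cost is treated as its upper bound):

```python
# Quick check of the finding: an expansion cost of up to $5,000 per month
# against anticipated monthly sales growth of $50,000.
monthly_cost = 5_000   # upper bound on the recurring expansion cost
monthly_gain = 50_000  # anticipated growth in monthly sales

roi = (monthly_gain - monthly_cost) / monthly_cost
print(f"ROI: {roi:.0%}")  # ROI: 900%
```

Even at the cost ceiling, each dollar spent on the expansion is projected to return nine dollars of net gain, supporting the finding's conclusion.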
Green Computing, Inc.’s IT department should be expanded by hiring two additional technicians for desktop support for the new employees.
7.0 GLOSSARY
Acronym - Meaning
BER - Bit Error Rate
C&A - Certification and Accreditation
CIO - Chief Information Officer
CRM - Customer Relationship Management
CSF - Critical Success Factors
CSP - Cloud Service Provider
DRM - Digital Rights Management
EIGRP - Enhanced Interior Gateway Routing Protocol
ER - Entity Relationship
ERAU - Enterprise Requirements Assessment Unit
ETL - Extraction, Transformation, and Loading
FD - Finance Division
FedRAMP - Federal Risk and Authorization Management Program
FIPS - Federal Information Processing Standards
HR - Human Resources
IMB - Investment Management Board
INC - Incorporated
IP - Internet Protocol
IPE - Interprofessional Education
IS-IS - Intermediate System to Intermediate System
ISSO - Information System Security Officer
IT - Information Technology
ITG - IT Governance
ITGS - Information Technology Governance Secretariat
LCM - Lifecycle Management
MTBF - Mean Time Between Failure
MTTR - Mean Time To Repair
MTU - Maximum Transmission Unit
O&M - Operations and Maintenance
OSPF - Open Shortest Path First
PM - Project Manager
PRB - Project Review Board
RDMS - Relational Database Management System
ROI - Return On Investment
SDL - System Development Lead
SLA - Service Level Agreement
SO - System Owner
SOX - Sarbanes-Oxley Act
SSP - System Security Plan
TCCB - Technical Configuration Change Board
TDDB - Technology Design and Development Board
UPS - Uninterruptible Power Supply
References
Antony, S. (2012). DATABASE MODELS FOR QUESTIONNAIRE-BASED SURVEYS. International Journal Of Business, Marketing, & Decision Science, 5(1), 121-137.
Anand, S. (2008). Information Security Implications of Sarbanes-Oxley. Information Security Journal: A Global Perspective, 17(2), 75-79. doi:10.1080/19393550801953372
Briggs, L. L. (2013). BI Case Study. Business Intelligence Journal, 18(1), 33-35.
Chapple, Mark. Referential Integrity. Retrieved from http://databases.about.com/cs/administration/g/refintegrity.htm on August 10, 2014.
Chatterjee, S. (2013). Simple Rules for Designing Business Models. California Management Review, 55(2), 97-124.
Dean, T. (2010). CIS175: Network+ guide to networks: 2009 custom edition (5th ed.). Boston: Course Technology, Cengage Learning.
Drelichowski, L. L., Bobek, S. S., Bojar, W. W., Chęsy, W. W., Cilski, B. B., Czechumski, W. W., & ... Wawrzyniak, K. K. (2012). METHODOLOGICAL ASPECTS and CASE STUDIES of BUSINESS INTELLIGENCE APPLICATIONS TOOLS in KNOWLEDGE MANAGEMENT. Studia I Materialy Polskiego Stowarzyszenia Zarzadzania Wiedza / Studies & Proceedings Polish Association For Knowledge Management, (59), 3-227.
Englander, I. (2009). The architecture of computer hardware, systems software and networking (4th ed.). Danvers, MA: John Wiley & Sons, Inc.
Keith, M., Demirkan, H., & Goul, M. (2013). Service-Oriented Methodology for Systems Development. Journal Of Management Information Systems, 30(1), 227-260. doi:10.2753/MIS0742-1222300107
Kroll, K. (2013). Catching and Managing New Data. Compliance Week, 10(110), 54-55.
Kwan, M. (2012). Big data: Manage it, don't drown in it. Futures: News, Analysis & Strategies for Futures, Options & Derivatives Traders, 41(7), 32-34.
Lacity, M. C., & Reynolds, P. (2014). Cloud Services Practices for Small and Medium-Sized Enterprises. MIS Quarterly Executive, 13(1), 31-44.
Lane, D. (2011). The chief information officer's body of knowledge. Hoboken, NJ: John Wiley & Sons, Inc.
LANGO, J. (2014). Toward Software- Defined SLAs. Communications Of The ACM, 57(1), 54-60. doi:10.1145/2541883.2541894
Lowe, D. (2011). Networking all-in-one for dummies. Hoboken, NJ: Wiley.
Liang, J. (2012). Learning in troubleshooting of automotive braking system: A project-based teamwork approach. British Journal Of Educational Technology, 43(2), 331-352. doi:10.1111/j.1467-8535.2011.01182.x
Melvin, V. C. (2014). INFORMATION TECHNOLOGY, SSA Needs to Address Limitations in Management Controls and Human Capital Planning to Support Modernization Efforts. GAO Reports, 1-60.
Nadjaran Toosi, A., Calheiros, R. N., & Buyya, R. (2014). Interconnected cloud computing environments: Challenges, taxonomy, and survey. ACM Computing Surveys, 47(1), 7:1-7:47.
Nanavati, M., Colp, P., Aiello, B., & Warfield, A. (2014). Cloud security: A gathering storm. Communications of the ACM, 57(5), 70-79. doi:10.1145/2593686
Neil, C. G., De Vincenzi, M. E., & Pons, C. F. (2014). Design method for a Historical Data Warehouse, explicit valid time in multidimensional models. INGENIARE - Revista Chilena De Ingeniería, 22(2), 218-232.
Oppenheimer, P. (2011). Top - down network design (3rd ed.). Indianapolis, IN: Pearson Education, Cisco Press.
Ricardo, C. M. (2004). Database illuminated. (1 ed.). Sudbury, MA: Jones and Bartlett Publishers.
Phillips, J., & Phillips, P. (2013). Measuring the Return on Investment on Green Projects and Sustainability Efforts. Performance Improvement, 52(4), 38-52. doi:10.1002/pfi.21342
Powner, D. A. (2014). INFORMATION TECHNOLOGY: Leveraging Best Practices and Reform Initiatives Can Help Agencies Better Manage Investments. GAO Reports, 1.
RAFFAELLI, R., & GLYNN, M. (2014). TURNKEY OR TAILORED? RELATIONAL PLURALISM, INSTITUTIONAL COMPLEXITY, AND THE ORGANIZATIONAL ADOPTION OF MORE OR LESS CUSTOMIZED PRACTICES. Academy Of Management Journal, 57(2), 541. doi:10.5465/amj.2011.1000
Richardson, A. G. (2013). 6 Rookie Mistakes. PM Network, 27(3), 44-49.
Robichaud, P., Saari, M., Burnham, E., Omar, S., Wray, R., Baker, R., & Matlow, A. (2012). The value of a quality improvement project in promoting interprofessional collaboration. Journal Of Interprofessional Care, 26(2), 158-160. doi:10.3109/13561820.2011.637648
Schroeder, H. (2013). POST PROJECT ASSESSMENT: AN ART AND SCIENCE APPROACH. Academy Of Information & Management Sciences Journal, 16(1), 37-45.
Schwalbe, K. (2011). Information technology project management. (6th revised ed.). Boston: Course Technology-Cengage.
The Teardown. (2013). Engineering & Technology (17509637), 8(1), 88-89.
Unterkalmsteiner, M., Feldt, R., & Gorschek, T. (2014). A taxonomy for requirements engineering and software test alignment. ACM Transactions on Software Engineering & Methodology, 23(2), 16:1. doi:10.1145/2523088
Varajão, J., Dominguez, C., Ribeiro, P., & Paiva, A. (2014). CRITICAL SUCCESS ASPECTS IN PROJECT MANAGEMENT: SIMILARITIES AND DIFFERENCES BETWEEN THE CONSTRUCTION AND THE SOFTWARE INDUSTRY. Tehnicki Vjesnik / Technical Gazette, 21(3), 583-589.
Walters, S. (2013). Beyond Listening. Business Intelligence Journal, 18(1), 13-17.
Wienclaw, R. A. (2014). Database Management. Database Management -- Research Starters Business, 1-6.
http://www.developer.com/tech/article.php/641521/Logical-Versus-Physical-Database-Modeling.htm
http://www.nist.gov/itl/csd/virtual-020111.cfm
http://www.sans.org/reading-room/whitepapers/protocols/applying-osi-layer-network-model-information-security-1309
Appendix A
Figure 10 - Network Diagram
Figure 11 - Logical Network Model
Figure 12 - Physical Network Model