What information can observation as an assessment method give which cannot be given by testing

Assessment methods

Leighton Johnson, in Security Controls Evaluation, Testing, and Assessment Handbook (Second Edition), 2020

Evaluation methods and their attributes

Both of the primary testing documents from NIST, SP 800-53A and SP 800-115, address various methods and techniques for different kinds of testing activities. SP 800-53A states it as follows:

Assessment methods have a set of associated attributes, depth and coverage, which help define the level of effort for the assessment. These attributes are hierarchical in nature, providing the means to define the rigor and scope of the assessment for the increased assurances that may be needed for some information systems.

(a) The depth attribute addresses the rigor of and level of detail in the examination, interview, and testing processes. Values for the depth attribute include basic, focused, and comprehensive. The coverage attribute addresses the scope or breadth of the examination, interview, and testing processes, including the number and type of specifications, mechanisms, and activities to be examined or tested, and the number and types of individuals to be interviewed.

(b) Similar to the depth attribute, values for the coverage attribute include basic, focused, and comprehensive. The appropriate depth and coverage attribute values for a particular assessment method are based on the assurance requirements specified by the organization.

These attributes are further explained in the supplemental NIST guidance for assessments for each family of controls found on the NIST site. The supplemental guidance is provided to assist the assessment activity for each family of controls as they are reviewed. The action statements provided in the assessment test plans are written using the basic (i.e., foundation) level of assessment depth and coverage attribute values. An increased rigor and/or scope of the action statement can be expressed by replacing the basic level of assessment depth and coverage attribute value with other defined values. The potential attribute values for varying depth and coverage for the assessment methods (Examine, Interview, and Test) in the action statements are as follows:

“Coverage Attribute Values:

Basic Sample attribute value is used to indicate a ‘basic’ level of scope or breadth of coverage; that is, a representative sample of assessment objects (by type and number within type) to provide a level of coverage necessary for determining if the control meets the ‘basic’ coverage criteria listed below.

Focused Sample attribute value is available for use to indicate a ‘focused’ level of scope or breadth of coverage; that is, an extended basic sample that includes other specific assessment objects important to achieving the assessment objective, to provide a level of coverage necessary for determining if the control meets the ‘focused’ coverage criteria listed below.

Sufficiently Large Sample attribute value is available for use to indicate a ‘comprehensive’ level of scope or breadth of coverage; that is, an extended focused sample to include more assessment objects to provide a level of coverage necessary for determining if the control meets the ‘comprehensive’ coverage criteria listed below.

Depth Attribute Values:

Specific action verbs identified in SP 800-53A, Appendix D, in the definition of the examine method are employed in the application of the Action Steps of the Assessment Cases to indicate level of rigor for examining the different types of assessment objects (i.e., documentation, activities and mechanisms) as follows:

Examine documentation rigor—‘reading’:

Review attribute value for reading documentation is used for the ‘basic’ level of rigor and level of detail; that is, a high-level examination of documentation looking for required content and for any obvious errors, omissions, or inconsistencies.

Study attribute value for reading documentation is available for use for the ‘focused’ level of rigor and level of detail; that is, an examination of documentation that includes the intent of ‘review’ and adds a more in-depth examination for greater evidence to support a determination of whether the document has the required content and is free of obvious errors, omissions, and inconsistencies.

Analyze attribute value for reading documentation is available for use for the ‘comprehensive’ level of rigor and level of detail; that is, an examination of documentation that includes the intent of both ‘review’ and ‘study’; adding a thorough and detailed analysis for significant grounds for confidence in the determination of whether the required content is present and the document is correct, complete, and consistent.

Examine activities and mechanisms rigor – ‘watching’:

Observe attribute value for watching activities and mechanisms is used for the ‘basic’ level of rigor and level of detail; that is, watching the execution of an activity or process or looking directly at a mechanism (as opposed to reading documentation produced by someone other than the assessor about that mechanism) for the purpose of seeing whether the activity or mechanism appears to operate as intended (or in the case of a mechanism, perhaps is configured as intended) and whether there are any obvious errors, omissions, or inconsistencies in the operation or configuration.

Inspect attribute value for watching activities and mechanisms is available for use for the ‘focused’ level of rigor and level of detail; that is, adding to the watching associated with ‘observe’ an active investigation to gain further grounds for confidence in the determination of whether the activity or mechanism is operating as intended and is free of errors, omissions, or inconsistencies in the operation or configuration.

Analyze attribute value for watching activities and mechanisms is available for use for the ‘comprehensive’ level of rigor and level of detail; that is, adding to the watching and investigation of ‘observe’ and ‘inspect’ a thorough and detailed analysis of the information to develop significant grounds for confidence in the determination as to whether the activity or mechanism is operating as intended and is free of errors, omissions, or inconsistencies in the operation or configuration. Analysis achieves this by both leading to further observations and inspections and by a greater understanding of the information obtained from the examination.

Interview individual or group rigor:

Basic attribute value for interviewing individuals and groups is used for the ‘basic’ level of rigor and level of detail; that is, a high-level interview looking for evidence to support a determination of whether the control meets the ‘basic’ interview criteria listed below.

Focused attribute value for interviewing individuals and groups is available for use for the ‘focused’ level of rigor and level of detail; that is, an interview that includes the intent of ‘basic’ and adds a more in-depth interview for greater evidence to support a determination of whether the control meets the ‘focused’ interview criteria listed below.

Comprehensive attribute value for interviewing individuals and groups is available for use for the ‘comprehensive’ level of rigor and level of detail; that is, an interview that includes the intent of both ‘basic’ and ‘focused’; adding a thorough and detailed analysis for significant grounds for confidence in the determination of whether the control meets the ‘comprehensive’ interview criteria listed below.

Test Mechanisms and Activities rigor:

Basic attribute value for mechanisms and activities is used for the ‘basic’ level of rigor and level of detail; that is, a basic level of testing looking for evidence to support a determination of whether the control meets the ‘basic’ test criteria listed below.

Focused attribute value for mechanisms and activities is available for use for the ‘focused’ level of rigor and level of detail; that is, a focused level of testing that includes the intent of ‘basic’ and adds a more in-depth testing for greater evidence to support a determination of whether the control meets the ‘focused’ test criteria listed below.

Comprehensive attribute value for mechanisms and activities is available for use for the ‘comprehensive’ level of rigor and level of detail; that is, a comprehensive level of testing that includes the intent of both ‘basic’ and ‘focused’; adding a thorough and detailed analysis for significant grounds for confidence in the determination of whether the control meets the ‘comprehensive’ test criteria listed below.

The depth and coverage attributes do not alter the logical sequencing, totality, or selection of evidence-gathering actions; rather, these attributes serve to provide and support degrees of assessment rigor.”2
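The hierarchy described above, where each higher attribute value subsumes the intent of the lower ones, can be sketched as a small data model. This is an illustrative sketch only; the class, function, and variable names are our own, not terminology from SP 800-53A.

```python
from enum import IntEnum

class Rigor(IntEnum):
    """The three hierarchical attribute values, ordered so that a
    higher value includes the intent of the lower ones (illustrative)."""
    BASIC = 1
    FOCUSED = 2
    COMPREHENSIVE = 3

# Illustrative mapping of the "examine documentation" action verbs
# to rigor levels, following the quoted attribute definitions.
EXAMINE_DOC_VERBS = {
    Rigor.BASIC: "review",
    Rigor.FOCUSED: "study",
    Rigor.COMPREHENSIVE: "analyze",
}

def meets_assurance(selected: Rigor, required: Rigor) -> bool:
    """A selected attribute value satisfies a requirement if it is at
    least as rigorous, since the values are hierarchical."""
    return selected >= required

# A 'focused' examination covers the intent of 'basic', but not vice versa.
print(meets_assurance(Rigor.FOCUSED, Rigor.BASIC))
print(EXAMINE_DOC_VERBS[Rigor.COMPREHENSIVE])
```

The ordered enum captures the guidance's point that raising an attribute value never removes evidence-gathering actions; it only deepens them.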

These detailed explanations provide a good basis for understanding the different levels of assessment as well as defining how each attribute is uniquely identified and assessed during the differing levels of assessment: basic, focused, and comprehensive. To the assessor, the method for evaluating a control is often flexible based on organizational requests and assessor experience. Hence, assessor-defined parameters are often identified within the action statement for selecting an appropriate depth (i.e., level of detail required) and coverage (i.e., scope) of application of the assessment method for assessing the security control.

As assurance requirements increase with regard to the development, implementation, and operation of security and privacy controls within or inherited by the information system, the rigor and scope of the assessment activities (as reflected in the selection of assessment methods and objects and the assignment of depth and coverage attribute values) tend to increase as well.

I am going to put together the various criteria for evaluation and the assessment methods, interweaving the guidance to give a picture of how the methods, specifications, objects, and activities work together to build a complete assurance case that is usable, presentable, and, as far as possible, comprehensive enough for use in a Security Assessment Report to the organization's executives and authorizing officials.


URL: https://www.sciencedirect.com/science/article/pii/B9780128184271000082

Information Risk Assessment

Timothy Virtue, Justin Rainey, in HCISPP Study Guide, 2015

Assessment Methods

Assessment methods define the nature of the assessor actions and include:

Examine method: The process of reviewing, inspecting, observing, studying, or analyzing one or more assessment objects (i.e., specifications, mechanisms, or activities). The purpose of the examine method is to facilitate assessor understanding, achieve clarification, or obtain evidence.

Interview method: The process of holding discussions with individuals or groups of individuals within an organization to, once again, facilitate assessor understanding, achieve clarification, or obtain evidence.

Test method: The process of exercising one or more assessment objects (i.e., activities or mechanisms) under specified conditions to compare actual with expected behavior.

In all three methods, the results are used in making specific determinations called for in the determination statements and thereby achieving the objectives for the assessment procedure.
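The definitions above pair each method with the object types it can act on: examine applies to specifications, mechanisms, and activities; interview applies to individuals or groups; test applies to mechanisms and activities. A minimal sketch of that pairing, with names of our own choosing rather than NIST's:

```python
# Which assessment objects each method is defined for, per the
# definitions above (names are illustrative, not from SP 800-53A).
METHOD_OBJECTS = {
    "examine":   {"specification", "mechanism", "activity"},
    "interview": {"individual"},
    "test":      {"mechanism", "activity"},
}

def applicable(method: str, obj: str) -> bool:
    """Return True if the named method is defined for that object type."""
    return obj in METHOD_OBJECTS.get(method, set())

# Testing exercises mechanisms and activities; documents are examined, not tested.
print(applicable("test", "specification"))
print(applicable("examine", "specification"))
```

The asymmetry is the point of the section title: observation (part of the examine method) reaches documents and watched activities that the test method, by definition, does not.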


URL: https://www.sciencedirect.com/science/article/pii/B9780128020432000069

26th European Symposium on Computer Aided Process Engineering

Suriyaphong Nakem, ... Pomthong Malakul, in Computer Aided Chemical Engineering, 2016

5 Life cycle assessment (LCA)

The LCA method, based on the ISO 14040 series, was used to evaluate the waste management scenarios shown in Table 2 using commercial LCA software (SimaPro 7.1) with the CML 2 baseline 2000 and Eco-indicator 99 methods, assessing environmental impacts in two aspects: global warming potential (GWP, as CO2 equivalent) and energy usage (as MJ). In this paper, the results for pipe & fitting (size 55 mm) were selected as representative for studying the environmental impacts of waste management scenarios for PVC products after disposal. The service lifetime of pipe & fitting was assumed to be 50 years with no maintenance required. All scenarios in Table 2 were evaluated for possible reductions in environmental impacts by comparison with the base case, as shown in Figure 5. Scenario 3 (100% incineration) showed the highest impacts in both GWP and energy usage. It should be noted that the study also took into account the grid-mix supply of electricity produced from the heating value of PVC. Scenario 1 (100% recycle) showed the lowest impacts, which can be attributed to the recovery of PVC for use as secondary raw material. Scenarios 4-8 show that increasing the recycling rate of PVC pipe and fitting from 50% to 90% could reduce the environmental impacts significantly: compared with the base case, GWP and energy resource use could be reduced by as much as 22-58% and 12-37%, respectively. In short, more recycling is better for the environment.
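The scenario comparison reduces each impact to a percent reduction relative to the base case. A sketch of that calculation, using made-up impact numbers rather than values from the study:

```python
def pct_reduction(base: float, scenario: float) -> float:
    """Percent reduction of a scenario's impact relative to the base case."""
    return 100.0 * (base - scenario) / base

# Hypothetical values only, for illustration; not data from the paper.
base_gwp = 100.0       # base-case GWP, kg CO2-eq
scenario_gwp = 58.0    # a high-recycling scenario
print(round(pct_reduction(base_gwp, scenario_gwp), 1))
```

Each bar in a comparison like Figure 5 is this number computed per scenario and per impact category (GWP, energy usage).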


Figure 5. Environmental impacts of different end-of-life scenarios


URL: https://www.sciencedirect.com/science/article/pii/B9780444634283502861

Assessment process

Leighton Johnson, in Security Controls Evaluation, Testing, and Assessment Handbook (Second Edition), 2020

SP 800-53A

An assessment procedure consists of a set of assessment objectives, each with an associated set of potential assessment methods and assessment objects. An assessment objective includes a set of determination statements related to the particular security or privacy control under assessment. The determination statements are linked to the content of the security or privacy control (i.e., the security/privacy control functionality) to ensure traceability of assessment results back to the fundamental control requirements. The application of an assessment procedure to a security or privacy control produces assessment findings. These findings reflect, or are subsequently used, to help determine the overall effectiveness of the security or privacy control.

Assessment objects identify the specific items being assessed and include specifications, mechanisms, activities, and individuals. Specifications are the document-based artifacts (e.g., policies, procedures, plans, system security and privacy requirements, functional specifications, architectural designs) associated with an information system. Mechanisms are the specific hardware, software, or firmware safeguards and countermeasures employed within an information system. Activities are the specific protection-related actions supporting an information system that involve people (e.g., conducting system backup operations, monitoring network traffic, exercising a contingency plan). Individuals, or groups of individuals, are people applying the specifications, mechanisms, or activities described above.

Assessment methods define the nature of the assessor actions and include examine, interview, and test. The examine method is the process of reviewing, inspecting, observing, studying, or analyzing one or more assessment objects (i.e., specifications, mechanisms, or activities). The purpose of the examine method is to facilitate assessor understanding, achieve clarification, or obtain evidence. The interview method is the process of holding discussions with individuals or groups of individuals within an organization to once again, facilitate assessor understanding, achieve clarification, or obtain evidence. The test method is the process of exercising one or more assessment objects (i.e., activities or mechanisms) under specified conditions to compare actual with expected behavior. In all three assessment methods, the results are used in making specific determinations called for in the determination statements and thereby achieving the objectives for the assessment procedure.

Assessment methods have a set of associated attributes, depth and coverage, which help define the level of effort for the assessment. These attributes are hierarchical in nature, providing the means to define the rigor and scope of the assessment for the increased assurances that may be needed for some information systems. The depth attribute addresses the rigor of and level of detail in the examination, interview, and testing processes. Values for the depth attribute include basic, focused, and comprehensive. The coverage attribute addresses the scope or breadth of the examination, interview, and testing processes including the number and type of specifications, mechanisms, and activities to be examined or tested, and the number and types of individuals to be interviewed. Similar to the depth attribute, values for the coverage attribute include basic, focused, and comprehensive. The appropriate depth and coverage attribute values for a particular assessment method are based on the assurance requirements specified by the organization. As assurance requirements increase with regard to the development, implementation, and operation of security and privacy controls within or inherited by the information system, the rigor and scope of the assessment activities (as reflected in the selection of assessment methods and objects and the assignment of depth and coverage attribute values) tend to increase as well.3

“In addition to selecting appropriate assessment methods and objects, each assessment method (i.e., examine, interview, and test) is associated with depth and coverage attributes that are described in (SP 800-53A), Appendix D. The attribute values identify the rigor and scope of the assessment procedures executed by the assessor. The values selected by the organization are based on the characteristics of the information system being assessed (including assurance requirements) and the specific determinations to be made. The depth and coverage attribute values are associated with the assurance requirements specified by the organization (i.e., the rigor and scope of the assessment increases in direct relationship to the assurance requirements).”4
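Since depth and coverage both draw on the same three-level vocabulary and both scale with the organization's assurance requirement, the selection can be sketched as a simple lookup. This is illustrative only; in a real assessment plan the two attributes may be chosen independently per method.

```python
LEVELS = ("basic", "focused", "comprehensive")

def attributes_for(assurance: str) -> dict[str, str]:
    """Pick matching depth and coverage values for an assurance level
    (a simplification; real plans may set them separately per method)."""
    if assurance not in LEVELS:
        raise ValueError(f"unknown assurance level: {assurance}")
    return {"depth": assurance, "coverage": assurance}

print(attributes_for("focused"))
```

The point the quote makes is the monotonic relationship: as the assurance requirement moves up the list, so do both attribute values.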

RMF step 4—assess security controls

The purpose of the Assess step is to determine if the controls selected for implementation are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security and privacy requirements for the system and the organization.

SP 800-37, rev. 2 Table 7.1 provides a summary of tasks and expected outcomes for the RMF Assess step. Applicable Cybersecurity Framework constructs are also provided (Fig. 7.1).

Table 7.1. Assess tasks and outcomes5.

Tasks | Outcomes
TASK A-1 ASSESSOR SELECTION

An assessor or assessment team is selected to conduct the control assessments.

The appropriate level of independence is achieved for the assessor or assessment team selected.

TASK A-2 ASSESSMENT PLAN

Documentation needed to conduct the assessments is provided to the assessor or assessment team.

Security and privacy assessment plans are developed and documented.

Security and privacy assessment plans are reviewed and approved to establish the expectations for the control assessments and the level of effort required.

TASK A-3 CONTROL ASSESSMENTS

Control assessments are conducted in accordance with the security and privacy assessment plans.

Opportunities to reuse assessment results from previous assessments to make the risk management process timely and cost-effective are considered.

Use of automation to conduct control assessments is maximized to increase speed, effectiveness, and efficiency of assessments.

TASK A-4 ASSESSMENT REPORTS

Security and privacy assessment reports that provide findings and recommendations are completed.

TASK A-5 REMEDIATION ACTIONS

Remediation actions to address deficiencies in the controls implemented in the system and environment of operation are taken.

Security and privacy plans are updated to reflect control implementation changes made based on the assessments and subsequent remediation actions. [Cybersecurity Framework: Profile]

TASK A-6 PLAN OF ACTION AND MILESTONES

A plan of action and milestones detailing remediation plans for unacceptable risks identified in security and privacy assessment reports is developed. [Cybersecurity Framework: ID.RA-6]

As part of the Risk Management Framework, SP 800-37, rev. 2 provides an updated listing of the tasks and guidance for each task during the execution of the Assess step.


Figure 7.1. SP 800-53A assessment case flow.

TASK A-1: ASSESSOR SELECTION—select the appropriate assessor or assessment team for the type of control assessment to be conducted

Primary role of responsibility: authorizing official

Organizations consider both the technical expertise and level of independence required in selecting control assessors for the security control assessments; however, this level of independence is not required for privacy control assessments. “Some organizations may select control assessors prior to the RMF Assess step to support control assessments at the earliest opportunity during the system life cycle. Early identification and selection of assessors allows organizations to plan for the assessment activities, including agreeing on the scope of the assessment. Organizations implementing a systems security engineering approach may also benefit from early selection of assessors to support verification and validation activities that occur throughout the system life cycle. Organizations ensure that control assessors possess the required skills and technical expertise to develop effective assessment plans and to conduct assessments of program management, system-specific, hybrid, and common controls, as appropriate. This includes general knowledge of risk management concepts and approaches as well as comprehensive knowledge of and experience with the hardware, software, and firmware components implemented. In organizations where the assessment capability is centrally managed, the senior agency information security officer may have the responsibility of selecting and managing the security control assessors or assessment teams for organizational systems. As controls may be implemented to achieve security and privacy objectives, organizations consider the degree of collaboration between security control and privacy control assessors that is necessary.

Organizations can conduct self-assessments of controls or obtain the services of an independent control assessor. An independent assessor is an individual or group that can conduct an impartial assessment. Impartiality means that assessors are free from perceived or actual conflicts of interest with respect to the determination of control effectiveness or the development, operation, or management of the system, common controls, or program management controls. The authorizing official determines the level of assessor independence based on applicable laws, executive orders, directives, regulations, policies, or standards. The authorizing official consults with the Office of the Inspector General, chief information officer, senior agency official for privacy, and senior agency information security officer to help guide and inform decisions regarding assessor independence.

The system privacy officer is responsible for identifying assessment methodologies and metrics to determine if privacy controls are implemented correctly, operating as intended, and sufficient to ensure compliance with applicable privacy requirements and manage privacy risks. The senior agency official for privacy is responsible for conducting assessments of privacy controls and documenting the results of the assessments. At the discretion of the organization, privacy controls may be assessed by an independent assessor. However, in all cases, the senior agency official for privacy is responsible and accountable for the organization's privacy program, including any privacy functions performed by independent assessors. The senior agency official for privacy is responsible for providing privacy information to the authorizing official.”6

TASK A-2: ASSESSMENT PREPARATION—Develop, review, and approve a plan to assess the security controls

Primary role of responsibility: security control assessor, authorizing official

Security and privacy assessment plans are developed by control assessors based on the implementation information contained in security and privacy plans, program management control documentation, and common control documentation. Organizations may choose to develop a single, integrated security and privacy assessment plan for the system or the organization. An integrated assessment plan delineates roles and responsibilities for control assessment. Assessment plans also provide the objectives for control assessments and specific assessment procedures for each control. Assessment plans reflect the type of assessment the organization is conducting, including for example: developmental testing and evaluation; independent verification and validation; audits, including supply chain; assessments supporting system and common control authorization or reauthorization; program management control assessments; continuous monitoring; and assessments conducted after remediation actions.

Assessment plans are reviewed and approved by the authorizing official or the designated representative of the authorizing official to help ensure that the plans are consistent with the security and privacy objectives of the organization; employ procedures, methods, techniques, tools, and automation to support continuous monitoring and near real-time risk management; and are cost-effective. Approved assessment plans establish expectations for the control assessments and the level of effort for the assessment. Approved assessment plans help to ensure that appropriate resources are applied toward determining control effectiveness while providing the necessary level of assurance in making such determinations. When controls are provided by an external provider through contracts, interagency agreements, lines of business arrangements, licensing agreements, or supply chain arrangements, the organization can request security and privacy assessment plans and assessments results or evidence from the provider.7

TASK A-3: SECURITY CONTROL ASSESSMENT—Assess the security controls in accordance with the assessment procedures defined in the security assessment plan

Primary role of responsibility: security control assessor

“Control assessments determine the extent to which the selected controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting security and privacy requirements for the system and the organization. The system owner, common control provider, and/or organization rely on the technical skills and expertise of assessors to assess implemented controls using the assessment procedures specified in assessment plans and provide recommendations on how to respond to control deficiencies to reduce or eliminate identified vulnerabilities or unacceptable risks. The senior agency official for privacy serves as the control assessor for the privacy controls and is responsible for conducting an initial assessment of the privacy controls prior to system operation, and for assessing the controls periodically thereafter at a frequency sufficient to ensure compliance with privacy requirements and to manage privacy risks. The senior agency official for privacy can delegate the assessment functions, consistent with applicable policies. Controls implemented to achieve both security and privacy objectives may require a degree of collaboration between security and privacy control assessors. The assessor findings are a factual reporting of whether the controls are operating as intended and whether any deficiencies in the controls are discovered during the assessment.”8 Assessor judgment is relied upon during the assessment because only deficiencies in controls that can be exploited by threat agents are considered vulnerabilities.

Control assessments occur as early as practicable in the SDLC, preferably during the development phase. These types of assessments are referred to as developmental testing and evaluation, and validate that the controls are implemented correctly and are consistent with the established information security and privacy architectures. Developmental testing and evaluation activities include, for example, design and code reviews, regression testing, and application scanning. Deficiencies identified early in the SDLC can be resolved in a more cost-effective manner. Assessments may be needed prior to source selection during the procurement process to assess potential suppliers or providers before the organization enters into agreements or contracts to begin the development phase. The results of control assessments conducted during the SDLC can also be used (consistent with reuse criteria established by the organization) during the authorization process to avoid unnecessary delays or costly repetition of assessments. Organizations can maximize the use of automation to conduct control assessments to increase the speed, effectiveness, and efficiency of the assessments, and to support continuous monitoring of the security and privacy posture of organizational systems.

Applying and assessing controls throughout the development process may be appropriate for iterative development processes. When iterative development processes (e.g., agile development) are employed, an iterative assessment may be conducted as each cycle is completed. A similar process is employed for assessing controls in commercial IT products that are used in the system. Organizations may choose to begin assessing controls prior to the complete implementation of all controls in the security and privacy plans. This type of incremental assessment is appropriate if it is more efficient or cost-effective to do so.

Common controls (i.e., controls that are inherited by the system) are assessed separately (by assessors chosen by common control providers or the organization) and need not be assessed as part of a system-level assessment. Organizations ensure that assessors have access to the information system and environment of operation where the controls are implemented and to the documentation, records, artifacts, test results, and other materials needed to assess the controls. This includes the controls implemented by external providers through contracts, interagency agreements, lines of business arrangements, licensing agreements, or supply chain arrangements. Assessors have the required degree of independence as determined by the authorizing official. Assessor independence during the continuous monitoring process facilitates reuse of assessment results to support ongoing authorization and reauthorization.

To make the risk management process more efficient and cost-effective, organizations may choose to establish reasonable and appropriate criteria for reusing assessment results as part of organization-wide assessment policy or in the security and privacy program plans. For example, a recent audit of a system may have produced information about the effectiveness of selected controls. Another opportunity to reuse previous assessment results may come from external programs that test and evaluate security and privacy features of commercial information technology products (e.g., Common Criteria Evaluation and Validation Program and NIST Cryptographic Module Validation Program). If prior assessment results from the system developer or vendor are available, the control assessor, under appropriate circumstances, may incorporate those results into the assessment. In addition, if a control implementation was assessed during other forms of assessment at previous stages of the SDLC (e.g., unit testing, functional testing, acceptance testing), organizations may consider potential reuse of those results to reduce duplication of efforts. And finally, assessment results can be reused to support reciprocity, for example, assessment results supporting an authorization to use.9

TASK A-4: SECURITY ASSESSMENT REPORT—Prepare the security assessment report documenting the issues, findings, and recommendations from the security control assessment

Primary role of responsibility: security control assessor

The results of the security and privacy control assessments, including recommendations for correcting deficiencies in the implemented controls, are documented in the assessment reports by control assessors. If a comparable report meets the requirements of what is to be included in an assessment report, then the comparable report would itself constitute the assessment report. Organizations may develop a single, integrated security and privacy assessment report. Assessment reports are key documents in the system or common control authorization package that is developed for authorizing officials. The assessment reports include information, based on assessor findings, necessary to determine the effectiveness of the controls implemented within or inherited by the information system. Assessment reports are an important factor in the authorizing official's determination of risk to organizational operations and assets, individuals, other organizations, and the Nation. The format and the level of detail provided in assessment reports are appropriate for the type of control assessment conducted, for example: developmental testing and evaluation; independent verification and validation; independent assessments supporting information system or common control authorizations or reauthorizations; self-assessments; assessments after remediation actions; independent evaluations or audits; and assessments during continuous monitoring. The reporting format may also be prescribed by the organization.

Control assessment results obtained during the system development lifecycle are documented in an interim report and included in the final security and privacy assessment reports. Development of interim reports that document assessment results from relevant phases of the SDLC reinforces the concept that assessment reports are evolving documents. Interim reports are used, as appropriate, to inform the final assessment report. Organizations may choose to develop an executive summary from the control assessment findings. The executive summary provides authorizing officials and other interested individuals in the organization with an abbreviated version of the assessment reports that includes a synopsis of the assessment, findings, and the recommendations for addressing deficiencies in the controls.10

TASK A-5: REMEDIATION ACTIONS—Conduct initial remediation actions on security controls based on the findings and recommendations of the security assessment report and reassess remediated control(s), as appropriate

Primary roles of responsibility: information system owner or common control provider; security control assessor

The security and privacy assessment reports describe deficiencies in the controls that could not be resolved during the development of the system or that are discovered post-development. Such control deficiencies may result in security and privacy risks (including supply chain risks). The findings generated during control assessments provide information that facilitates risk responses based on organizational risk tolerance and priorities. The authorizing official, in consultation and coordination with system owners and other organizational officials, may decide that certain findings represent significant, unacceptable risk and require immediate remediation actions. Additionally, it may be possible and practical to conduct initial remediation actions for assessment findings that can be quickly and easily remediated with existing resources.

If initial remediation actions are taken, assessors reassess the controls. The control reassessments determine the extent to which remediated controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security and privacy requirements for the system and the organization. The assessors update the assessment reports with the findings from the reassessment, but do not change the original assessment results. The security and privacy plans are updated based on the findings of the control assessments and any remediation actions taken. The updated plans reflect the state of the controls after the initial assessment and any modifications by the system owner or common control provider in addressing recommendations for corrective actions. At the completion of the control assessments, security and privacy plans contain an accurate description of implemented controls, including compensating controls.

Organizations can prepare an addendum to the security and privacy assessment reports that provides an opportunity for system owners and common control providers to respond to initial assessment findings. The addendum may include, for example, information regarding initial remediation actions taken by system owners or common control providers in response to assessor findings. The addendum can also provide the system owner or common control provider perspective on the findings. This may include providing additional explanatory material, rebutting certain findings, and correcting the record. The addendum does not change or influence the initial assessor findings provided in the reports. Information provided in the addendum is considered by authorizing officials when making risk-based authorization decisions. Organizations implement a process to determine the initial actions to take regarding the control deficiencies identified during the assessment. This process can address vulnerabilities and risks, false positives, and other factors that provide useful information to authorizing officials regarding the security and privacy posture of the system and organization including the ongoing effectiveness of system-specific, hybrid, and common controls. The issue resolution process can also ensure that only substantive items are identified and transferred to the plan of actions and milestones.

Findings from a system-level control assessment may necessitate an update to the system risk assessment and the organizational risk assessment. Risk assessments are conducted as needed at the organizational level, mission/business level, and at the system level throughout the SDLC. Risk assessment is specified as part of the RMF Prepare-Organization Level step and RMF Prepare-System Level step. The updated risk assessments and any inputs from the senior accountable official for risk management or risk executive (function) determine the initial remediation actions and the prioritization of those actions. System owners and common control providers may decide, based on a system or organizational risk assessment, that certain findings are inconsequential and present no significant security or privacy risk. Such findings are retained in the security and privacy assessment reports and monitored during the monitoring step. The authorizing official is responsible for reviewing and understanding the assessor findings and for accepting the security and privacy risks (including any supply chain risks) that result from the operation of the system or the use of common controls.

In all cases, organizations review assessor findings to determine the significance of the findings and whether the findings warrant any further investigation or remediation. Senior leadership involvement in the mitigation process is necessary to ensure that the organization's resources are effectively allocated in accordance with organizational priorities, providing resources to the systems that are supporting the most critical missions and business functions or correcting the deficiencies that pose the greatest risk.11

TASK A-6: PLAN OF ACTION AND MILESTONES—Prepare the plan of action and milestones based on the findings and recommendations of the assessment reports

Primary roles of responsibility: information system owner or common control provider

The plan of action and milestones is included as part of the authorization package. The plan of action and milestones describes the actions that are planned to correct deficiencies in the controls identified during the assessment of the controls and during continuous monitoring. The plan of action and milestones includes tasks to be accomplished with a recommendation for completion before or after system authorization; resources required to accomplish the tasks; milestones established to meet the tasks; and the scheduled completion dates for the milestones and tasks. The plan of action and milestones is reviewed by the authorizing official to ensure there is agreement with the remediation actions planned to correct the identified deficiencies. It is subsequently used to monitor progress in completing the actions. Deficiencies are accepted by the authorizing official as residual risk or are remediated during the assessment or prior to submission of the authorization package to the authorizing official. Plan of action and milestones entries are not necessary when deficiencies are accepted by the authorizing official as residual risk. However, deficiencies identified during assessment and monitoring are documented in the assessment reports, which can be retained within an automated security/privacy management and reporting tool to maintain an effective audit trail. Organizations develop plans of action and milestones based on assessment results obtained from control assessments, audits, and continuous monitoring and in accordance with applicable laws, executive orders, directives, policies, regulations, standards, or guidance.

Organizations implement a consistent process for developing plans of action and milestones that uses a prioritized approach to risk mitigation that is uniform across the organization. A risk assessment guides the prioritization process for items included in the plan of action and milestones. The process ensures that plans of action and milestones are informed by the security categorization of the system and security, privacy, and supply chain risk assessments; the specific deficiencies in the controls; the criticality of the identified control deficiencies (i.e., the direct or indirect effect that the deficiencies may have on the security and privacy posture of the system, and therefore, on the risk exposure of the organization; or the ability of the organization to perform its mission or business functions); and the proposed risk mitigation approach to address the identified deficiencies in the controls (e.g., prioritization of risk mitigation actions and allocation of risk mitigation resources). Risk mitigation resources include, for example, personnel, new hardware or software, and tools.12
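
The elements a plan of action and milestones must capture (tasks, required resources, milestones with scheduled completion dates, and a criticality-driven ordering) can be sketched as a simple data structure. This is an illustrative sketch only: the class, field names, and sample deficiencies are assumptions, not a NIST-prescribed schema.

```python
# Illustrative POA&M entry holding the elements named in the text:
# tasks, resources, milestones, scheduled dates, and a criticality
# rank used for risk-assessment-guided prioritization.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class POAMEntry:
    deficiency: str
    task: str
    resources: str
    criticality: int                     # 1 = highest priority
    milestones: dict[str, date] = field(default_factory=dict)

entries = [
    POAMEntry("Audit logs not reviewed", "Automate log review", "SIEM license", 2,
              {"tool deployed": date(2024, 6, 1)}),
    POAMEntry("Missing session lock", "Enable screen lock GPO", "Existing staff", 1,
              {"GPO pushed": date(2024, 3, 15)}),
]

# Prioritized, organization-uniform ordering: most critical items first.
for e in sorted(entries, key=lambda e: e.criticality):
    print(e.criticality, e.deficiency, "->", e.task)
```

A real tracking tool would add status, responsible party, and links back to the assessment report findings, but the prioritized ordering shown is the core of the process the text describes.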

URL: https://www.sciencedirect.com/science/article/pii/B9780128184271000070

Risk Management

Stephen D. Gantz, Daniel R. Philpott, in FISMA and the Risk Management Framework, 2013

Assessment Methods

Despite the number and variety of risk assessment methods, approaches to risk assessment typically belong to one of three categories: quantitative, qualitative, or hybrid [62]. Quantitative risk assessment incorporates numeric values produced via direct measurement or observation or obtained through empirical evidence to enable use of mathematical and statistical analysis methods. Quantitative assessments express asset valuation and impact in dollars, time, or other continuous values, and use probability calculations or estimates to determine likelihood. Where objective, accurate measures are available, quantitative risk assessments produce risk determinations easily compared to each other and well suited to cost-benefit analysis. Quantitative assessments facilitate risk ranking and prioritization activities, but their validity depends on the ability of risk assessors to accurately determine values used in risk calculations. The emphasis on numeric scoring can give the impression of clear differences among risk ratings where no significant differences actually exist.

Qualitative assessments measure risk factors using categorical or ordinal ratings, often relying on the knowledge and expertise of risk assessors to correctly apply subjective and relative values. NIST information security standards and guidelines often use qualitative assessment scales, such as the low/moderate/high ratings used in security categorization. Qualitative analysis can be easier to apply than quantitative alternatives, particularly in public sector contexts where operational performance is not often measured in quantitative terms such as revenue or profit. Challenges associated with qualitative assessments include the inherent subjectivity associated with assigning ratings to risk factors and the difficulty in making meaningful differentiations and prioritizing among risk determinations with similar assigned values.

Hybrid assessment methods—often called “semi-quantitative” or “pseudo-quantitative”—add numerical scales to ordinal rating levels to support statistical analysis and facilitate better differentiation among assessed values and risk determinations than in purely qualitative approaches. The guidance on conducting risk assessments in Special Publication 800-30 Revision 1 uses this type of approach, defining five ordinal rating values (very low, low, moderate, high, and very high) for assessing threat sources, vulnerabilities, likelihood, impact, and risk, and assigning numeric rating scales to each value (0–4, 5–20, 21–79, 80–95, and 96–100, respectively) [63].

Warning

The use of numeric rating scales with qualitative assessment values does not change the subjectivity inherent in the rating process. Organizations hoping to improve their ability to compare and rank risk using semi-qualitative ratings must provide clear guidance and rating criteria to risk assessors to ensure that assessment ratings are used consistently.
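
The semi-quantitative scales quoted above can be applied mechanically. A minimal Python sketch follows; the bin edges are the SP 800-30 Revision 1 values given in the text, while the function name and example risks are illustrative assumptions.

```python
# Map a 0-100 semi-quantitative score to its SP 800-30 Rev. 1 ordinal
# rating: 0-4 very low, 5-20 low, 21-79 moderate, 80-95 high,
# 96-100 very high (bin edges quoted in the text above).

def sp800_30_rating(score: int) -> str:
    """Return the qualitative rating for a semi-quantitative score."""
    if not 0 <= score <= 100:
        raise ValueError("score must be in the range 0-100")
    if score <= 4:
        return "very low"
    if score <= 20:
        return "low"
    if score <= 79:
        return "moderate"
    if score <= 95:
        return "high"
    return "very high"

# Example: ranking assessed risks while retaining the qualitative label.
risks = {"unpatched VPN": 85, "weak password policy": 60, "visitor log gaps": 10}
for name, score in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score} ({sp800_30_rating(score)})")
```

Note that the numeric ranking only differentiates risks within a rating band; as the warning above stresses, the underlying scores are still subjective ratings.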

URL: https://www.sciencedirect.com/science/article/pii/B9781597496414000138

Security Assessment Report

Stephen D. Gantz, Daniel R. Philpott, in FISMA and the Risk Management Framework, 2013

Depth and Coverage

For each assessment method described in Appendix D of Special Publication 800-53A (examine, interview, and test), the level of detail sought in the control assessment and the scope of the assessment process are indicated using attributes for depth and coverage. The possible values for the depth and coverage attributes are the same: “basic,” “focused,” or “comprehensive.” Appendix D of Special Publication 800-53A describes the implications of each depth and coverage attribute value, as summarized in Table 11.4, giving system owners and security control assessors explicit guidance on how to conduct assessment activities at a level of assurance appropriate for the information system’s assigned impact level. Minimum assurance requirements for low-, moderate-, and high-impact systems are specified in Appendix E of Special Publication 800-53 [33], while Special Publication 800-53A applies those requirements to each assessment method [1].

Table 11.4. Assessment Guidance by Depth and Coverage Attribute [32]

Basic
Depth: Consists of high-level reviews, checks, observations, or inspections of the assessment objects, discussions with individuals, or tests assuming no knowledge of internal control implementation details. This type of assessment is conducted using a limited body of evidence, generalized questions, or functional control specifications. Basic assessments provide a level of understanding of the security control necessary for determining whether the control is implemented and free of obvious errors.
Coverage: Uses a representative sample of assessment objects to provide a level of coverage necessary for determining whether the security control is implemented and free of obvious errors.

Focused
Depth: Adds more in-depth studies/analyses of the assessment object. This type of assessment is conducted using a substantial body of evidence or documentation, in-depth questions, or high-level design and process descriptions for controls. Focused assessments provide a level of understanding of the security control necessary for determining whether the control is implemented and free of obvious errors, and whether there are increased grounds for confidence that the control is implemented correctly and operating as intended.
Coverage: Uses a representative sample of assessment objects, plus other specific assessment objects deemed particularly important to achieving the assessment objective, to provide a level of coverage necessary for determining whether the security control is implemented and free of obvious errors, and whether there are increased grounds for confidence that the control is implemented correctly and operating as intended.

Comprehensive
Depth: Consists of activities from the basic and focused levels plus more in-depth, detailed, and thorough studies/analyses of the assessment object. This type of assessment is conducted using an extensive body of evidence or documentation, probing and in-depth questions, or detailed technical control specifications. Comprehensive assessments provide a level of understanding of the security control necessary for determining whether the control is implemented and free of obvious errors, whether there are further increased grounds for confidence that the control is implemented correctly and operating as intended on an ongoing and consistent basis, and whether there is support for continuous improvement in the effectiveness of the control.
Coverage: Uses a sufficiently large sample of assessment objects, plus other specific assessment objects deemed particularly important to achieving the assessment objective, to provide a level of coverage necessary for determining whether the security control is implemented and free of obvious errors, whether there are further increased grounds for confidence that the control is implemented correctly and operating as intended on an ongoing and consistent basis, and whether there is support for continuous improvement in the effectiveness of the control.

As with assessment methods and objects, the security assessment report should record the depth and coverage attributes corresponding to each assessed control, to give readers of the report an indication of the rigor and scope of the assessment procedures that were followed.
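
Recording the per-control attributes so they can be carried into the report is straightforward to sketch. The attribute values (basic, focused, comprehensive) and method names come from SP 800-53A; the control IDs, structure, and names below are illustrative assumptions.

```python
# Sketch: record the method, depth, and coverage used for each assessed
# control so the assessment report can state the rigor and scope applied.
from enum import Enum

class Rigor(Enum):
    BASIC = 1
    FOCUSED = 2
    COMPREHENSIVE = 3

assessed_controls = {
    # control id: (assessment method, depth, coverage)
    "AC-2": ("examine", Rigor.FOCUSED, Rigor.BASIC),
    "IA-5": ("test", Rigor.COMPREHENSIVE, Rigor.COMPREHENSIVE),
}

for ctrl, (method, depth, coverage) in assessed_controls.items():
    print(f"{ctrl}: method={method}, depth={depth.name.lower()}, "
          f"coverage={coverage.name.lower()}")
```

Because the enum is ordered, a report generator could also flag controls assessed below the rigor required for the system's impact level (e.g., `depth.value < Rigor.FOCUSED.value`).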

URL: https://www.sciencedirect.com/science/article/pii/B9781597496414000114

27th European Symposium on Computer Aided Process Engineering

Carla I.C. Pinheiro, Rui M. Filipe, in Computer Aided Chemical Engineering, 2017

2 Assessment methodology

Assessment in bachelor and master process control courses is usually based on traditional components: exercise assignments, homework or tests, and a final exam. An effective instructor understands that it is not enough to present course material to students and hope that they get it, assuming that some will and some will not; learning occurs when there is an interplay between the teaching process and the outcome. This paper presents a non-traditional method for assessing the skills students develop in the above-mentioned Portuguese computer-aided advanced process control courses. It includes three components: 1) an APC project performed in groups of two students using MATLAB with Simulink and the Control System and MPC Toolboxes; 2) an individual test performed in MATLAB/Simulink in class, or three individual homework exercises; and 3) an oral presentation of the project as a 20 min seminar shared equally between both members of the group, followed by 15 min of discussion. A final exam is also available and can be used as the sole assessment method, but students consistently prefer the proposed three-component methodology. The case study APC project is the main assessment component, with a weight between 50% and 65%, which also supports another element of our teaching strategy: encouraging student teamwork. The MATLAB package with Simulink and the Control System and MPC Toolboxes is used throughout these courses and in the APC project.
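
The weighting described above can be sketched as a simple grade calculator. The 60% project weight and the even split of the remainder between the individual test and the seminar are assumptions for illustration; the text only fixes the project weight to the 50-65% range.

```python
# Hedged sketch of the three-component weighting: the APC project carries
# the main weight, and the remainder (assumed split evenly here) goes to
# the individual test/homework and the seminar.
def final_grade(project: float, test: float, seminar: float,
                project_weight: float = 0.6) -> float:
    if not 0.50 <= project_weight <= 0.65:
        raise ValueError("project weight is stated to lie between 50% and 65%")
    remaining = 1.0 - project_weight
    return project * project_weight + (test + seminar) * remaining / 2

# Example on an assumed 0-20 grading scale.
print(final_grade(project=16.0, test=14.0, seminar=18.0))
```

Varying `project_weight` within the stated range shows how strongly the group project dominates the final grade relative to the individual components.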

The case study process, provided to the students as a non-linear mathematical model or as a Simulink model of the “real” process plant, allows them to investigate and apply various control concepts, tune the implemented controllers, and evaluate the performance and robustness of the proposed control strategies against the control objectives.

The students attending these courses may have different backgrounds, and additional effort may be required for some of them. The individual test or the homework exercises are used as individual assessment components to establish a common baseline for all students before they start work on the APC project, reviewing introductory process control concepts and refreshing MATLAB. This component also contributes to a fair assessment, as it provides an individual grade for each student.

The idea of the assessment seminar is to simulate a realistic, professional industrial scenario in which the oral presentations are framed. Students are not expected to present all of the work developed within the project in a classical academic fashion; instead, they should be selective and imaginative with the following scenario in mind: each group represents a process control engineer in the process control department of an industrial plant, who must present the objectives for changing the present control system, identify some or many of the plant's current control problems, and present the main results of the study, conclusions, and suggested changes to the base case control structure by proposing a new advanced control system with operational advantages.

URL: https://www.sciencedirect.com/science/article/pii/B9780444639653504943

24th European Symposium on Computer Aided Process Engineering

Gustavo Lucas, ... Ana Carvalho, in Computer Aided Chemical Engineering, 2014

5 Conclusion

This study shows that LCA methods influence the structure, and consequently the economic performance, of bioethanol supply chains (Impact 2002+: NPV = −890 M€; Eco-Indicator 99: NPV = 990 M€; ReCiPe: NPV = −996 M€). In this sense, decision makers should carefully select the LCA method to apply in their projects, choosing according to its features and their final purpose. Moreover, this result shows that decisions cannot be taken based on environmental factors alone, and that multi-objective approaches should be considered when assessing environmental concerns. This work also points to an important future research path: LCA methods should be studied in more detail so that guidelines can be given on the best method to apply.

URL: https://www.sciencedirect.com/science/article/pii/B9780444634566501630

13th International Symposium on Process Systems Engineering (PSE 2018)

José O.N. de Jesus, ... Carla P. de B. Lemos, in Computer Aided Chemical Engineering, 2018

Abstract

The life cycle assessment method was applied to identify the stages of the industrial water production system in an oil refinery and to analyze the energy demand, carbon footprint, and water footprint in order to propose scenarios for improvement. Based on data collection carried out in the field, the inventory of the product system was elaborated in SimaPro software with the ecoinvent database. The impact assessment methods applied were Cumulative Energy Demand (CED), Intergovernmental Panel on Climate Change (IPCC) (2013), and Available Water Remaining (AWARE). Water losses were found to be 59%, with electricity consumption of 0.92 kWh per m3 of distributed water. The proposed scenarios (more efficient frequency inverters for water uptake, water capture at the nearest location, and water loss reduction), when combined, resulted in 28% water losses and 0.33 kWh of electricity consumption per m3 of distributed water. The combined scenarios also reduced CED by 51%, the carbon footprint by 47%, and the water footprint by 39%. The proposed actions are part of the rationalization and eco-efficiency strategies for water use in oil refineries.
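
As a quick arithmetic check on the figures above, the before/after values are taken from the abstract; the computation itself is illustrative.

```python
# Electricity consumption falls from 0.92 to 0.33 kWh per m3 of
# distributed water; the relative reduction implied by those figures:
before_kwh, after_kwh = 0.92, 0.33
reduction = (before_kwh - after_kwh) / before_kwh
print(f"electricity per m3 reduced by {reduction:.0%}")  # ~64%
```

Note this per-m3 electricity reduction (about 64%) is a different metric from the 51% CED reduction the abstract reports, which covers the full cumulative energy demand.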

URL: https://www.sciencedirect.com/science/article/pii/B9780444642417502937

11th International Symposium on Process Systems Engineering

Jose Munoz, Junghui Chen, in Computer Aided Chemical Engineering, 2012

5 Case Study

The proposed performance assessment method is used to evaluate the performance of temperature control loops that use a proportional controller with different controller gains in the free-radical solution polymerization of styrene. A process variable, temperature, is the controlled variable. Due to space limitations, the mathematical model is not listed here; readers can refer to Schmidt & Ray (1981). Most of the parameter values are taken from Hidalgo & Brosilow (1990), and the operating conditions from Tatiraju et al. (1999).

The model has a vector of three non-distributed state variables $s = [C_I\ C_M\ T]^T$ (initiator concentration, monomer concentration, and temperature) and a vector of $N = 200$ distribution variables at several chain-length locations on the MWD output, $\gamma = [C_{M_2}\ C_{M_2+\Delta n}\ C_{M_2+2\Delta n}\ C_{M_2+3\Delta n}\ \cdots\ C_{M_{n,\max}}]^T$. $Q$ is the heating rate, which is the control input, and the vector of two disturbance variables (inlet initiator concentration and inlet temperature) is $\varepsilon = [C_{I,i}\ T_I]^T$. The MWD function $\bar{\gamma}(y)$ is approximated by a linear combination of $m = 35$ cubic B-spline wavelet functions. For a temperature proportional controller, the control law is given by

(10) $\bar{Q}_k = K_p \bar{T}_{k-1} = K_p z^{-1} \bar{T}_k$

Eqn (10) is substituted into the transfer function relating the temperature $T$ to the control input $Q$ and the disturbance variables, inlet initiator concentration $C_{I,i}$ and inlet feed temperature $T_I$, to get

(11) $\bar{T}_k = [G_{T,C}(z^{-1})\ \ G_{T,T}(z^{-1})]\,[C_{I,i,k}\ \ T_{I,k}]^T = G_T(z^{-1})\,\bar{\varepsilon}_k$

Eqn (11) gives the relationship between the disturbances and the temperature fluctuations for a proportional controller with a specified gain. The disturbances, namely fluctuations in inlet initiator concentration and inlet feed temperature, are assumed to follow Gaussian distributions with zero mean and standard deviations of 0.06 and 20, respectively. The PDF of temperature is computed for two controller gains, −0.8 and −0.08. Since fluctuations in the MWD are controlled indirectly by controlling temperature fluctuations, the MWD fluctuations under the higher-gain controller are expected to be smaller than those under the lower-gain one. The overall PDF can be calculated using Eqn (9), as shown in Figures 1 and 2. Most of the deviations are very close to zero, only certain distribution values deviate significantly from the target distribution, and the performance for the controller gain of −0.8 is better. The proposed method indeed yields this conclusion, together with plots of the PDFs of the overall MWD fluctuations and the variance of the PDF: the PDF curve for the overall MWD fluctuation is narrower, and its variance smaller, when the controller has the higher gain.
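
The qualitative point above (a higher proportional gain suppresses fluctuations driven by Gaussian disturbances) can be illustrated with a much simpler stand-in process. The first-order model and its parameter `a` below are assumptions for illustration, not the polymerization model; only the control law of Eqn (10) and the two gains, −0.8 and −0.08, come from the text.

```python
# Simplified illustration: a first-order process T_k = a*T_{k-1} + Q_k + d_k
# under proportional control Q_k = Kp*T_{k-1} (Eqn (10)), driven by
# zero-mean Gaussian disturbances d_k. Compare the output variance for
# the two gains used in the text.
import random
from statistics import pvariance

def output_variance(kp: float, a: float = 0.9,
                    n: int = 50000, seed: int = 1) -> float:
    rng = random.Random(seed)
    t, samples = 0.0, []
    for _ in range(n):
        q = kp * t                       # proportional control law
        t = a * t + q + rng.gauss(0.0, 1.0)
        samples.append(t)
    return pvariance(samples)

v_high = output_variance(-0.8)   # closed-loop pole at a + Kp = 0.1
v_low = output_variance(-0.08)   # closed-loop pole at 0.82
print(v_high, v_low)             # the higher gain yields smaller fluctuations
```

With the assumed dynamics, the stationary variance is $\sigma^2/(1-(a+K_p)^2)$, so the gain of −0.8 yields a variance near 1.0 while −0.08 yields roughly 3.0, mirroring the narrower PDF reported for the higher gain.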

Figure 1. PDF of Distribution Output, K = −0.8

Figure 2. PDF of Distribution Output, K = −0.08

URL: https://www.sciencedirect.com/science/article/pii/B9780444595072500470

What information can observation as an assessment method give?

Observation provides the opportunity to monitor or assess a process or situation and document evidence of what is seen and heard. Seeing actions and behaviours within a natural context, or as they usually occur provides insights and understanding of the event, activity or situation being evaluated.

What is the advantage of observational assessment?

It allows teachers to systematically record observations, which are immediately available for planning future lessons, tracking student progress, identifying individual and group learning problems, and conferencing with students and parents.

What are the advantages and disadvantages in using observation as assessment method?

To best evaluate a teacher, districts should use observation in conjunction with other sources of data.

- Advantage: Obtains additional information. Teacher observations provide information that other means of evaluation do not.
- Advantage: Can provide instant feedback.
- Disadvantage: Bias.
- Disadvantage: Unreliable.

What are the disadvantages of using observations for an assessment?

Direct observation does not assess the higher-order levels of learning outcomes and is often not adequate for a full assessment; oral questioning or other supplementary assessments may be required. Direct observation also requires substantial time to prepare and to conduct, making it an expensive way of assessing.