Use the information in the table to calculate the Schedule Performance Index (SPI)

Earned Value Management

Ruwan Rajapakse, in Construction Engineering Design Calculations and Rules of Thumb, 2017

15.5 Schedule Performance Index

Schedule Performance Index (SPI) is defined as follows:

SPI = EV / PV = BCWP / BCWS

SPI = 1.0: If SPI is equal to 1.0, the EV is equal to the PV. In other words, the contractor is earning exactly what he planned as per schedule. The contractor is moving along as per schedule.

SPI < 1.0: In this case, the EV is less than the PV. In other words, he is earning less than what he planned as per schedule. The contractor is falling behind the schedule.

SPI > 1.0: In this case, the EV is greater than the PV. In other words, he is earning more than what he planned as per schedule. The contractor is ahead of schedule.
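As a minimal sketch of the definition and the three interpretation rules above (the dollar figures are hypothetical):

```python
def spi(ev: float, pv: float) -> float:
    """Schedule Performance Index: SPI = EV / PV (= BCWP / BCWS)."""
    return ev / pv

def schedule_status(spi_value: float) -> str:
    """Interpret SPI per the rules above."""
    if spi_value > 1.0:
        return "ahead of schedule"
    if spi_value < 1.0:
        return "behind schedule"
    return "on schedule"

# Hypothetical figures: BCWP = $90,000 earned vs. BCWS = $100,000 planned.
print(spi(90_000, 100_000))                    # 0.9
print(schedule_status(spi(90_000, 100_000)))   # behind schedule
```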

Some questions and answers

Question: SPI of a project is 1.5. Is the contractor making a profit?

Answer: It is not possible to tell whether the contractor is making a profit by looking at the SPI.

Question: If SPI is 1.5, is the contractor ahead of the schedule?

Answer: If SPI is 1.5, then the contractor is earning more than what he planned. Hence, he is ahead of schedule.

Question: If CPI is 0.8, is the contractor ahead of the schedule?

Answer: CPI cannot be used to answer questions about the schedule.

Question: If CPI is 0.8, is the contractor making a profit?

Answer: If CPI is 0.8, EV is less than AC. He is earning less than what he is spending. The contractor is losing money.
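The CPI reasoning in the answer above can be sketched the same way (CPI = EV / AC, as implied by the answer; the figures are hypothetical):

```python
def cpi(ev: float, ac: float) -> float:
    """Cost Performance Index: CPI = EV / AC (= BCWP / ACWP)."""
    return ev / ac

# Hypothetical figures matching the answer above: a CPI of 0.8 means
# the contractor earns $0.80 for every $1.00 actually spent.
ev, ac = 80_000, 100_000
assert cpi(ev, ac) == 0.8   # EV < AC: losing money on the work performed
```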


URL: https://www.sciencedirect.com/science/article/pii/B9780128092446000159

Control graphs and reports

Eur Ing Albert Lester CEng, FICE, FIMechE, FIStructE, Hon FAPM, in Project Management, Planning and Control (Eighth Edition), 2021

Earned Schedule

It has long been appreciated that the schedule performance index (cost) (SPIcost), being based on the cost differences between the earned value and planned curves, is somewhat illogical. An index reflecting schedule changes should be based on the time differences of a project. For this reason, the schedule performance index (time) (SPItime) is a more realistic approach and gives more accurate results, although in practice the numerical differences between SPIcost and SPItime are not very great. SPItime for any point in time, or the current time, can be obtained by dropping a vertical line from the planned curve (BCWS) to the time baseline. This gives time now (ATE). Next, dropping a vertical line to the time baseline from the point on the planned curve where the planned value is equal to the earned value gives the OD.

The duration from the start date to the OD is referred to as the 'Earned Schedule'.

SPItime is therefore OD/ATE, that is, the time efficiency. See Fig. 33.18.


Figure 33.18. Control curves.

Just as budget cost/CPI gives the final predicted cost, estimated duration/SPItime gives the final completion time. It is important to remember that all durations on the time scale must be expressed in day, week, or month numbers, not in calendar dates.
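The two forecasting relationships just stated can be applied directly (all figures here are hypothetical; durations are in week numbers, per the note above):

```python
# final predicted cost  = budget cost / CPI
# final completion time = estimated duration / SPI(time)
budget_cost = 1_000_000     # budget at completion (hypothetical)
cpi = 0.8
estimated_duration = 20     # planned duration in weeks (hypothetical)
spi_time = 0.75

predicted_cost = budget_cost / cpi                   # 1,250,000
predicted_duration = estimated_duration / spi_time   # ~26.7 weeks
```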


URL: https://www.sciencedirect.com/science/article/pii/B978012824339800033X

Introduction

Fernando Aguado Agelet, Andrés Eduardo Villa, in Cubesat Handbook, 2021

Projection at a rate modified by the SPI

It is possible to estimate the delay by making use of the previously calculated SPI. In this way, the schedule is re-estimated, taking the latest performance into account. The formula is

EACtime(SPI) = Actual time + (Remaining tasks duration estimate) / SPI
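As a minimal sketch of this formula (the figures are hypothetical):

```python
def eac_time_spi(actual_time: float, remaining_estimate: float, spi: float) -> float:
    """EACtime(SPI) = actual time + (remaining tasks duration estimate) / SPI."""
    return actual_time + remaining_estimate / spi

# Hypothetical: 6 months elapsed, 6 months of remaining work, SPI = 0.8.
# Working at 80% of the planned rate stretches the remaining 6 months to 7.5.
print(eac_time_spi(6, 6, 0.8))   # 13.5 months
```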

Other techniques exist to forecast the time delay incurred in the project, but they are not analyzed in this book. The most valuable information is ultimately given by the people who are executing the job, so surveys are often handy. Re-estimating the effort required, focusing on those tasks that are on the critical path, provides the information needed to re-estimate the duration of the project.

After a new schedule and cost estimation is done, the baselines have to be updated and used from that moment as new baselines for monitoring and controlling the evolution in time and costs.


URL: https://www.sciencedirect.com/science/article/pii/B9780128178843099896

Strategic long-range business plan

Robert Bruce Hey, in Turnaround Management for the Oil, Gas, and Process Industries, 2019

Next turnaround

1. Schedule
   a. Behind/ahead of critical path: hours
   b. Schedule variance (SV): actual versus planned duration
   c. Schedule performance index: alternative to SV
      i. Deviation from planned to date: earned value/planned value
2. Cost
   a. Cost variance (CV): actual versus planned costs
   b. Cost performance index: alternative to CV
      i. Deviation from budget to date: earned value/actual cost
   c. Deviation from budget: estimate at completion/budget at completion
   d. Additional work: actual versus contingency %
3. Scope
   a. Emergent work
      i. Man-hours as % of total turnaround man-hours: % (minimize)
      ii. Number of requests approved versus planned
4. Health, safety, environment (HSE) and quality
   a. Lost time incidents: number (target: zero)
      i. Monitor lower-level HSE indicators to prevent even one incident
   b. Flawless start-up (no incidents, no leaks): number (target: zero)
      i. Measure from "hand back" to "on-stream at required quality"
   c. Number of reworks (weld failures, etc.)
5. Productivity
   a. Productivity index
      i. Actual total man-hours on the job/total clocked man-hours.


URL: https://www.sciencedirect.com/science/article/pii/B9780128174548000022

Phase 6 Post turnaround

Robert Bruce Hey, in Turnaround Management for the Oil, Gas, and Process Industries, 2019

8.1.2 Detailed analysis of key performance indicators

The following should be analyzed:

1. Schedule
   a. Behind or ahead of critical path: hours
   b. Schedule variance (SV): actual duration versus planned duration
   c. Schedule performance index: an alternative to SV
      i. Deviation from planned: earned value/planned value
2. Cost
   a. Cost variance (CV): actual final cost versus planned final cost
   b. Cost performance index: an alternative to CV
      i. Deviation from budget: earned value/actual cost
   c. Deviation from budget: estimate at completion/budget at completion
   d. Additional work: actual versus contingency %
3. Scope
   a. Emergent work
      i. Man-hours for emergent work as % of total turnaround man-hours: % (minimize)
4. HSE
   a. Lost time incidents: number (target: zero)
   b. Flawless start-up (no incidents, no leaks): number (target: zero)
5. Productivity
   a. Productivity index
      i. Actual total man-hours on the job/total clocked man-hours.

Vendor and contractor costs would be committed costs rather than what was actually paid to the vendors/contractors (see Appendix G: Commitment).

If the planned-versus-actual critical path model has been saved daily, then day-by-day occurrences can be checked and analyzed. Identifying the causes of variances is important to prevent these issues from arising in the next turnaround.


URL: https://www.sciencedirect.com/science/article/pii/B9780128174548000083

Mapping computer vision research in construction: Developments, knowledge gaps and implications for research

Botao Zhong, ... Li Jiao, in Automation in Construction, 2019

4 Qualitative interpretation

Our qualitative interpretation of the literature provided a contextual backdrop to the science mapping and further examined the research themes, identified knowledge gaps, and proposed a framework for future research.

4.1 Summary of research themes

Computer vision research published up until 2018 has focused on the recognition and tracking of people and plant on-site (e.g., clusters #2 and #6) and on the status and quality of infrastructure (e.g., clusters #5 and #12). Based on the results of the cluster analysis, we examined these themes in further detail.

4.1.1 Computer vision on-site

On-site, the focus of computer vision has been on monitoring safety with regard to people's behavior and site conditions (Table 8) and on performance tracking at the project and operational monitoring levels (Table 9):

Table 8. Prior works on computer vision-based safety monitoring.

| Research theme | Types of hazards | Problem | Types of data resource | Descriptions | Reference |
|---|---|---|---|---|---|
| Safety monitoring | Unsafe behaviors | Failure to use PPE: not wearing a hardhat in working areas | 2D images | — | [44] |
| | | Failure to use PPE: motion recognition for tracking unsafe actions | Video | HOG descriptor was used to extract skeleton models and kernel Principal Component Analysis was used to analyze 3D skeletons | [69] |
| | | Abnormal operation of individuals: crossing structural supports (e.g., concrete and steel) | 2D images | A Mask Region-Based CNN was used to recognize the relationship between people and concrete/steel supports | [51] |
| | | Abnormal operation of individuals: falling when climbing and dismounting from a ladder | Video | A hybrid deep learning model was developed to identify unsafe actions by integrating CNN and Long Short-Term Memory (LSTM) | [70] |
| | | Entering hazardous area: struck by or close to a bulldozer/excavator that is backing up | Video | Applied computer vision techniques to extract spatial information for further fuzzy inference to assess the safety level of each entity | [71] |
| | Unsafe conditions | Spatial conflicts: blind lifts in congested offshore platform environments | Laser scanner, sensors | Developed a framework for real-time pro-active safety assistance for mobile crane lifting operations | [72] |

Table 9. Prior works on computer vision-based performance monitoring.

| Research theme | Objects | Problem | Types of data resource | Technique | Descriptions | Reference |
|---|---|---|---|---|---|---|
| Performance monitoring | Tracking project-level information: tracking progress on construction sites | Occupancy-based progress assessment | 2D images; time-lapse images | Point clouds; image processing | Superimposing the as-built model in point clouds with the as-planned BIM, and reasoning the occupancy | [74] |
| | | Appearance-based progress assessment | 2D images; time-lapse images | — | Observing the appearance of the BIM elements to reason the progress | [76] |
| | Monitoring operation-level information: operation analysis of construction workers | Productivity measurement; idle time detection | Video streams | Human pose analysis algorithms | Automated productivity measurement through human pose analysis algorithms | [75] |
| | Monitoring operation-level information: operation analysis of construction equipment | Productivity measurement; cyclic operation analysis; idle time detection | Video streams | Object detection and tracking, and activity recognition algorithms (e.g., SVM) | Analysis of equipment in earthworks: excavation, material hauling, and dirt loading | [56] |

1. Safety monitoring: Han and Lee [69], for example, developed a computer vision-based framework to monitor unsafe behaviors, which contained four parts: (1) the identification of unsafe behaviors from safety documents and historical records; (2) the use of a laboratory experiment to collect motion templates for the identified unsafe actions; (3) the extraction of 3D skeletons from videos; and (4) real-time detection of unsafe behaviors using video.

2. Performance monitoring: At the project level, emphasis has been placed on tracking the progress of construction using metrics such as the Schedule Performance Index (SPI). At the operational level, attention has concentrated on the productivity analysis of individuals or equipment, where non-value-adding activities (e.g., waiting, idling, excessive travel, and transporting time) are measured.

Golparvar-Fard et al. [73] examined the status of construction by comparing its ‘as-planned’ with the ‘as-actual’ states using 2D time-lapse photographs. In this instance, the time-lapse images were used to document the work-in-progress, which was compared to a four-dimensional building information model (BIM). Golparvar-Fard et al. [74] developed a pipeline of Multi-View Stereo and voxel coloring algorithms to improve the density of 3D point clouds and presented a method for superimposing them within a BIM.

Computer vision has also been used to determine productivity [56]. The extent to which resources are being utilized is examined against crew-balance charts. For example, an automated interpretation technique can be used to extract and measure productivity information from video. The interpretation process combines computer vision, reasoning, and multimedia methods. More specifically, computer vision is used to recognize and track objects autonomously in a video. The number of objects to be tracked can be minimized by selecting an algorithm linked to domain knowledge (e.g., the Method Productivity Delay Model) [56]. For example, Peddi et al. [75] measured productivity from videos by analyzing the pose of people while they were tying rebar, achieving 85% to 89% accuracy.

In addition to safety monitoring and performance analysis, there have been several studies that have focused on the automatic generation of BIMs [63], quality inspection of steel bars [77], and detecting materials to be recycled [78].

4.1.2 Computer vision and infrastructure

Traditionally, the inspection and assessment of infrastructure have been performed manually by qualified structural engineers who seek to identify defects (e.g., cracking, delamination, and spalling) and then measure their effect (e.g., depth, width, and length) [10]. It has been widely recognized that computer vision, juxtaposed with other technologies such as unmanned aerial vehicles (UAV), 3D digital image correlation, and Closed Circuit Television (CCTV) imaging, can perform these tasks, as well as inspect the structural integrity of tunnels [79], detect cracks [80], and assess the structural condition of bridges [81]. A summary of key computer vision research that has examined infrastructure is presented in Table 10.

Table 10. Prior works on computer vision-based structural detection.

| Research theme | Problem | Types of data resource | Techniques | Descriptions | Reference |
|---|---|---|---|---|---|
| Defect detection and condition assessment of infrastructure (bridges, tunnels, underground pipes, and asphalt pavements) | Cracking detection | 2D images | Image processing algorithms (e.g., histogram-based classification and support vector machines (SVM)); image capture techniques (e.g., aerial robots) | Applied algorithms (e.g., SVM) to detect cracks on a concrete deck surface | [82] |
| | Delamination/spalling/holes | 2D images | Pattern recognition approach (segmentation, template matching, and morphological pre-processing) | Combined segmentation, template matching, and morphological pre-processing for spall detection and assessment on concrete columns | [83] |
| | Other structural defects | 2D images | Computer vision techniques and sensing technology | Integrated video imagery and bridge responses to detect loss of connectivity between different composite sections | [84] |

4.2 Knowledge gaps

4.2.1 Lack of an adequately sized database

Machine learning algorithms are dependent on the quantity and quality of the information used to train them. Whether for the identification of hazards on a construction site or the detection of structural defects, an extensive and high-quality database of images is a pre-requisite for the successful application of computer vision. As is evident from the studies undertaken to date, there is an absence of adequately sized databases that can be used to ensure the accuracy of computer vision. Thus, the lack of adequately sized datasets is inhibiting the development of computer vision in construction. Moreover, there appears to be a reluctance among researchers to share their training sets. It is therefore vital that journal editors require all papers accepted for publication to provide a copy of the training sets, and even the datasets, used for a particular study. However, privacy laws can prevent this from occurring.

In the meantime, researchers reliant on small databases will need to use data augmentation techniques, in which minor alterations are made to the existing data, such as image rotation, flipping, and random cropping [85]. Nevertheless, this process may lead to the loss of relevant data or outliers needed for training. With limited data, researchers tend to choose a relatively small sample for their experimental work, which makes it difficult to compare and contrast evaluation metrics such as precision and recall.
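As an illustrative, dependency-free sketch of the augmentations mentioned above (the toy 3×3 "image" and helper names are made up for the example; real pipelines would operate on image arrays):

```python
import random

def hflip(img):
    """Horizontal flip of a 2D image given as a list of rows."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(r) for r in zip(*img[::-1])]

def random_crop(img, h, w, rng=random):
    """Randomly crop an h x w patch from the image."""
    top = rng.randrange(len(img) - h + 1)
    left = rng.randrange(len(img[0]) - w + 1)
    return [row[left:left + w] for row in img[top:top + h]]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(hflip(img))   # [[3, 2, 1], [6, 5, 4], [9, 8, 7]]
print(rot90(img))   # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
```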

4.2.2 Data privacy

We acknowledge that freely accessible databases are costly and time-consuming to construct and may also contain private and sensitive information. Furthermore, they may be challenging to apply in different countries, and prevailing privacy laws may prevent the sharing of data. For instance, Europe has enacted regulations on data privacy protection, referred to as the General Data Protection Regulation (GDPR) [86]. This regulation provides citizens of the European Union with rights when companies or institutions process personal data.

While computer vision has enabled headway to be made in identifying individuals who have performed an unsafe act on-site using video cameras, it can be viewed as violating a person's privacy if they have not agreed to be monitored. Data acquisition equipment cannot be installed on a construction site if the people there do not agree [87]. Monitoring devices can make people uncomfortable and even generate negative emotions. Furthermore, they may restrict creative behaviors if people realize their actions are being monitored [88].

4.2.3 Technical challenges

Computer vision-based research comprises two core steps: (1) data collection (e.g., 2D images, time-lapse images, and videos); and (2) analysis. In the case of data collection, the positioning and orientation of cameras need to be considered in order to capture appropriate images of objects. Computer vision methods obey the principle 'what you see is what you can analyze' [49]. Thus, the quality of the data collected is critical so that it can be effectively analyzed and used to accurately detect objects. Several factors can hinder the accuracy of object detection on-site, including poor lighting, cluttered backgrounds, and occlusions. As a result, multiple camera positions are needed on a site to overcome such problems.

Data analysis can be undertaken using several approaches, but the most common are either conventional shallow learning methods such as SVM [3,34] or deep learning utilizing CNNs [49]. As mentioned above, deep learning, particularly CNNs and Recurrent Neural Networks, is becoming an increasingly popular method for image classification and object detection due to its ability to automatically extract features [44,89]. While deep learning is widely used, several technical challenges confront its use in practice. First, deep learning can only learn the correlation between input and output and is not able to determine causality. In the case of safety monitoring, for example, not only is there a need to identify individuals and working conditions, but also the interactions between them. To date, this interaction (as identified in Section 3.4.1) has yet to be examined and thus needs to be a future line of inquiry. Second, there is an absence of a generic model that can be used to address a multitude of problems. Models have been developed and trained to tackle a specific problem scenario. In practice, if such models are to be effective, they will need to handle a wide range of tasks, which will require new algorithms to be developed.

4.2.4 Semantic gap

There is a 'semantic gap' between the low-level features extracted from images by computer vision algorithms and the high-level semantic meaning that people recognize in an image [90]. As a result of this semantic gap, developments in automated computer vision may be stymied. In the case of hazard identification, for example, not only do objects need to be detected from the images, but domain knowledge is also required. This domain knowledge is needed to provide context within the safety regulations [50].

Further research, therefore, could integrate ontology and computer vision techniques to address the semantic gap. Ontology is a popular approach applied for modeling information due to its computer-oriented and logic-based features, which provides a way to formally represent domain knowledge by the explicit definition of classes, relationships, functions, axioms, and instances [91,92]. It can represent knowledge with explicit and rich semantics, which can enable knowledge query and reasoning to be performed [93].

A framework combining ontology and computer vision techniques could be developed specifically focusing on developing:

domain knowledge that is formally represented by an ontology model where different rules can be encoded for the specific application, such as hazard reasoning and defect identification;

spatial relationships between objects, which can be automatically detected using computer vision algorithms [51]; and

a specific rule engine, such as Drools [94], used in conjunction with the ontology. In this instance, the domain knowledge can be applied to the detected objects and their relationships (e.g., automated hazard reasoning or structural defect recognition).

With the addition of a domain knowledge represented by ontologies, the ability of computer vision to understand scenes from images can be further improved.


URL: https://www.sciencedirect.com/science/article/pii/S0926580519303875

Earned Green Value management for project management: A systematic review

Benjamin Koke, Robert C. Moehler, in Journal of Cleaner Production, 2019

7.1 Earned Value Management

The final sample of 314 publications on Earned Value Management was assessed with regard to their research focus within the EVM procedure. Table 7 below shows the distribution of research topics. In addition, the categories "quality management", "procurement management", and "other" were added. "Other" comprises publications that discuss the general implementation of EVM, portray application examples (such as Scrum, Agile, etc.), deal with EVM on a general level (such as the PMI standard), or mention EVM only on a side note but still provide enough information to not be excluded right away (see Table 8).

Table 7. Results of distribution of EVM research topics.

| Research topic | Amount 2015 | Amount 2019 |
|---|---|---|
| Work definition (Project Scope) | 0 | 0 |
| Work Breakdown Structure (WBS) | 3 | 0 |
| Organisational Breakdown Structure (OBS) | 0 | 0 |
| Control Accounts (CA) | 0 | 0 |
| Scheduling | 1 | 1 |
| Establish Baseline (BCWS) | 5 | 4 |
| Budgeting | 1 | 1 |
| Definition of Performance Metrics (Earning Rules) | 3 | 10 |
| Measure Performance (BCWP) | 5 | 11 |
| Record Actual Costs (ACWP) | 0 | 0 |
| Determine Project Performance (CV, SV, CPI, SPI) | 41 | 20 |
| Forecasting (EAC, ETC) | 48 | 27 |
| Procurement Management | 2 | 1 |
| Quality Management | 10 | 6 |
| Other | 95 | 51 |
| Sum | 223 | 132 |

What is the formula of the Schedule Performance Index (SPI)?

The schedule performance index (SPI) is a measure of the conformance of actual progress (earned value) to the planned progress: SPI = EV / PV.

What does a Schedule Performance Index (SPI) of 0.67 mean?

A good SPI is equal to or above 1, meaning you are on or ahead of schedule. In the example above, we calculated an SPI of 0.67, which indicates that the work is behind schedule. An SPI of 0.67 means that the project team completes only 0.67 hours' worth of work for every hour of planned work.

What does an SPI of 0.8 mean?

SPI = EV / PV = 14,400 / 18,000 = 0.8. This means that for every estimated hour of work, the project team is completing only 0.8 hours (48 minutes). If the ratio has a value higher than 1, the project is progressing well against the schedule.

How is a Performance Index calculated?

The Cost Performance Index (CPI) is a method for calculating the cost efficiency and financial effectiveness of a specific project through the following formula: CPI = earned value (EV) / actual cost (AC). A CPI ratio with a value higher than 1 indicates that a project is performing well budget-wise.