Pages 43-75



From page 43...
... In highway maintenance, examples of output measures are lane miles of roadway surfaced, the number of bags of litter picked up, and the number of acres mowed.
Inputs
Inputs are the resources used to deliver a product or service, perform an activity, or undertake a business process.
From page 44...
... More recently, especially since the enactment of the Government Performance and Results Act of 1993, the focus has been increasingly on outcomes.
Outcomes
Outcomes are the results, effects, or changes that occur due to delivering a product or service, conducting an activity, or carrying out a business process.
From page 45...
... Measures of value added include an increase in customer satisfaction or an increase in economic value from, for example, travel time saved or life-cycle costs avoided. As one transitions from a focus on outcomes to value added, the perspective shifts from effectiveness to the net value added to the customer and provides the basis for resource allocation in economic terms.
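To make the idea of value added in economic terms concrete, the following is a minimal Python sketch of converting travel time saved into a dollar figure; the traffic volume, time savings, and value-of-time rate are illustrative assumptions, not figures from the report.

```python
# Hypothetical illustration only: converting an outcome (travel time saved)
# into an economic measure of value added. All numbers are assumed.
vehicles_per_day = 20_000          # traffic affected by the maintenance work
minutes_saved_per_vehicle = 1.5    # assumed average travel time saved per trip
value_of_time_per_hour = 18.00     # assumed value of travel time, $/vehicle-hour
days_per_year = 365

annual_value_added = (
    vehicles_per_day
    * (minutes_saved_per_vehicle / 60)  # minutes -> hours
    * value_of_time_per_hour
    * days_per_year
)
print(f"Estimated annual value of travel time saved: ${annual_value_added:,.0f}")
```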
From page 46...
... Benchmarking is ultimately about making continuous improvements through the identification and adoption of best practices in order to equal or exceed the satisfaction of the customer. Measuring changes in customer satisfaction over time provides the feedback regarding how well you are doing.
From page 47...
... It is vitally important to recognize that the NQI survey, in attempting to determine customer satisfaction, focuses upon important attributes of highways. In the case of maintenance, the key issue is how satisfied customers are with the attributes of maintenance products and services -- for example, the NQI asks how satisfied survey respondents are regarding the smoothness of roads.
From page 48...
... Visual Appeal: h. Appearance of sound barriers.
From page 49...
... Figure 7a. Satisfaction with Attributes of Highway System; panel: Satisfaction with Visual Appeal.
From page 50...
... Figure panels: Satisfaction with Travel Amenities; Figure 7d. Satisfaction with Bridge Conditions; appearance and variety of rest areas.
From page 51...
... Kentucky, for example, compared the results of customer satisfaction surveys conducted in 1995 and 1996 with the national survey results.2 Potentially, results could be compared with other states to do a simple form of customer-driven benchmarking. The significance of the NQI survey is that the maintenance-related questions represent a set of widely or commonly recognized measures of customer satisfaction.
From page 52...
... rate their satisfaction on a scale of 0 to 10, where 10 represents "extremely satisfied" and 0 "extremely dissatisfied." This question is intended to provide the California DOT (Caltrans) with feedback regarding how the state does in responding to maintenance problems associated with mudslides, floods, earthquakes, and so on.
From page 53...
... This set of customers is primarily concerned with avoiding road user costs such as travel time, vehicle operating costs, and accident costs. The second set of customers consists of those who pay for the roads and generally, but not necessarily, consists of those who use the roads.
From page 54...
... Appendix D includes a discussion of how to calculate life-cycle costs, user costs, and willingness to pay.
COMMONLY RECOGNIZED MEASURES
A prerequisite for benchmarking of any type, including customer-driven benchmarking, is that benchmarking participants agree on the measures that will be used.
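The life-cycle cost calculation referenced above (and covered in Appendix D) can be sketched as a net present value of future expenditures. The cash flows, timing, and discount rate below are assumptions for illustration, not values from Appendix D.

```python
# Minimal life-cycle cost sketch: discount an assumed stream of agency costs
# to present value. Cash flows and the 4% discount rate are illustrative only.
def life_cycle_cost(costs_by_year, discount_rate):
    """Present value of a list of (year, cost) pairs."""
    return sum(cost / (1 + discount_rate) ** year for year, cost in costs_by_year)

# Example: initial treatment plus periodic maintenance over 20 years.
cash_flows = [(0, 250_000), (7, 60_000), (14, 60_000), (20, 40_000)]
print(f"Life-cycle cost (NPV at 4%): ${life_cycle_cost(cash_flows, 0.04):,.0f}")
```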
From page 55...
... It was sufficient for workshop participants to identify areas where there is general agreement that commonly recognized measures exist, particularly ones that relate directly or indirectly to the customer. The adopted commonly recognized measures exist side-by-side with other performance measures that many states have already developed and generally use for maintenance management and asset management.
From page 56...
... Table 1. Commonly Recognized Measures Adopted by Consensus (continued on next page)
From page 57...
... Key Issues in Adopting Agreed-Upon Measures
When adopting benchmarking measures, there are a number of key issues to consider:
♦ Desirable attributes of the measurement scale;
♦ Types of measures to avoid;
♦ Selection of appropriate units;
♦ Segment length;
♦ Repeatability, reliability, and accuracy; and
♦ Protocols.
From page 58...
... Examples of continuous scales are as follows:
♦ Extent of bridge deck distress measured in terms of the percentage of the deck area affected,
♦ Roughness measured according to the International Roughness Index (IRI),
♦ Shoulder edge drop-off measured in inches or centimeters and arbitrarily small fractions thereof,
♦ Retroreflectivity of signs measured in candelas per footcandle per square foot,
♦ Mean response time to fix a problem, and
♦ Mean time between failures.
From page 59...
... You are likely to encounter a measurement system that involves probabilistic condition states. The measurement scale is likely to be a discrete scale such as 1, 2, 3, 4, and 5.
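As an illustration of what a probabilistic condition-state measurement might look like in data, the sketch below (hypothetical shares, not drawn from any agency's system) stores the share of an asset element in each discrete state and summarizes it as an expected state.

```python
# Hypothetical probabilistic condition states: estimated share of a bridge
# element in each discrete state 1-5 (1 = best, 5 = worst). Illustrative only.
state_probabilities = {1: 0.55, 2: 0.25, 3: 0.12, 4: 0.06, 5: 0.02}
assert abs(sum(state_probabilities.values()) - 1.0) < 1e-9

# A common summary is the probability-weighted (expected) condition state.
expected_state = sum(state * p for state, p in state_probabilities.items())
print(f"Expected condition state: {expected_state:.2f}")
```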
From page 60...
... The measurement process of benchmarking is not about targets, objectives, or goals; it is about measuring performance along some scale to discern best performers so that benchmarking partners can explore what work methods and business processes lie behind best performances and can adopt or improve upon best practices. You should also avoid choosing measures that represent thresholds for actions, such as minimum tolerable conditions (e.g., a warrant to replace a traffic signal)
From page 61...
... Repeatable means that different people who apply the measure and take a measurement under the same circumstances obtain the same or nearly the same result. To obtain repeatability usually requires training.
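One informal way to test repeatability is to have several trained raters measure the same sections and compare the spread of their readings; the sketch below uses made-up edge drop-off readings purely to illustrate the check.

```python
# Hypothetical repeatability check: three raters measure edge drop-off
# (inches) on the same five sections. A small spread across raters suggests
# the measure is repeatable; a large spread suggests more training or a
# tighter protocol is needed. All readings are fabricated.
from statistics import mean, stdev

readings = {  # section id -> one reading per rater
    "S1": [1.8, 1.9, 1.7],
    "S2": [0.5, 0.6, 0.5],
    "S3": [2.4, 2.2, 2.5],
    "S4": [1.1, 1.0, 1.2],
    "S5": [0.0, 0.1, 0.0],
}
for section, values in readings.items():
    print(f"{section}: mean={mean(values):.2f} in, spread={stdev(values):.2f} in")
```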
From page 62...
... An outline of a protocol for edge drop-off (taken from the proceedings of the National Workshop on Commonly Recognized Measures) might consist of the following: 1.
From page 63...
... Do you measure where snow sticks to the road in one standard place or along every section of road and then take an average of the time the snow starts to stick?
3 Proceedings, National Workshop on Commonly Recognized Measures.
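The choice of protocol directly changes the number reported. As a toy illustration (all values assumed), the same storm yields different "time until snow sticks" figures depending on whether you measure at one standard location or average over every section:

```python
# Hypothetical minutes from the start of snowfall until snow sticks,
# observed on four road sections during the same storm.
times_by_section = {"A": 20, "B": 35, "C": 50, "D": 65}
standard_section = "B"  # the single agreed-upon measurement location

single_site = times_by_section[standard_section]
network_average = sum(times_by_section.values()) / len(times_by_section)
print(f"Protocol 1 (standard site only): {single_site} min")
print(f"Protocol 2 (average of all sections): {network_average:.0f} min")
```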
From page 64...
... A CATALOG OF MEASURES
Appendix B provides a catalog of measures you may want to use for benchmarking. Many of the measures presented are widely used and include those identified as "commonly recognized measures" at the national workshop on the topic.
From page 65...
...
♦ Commonly recognized at the National Workshop on Commonly Recognized Measures for Maintenance;
♦ Repeatable, reliable, and accurate -- in other words, an assessment of whether the measure has these attributes; and
♦ Cost of using the measure or other important issues.
From page 66...
... Pavement Smoothness (survey). Measure: NQI or other survey question asking customer satisfaction regarding pavement smoothness (survey question on pavement smoothness). Scale: 1–5 response scale. Repeatable/reliable/accurate: standard NQI survey question; not accurate for jurisdictions lower than the state unless a separate survey is administered. Cost: low cost to use NQI survey results; moderate to high cost to develop and administer your own survey that includes a question on pavement smoothness.
Pavement Smoothness (potholes). Measure: number of potholes of specified size per unit distance. Units: number per unit distance. Repeatable/reliable/accurate: potholes are easily observed, but the number per unit distance can be difficult to count.
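The pothole measure above normalizes counts by distance; a minimal sketch (segment names, lengths, and counts are invented) of computing potholes per mile:

```python
# Hypothetical pothole counts per maintenance segment.
segments = [  # (segment id, length in miles, potholes of specified size)
    ("Route 1, MP 10-15", 5.0, 12),
    ("Route 2, MP 0-8", 8.0, 3),
    ("Route 3, MP 22-25", 3.0, 9),
]
for seg_id, miles, count in segments:
    print(f"{seg_id}: {count / miles:.1f} potholes per mile")

# Network-level rate: total potholes divided by total mileage.
network_rate = sum(c for _, _, c in segments) / sum(m for _, m, _ in segments)
print(f"Network average: {network_rate:.1f} potholes per mile")
```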
From page 67...
... Faulting. Units: inches. Repeatable/reliable/accurate: repeatable, reliable, and reasonably accurate measures obtained using a ruler. Cost: low cost to do for sample sections or if data already exists; high cost to obtain comprehensive coverage if data doesn't exist.
Preservation Characteristic (appearance of deterioration, raveling, water infiltration). Measure: extent and severity of different types of cracking (alligator, longitudinal, transverse). Units: percent of area covered or length of cracks and a rating of severity on a scale. Repeatable/reliable/accurate: challenge in maintaining consistency among raters; automated distress identification technology not highly accurate. Cost: much lower cost to do for sample sections in comparison to comprehensive network coverage.
Overall Pavement Condition. Measure: Health Index. Units: some type of index, e.g., from 0–100. Repeatable/reliable/accurate: requires construction of an index reflecting key pavement attributes; each characteristic can be measured with varying degrees of reliability. Cost: low to high cost to develop and apply the index, depending upon the availability of data to calculate index components.
Overall Level of Service. Measure: Visual Level of Service Condition Rating. Units: rating scale of A, B, C, D, or E. Repeatable/reliable/accurate: visual rating scales often combine more than one characteristic, so it is difficult to portray and isolate the condition of different attributes. Other: mainly useful for communicating to policy makers and the general public.
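The "Health Index" entry implies combining several pavement attributes into a single 0–100 score. The sketch below is one hedged way such an index could be constructed; the attributes, weights, and scores are assumptions, not the report's definition.

```python
# Hypothetical 0-100 pavement health index: a weighted average of attribute
# scores that have each been normalized to a 0-100 scale. Weights and scores
# are illustrative assumptions only.
weights = {"smoothness": 0.40, "cracking": 0.30, "rutting": 0.20, "faulting": 0.10}
scores  = {"smoothness": 82,   "cracking": 65,   "rutting": 90,   "faulting": 75}

assert abs(sum(weights.values()) - 1.0) < 1e-9
health_index = sum(weights[k] * scores[k] for k in weights)
print(f"Pavement health index: {health_index:.1f} / 100")
```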
From page 68...
... RESOURCE MEASURES
The next broad class of measures needed for benchmarking is resources, composed of labor, equipment, and material, as well as financial costs.
Labor
Labor is an important input to the production of maintenance products and services.
From page 69...
... Key sources of labor data are the agency's maintenance management system and the payroll system. Some agencies might also have a database containing information on the training of each employee.
From page 70...
... Sometimes, however, it is better to employ measures of the raw labor, equipment, and material inputs instead, because there can be local and regional differences in the unit cost of labor, equipment, and materials. If you use total resource costs, or even the cost of each input to maintenance production, you will not easily be able to distinguish how much the physical inputs, as opposed to variation in input prices, are contributing to the outcomes.
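A small numeric sketch of this point (all figures invented): two agencies with identical total labor cost can differ substantially in the labor hours actually applied.

```python
# Hypothetical comparison: identical total labor cost, different hours and
# wage rates. Total cost alone hides which agency used more physical input.
agencies = {
    "Agency A": {"labor_hours": 1_000, "wage_rate": 30.0},  # $/hour
    "Agency B": {"labor_hours": 1_250, "wage_rate": 24.0},
}
for name, d in agencies.items():
    total_cost = d["labor_hours"] * d["wage_rate"]
    print(f"{name}: {d['labor_hours']} hrs x ${d['wage_rate']:.2f}/hr = ${total_cost:,.0f}")
```

Both totals come to $30,000, yet the second agency applied 25 percent more labor at a lower wage rate, a difference that cost data alone would mask.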
From page 71...
... This is known as "activity-based costing." If your agency does not have such an accounting system, eventually you may want to implement activity-based costing to identify your fixed and variable costs by activity, product, and service.
HARDSHIP FACTORS
In addition to outcomes and resources, the third major group of measures needed for customer-driven benchmarking is hardship factors.
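Returning to the activity-based costing idea mentioned above: the following is a minimal sketch (activities, dollar figures, and the labor-hour allocation rule are all assumptions) of assigning direct costs to activities and spreading shared overhead across them.

```python
# Hypothetical activity-based costing sketch: direct costs are assigned to
# each maintenance activity; shared overhead is allocated in proportion to
# direct labor hours. All activities and figures are illustrative.
activities = {  # activity -> (direct cost in dollars, direct labor hours)
    "pothole patching": (120_000, 4_000),
    "mowing":           ( 80_000, 3_000),
    "sign replacement": ( 50_000, 1_000),
}
overhead = 60_000  # shared supervision, shop space, etc.

total_hours = sum(hours for _, hours in activities.values())
for name, (direct, hours) in activities.items():
    allocated = overhead * hours / total_hours
    print(f"{name}: direct ${direct:,} + overhead ${allocated:,.0f} = ${direct + allocated:,.0f}")
```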
From page 72...
... The drawback to further data collection is that it requires additional effort on the part of crew leaders to record this information, which detracts from getting their jobs done. An alternative approach to crew leaders recording weather data is to gather data from other sources and to combine it in a database with accomplishment and resource utilization information reported in daily work reports.
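The "combine it in a database" alternative can be as simple as joining daily work reports to weather observations by date. The sketch below uses invented field names and records to show the join; a real implementation would pull from the maintenance management system and an outside weather data source.

```python
# Hypothetical join of daily work reports with externally sourced weather
# data, keyed on date. Field names and values are invented.
work_reports = [
    {"date": "2003-02-10", "route": "I-80", "activity": "snow plowing", "lane_miles": 45},
    {"date": "2003-02-11", "route": "I-80", "activity": "snow plowing", "lane_miles": 30},
]
weather = {
    "2003-02-10": {"snowfall_in": 6.5, "min_temp_f": 18},
    "2003-02-11": {"snowfall_in": 1.0, "min_temp_f": 25},
}

# Attach the matching weather observation to each work report record.
combined = [{**report, **weather.get(report["date"], {})} for report in work_reports]
for row in combined:
    print(row)
```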
From page 73...
... Roadway Attributes Certain roadway attributes affect the productivity and outcomes of maintenance work -- for example, the presence of shoulders makes it easier for crews to park their vehicles and work on roadside safety features such as guardrails and signs. In the absence of shoulders, work zones will probably need to be established, which requires blocking off a lane of traffic and takes time that could otherwise be spent performing maintenance work.
From page 74...
... Output information is essential for analyzing productivity. You may also want to estimate production functions that predict output as a function of labor, equipment, material, and environmental factors.
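A production function of this kind is often estimated as a log-linear regression of output on inputs. The sketch below fits such a model to fabricated data with ordinary least squares; a real analysis would use records from the maintenance management system and would add environmental factors as further terms.

```python
# Sketch of estimating a simple production function: ln(output) regressed on
# ln(labor hours) and ln(equipment hours). All data are fabricated.
import numpy as np

labor_hours     = np.array([120, 200, 150, 300, 250, 180])
equipment_hours = np.array([ 60,  50,  70,  90, 110,  40])
output_lane_mi  = np.array([ 11,  14,  13,  24,  23,  12])

X = np.column_stack([np.ones(len(output_lane_mi)),   # intercept
                     np.log(labor_hours),
                     np.log(equipment_hours)])
coef, *_ = np.linalg.lstsq(X, np.log(output_lane_mi), rcond=None)
print(f"intercept={coef[0]:.2f}, labor elasticity={coef[1]:.2f}, "
      f"equipment elasticity={coef[2]:.2f}")
```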
From page 75...
... ♦ Linkage to outcomes. Some analysts find that the most logical way to establish a measure of certain types of outcomes is to develop a functional relationship between outputs and various types of outcomes.
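Where such an output-to-outcome relationship is plausible, it can be estimated directly from paired observations. The sketch below fits a straight line between a patching output and an assumed ride-quality outcome; all numbers are fabricated and serve only to show the mechanics.

```python
# Hypothetical link from an output (tons of patching material placed per
# mile) to an outcome (reduction in IRI roughness, inches per mile).
import numpy as np

tons_per_mile   = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
iri_improvement = np.array([3.0, 7.0, 10.0, 12.0, 17.0])

slope, intercept = np.polyfit(tons_per_mile, iri_improvement, 1)
print(f"Fitted relationship: IRI improvement = {intercept:.1f} + {slope:.1f} * tons per mile")
```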

