
Highway Maintenance Quality Assurance: Final Report (1997)

Chapter: 4 Development of Prototype QA Program

Suggested Citation:"4 Development of Prototype QA Program." Transportation Research Board. 1997. Highway Maintenance Quality Assurance: Final Report. Washington, DC: The National Academies Press. doi: 10.17226/6346.

CHAPTER 4. DEVELOPMENT OF PROTOTYPE QA PROGRAM

Introduction

The primary task of developing a prototype QA program for the maintenance of highway facilities was undertaken following thorough reviews of the collected literature and detailed evaluations of current highway agency quality programs. The prototype program began as a collection of sound maintenance management and maintenance quality practices, most of which formed the core of the program. Although the ideas and procedures behind some of these practices were modified to better capture today's quality management precepts, other practices were readily acceptable for inclusion as part of the prototype QA program.

As both new and old ideas of assessing, controlling, and assuring quality were introduced into the core program, the prototype evolved into a flexible, multi-component program, fully adaptable by interested agencies in two stages: development and implementation. By this point in the development process, documentation of the various component principles, procedures, and interactions fell under the ensuing task of Implementation Manual development.

This chapter describes the work conducted in developing the prototype QA program for highway maintenance. It consists of several sections, beginning with a discussion of the work approach taken in developing the prototype program. The work approach section is followed by discourse on how various aspects of the program, such as the types of data required to run the program and obtaining and using customer input, were addressed during the development process. The final section summarizes the QA program and briefly describes its transformation into an implementation manual.
Work Approach

The basic idea in developing the prototype QA program was to establish a core set of practices using the various tried-and-true management techniques unearthed in the literature review and agency surveys and to introduce into that framework, where possible and practical, new, effective quality management concepts. In essence, yesterday's proven methods were to be infused with tomorrow's highly credible ideas.

Throughout the evolution of the prototype program, a clear focus was maintained on the goals of the program. These goals consisted primarily of the following:

· Maintain the highway network at an acceptable LOS based on all customer input into maintenance activities.

· Develop minimum criteria for a QC process of daily maintenance operations to ensure that operations are being conducted in an effective manner.

· Provide a documented means of evaluating the condition of the highway system and the resources used to achieve an acceptable LOS.

· Improve the way in which highway maintenance operations are performed by preventing problems that would otherwise have developed.

Several secondary goals were set forth to ensure that the prototype QA program would receive maximum consideration from quality-seeking agencies. These program goals included the following:

· Functional in a centralized or decentralized management environment and in both large and small agencies.

· Produces LOS ratings regardless of the level of contract maintenance.

· Produces repeatable and reliable ratings for a variety of conditions and features.

· Provides for customer involvement.

· Cost effective to implement.

A paramount regard in the development of the prototype program was the need for the ability to assess maintenance quality at the network, project, and activity levels. Each of these levels exists to some degree in highway agencies, and the ability to manage from all three levels is a highly desirable and powerful feature. A brief discussion of each maintenance level is provided below.

· Network-level QA: Refers to the quality of the maintenance management of an entire highway system. This management originates at the central office level and filters through to the district/region level and then the subdistrict/area level. The primary concern with network-level QA is appropriate allocation of funds and the establishment of network-wide standards.

· Project-level QA: Refers to the quality of various maintenance activities applied to a particular bridge or section of highway to keep it at a desirable LOS.
The maintenance activities performed at this level are done as part of an approved maintenance plan formulated during design or are done in order to bring conditions among individual projects to more consistent levels.

· Activity-level QA: This level of highway maintenance QA includes the routine activities performed by a maintenance crew and those activities that are done as an immediate response to a change in conditions (e.g., snow and ice removal, accident clean-up, guardrail repair). The most important aspect of this level is making sure that the end results of specific activities are consistent among maintenance crews and are of high quality.
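One way to picture the relationship among the three QA levels is as a rating rollup: activity-level results on inspected segments feed project-level (district) scores, which in turn feed a single network-level score. The sketch below is purely illustrative; the district names, segment ratings, and simple averaging scheme are assumptions, not the program's prescribed arithmetic.

```python
# Hypothetical rollup of maintenance quality ratings across the three QA
# levels. All names and numbers are illustrative assumptions.
from statistics import mean

# Activity/project level: share of inspected features passing on each
# sampled roadway segment, grouped by district (hypothetical data).
segment_ratings = {
    "district_A": [0.90, 0.80, 0.85],
    "district_B": [0.70, 0.75],
}

def project_level_score(district):
    """Average the segment ratings within one district."""
    return mean(segment_ratings[district])

def network_level_score():
    """Average the district scores into a single network-level rating."""
    return mean(project_level_score(d) for d in segment_ratings)

print(project_level_score("district_A"))
print(network_level_score())
```

A real implementation would weight districts by mileage or traffic rather than averaging them equally; the point here is only that each management level summarizes the one below it.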

Program Development

Development of the prototype QA program proceeded largely in the fashion originally planned. Several outstanding managerial and technical processes were identified during the agency review process. The underlying principles and methods of those processes were carefully examined to determine the extent to which they conformed with basic quality tenets, such as focus on customers, use of statistical process control (SPC) techniques, and instituting training. Several processes were found to be well rooted in quality principles and were immediately embraced for use in the prototype program. Others were found to be somewhat lacking and were modified to better fit the quality mind set.

At this stage of development, each process began to be viewed as an individual component (or step) in an overall quality management program. The components were then arranged in a logical sequence that included a program development phase and a program implementation phase. In keeping with CQI principles, a portion of the implementation phase was designed as a quality enhancement cycle that would allow agencies to integrate new technologies and customer feedback, as well as adjust to changes in funding.

The flow chart in figure 4 shows the various components that were used to define the prototype QA program, as well as the order in which they were established. A brief discussion of each key component is provided below, whereas complete details of all components are given in the QA program Implementation Manual.

· Key Maintenance Activities: Grouping of key work activities into like categories (i.e., maintenance elements) for the purpose of evaluating maintenance quality.

· Customer Expectations: The one-time collection of highway users' expectations concerning the LOS at which an agency should maintain its highway system.
· LOS Criteria: Clear and measurable definitions concerning the points at which deficiencies cause maintenance features/characteristics to no longer meet expectations. LOS criteria are usually expressed in terms of amount and extent of deterioration (e.g., size and frequency of potholes, amount of litter per mile).

· Weighting Factors: Factors that (a) reflect the relative importance of the individual maintenance features/characteristics that comprise a maintenance element and (b) reflect the relative importance of the individual maintenance elements that comprise a highway facility, on the whole.

· Maintenance Priorities: Establishment of the order in which work activities will be conducted in the event that a shortage of resources occurs. Each work activity is prioritized according to the four fundamental maintenance objectives, ordered as follows:

1. Safety of the traveling public.
2. Preservation of the investment.

Figure 4. Prototype QA program flow chart. (Phase I, Program Development: key maintenance activities and features/characteristics; roadway segment population and sample segment selection process; LOS data collection, analysis, and reporting techniques; LOS rating system; customer expectations; LOS criteria and target LOS; weighting factors; maintenance priorities; agency management approval. Phase II, Program Implementation: baselining existing LOS; workload inventory; activity cost data; resource funding request (zero-based budget); partial or full program implementation; program priorities and emergencies; formal LOS inspections; QC of LOS rating teams; LOS analysis and reporting; customer satisfaction; agency monitoring; process updating using baseline, current, and target LOS's.)

3. User comfort and convenience.
4. Aesthetics.

· Baselining Existing LOS (Pilot Study): Determining the existing LOS of the maintenance of the agency's highway system using the components above.

· Workload Inventory: Information on the type, location, and dimensions of key maintenance features that can be used to estimate potential workloads for maintenance activities.

· Activity Cost Data: Actual cost for performing a unit of work for a specific activity. For agencies that do 100 percent of their work using agency employees, this is usually readily available; however, for agencies that have a significant mix of in-house and contract maintenance forces doing the same activity, a proportional blend of the cost data will be required.

· Zero-Based Budget: Application of the activity cost data and field trial results toward determining the costs required to produce a specific target LOS established from customer expectation input.

· Formal LOS Inspections, Analysis, and Reporting: Periodic maintenance ratings stemming from random inspections of short segments of the entire highway system maintained by an agency.

· Customer Satisfaction: The periodic assessment of how satisfied highway users are with the LOS being provided by a maintenance agency.

The prototype QA program was designed to encompass the maintenance elements believed to be most common among highway agencies. These elements included traveled roadway (i.e., mainline pavement), shoulder (paved or unpaved), roadside, drainage features, traffic services, and vegetation and aesthetics. Because bridges and snow and ice control were also recognized as substantial parts of several agencies' maintenance programs, methods for applying the QA program to these elements were specially formulated.

Data Elements

Probably one of the most important considerations of any program is the type of information required in order for the program to be properly administered.
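The LOS criteria and weighting factors described among the components above combine arithmetically: the share of inspected features meeting their LOS criteria is weighted within each maintenance element, and the element scores are weighted into a single facility rating. The following minimal sketch shows that combination; every element name, feature name, weight, and pass rate is a hypothetical assumption, not a value from the report.

```python
# Illustrative sketch: combining pass/fail LOS inspection results with
# weighting factors. All names and numbers are hypothetical.

# Share of inspected samples meeting the LOS criteria, per feature,
# grouped by maintenance element.
inspections = {
    "traveled_roadway": {"potholes": 0.95, "rutting": 0.80},
    "drainage":         {"ditches": 0.70, "culverts": 0.90},
}

# Weighting factors: relative importance of features within an element,
# and of elements within the facility as a whole (each group sums to 1.0).
feature_weights = {
    "traveled_roadway": {"potholes": 0.6, "rutting": 0.4},
    "drainage":         {"ditches": 0.5, "culverts": 0.5},
}
element_weights = {"traveled_roadway": 0.7, "drainage": 0.3}

def element_score(element):
    """Weighted share of an element's features meeting LOS criteria."""
    return sum(inspections[element][f] * w
               for f, w in feature_weights[element].items())

def overall_los():
    """Roll weighted element scores up to one facility-level LOS rating."""
    return sum(element_score(e) * w for e, w in element_weights.items())

print(overall_los())
```

Comparing the resulting rating against the target LOS derived from customer expectations is what drives the zero-based budget request in the flow chart.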
The ability of supervisors to make sound managerial decisions is greatly strengthened when the right kind of information is available to them. The data elements considered to be essential or highly beneficial to the QA program were identified at the outset of the program development and consist of the following:

· Assessments of highway customer expectations: Customer survey ratings of the importance of various maintenance aspects on different facility types.

· Roadway features inventory: MMS listing of quantities, locations, and characteristics of maintenance-related roadway features/characteristics.

· Internal assessments of maintenance quality: Technical pass-fail condition ratings of various maintenance features/characteristics, such as guardrails and pavement rutting.

· External assessments of maintenance quality: Customer survey ratings of the level of satisfaction with maintenance-related roadway features/characteristics.

· Annual maintenance costs: Total costs allocated and expended for a given year for each maintenance activity performed by an agency.

· Work accomplishments: Productivity and resource (equipment, labor, and materials) usage data on each maintenance crew.

Other important data elements included the documented costs and resource requirements associated with operating the QA program.

Availability of Data from Other Management Information Systems

Determining the availability of data in existing management information systems was an important task in the development of the QA program. If a considerable amount of the data described in the previous section was found to be available in other management systems, such as PMS's and BMS's, and those data were accurate and easily accessible, then the scope of the LOS rating system might be drastically reduced, since the subject data could be extracted from the appropriate management system rather than collected a second time in the field.

During the August 1995 field reviews of the seven selected highway maintenance agencies, none of the agencies indicated incorporating data from other management information systems into their LOS rating process. The general sense among the agencies was that data interchange was too difficult or inefficient, or that the data contained in the other systems were not acceptable for use. To further investigate this matter, the data elements commonly contained in four different management systems (PMS's, BMS's, SMS's, and infrastructure management systems [IMS's]) were examined.
The sources for this process included pavement condition survey manuals and reports and bridge inventory manuals obtained from several SHAs, conversations with key highway agency officials, and various pieces of literature on infrastructure management. Summaries of the findings pertaining to each of the four management information systems are provided in the sections below.

Pavement Management Systems

A PMS is defined as an established, documented procedure that treats all of the pavement management activities (planning, budgeting, design, construction, maintenance, monitoring, research, rehabilitation, and reconstruction) in a systematic and coordinated manner. PMS's usually include condition surveys, a database of pavement-related information, analysis schemes, decision criteria, and implementation

procedures, all of which can be used to establish priorities for overlays, maintenance, and allocation of funds; budget preparation; development of rehabilitation strategies; and identification of problem areas. In essence, a PMS is a data bank for a network of pavement sections (Peterson, 1987).

The element of a PMS considered to have the most potential to serve as an input interface for the QA program was pavement condition survey information. Typically, four primary condition indicators are taken as part of a PMS: structural capacity, friction, roughness/ride quality, and distress. Though highway agencies use a variety of methods to collect condition indicator information, and agencies key in on different condition measures, they all share an overall objective of determining how well pavements are performing.

The structural capacity of a pavement is today most commonly measured using nondestructive deflection testing (NDT) techniques. A falling weight deflectometer (FWD) is used at selected locations throughout a pavement section to identify weak areas, to estimate the strength of the pavement system, and to predict the load-carrying capability of the pavement section given the amount of traffic it experiences. Because pavement maintenance actions provide little or no structural improvement, deflection data were not considered to be suitable for use as indicators of maintenance quality.

Pavement friction is a safety-related condition measure that describes the slipperiness of a pavement surface. It is most commonly expressed as a friction rating or skid number; the lower the value, the more potentially hazardous the pavement is to motorists. Application of skid numbers or friction ratings to the LOS rating system may or may not be appropriate, depending on an agency's policy for correcting slippery pavements.
In some agencies, the prime responsibility for improving pavements with low friction rests with maintenance, whereby they are tasked, through their own forces or through contracted forces, with applying surface treatments or mechanized patches or performing some sort of surface milling or grinding. In other agencies, however, correction of slippery pavements is a rehabilitation action item and is contracted through other departments within the agency.

The ride quality or roughness of pavements is evaluated by many SHAs on an annual or biennial basis using either a response-type measuring instrument, such as the Mays Ride Meter and the PCA Roadmeter, or an inertial profiling vehicle, such as the South Dakota Profiler and the K.J. Law Profilometer. Both system types generate a longitudinal roughness parameter, expressed as in/mi, for a specified length of pavement section, with the latter type measuring the longitudinal profile of a pavement and then computing an international roughness index (IRI) based on the measured profile and standardized vehicle response characteristics.

Although maintenance is obligated to correct various surface defects (bumps, holes, dips, swells) that can collectively result in a rough ride, it is primarily concerned with localized rough spots from the standpoint of safety. Any bumps, holes, dips, or

swells significant enough to cause a hazard to the traveling public are under the immediate domain of maintenance. As a pavement becomes more and more deteriorated, the amounts of these distresses become greater and greater. However, for reasons of safety, the severity levels of the distresses will be kept somewhat in check through proper maintenance.

The one data element of roughness/ride quality surveys that was found to have potential use in the QA program is the longitudinal profile measured by inertial profilometers. These computerized profiles are usually stored for a time after completion of a survey and may be available for visual examination. Though the profiles are usually smoothed or filtered, the possibility exists that vertical deviations identified in the computerized profile are indicative of localized defects that have not been treated by maintenance.

The last type of pavement condition indicator data is distress. Evidence of distress is manifested through various characteristics, such as cracking, rutting, and potholes in asphalt pavement and spalling, cracking, and faulting in concrete pavement. Most highway agencies perform either a visual or automated distress survey in order to quantify the amount and severity of each distress type present in the pavement.

The correction or treatment of distresses is not the full responsibility of maintenance. Some distresses, such as fatigue cracking, rutting, or shattered slabs, are the result of structural deficiencies and, when they occur on a large scale, must be structurally improved through appropriate rehabilitation strategies. Nevertheless, maintenance may be involved in providing temporary fixes until a long-term rehabilitation effort can be conducted. Other distresses, such as bleeding, spalling, potholes, and bumps, require functional improvements in order to restore adequate safety and, to a lesser extent, riding comfort.
Treatment of these types of distresses is usually provided by maintenance and, therefore, the condition data for these distresses may be suitable for use as indicators of maintenance quality. Still other distresses, such as longitudinal and transverse cracking and joint seal damage, are also largely the responsibility of maintenance. These types of distresses are treated to preserve the pavement investment (i.e., extend the life of the pavement). Condition data for these types of distresses may also be applicable in the assessment of maintenance quality.

The suitability of PMS distress data for use in the LOS rating system was found to be dependent on many factors, the foremost of which include the following:

· Frequency of pavement surveys: Most PMS's entail annual (100 percent of pavement sections sampled each year), biennial (100 percent of pavement sections sampled every 2 years), or triennial (100 percent of pavement

sections sampled every 3 years) surveys of a given facility type. For instance, several SHAs, like the Indiana and Wisconsin DOTs, perform annual surveys of their interstate pavements and biennial surveys of their non-interstate pavements. Since maintenance quality should be rated at least annually, PMS data based on biennial or triennial surveys would not be adequate.

· Timing of pavement surveys: Pavement management distress surveys are often performed in the spring and early summer to avoid the busy construction schedule. LOS inspections, on the other hand, may be done routinely with little regard to yearly seasons or construction/maintenance seasons, or they may be done at selective times of the year. Any attempted linkage between the two systems must take into consideration the need for consistency in timing.

· Length of pavement survey segments: LOS sample segments will generally range between 0.1 and 1.0 mi (0.16 and 1.61 km). PMS survey segments may range from 100 ft (30.5 m) to the entire length of a pavement section (which could be several miles). If there is to be a linkage between the two systems, a common length must be established.

· Availability of desired data: Several highway agencies collect only key distress data, such as cracking, rutting, and patching. In such instances, the possibility of using PMS data for a more complete assessment of maintenance quality is substantially reduced.

· Accuracy of data and type of pavement surveys: PMS distress data are collected in a myriad of fashions, ranging from visual surveys of randomly selected samples to automated continuous surveys. For PMS data to be useful in the LOS ratings, the information must be collected from the same sample units as the LOS ratings, and the condition surveys must be objective, accurate, and repeatable.

Should an agency be able to overcome the above linkage obstacles, it still must consider the nature of the PMS distress data collected.
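The linkage factors above lend themselves to a simple screening check: describe an agency's PMS survey practice and flag each obstacle that would keep its distress data out of the LOS rating system. The sketch below is a hypothetical illustration only; the field names and the specific thresholds encode the factors discussed above but are not criteria prescribed by the report.

```python
# Hypothetical screening sketch for PMS-to-LOS data linkage. Field names
# and thresholds are illustrative assumptions drawn from the discussion
# above (annual rating cycle, 0.1-1.0 mi segments, common sample units).
def linkage_obstacles(pms):
    """Return the reasons a PMS's distress data cannot feed LOS ratings."""
    obstacles = []
    if pms["survey_cycle_years"] > 1:
        obstacles.append("surveys less frequent than the annual LOS rating")
    if not 0.1 <= pms["segment_length_mi"] <= 1.0:
        obstacles.append("segment length outside the 0.1-1.0 mi LOS range")
    if not pms["same_sample_units_as_los"]:
        obstacles.append("not collected from the same sample units")
    return obstacles

# Example agency: biennial surveys of long, whole-section segments.
example_agency = {
    "survey_cycle_years": 2,
    "segment_length_mi": 5.0,
    "same_sample_units_as_los": False,
}
print(linkage_obstacles(example_agency))
```

An empty result from such a check would still leave the distress-by-distress policy review described next; the screen only covers the mechanical linkage obstacles.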
The agency must first review each distress type and decide whether maintenance has an obligation to correct it or whether, by policy, it is a distress that is "out of maintenance's hands."

Bridge Management Systems

The Bridge Inspector's Training Manual (FHWA, 1991) states:

In 1971, the National Bridge Inspection Standards (NBIS) came into being. The NBIS set national policy regarding bridge inspection frequency, inspector qualifications, report formats, and inspection and rating formats. Because of the requirements that must be fulfilled for the NBIS, it is necessary to employ a uniform bridge inspection reporting system. A uniform reporting system is essential in evaluating correctly and efficiently the condition of a structure. Furthermore, it is a valuable aid in establishing maintenance priorities and replacement priorities, and in determining structure capacity and the cost of

maintaining the nation's bridges. The information necessary to make these determinations must come largely from the bridge inspection reporting system. Consequently, the importance of the reporting system cannot be overemphasized. The success of any bridge inspection program is dependent upon its reporting system.

The NBIS requires that the findings and results of a bridge inspection be recorded on standard forms. Although the Structure Inventory and Appraisal (SI&A) sheet shown in figure 5 is not a standard form, it represents a list of bridge data that each State must periodically report to FHWA for all public structures within its inventory. Many SHAs have developed their own standard forms using the SI&A sheet as a guide.

A considerable effort has been made by the FHWA to make the information and knowledge available to accurately and thoroughly inspect and evaluate bridges. Through the manuals developed by FHWA and training courses taught by the National Highway Institute (NHI), a major effort has been accomplished to standardize the complex issue of bridge inspection. As the areas of emphasis in bridge inspection programs change due to newer types of design and construction techniques, the guidelines for inspection must also be modified to increase uniformity and consistency.

A primary use of the inspection reports is to provide guidance for immediate follow-up inspections or corrective actions. These reports provide information that may lead to decisions to limit or deny the use of any bridge determined to be hazardous to public safety. Deficient bridges are divided into two categories: structurally deficient and functionally obsolete. Generally speaking, structurally deficient bridges are weight-restricted due to condition, are in need of rehabilitation or, in rare instances, have been closed to the public. Functionally obsolete bridges are normally structurally sound but do not meet current standards for deck geometry, clearances, or approach alignment.
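The two deficiency categories just described can be pictured as a simple decision rule over an inventory record. The sketch below is a deliberately simplified toy: actual NBIS determinations rest on coded SI&A item ratings, whereas the boolean flags here are hypothetical stand-ins for that detail.

```python
# Toy sketch of the two bridge deficiency categories described above.
# Real NBIS determinations use coded SI&A item ratings; the boolean
# inputs here are simplified, hypothetical stand-ins.
def classify_bridge(structurally_sound, meets_geometry_standards):
    """Return the deficiency category, if any, for one inventory record."""
    if not structurally_sound:
        # Condition-driven deficiency takes precedence.
        return "structurally deficient"
    if not meets_geometry_standards:
        # Sound structure, but deck geometry/clearances/alignment substandard.
        return "functionally obsolete"
    return "not deficient"

print(classify_bridge(True, False))
```

The precedence in the rule mirrors the prose: a bridge failing on condition is reported as structurally deficient even if its geometry is also substandard.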
At the close of the inspection, the bridge inspector must use his experience to document inspection deficiencies that have been observed. A thorough and well-documented inspection is essential for making informed and practical recommendations to correct bridge deficiencies. A well-prepared bridge inspection report not only provides information on existing bridge conditions, but it also serves as an excellent reference source for future inspections. The accuracy and uniformity of information is vital to the management of an agency's bridge program. QC is the enforcement tool used on a daily basis to ensure the inspection conclusions and recommendations are based on correct information. Many States are assigning the final review and signing of inventory results to the chief inspector, who should be a professional engineer or have a minimum of 10 years experience in bridge inspection. Quality assessment is usually accomplished by

[Figure 5. FHWA structure inventory and appraisal sheet (FHWA, 1991). The sheet lists the National Bridge Inventory data items, grouped under identification, structure type and material, age and service, geometric data, navigation data, classification, condition, load rating and posting, appraisal, proposed improvements, and inspections.]

independent inspection teams conducting a reinspection, with the results being used to improve bridge inspector training in the future. Using these inspections will enable an agency to identify deficiencies within the structure and initiate repairs to keep the structures in safe operating condition.

Based on a careful review of the requirements for periodic bridge inspection and reporting, it was determined that bridge condition data would not be very suitable for use in the LOS rating system. Several of the same factors identified with the PMS data affect the ability to use BMS data. In particular, the distress data contained in BMS's are often not pertinent information to maintenance. Also, the nature of the distress data may not be compatible with the nature of maintenance LOS data, because of different evaluation criteria or procedures. Other factors that limit the possibilities for data exchange include the following:

· Frequency of bridge inspections: Bridges are most commonly condition-assessed on an annual or biennial basis. Because maintenance quality should at least be evaluated annually, BMS data based on biennial surveys would not be adequate.

· Timing of bridge inspections: Bridge surveys are often performed at times of the year when weather conditions are more moderate, and depending on workload, some may be performed by one or more qualified bridge inspection consulting firms. This situation could lead to a high concentration of inspections during a relatively short period of the work year. LOS inspections, however, may be done routinely with little regard to yearly seasons or construction/maintenance seasons, or they may be done at selective times of the year. Any attempted linkages between the two systems must take into consideration the need for consistency in timing and data quality among the inspection teams.
· Length of bridge segments: BMS inspections entail inspection and condition ratings of whole bridge structures, which may be anywhere from 50 ft (15 m) long to several miles long. For successful linkage with a BMS, bridge LOS sample segments would have to include the entire length of each bridge or correspond with BMS data broken down by spans or other readily identifiable bridge elements.

Although it was difficult to envision a situation where the reporting of compliance with inspection deadlines has not already been conducted, the monitoring of work orders issued by an agency was considered to be a reasonable adaptation for the prototype QA program. A process for doing this was developed in this study and is described in appendix A of the QA Program Implementation Manual.

Safety Management Systems

In the Initial Rules and Regulations (Federal Register, 1993) concerning the development of a SMS, it was proposed that:

Formalized and interactive communication, coordination, and cooperation shall be established among the organizations responsible for these major safety elements including: enforcement, emergency medical services, emergency response, motor carrier safety, motor vehicle administration, State highway safety agencies, the public health community, State and local transportation/highway agencies, and State and local railroad and/or trucking regulatory agencies.

Thus, the goal of the SMS was to reduce the number and severity of traffic crashes by ensuring that all phases of highway planning, design, construction, maintenance, and operation are involved. Discussions with several SHAs that are in various stages of implementing a SMS revealed that most will not be collecting field information for the development of new data files. Instead, their plans are to read available information from the data files of those organizations quoted in the Federal Register Rules and Regulations (1993). Since maintenance data are dynamic and are updated periodically, it appears that most SMS administrators plan to use maintenance data as the input for the SMS. A concern was also expressed by some of the agencies dealing with the quality of the data (not necessarily maintenance) contained in the files they were planning to access.

Infrastructure Management Systems

An IMS is an overall management system that integrates all the many different infrastructure components and individual management systems of an agency. It is a tool that provides highway and public works managers with the information necessary to make important decisions about total infrastructure maintenance, renovation, and replacement. IMS usage is not very common among State and local governmental agencies.
However, it is anticipated that IMS usage will increase dramatically in the next several years because of the growing demand to provide high quality service within very limited budgets. Many agencies collect all or several components of an IMS, but most do not coordinate the data collection activity or consolidate the information into one overall system, such as an IMS. IMS's are perhaps most commonplace in a large local government agency with centralized management responsibility for the entire infrastructure system. Out of necessity, these agencies have been required to "do more with less," and an IMS assists them in that task.

For an IMS to operate effectively, a well coordinated data collection plan, database update schedule, and a universal location referencing method are required. These items ensure that data collection duplication or redundancy does not occur, that the database is maintained and has current and accurate data, and that data can be exchanged between the various IMS subsystems.

A review of various papers and articles on IMS's resulted in the identification of several maintenance-related features that comprise an IMS database. Listed in table 11 are the drainage, traffic control and safety (signs and barriers), and vegetation and aesthetics maintenance elements and the corresponding types of information typically stored in an IMS for these elements. An IMS can contain hundreds of different data items; in no way is this table meant to serve as a complete comparison of all items.

Table 11. Summary of highway features/characteristics condition-evaluated as part of IMS's.

Drainage
  Ditches: Type, width, slope, inslope, backslope, erosion control, and condition.
  Culverts/Pipes: Location, type, size, age, direction of flow, slope, condition, maintenance schedule, complaints.
  Catch Basins/Drop Inlets: Same as culverts/pipes.
  Under/Cross Drains: Same as culverts/pipes.
  Curb and Gutter: Type, age, condition.

Traffic Control and Safety (signs)
  Signs: Location, size, type, age, message, colors, reflectivity, condition, and mounting type.
  Pavement Markers/Symbols: Location, type, age, condition.
  Striping/Markers: Same as pavement markers/symbols.
  Signals: Location, age, style, arm size and type, color, paint condition, structural defects, base type, and luminance.
  Luminaires: Same as signals.

Traffic Control and Safety (barriers)
  Barrier Wall: Location, type, size, age, condition, meets standards, and accident history.
  Guardrail: Same as barrier wall.
  Impact Attenuators: Same as barrier wall.

Vegetation and Aesthetics
  Mowing: Acreage.
  Litter/Debris: None.
  Brush & Tree Control: Location, size, and type.
  Graffiti: None.
  Fence: Location, type, size, age, condition.
  Slopes/Sidewalks: Location, construction history, and condition.
  Landscaping: None.
  Debris: None.
  Turf: None.

As with PMS and BMS data, the suitability of IMS data for use in the QA program depends on several factors. These factors include the following:

· Frequency of surveys: Many IMS items are collected only once and serve as inventory items. These items do not change as long as the facility is in use and could be used to help populate the inventory portion of the QA program database. Other IMS data that relay condition information are collected at least biannually (2 times per year). These data are not suitable for use in most LOS rating.

· Timing of surveys: IMS data collection cycles are based on the data collection cycles established for the individual management subsystems in an IMS. Most surveys are performed during the time of non-peak construction and maintenance activity. These cycles often do not fit the timing of surveys being recommended as part of the prototype QA program. If information from an IMS is to be used in the QA program, the survey timing of the IMS would require modification to allow data collection at different times.

· Accuracy of data and type of surveys: IMS data are collected using several different methods within the same agency. This can result in a combination of visual, automated, continuous, partial, random, and other survey methods. For IMS data to be useful in the LOS ratings, the information must be collected from the same sample units and the condition surveys must be objective, accurate, and repeatable.

· Availability of desired data: As a result of an IMS's integrated database and universal referencing system, the actual physical exchange of data should not be a problem. The problem will arise when the data required by the QA program are not collected, or are collected in a different form during the IMS data collection process. In all IMS's, some modifications will have to be made by the agency to obtain condition data that can be directly downloaded to the QA database.
Even though IMS's have a well planned data collection process, the information is not exactly what the QA program needs. Information from the IMS is suited to help establish the inventory items in the QA database, but beyond the initial survey, it is generally not suited for use in updating the condition elements or performing LOS ratings. In an IMS, just as in other management systems, most condition surveys are not performed with the frequency recommended for a QA program, are not conducted at the same locations as the LOS sample sections, and may not contain the exact items needed to perform the LOS ratings.

Avoiding Bias in Data Collection and Analysis

The quality and efficiency of highway maintenance management is greatly improved through the collection of accurate data and through the use of unbiased sampling and analysis techniques. Highly incumbent upon all implementing highway

agencies is the compilation of reliable inventory, cost, and work accomplishment data, as these MMS data are used to estimate future work requirements and corresponding budget amounts. Equally important to the success of the prototype QA program, however, are the accuracies of the maintenance quality assessments (both internal and external) produced through the LOS rating system. Such assessments are subject to bias, which may be introduced by the raters themselves or by the sampling and data analysis procedures that are used.

In developing the prototype program, serious consideration was given to ways of minimizing bias in the collection and analysis of LOS rating data, as well as customer input data. With regard to the former type of data, a roadway segment sampling procedure was developed that would enable agencies to achieve a desired level of precision in LOS ratings for each maintenance stratum of interest. For instance, if an agency wished to compare the ratings of two particular highways in a given district, then a stratified sampling routine could be effected that would result in a sufficient number of samples (for a given precision level) inspected along each highway.

With regard to collecting LOS ratings, a simple pass-fail assessment process was outlined, in which various maintenance features/characteristics are examined to see if they satisfy well-defined maintenance condition standards. The proposed rating process requires a team of two individuals to conduct independent inspections of the same roadway segment, and then arrive at a consensus as to which maintenance features/characteristics met the condition standards and which did not. In cases where multiple rating teams (i.e., satellite teams) are used throughout an agency, a QA/QC process was formulated in which a central-office "true" team performs annual reviews of the rating capabilities of each satellite team.
If the ratings of a given satellite team are statistically significantly different from the ratings of the central-office team, then actions are taken to correct the satellite team's rating techniques.

In the area of customer surveys, a survey process was devised which would allow for constructive input from a representative cross-section of highway users. This process included use of one of the least biased methods of customer feedback (formal surveys), a statistically based random sampling procedure using appropriate source listings, and a rating scale questionnaire format. It was believed that the combination of these three preferential items would significantly reduce the amount of bias that is introduced into decision-making information.

Statistical Applications

As stated by Miller and Krum (1992) in their book, The Whats, Whys, and Hows of Quality, "statistical process control (SPC) is the backbone of quality improvement. It is the vital tool the team uses to identify problems, to help develop solutions, and to demonstrate success." Hence, for the prototype QA program to be effective and

purposeful, it was clear that any sampling or assessments carried out under the program would require proper statistical consideration. The use of statistical methods was investigated in four different areas of the prototype QA program: customer surveys, LOS field inspections, LOS analysis, and QA of LOS rating teams. Discussions of the statistical applications in each area are provided in the sections below.

Customer Surveys

To obtain customer input that is as unbiased as possible concerning the expectations of and satisfaction with highway maintenance LOS, a sound survey sampling process was developed and made part of the prototype QA program. The key statistical aspects addressed with regard to customer surveys were as follows:

· Type of sampling technique to be used.
· Determination of required sample size.

Concerning the first aspect, two types of sampling were found to be most conducive to performing surveys of highway customers: random sampling and stratified random sampling. In random sampling, a random number table or a computerized random number generator is used to determine which individuals are selected, based on their order of appearance on the source list. Random sampling assures each individual in the population the same chance of being chosen for inclusion in the survey (Kopac, 1991).

Agencies with a special interest in the views of different groups of individuals would benefit from the stratified random sampling technique. This type of sampling calls for dividing the population into two or more strata and then drawing a random sample from each stratum (Kopac, 1991). One instance in which this sampling technique is advantageous is for comparing the perspectives of urban and rural highway users on the importance of specific maintenance items, such as pavement striping and litter pickup.
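As a minimal sketch of the two sampling techniques described above (the function names, stratum labels, and source lists are illustrative assumptions, not items from the report), the selection step might be coded as follows using Python's standard random module:

```python
import random

def simple_random_sample(source_list, n, seed=None):
    """Draw n individuals at random; every individual on the
    source list has the same chance of being chosen."""
    rng = random.Random(seed)
    return rng.sample(source_list, n)

def stratified_random_sample(population, n_per_stratum, seed=None):
    """population maps a stratum name (e.g., 'urban', 'rural')
    to that stratum's source list; a simple random sample is
    drawn independently from each stratum."""
    rng = random.Random(seed)
    return {stratum: rng.sample(members, n_per_stratum)
            for stratum, members in population.items()}

# Illustrative use: comparing urban and rural highway users.
population = {
    "urban": [f"urban_user_{i}" for i in range(500)],
    "rural": [f"rural_user_{i}" for i in range(300)],
}
sample = stratified_random_sample(population, n_per_stratum=50, seed=1)
```

A seeded generator is used only so that a draw can be reproduced for audit; in practice the agency's own source listings would replace the synthetic lists shown here.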
In determining the number of customers to survey, a common practice in the past has been to select a certain percentage of the population (e.g., 5 percent or 10 percent). This type of procedure often does not ensure the optimum sample size (i.e., one large enough to yield statistically representative results, but not so large as to waste funds, delay the project, or achieve a needlessly high level of precision [Kopac, 1991]), which is largely dependent on the desired confidence level and the desired precision (margin of error) of the results.

Because the customer expectation survey will contain inquiries about the level of importance of various aspects of maintenance, the results of the survey will undoubtedly be averages of ratings, made in accordance with a specific rating scale

(e.g., 1-10, 1-100). The following formula was initially identified as the formula of choice for determining the optimum survey sample size:

    n = [0.25 × (b − a)² × z²] / d²        Eq. 1

where:
    n = sample size.
    a,b = lower and upper bounds of rating scale (e.g., for 1-10 rating scale, a=1 and b=10).
    d = precision (e.g., for precision of ±0.5 on a 1-10 rating scale, use 0.5).
    z = z-statistic, standard normal variate associated with a particular confidence coefficient (for 95 percent confidence, z=1.96).

This formula is based on random sampling with replacement of samples after each survey. In other words, all individuals selected to participate in one survey are also included in the population of subsequent surveys. Though equation 1 was deemed sufficient for most applications, it was recognized that some agencies interested in adopting the QA program might service substantially smaller populations, resulting in a much larger percentage of the population being sampled (say, greater than 20 percent) in a given survey. In these cases, the more "finite" population can result in significant biasing through large numbers of repeat participants. Although more complex than equation 1, the formula below enables agencies with both large and small customer populations to determine the optimum survey sample size. For this reason, equation 2 was featured in the Implementation Manual instead of equation 1.

    n = [0.25 × (b − a)² × z² × N] / [d² × (N − 1) + 0.25 × z² × (b − a)²]        Eq. 2

where:
    n = sample size.
    a,b = lower and upper bounds of rating scale (e.g., for 1-10 rating scale, a=1 and b=10).
    d = precision (e.g., for precision of ±0.5 on a 1-10 rating scale, use 0.5).
    N = survey population size.
    z = z-statistic (for 95 percent confidence, z=1.96).

For agencies whose constituent populations are "finite," equation 2 can yield considerable savings in sample size without sacrificing the desired precision.
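Equations 1 and 2 are straightforward to compute; the short sketch below shows both (function names are illustrative, and results are rounded up to the next whole respondent):

```python
import math

def sample_size_infinite(a, b, d, z=1.96):
    """Eq. 1: optimum sample size for a very large population
    (random sampling with replacement)."""
    return math.ceil(0.25 * (b - a) ** 2 * z ** 2 / d ** 2)

def sample_size_finite(a, b, d, N, z=1.96):
    """Eq. 2: optimum sample size with a finite survey population N."""
    numerator = 0.25 * (b - a) ** 2 * z ** 2 * N
    denominator = d ** 2 * (N - 1) + 0.25 * z ** 2 * (b - a) ** 2
    return math.ceil(numerator / denominator)

# A 1-10 rating scale, precision of +/-0.5, 95 percent confidence:
n_large = sample_size_infinite(1, 10, 0.5)        # 312 responses
n_small = sample_size_finite(1, 10, 0.5, N=1000)  # fewer, per Eq. 2
```

For a population of 1,000, equation 2 trims the requirement from 312 to 238 responses at the same precision, which illustrates the savings noted above.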

LOS Field Inspections

The first and foremost consideration of statistical applications for LOS field inspections was the type of quality measurement approach to be used. Two alternatives were identified and are defined as follows:

1. Method of attributes: Consists of noting the presence (or absence) of some characteristic or attribute in each of the units in the group under consideration, and counting how many units possess (or do not possess) the quality attribute, or how many such events occur in the unit, group, or area (ASQC, 1983).

2. Method of variables: Consists of measuring and recording the numerical magnitude of a quality characteristic for each of the units in the group under consideration (ASQC, 1983).

In simple terms, attribute-based statistical QA involves determining whether a certain standard is achieved, and then recording the results in a yes-no or pass-fail fashion. Variable-based statistical QA involves measuring and recording the numerical value according to a specified scale of measurement (e.g., 7 potholes, 12 percent joint seal failure). None of the LOS rating programs investigated in this study were identified as variable-based. The systems in place at Florida, Maryland, and British Columbia were clearly attribute-based, with individual maintenance features/characteristics evaluated against a specified standard and given corresponding pass or fail ratings (in the case of British Columbia, Good, Fair, and Not-to-Standard ratings are given).

In their book, Statistical Quality Design and Control: Contemporary Concepts and Methods, DeVor et al. (1992) argue against the attribute-based approach, because it relies on an "acceptable" level of defects. Such a basis, many Taguchi quality advocates contend, is counter to the spirit of CQI. The variable-based approach, on the other hand, entails much greater detail and sophistication than the attribute-based approach.
Maximum application of variable-based statistical QA would necessitate the measurement of multiple defect types in several maintenance features/characteristics. For instance, pavement striping might be measured for reflectivity and damage (i.e., missing or torn-away pieces), whereas signs might be measured for reflectivity, alignment, and damage. The time required in the field to measure all possible defects for all features/characteristics would be far greater than the time required to make simpler pass-fail assessments, which only occasionally are close enough to warrant detailed measurement of defects. Moreover, developing an overall assessment rating for a given feature/characteristic would likely be very complicated and time-consuming.

Despite being more qualitative than quantitative, attribute-based statistical QA was judged to be the more appropriate methodology for the present time, and therefore was chosen for use in the prototype QA program. Although the method of variables better captures the spirit of CQI, the vast majority of highway agencies would be unable to apply this method because of its much greater demand for resources, particularly labor-hours.

A second consideration of statistical applications in LOS field inspections was the method used to compute LOS ratings. In current maintenance rating programs, a percentage statistic is computed for each feature/characteristic by summing the number of segments in which a given feature/characteristic met the standard and then dividing it by the total number of segments. The resulting statistics for each feature/characteristic are then combined with various feature/characteristic weights and element weights to produce an overall LOS rating. In the prototype QA program, an LOS rating is computed for each sample roadway segment using the established feature/characteristic weights and element weights. An overall LOS rating is then determined by computing the statistical mean. This approach allows an agency to determine the variance and standard error in the ratings which, in turn, can be used to calculate future sampling requirements. The details of this computational approach are given in the following section titled "LOS Analysis."

A third consideration of statistics in LOS field inspections pertained to the need for a pilot field study, or a trial run of LOS inspections. A pilot study provides insight about the inherent variability of LOS ratings, which can then be taken into consideration when determining the required sample size for (future) formal LOS inspections. For a given confidence level and precision, greater variability in LOS ratings results in a higher number of roadway segments to be sampled.
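The per-segment weighting scheme described earlier in this section might be sketched as follows. The weight structure shown (feature weights summing to 1 within each element, and element weights summing to 1 overall) is an assumption for illustration; the report's actual weights are established elsewhere in the program:

```python
def segment_los(results, feature_weights, element_weights):
    """Compute one sample segment's LOS from pass/fail results.
    results maps element -> {feature: True if the feature met its
    condition standard}.  Assumes feature weights sum to 1 within
    each element and element weights sum to 1 overall."""
    los = 0.0
    for element, features in results.items():
        element_score = sum(feature_weights[element][f] * (1.0 if ok else 0.0)
                            for f, ok in features.items())
        los += element_weights[element] * element_score
    return 100.0 * los  # segment LOS expressed as a percentage

def overall_los(segment_ratings):
    """Overall LOS rating: the statistical mean of the segment ratings."""
    return sum(segment_ratings) / len(segment_ratings)
```

Because each segment carries its own rating, the variance and standard error across segments fall out directly, which is the property the prototype program relies on for future sampling calculations.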
Pilot studies were noted as having been performed by the Virginia, Maryland, and Florida DOTs during the implementation phase of their quality assessment programs. However, a pilot study is not entirely necessary, as a statistical formula exists that allows computation of the required roadway sample size based on a specified confidence level and precision. Unfortunately, the price for guaranteeing precision and being able to do without a pilot study is an increase in sample size. Even for sizeable variability in LOS ratings (standard deviation of 3 to 4 percentage points), considerably larger sample sizes would be required by foregoing a pilot study. Since a pilot study has the makings to serve as the first formal round of LOS inspections (the number of samples taken in the pilot may satisfy the requirements for formal LOS inspections or can be supplemented with additional samples), and because considerably fewer samples are likely to be required in comparison with nonpilot-based sampling, a pilot field study was advocated in the prototype QA program. The results of the pilot inspection round can serve as a baseline of existing maintenance conditions from which future improvements in maintenance quality can be measured.

A fourth aspect of statistics in LOS field inspections was the sample segment selection procedures and the corresponding sampling rates. As with customer surveys, a sound sampling process was developed to help implementing agencies determine the required number of sample segments to inspect, given a desired precision and confidence level and a preliminary estimate of the variance in LOS ratings. Reviews of the sampling procedures used by Florida and Maryland indicated that sampling requirements in these agencies' programs are determined periodically using the statistical bootstrap method, a relatively sophisticated resampling procedure that uses no explicit formulas in relating population size, sample size, confidence level, and precision. This approach to determining sample size was considered impractical, as most implementing agencies would need to seek the assistance of a statistician. To keep the sampling process as simple as possible, a basic formula (similar to one used by Virginia) was identified which could be used by those individuals leading the QA program implementation effort. The formula was proposed in the Implementation Manual and is as follows:

    n = (z² × s²) / d²        Eq. 3

where:
    n = required sample size.
    s = standard deviation of the ratings from the pilot study.
    d = desired precision.
    z = z-statistic (for 95 percent confidence, z=1.96).

It is clear in this formula that the necessary sample size increases as the desired precision increases. That is, if one wants more precise results (smaller value of d), then a larger sample size is required.

The method recommended in the prototype QA program for selecting roadway sample segments was simple random sampling, performable through a random number generator function available in most statistical or spreadsheet computer software. This method assures each individual segment in the total roadway population the same chance of being chosen for field inspection.
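Equation 3 and the simple random selection of segments might look like the following sketch, with the random module standing in for the spreadsheet random number generator mentioned above (segment identifiers and the seed are illustrative):

```python
import math
import random

def required_segments(s, d, z=1.96):
    """Eq. 3: n = z^2 * s^2 / d^2, where s is the standard
    deviation of LOS ratings observed in the pilot study."""
    return math.ceil(z ** 2 * s ** 2 / d ** 2)

def select_segments(all_segment_ids, n, seed=None):
    """Simple random sampling: every roadway segment in the
    population has the same chance of being chosen."""
    return random.Random(seed).sample(all_segment_ids, n)

# Pilot standard deviation of 4 percentage points, precision of +/-1:
n = required_segments(s=4, d=1)  # 62 segments
chosen = select_segments(list(range(1, 1001)), n, seed=7)
```

Note how the sample size responds to the pilot results: halving the observed standard deviation cuts the requirement by a factor of four, which is why the pilot estimate of variability matters.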
Recognizing the need to obtain adequate sampling representation among various roadway subsets, the option of stratifying (i.e., subdividing) the total roadway segment population was also featured in the prototype QA program. The stratification could be according to geography (district, residency, maintenance unit), facility type (functional class, highway system), or any combination thereof, with simple random sampling carried out for each stratum. It was recommended that the total number of strata be limited to 10, since the added benefit associated with more than 10 was considered to be marginal in comparison with the cost of increased sample sizes. The precedence for stratified sampling in maintenance quality assessment is well established, with the

Virginia DOT stratifying by highway system (interstate, primary, secondary highways), the Maryland DOT by county (23 total), and the Florida DOT by both maintenance unit (30 total) and functional classification (urban limited access, rural limited access, urban arterial, rural arterial).

LOS Analysis

As discussed previously, the computation of an overall LOS rating under the prototype QA program entails computing individual LOS ratings for each sample segment and then calculating the statistical mean and variance. These calculations are made using equations 4 and 5, shown below.

    LOSs = (Σ LOSsi) / n        Eq. 4

where:
    LOSs = mean segment LOS.
    LOSsi = individual segment LOS values for n sample segments.
    n = number of sample segments.

    s² = Σ (LOSsi − LOSs)² / (n − 1)        Eq. 5

where:
    s² = sample variance of segment LOS ratings.
    LOSsi = individual segment LOS values for n sample segments.
    LOSs = mean segment LOS.
    n = number of sample segments.

The standard deviation and the appropriate confidence interval are then computed using equations 6 and 7 given below.

    s = √(s²)        Eq. 6

where:
    s = standard deviation of segment LOS ratings.
    s² = sample variance of segment LOS ratings.

    LOSs ± (z × s / √n)        Eq. 7

where:
    LOSs = mean segment LOS.
    z = z-statistic (1.96 for 95 percent confidence, 2.8 for 99.5 percent confidence).
    s = standard deviation of segment LOS ratings.
    n = number of sample segments.
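Equations 4 through 7 combine naturally into a single small helper (a sketch; the function name is illustrative):

```python
import math

def los_summary(segment_ratings, z=1.96):
    """Return the mean (Eq. 4), standard deviation (Eqs. 5-6), and
    confidence interval (Eq. 7) for a list of segment LOS ratings."""
    n = len(segment_ratings)
    mean = sum(segment_ratings) / n                                     # Eq. 4
    variance = sum((x - mean) ** 2 for x in segment_ratings) / (n - 1)  # Eq. 5
    sd = math.sqrt(variance)                                            # Eq. 6
    half_width = z * sd / math.sqrt(n)                                  # Eq. 7
    return mean, sd, (mean - half_width, mean + half_width)
```

Passing z=1.96 gives the 95-percent interval used for facility-level analysis, while z=2.8 gives the 99.5-percent interval used when individual elements are examined.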

If the LOS of a highway facility, as a whole, is to be analyzed, then a 95-percent confidence coefficient (i.e., z equal to 1.96) is recommended in equation 7. However, if individual highway elements are to be examined to determine which elements cause a facility to be deficient, then a higher level of confidence (99.5 percent [z = 2.81]) is required. The higher confidence coefficient is necessary because multiple confidence intervals (one for each element) are being constructed for examination.

QA of LOS Rating Teams

Since most implementing agencies are likely to establish multiple LOS rating teams (i.e., satellite teams), a QA process for ensuring accurate (i.e., consistent and unbiased) rating results from all teams was deemed essential to the prototype QA program. Two alternatives for performing annual QA checks of satellite teams were identified, one based on analysis of variance and the other based on two-sample z-tests.

The analysis of variance approach entailed having each satellite team individually inspect a common set of randomly selected roadway segments. The variability of segment ratings, both within teams and between teams, is calculated and a statistical determination made as to whether significant differences in ratings exist among the teams. If significant differences were not found to exist, then the teams would be considered consistent and no further analysis would be necessary. If, on the other hand, significant differences were found to exist, then the teams whose ratings differed from the collective team rating would require adjustments in their rating process in order to bring their ratings into compliance with consensus ratings.

The two-sample z-test approach entailed having a central-office rating team (an ideal team, whose ratings are considered accurate) perform field inspections in each satellite team's domain.
A common set of randomly selected roadway segments are independently inspected at the same time by both the central-office team and the satellite team. The paired ratings from all sample segments are then statistically analyzed via the z-test in order to determine if the satellite team's ratings are significantly different from the central-office team's ratings. If not, then the satellite team would be in compliance with the central-office team and no further analysis would be necessary. If so, then the satellite team would require adjustments in their rating process in order to bring their ratings into compliance with the central-office team.

The preferred method of QA checks on LOS rating teams was the two-sample z-test. This method was considered easier from a field coordination and execution standpoint, and it involved simpler statistical calculations. Just as important, however, was the fact that key personnel from the QA program administrative staff would remain active in the LOS rating system and could possibly identify areas of improvement. The complete set of steps for conducting z-test QA on LOS rating teams is detailed in the Implementation Manual.
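One plausible sketch of this check, computed on the paired rating differences, is shown below; the team ratings are hypothetical, and the compliance threshold (1.96 for 95 percent confidence) is an illustrative assumption rather than a value prescribed by the Implementation Manual.

```python
import math

# Hypothetical paired LOS ratings on the same randomly selected segments,
# rated independently at the same time by the two teams.
central   = [84.0, 77.5, 90.0, 70.0, 81.0, 86.5, 73.0, 88.0]
satellite = [82.5, 76.0, 91.0, 68.5, 80.0, 85.0, 74.5, 86.5]

def paired_z_test(a, b, critical_z=1.96):
    """z-statistic on the paired rating differences; the satellite team is
    judged in compliance when |z| does not exceed the critical value."""
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    z = mean_d / math.sqrt(var_d / n)
    return z, abs(z) <= critical_z

z, ok = paired_z_test(central, satellite)
print(f"z = {z:.2f}; satellite team {'in' if ok else 'NOT in'} compliance")
```

This paired formulation is one common way to carry out the comparison when both teams rate the same segments; an unpaired two-sample statistic is an alternative.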

Customer Input

A major part of the modern-day quality movement is recognition of the customer's needs. The Deming philosophy defines quality as "whatever the customer wants" (Miller and Krum, 1992). However, because not all customers want the same thing and because customers' wants and needs change over time, it is necessary to continuously assess and measure customer satisfaction.

For maintenance agencies, of course, the primary customers are the highway users. Most everything a maintenance agency does is a service to the traveling public, whether it's of direct notice to them (like keeping the highway system safe, comfortable, and attractive) or is substantially less perceptible (like keeping the system structurally sound). Since the customers are paying a substantial part of the bill for maintenance through highway user taxes, it is only appropriate to solicit their opinions.

With more and more emphasis being placed on the customer, solicitation of customer input was made a key, albeit optional, component of the prototype QA program. The combination of knowing the levels to which customers want the highway system initially kept (customer expectations) and the levels to which they're satisfied over time (customer satisfaction) allows an agency to make the proper adjustments in maintenance effort.

The inclusion of customer input into the prototype program prompted consideration of two main items. First, the method most suitable for soliciting customer opinions needed to be selected. Second, the detailed procedures for carrying out the selected method needed to be established.

As discussed in chapter 2, several methods have been or are currently being used to solicit customer input. These include focus groups, formal and informal surveys, customer panels, and formal and informal feedback forms.
Though each method has advantages and disadvantages, only formal questionnaires, conducted by mail or telephone, were judged appropriate for determining highway users' expectations of service. These types of surveys, when properly constructed and administered on a statistical basis, yield the most reliable and representative customer inputs. The other techniques usually result in considerably biased, qualitative, or inadequate input. A third type of formal questionnaire survey, personal interviews, generates useful and reliable information, but was found to involve extremely high costs.

Table 12 highlights some of the important facets of two recent customer surveys, one conducted by the Minnesota DOT to determine both customer expectations and customer satisfaction and the other conducted by the Pennsylvania DOT to determine customer expectations. This information reveals much about the resources required to obtain reliable customer input. It also illustrates some of the key differences between mail-in and telephone surveys.

Table 12. Key facets of Minnesota (SMS, 1994) and Pennsylvania (Pennsylvania DOT, 1993) customer surveys.

Time Conducted
  Minnesota: November 1994.
  Pennsylvania: April 1994.

Source Listing
  Minnesota: Telephone listing.
  Pennsylvania: Pennsylvania Driver's License database.

Pretest
  Minnesota: Focus group.
  Pennsylvania: N/A.

Sampling Type
  Minnesota: Disproportionate stratified random sample (based on 1990 Census of Minnesota county populations and aggregated into eight maintenance districts).
  Pennsylvania: Random sample (using random number generator).

Survey Type
  Minnesota: Formal telephone questionnaire survey of customer expectations and customer satisfaction.
  Pennsylvania: Formal mail-in questionnaire of customer expectations only.

Sample Size
  Minnesota: 1,200 originally proposed phone interviews (300 in each of 2 districts and 100 in each of 6 districts); 1,244 actual phone interviews. 10% of households contacted refused to participate, and less than 2% of the respondents terminated their participation in the survey midway.
  Pennsylvania: 4,800 mailed out (400 in each of 12 counties); 1,018 properly completed responses for 21.2% overall response rate.

Cost
  Minnesota: 61 total questions, …-minute phone time; $40,000 (approximately $32/respondent).
  Pennsylvania: 25 rating questions, 1 page; $20,000 (approximately $20/respondent).(a)

Timeframe and Staff
  Minnesota: 2 weeks using several telephone survey staff (responses entered directly into computer at time of survey).
  Pennsylvania: 1.5 months using three in-house individuals committed part time.(a)

N/A = Not available.
(a) Pennsylvania conducted a customer satisfaction survey shortly after the customer expectations survey. The back-to-back surveys were conducted over a …-month timeframe at a reported combined cost of $40,000.

These differences include the cost of the surveys and the time frame and staffing needed to complete them.

The development of procedures for conducting mail-in and telephone surveys occurred in four key areas: statistical sampling, questionnaire development, in-house testing of the questionnaire, and formal conduct of the survey.
As discussed previously in this chapter, random sampling and stratified random sampling were found to be the most conducive sampling methods. With the ideal survey population defined as "users of the highway facilities maintained by the agency," appropriate source listings to represent the survey population were identified based on the experiences of various SHAs, including Maryland, Pennsylvania, Oregon, and Minnesota. State Department of Motor Vehicles (DMV) or Driver's Licensing Bureau (DLB) agencies were considered the best sources, as the records maintained by these entities typically include the names of individuals licensed to drive or operate vehicles within the State, along with the corresponding phone numbers and mailing addresses.

Telephone listings are also considered a viable source. This type of source listing is easier to access but is considerably more biased than the two previous lists because of

the potential for households with no telephone or with an unlisted number (it should be noted that random-digit dialing [RDD] is a technique that can be used to eliminate the bias associated with unlisted telephone numbers). To reach the right customers and limit bias, the phone interviewers can ask to survey the licensed driver in the household with the most recent birthday, as was done by a survey consultant for the Minnesota DOT (SMS, 1994).

In the design of maintenance questionnaires, it was determined that the most appropriate way to measure the importance of or satisfaction with various maintenance work items is to use rating scale questions. Scales of 1-5, 1-10, and 1-100, with 1 representing "not important" and 5, 10, or 100 representing "very important," are effective means of measuring customer expectations and satisfaction, particularly when the questions are posed in simple terms to which the customer can relate (e.g., potholes, smoothness, visibility). Refraining from the use of technical questions, such as "what degree of reflectivity is acceptable for striping?" is also important. The traveling public is most valuable in defining what it expects when traveling on the highway system, but it is the job of the professionals in the agency to work out technical issues.

To maintain as high a response rate as possible and the highest degree of consideration for the questions, questionnaires must be kept short and concise. A 1- to 3-page mail-in survey or a 5- to 10-minute phone survey is usually sufficient for asking the questions pertinent to customers' expectations or satisfaction. On mail-in surveys, the survey form must be made attractive (proper text arrangement, spacing, and style) so that participants are more receptive to the survey.

As was pointed out by Kopac (1991), "Regardless of how carefully the questionnaire has been worded, it should not be assumed that it will work well until it has been tested under field conditions."
Hence, informally pretesting the survey on nonprofessionals was emphasized, as it will provide useful feedback on the clarity, interpretation, and logical sequencing of the questions, as well as the length, receptiveness, and effectiveness of the survey.

The decision of whether to administer the questionnaire survey by mail or by telephone is largely dependent on costs and the availability of staff and other in-house resources. Again, table 12 shows the costs, time frame, and staffing that were required for conducting the Minnesota telephone and Pennsylvania mail-in surveys. This information, along with samples of each agency survey, was featured in the Implementation Manual.

Adjusting to Improvements in State of the Art

Many advancements in highway maintenance have been made in the last half century. These advancements were spawned by the desires of maintenance practitioners, researchers, and industry personnel to make operations safer and more

cost-effective, and to make highway features last longer. The advancements have come in the way of new materials (and new formulations of existing materials), new equipment (and modifications to existing equipment), and new technologies, and their acceptance into practice has been made possible through on-the-job training, demonstrations, instructional workshops, experiments, and research reports and symposiums.

Future advancements in highway maintenance are certain to occur, and training and education of employees is essential if continuous improvement is to be sustained. For this reason, employee training was made a key component in the prototype QA program. Training helps provide employees with the proper skills and knowledge to do their jobs right. And when jobs are done right, the quality of the service or product is improved.

Several important ideas about employee training were recognized in the development of this program component. First was the idea that an employee's resistance to change is often rooted in the lack of appropriate skills or resources. Although a poor attitude may be a part of the problem, it is more likely that an employee wishes to do a good job, but simply lacks the knowledge or tools to accomplish it.

Another important aspect that was recognized was ensuring that employees become familiarized with the work issues so they understand the importance of their job function. Training employees to think about the work they perform, why it's performed, and ways in which it could be improved, creates greater potential for technology advancements for the agency and promotes motivation and self-esteem for the employees.

A third aspect that was considered important in the educational process was communication. Because the natural tendency of most employees is to do the best job they can with very little complaining, some employees' needs, whether new equipment, improved work skills, or additional staff, may go unaddressed.
Eventually, their work performance can suffer, giving rise to internal disputes. By instructing employees in the art of communicating with peers and supervisors and resolving small problems initially, a major step can be taken in avoiding major conflicts down the road.

Quantifiable and Replicable Results

Consistency and repeatability of ratings should be a significant concern of any implementing agency. Without the QC function being performed throughout the implementation process, the ratings that are produced will most likely be challenged. To prevent major issues being made of LOS ratings, formal LOS training and a pilot field study were made major components of the QA program. These programs will

help familiarize LOS raters with the rating process and will provide an estimate of the inherent variability of ratings among teams (if there is more than one) once a reasonable level of consistency has been achieved throughout training. Although variability must be expected among the LOS ratings, it is important to minimize it to acceptable limits and then document it for later use in determining the required sample size for formal LOS field inspections.

The objective of the LOS training component is to make sure that raters look at the same features in each case and arrive at the same basic conclusions concerning their evaluations. Having a good description or specification of when a feature/characteristic meets or exceeds desired conditions is a key factor in this phase. Success in establishing this criterion will enable an agency to begin establishing the credibility of its QA system, whereas a lack of proper descriptions may have the opposite effect and cause further efforts to be ineffective.

The objective of the pilot field study is to determine the variability of ratings among the different rating teams. With an initial estimate of the variance among teams, the required number of sample segments to be rated during the formal inspection process can be calculated for a given precision and confidence level.

The QA/QC process for LOS rating teams, which was touched upon earlier, also helps effect consistent rating results. If a team's ratings become statistically significantly different from the central-office team's ratings, then actions are immediately taken to bring that team's ratings into line with the central-office team's ratings. The actions may include helping the team determine which features/characteristics are present at a given sample segment or reminding them of how a certain condition standard is interpreted.
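The sample-size calculation implied here (solving the confidence-interval relation of equation 7 for n at a chosen precision) might be sketched as follows; the pilot standard deviation and target precision used in the example are hypothetical values, not figures from the report.

```python
import math

def required_sample_size(std_dev, precision, z=1.96):
    """Smallest n satisfying z * std_dev / sqrt(n) <= precision,
    i.e., n >= (z * std_dev / precision)**2, rounded up."""
    return math.ceil((z * std_dev / precision) ** 2)

# e.g., a pilot-study standard deviation of 7 LOS points and a target
# precision of +/- 2 points at 95 percent confidence (z = 1.96).
n = required_sample_size(std_dev=7.0, precision=2.0)
print(n)
```

A tighter precision or a higher confidence level (z = 2.81 for 99.5 percent) increases the required number of sample segments quadratically.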
Pros and Cons of Implementation

Why should any agency consider abandoning past management practices and set out on a new direction for its maintenance operations? The answer can be very straightforward; it shouldn't if it has accomplished the following goals:

· Assurance that its highway maintenance and operations meet or exceed the expectations of the traveling public.
· Has a good relationship with the groups (e.g., highway commission, Governor's office, legislature, county/city commissioners) having final say over agency budget requests.
· Obtained adequate funding for agency maintenance needs.
· Provided equal LOS for all components of the highway system.
· Provided employees with the skills and equipment to accomplish assigned tasks in a cost-effective and efficient manner.

Agencies that have not met most of these goals make prime candidates for installing the prototype QA program, the main advantages of which include the following:

· Identification of customer expectations concerning the LOS at which they wish the highway system to be maintained.
· Identification of the key activities involving workloads necessary to accomplish the desired LOS.
· Ability to document and transmit to field forces the amount of deficiencies allowable before an activity no longer meets the desired LOS.
· Ability to identify factors that reflect the relative importance of individual maintenance features/characteristics and their impact on the highway facility as a whole.
· Establishment of a maintenance work priority system, showing which work will be performed first in the event of funding shortfalls or in emergency situations.
· Ability to monitor the actual LOS being achieved in each category or work activity within a maintenance unit, region, or district.
· Ability to identify locations that have extra resources (labor, equipment, and materials) or need additional resources in order to accomplish established LOS.
· Ability to produce budget requests showing the existing LOS, the proposed/desired LOS, and the funding required to achieve and maintain the desired LOS.
· Ability to measure customer satisfaction with the LOS being provided.
· Establishment of a uniform LOS in all management areas within the maintenance and operations group.

At the same time, highway agencies enticed by these numerous benefits must consider the following disadvantages in implementing the prototype QA program:

· Permanent change in management philosophy and attitudes towards the commitment to the agency's maintenance and operations workforce and how work is accomplished.
· Cost of developing, implementing, and monitoring LOS goals.
· Potential employee, union, and special-interest concerns with the development and implementation of LOS criteria.
