CHAPTER 4. DEVELOPMENT OF PROTOTYPE QA PROGRAM

Introduction

The primary task of developing a prototype QA program for the maintenance of highway facilities was undertaken following thorough reviews of the collected literature and detailed evaluations of current highway agency quality programs. The prototype program began as a collection of sound maintenance management and maintenance quality practices, most of which formed the core of the program. Although the ideas and procedures behind some of these practices were modified to better capture today's quality management precepts, other practices were readily acceptable for inclusion as part of the prototype QA program.

As both new and old ideas of assessing, controlling, and assuring quality were introduced into the core program, the prototype evolved into a flexible, multi-component program, fully adaptable by interested agencies in two stages: development and implementation. By this point in the development process, documentation of the various component principles, procedures, and interactions fell under the ensuing task of Implementation Manual development.

This chapter describes the work conducted in developing the prototype QA program for highway maintenance. It consists of several sections, beginning with a discussion of the work approach taken in developing the prototype program. The work approach section is followed by discussion of how various aspects of the program, such as the types of data required to run the program and obtaining and using customer input, were addressed during the development process. The final section summarizes the QA program and briefly describes its transformation into an implementation manual.

Work Approach

The basic idea in developing the prototype QA program was to establish a core set of practices using the various tried-and-true management techniques unearthed in the literature review and agency surveys, and to introduce into that framework, where possible and practical, new, effective quality management concepts. In essence, yesterday's proven methods were to be infused with tomorrow's highly credible ideas. Throughout the evolution of the prototype program, a clear focus was maintained on the goals of the program. These goals consisted primarily of the following:

Maintain the highway network at an acceptable LOS based on all customer input into maintenance activities.

Develop minimum criteria for a QC process of daily maintenance operations to ensure that operations are being conducted in an effective manner.

Provide a documented means of evaluating the condition of the highway system and the resources used to achieve an acceptable LOS.

Improve the way in which highway maintenance operations are performed by preventing problems before they develop.

Several secondary goals were set forth to ensure that the prototype QA program would receive maximum consideration from quality-seeking agencies. These program goals included the following:

Functional in a centralized or decentralized management environment and in both large and small agencies.

Produces LOS ratings regardless of the level of contract maintenance.

Produces repeatable and reliable ratings for a variety of conditions and features.

Provides for customer involvement.

Cost effective to implement.

A paramount regard in the development of the prototype program was the need for the ability to assess maintenance quality at the network, project, and activity levels. Each of these levels exists to some degree in highway agencies, and the ability to manage from all three levels is a highly desirable and powerful feature. A brief discussion of each maintenance level is provided below.

Network-level QA: Refers to the quality of the maintenance management of an entire highway system. This management originates at the central office level and filters through to the district/region level and then the subdistrict/area level. The primary concern with network-level QA is appropriate allocation of funds and the establishment of network-wide standards.

Project-level QA: Refers to the quality of various maintenance activities applied to a particular bridge or section of highway to keep it at a desirable LOS. The maintenance activities performed at this level are done as part of an approved maintenance plan formulated during design or are done in order to bring conditions among individual projects to more consistent levels.

Activity-level QA: This level of highway maintenance QA includes the routine activities performed by a maintenance crew and those activities that are done as an immediate response to a change in conditions (e.g., snow and ice removal, accident clean-up, guardrail repair). The most important aspect of this level is making sure that the end results of specific activities are consistent among maintenance crews and are of high quality.

Program Development

Development of the prototype QA program proceeded largely in the fashion originally planned. Several outstanding managerial and technical processes were identified during the agency review process. The underlying principles and methods of those processes were carefully examined to determine the extent to which they conformed with basic quality tenets, such as focus on customers, use of statistical process control (SPC) techniques, and instituting training. Several processes were found to be well rooted in quality principles and were immediately embraced for use in the prototype program. Others were found to be somewhat lacking and were modified to better fit the quality mindset.

At this stage of development, each process began to be viewed as an individual component (or step) in an overall quality management program. The components were then arranged in a logical sequence that included a program development phase and a program implementation phase. In keeping with CQI principles, a portion of the implementation phase was designed as a quality enhancement cycle that would allow agencies to integrate new technologies and customer feedback, as well as adjust to changes in funding.

The flow chart in figure 4 shows the various components that were used to define the prototype QA program, as well as the order in which they were established. A brief discussion of each key component is provided below, whereas complete details of all components are given in the QA program Implementation Manual.

Key Maintenance Activities: Grouping of key work activities into like categories (i.e., maintenance elements) for the purpose of evaluating maintenance quality.

Customer Expectations: The one-time collection of highway users' expectations concerning the LOS at which an agency should maintain its highway system.

LOS Criteria: Clear and measurable definitions concerning the points at which deficiencies cause maintenance features/characteristics to no longer meet expectations. LOS criteria are usually expressed in terms of amount and extent of deterioration (e.g., size and frequency of potholes, amount of litter per mile).

Weighting Factors: Factors that (a) reflect the relative importance of the individual maintenance features/characteristics that comprise a maintenance element and (b) reflect the relative importance of the individual maintenance elements that comprise a highway facility as a whole.

Maintenance Priorities: Establishment of the order in which work activities will be conducted in the event that a shortage of resources occurs. Each work activity is prioritized according to the four fundamental maintenance objectives, ranked as follows:

1. Safety of the traveling public.
2. Preservation of the investment.

(Flow chart showing the prototype QA program components. PHASE I-PROGRAM DEVELOPMENT: key maintenance activities and features/characteristics; roadway segment population and sample segment selection process; customer expectations; LOS criteria and target LOS; weighting factors; maintenance priorities; LOS rating system; LOS data collection, analysis, and reporting techniques; agency management approval. PHASE II-PROGRAM IMPLEMENTATION: baselining existing LOS; workload inventory; activity cost data; resource estimate to achieve target LOS; funding request (zero-based budget); partial or full program implementation; priorities and emergencies; formal LOS inspections; QC of LOS rating teams; LOS analysis and reporting; customer satisfaction; agency monitoring; process updating using baseline, current, and target LOS's.)

Figure 4. Prototype QA program flow chart.

3. User comfort and convenience.
4. Aesthetics.

Baselining Existing LOS (Pilot Study): Determining the existing LOS of the maintenance of the agency's highway system using the components above.

Workload Inventory: Information on the type, location, and dimensions of key maintenance features that can be used to estimate potential workloads for maintenance activities.

Activity Cost Data: Actual cost for performing a unit of work for a specific activity. For agencies that do 100 percent of their work using agency employees, this is usually readily available; however, for agencies that have a significant mix of in-house and contract maintenance forces doing the same activity, a proportional blend of the cost data will be required.

Zero-Based Budget: Application of the activity cost data and field trial results toward determining the costs required to produce a specific target LOS established from customer expectation input.

Formal LOS Inspections, Analysis, and Reporting: Periodic maintenance ratings stemming from random inspections of short segments of the entire highway system maintained by an agency.

Customer Satisfaction: The periodic assessment of how satisfied highway users are with the LOS being provided by a maintenance agency.

The prototype QA program was designed to encompass the maintenance elements believed to be most common among highway agencies. These elements included traveled roadway (i.e., mainline pavement), shoulder (paved or unpaved), roadside, drainage features, traffic services, and vegetation and aesthetics. Because bridges and snow and ice control were also recognized as substantial parts of several agencies' maintenance programs, methods for applying the QA program to these elements were specially formulated.

Data Elements

Probably one of the most important considerations of any program is the type of information required in order for the program to be properly administered. The ability of supervisors to make sound managerial decisions is greatly strengthened when the right kind of information is available to them. The data elements considered to be essential or highly beneficial to the QA program were identified at the outset of the program development and consist of the following:

Assessments of highway customer expectations: Customer survey ratings of the importance of various maintenance aspects on different facility types.

Roadway features inventory: MMS listing of quantities, locations, and characteristics of maintenance-related roadway features/characteristics.

Internal assessments of maintenance quality: Technical pass-fail condition ratings of various maintenance features/characteristics, such as guardrails and pavement rutting.

External assessments of maintenance quality: Customer survey ratings of the level of satisfaction with maintenance-related roadway features/characteristics.

Annual maintenance costs: Total costs allocated and expended for a given year for each maintenance activity performed by an agency.

Work accomplishments: Productivity and resource (equipment, labor, and materials) usage data on each maintenance crew.

Other important data elements included the documented costs and resource requirements associated with operating the QA program.

Availability of Data from Other Management Information Systems

Determining the availability of data in existing management information systems was an important task in the development of the QA program. If a considerable amount of the data described in the previous section was found to be available in other management systems, such as PMS's and BMS's, and those data were accurate and easily accessible, then the scope of the LOS rating system might be drastically reduced, since the subject data could be extracted from the appropriate management system rather than collected a second time in the field.

During the August 1995 field reviews of the seven selected highway maintenance agencies, none of the agencies indicated incorporating data from other management information systems into their LOS rating process. The general sense among the agencies was that data interchange was too difficult or inefficient, or that the data contained in the other systems were not acceptable for use. To further investigate this matter, the data elements commonly contained in four different management systems (PMS's, BMS's, SMS's, and infrastructure management systems [IMS's]) were examined. The sources for this process included pavement condition survey manuals and reports and bridge inventory manuals obtained from several SHAs, conversations with key highway agency officials, and various pieces of literature on infrastructure management. Summaries of the findings pertaining to each of the four management information systems are provided in the sections below.

Pavement Management Systems

A PMS is defined as an established, documented procedure that treats all of the pavement management activities (planning, budgeting, design, construction, maintenance, monitoring, research, rehabilitation, and reconstruction) in a systematic and coordinated manner. PMS's usually include condition surveys, a database of pavement-related information, analysis schemes, decision criteria, and implementation procedures, all of which can be used to establish priorities for overlays, maintenance, and allocation of funds; budget preparation; development of rehabilitation strategies; and identification of problem areas.

In essence, a PMS is a data bank for a network of pavement sections (Peterson, 1987).

The element of a PMS considered to have the most potential to serve as an input interface for the QA program was pavement condition survey information. Typically, four primary condition indicators are taken as part of a PMS: structural capacity, friction, roughness/ride quality, and distress. Though highway agencies use a variety of methods to collect condition indicator information, and agencies key in on different condition measures, they all share an overall objective of determining how well pavements are performing.

The structural capacity of a pavement is today most commonly measured using nondestructive deflection testing (NDT) techniques. A falling weight deflectometer (FWD) is used at selected locations throughout a pavement section to identify weak areas, to estimate the strength of the pavement system, and to predict the load-carrying capability of the pavement section given the amount of traffic it experiences. Because pavement maintenance actions provide little or no structural improvement, deflection data were not considered to be suitable for use as indicators of maintenance quality.

Pavement friction is a safety-related condition measure that describes the slipperiness of a pavement surface. Most commonly expressed as a friction rating or skid number, the lower the value, the more potentially hazardous the pavement is to motorists. Application of skid numbers or friction ratings to the LOS rating system may or may not be appropriate, depending on an agency's policy for correcting slippery pavements. In some agencies, the prime responsibility for improving pavements with low friction rests with maintenance, whereby they are tasked, through their own forces or through contracted forces, with applying surface treatments or mechanized patches or performing some sort of surface milling or grinding. In other agencies, however, correction of slippery pavements is a rehabilitation action item and is privately contracted through other departments within the agency.

The ride quality or roughness of pavements is evaluated by many SHAs on an annual or biennial basis using either a response-type measuring instrument, such as the Mays Ride Meter and the PCA Roadmeter, or an inertial profiling vehicle, such as the South Dakota Profiler and the K.J. Law Profilometer. Both system types generate a longitudinal roughness parameter, expressed as in/mi, for a specified length of pavement section, with the latter type measuring the longitudinal profile of a pavement and then computing an international roughness index (IRI) based on the measured profile and standardized vehicle response characteristics.

Although maintenance is obligated to correct various surface defects (bumps, holes, dips, swells) that can collectively result in a rough ride, they are primarily concerned with localized rough spots from the standpoint of safety.

Any bumps, holes, dips, or swells significant enough to cause a hazard to the traveling public are under the immediate domain of maintenance. As a pavement becomes more and more deteriorated, the amounts of these distresses become greater and greater. However, for reasons of safety, the severity levels of the distresses will be kept somewhat in check through proper maintenance.

The one data element of roughness/ride quality surveys that was found to have potential use in the QA program is the longitudinal profile measured by inertial profilometers. These computerized profiles are usually stored for a time after completion of a survey and may be available for visual examination. Though the profiles are usually smoothed or filtered, the possibility exists that vertical deviations identified in the computerized profile are indicative of localized defects that have not been treated by maintenance.

The last type of pavement condition indicator data is distress. Evidence of distress is manifested through various characteristics, such as cracking, rutting, and potholes in asphalt pavement and spalling, cracking, and faulting in concrete pavement. Most highway agencies perform either a visual or automated distress survey in order to quantify the amount and severity of each distress type present in the pavement.

The correction or treatment of distresses is not the full responsibility of maintenance. Some distresses, such as fatigue cracking, rutting, or shattered slabs, are the result of structural deficiencies and, when they occur on a large scale, must be structurally improved through appropriate rehabilitation strategies. Nevertheless, maintenance may be involved in providing temporary fixes until a long-term rehabilitation effort can be conducted. Other distresses, such as bleeding, spalling, potholes, and bumps, require functional improvements in order to restore adequate safety and, to a lesser extent, riding comfort. Treatment of these types of distresses is usually provided by maintenance and, therefore, the condition data for these distresses may be suitable for use as indicators of maintenance quality. Still other distresses, such as longitudinal and transverse cracking and joint seal damage, are also largely the responsibility of maintenance. These types of distresses are treated to preserve the pavement investment (i.e., extend the life of the pavement). Condition data for these types of distresses may also be applicable in the assessment of maintenance quality.

The suitability of PMS distress data for use in the LOS rating system was found to be dependent on many factors, the foremost of which include the following:

Frequency of pavement surveys: Most PMS's entail annual (100 percent of pavement sections sampled each year), biennial (100 percent of pavement sections sampled every 2 years), or triennial (100 percent of pavement sections sampled every 3 years) surveys of a given facility type.

For instance, several SHAs, like the Indiana and Wisconsin DOTs, perform annual surveys of their interstate pavements and biennial surveys of their non-interstate pavements. Since maintenance quality should be rated at least annually, PMS data based on biennial or triennial surveys would not be adequate.

Timing of pavement surveys: Pavement management distress surveys are often performed in the spring and early summer to avert the busy construction schedule. LOS inspections, on the other hand, may be done routinely with little regard to yearly seasons or construction/maintenance seasons, or they may be done at selective times of the year. Any attempted linkages between the two systems must take into consideration the need for consistency in timing.

Length of pavement survey segments: LOS sample segments will generally range between 0.1 and 1.0 mi (0.16 and 1.61 km). PMS survey segments may range from 100 ft (30.5 m) to the entire length of a pavement section (which could be several miles). If there is to be a linkage between the two systems, a common length must be established.

Availability of desired data: Several highway agencies only collect key distress data, such as cracking, rutting, and patching. In such instances, the possibility of using PMS data for a more complete assessment of maintenance quality is substantially reduced.

Accuracy of data and type of pavement surveys: PMS distress data are collected in a myriad of fashions, ranging from visual surveys of randomly selected samples to automated continuous surveys. For PMS data to be useful in the LOS ratings, the information must be collected from the same sample units as the LOS ratings, and the condition surveys must be objective, accurate, and repeatable.

Should an agency be able to overcome the above linkage obstacles, it still must consider the nature of the PMS distress data collected. The agency must first review each distress type and decide if maintenance has an obligation to correct it or whether, by policy, it is a distress that is "out of maintenance's hands."

Bridge Management Systems

The Bridge Inspector's Training Manual (FHWA, 1991) states:

In 1971, the National Bridge Inspection Standards (NBIS) came into being. The NBIS set national policy regarding bridge inspection frequency, inspector qualifications, report formats, and inspection and rating formats. Because of the requirements that must be fulfilled for the NBIS, it is necessary to employ a uniform bridge inspection reporting system. A uniform reporting system is essential in evaluating correctly and efficiently the condition of a structure. Furthermore, it is a valuable aid in establishing maintenance priorities and replacement priorities, and in determining structure capacity and the cost of maintaining the nation's bridges.

The information necessary to make these determinations must come largely from the bridge inspection reporting system. Consequently, the importance of the reporting system cannot be overemphasized. The success of any bridge inspection program is dependent upon its reporting system.

The NBIS requires that the findings and results of a bridge inspection be recorded on standard forms. Although the Structure Inventory and Appraisal (SI&A) sheet shown in figure 5 is not a standard form, it represents a list of bridge data that each State must periodically report to FHWA for all public structures within its inventory. Many SHAs have developed their own standard forms using the SI&A sheet as a guide.

A considerable effort has been made by the FHWA to make the information and knowledge available to accurately and thoroughly inspect and evaluate bridges. Through the manuals developed by FHWA and training courses taught by the National Highway Institute (NHI), a major effort has been accomplished to standardize the complex issue of bridge inspection. As the areas of emphasis in bridge inspection programs change due to newer types of design and construction techniques, the guidelines for inspection must also be modified to increase uniformity and consistency.

A primary use of the inspection reports is to provide guidance for immediate follow-up inspections or corrective actions. These reports provide information that may lead to decisions to limit or deny the use of any bridge determined to be hazardous to public safety. Deficient bridges are divided into two categories: structurally deficient and functionally obsolete. Generally speaking, structurally deficient bridges are weight-restricted due to condition, are in need of rehabilitation or, in rare instances, have been denied access by the public. Functionally obsolete bridges are normally structurally sound but do not meet current standards for deck geometry, clearances, or approach alignment.

At the close of the inspection, the bridge inspector must use his experience to document inspection deficiencies that have been observed. A thorough and well-documented inspection is essential for making informed and practical recommendations to correct bridge deficiencies. A well-prepared bridge inspection report not only provides information on existing bridge conditions, but it also serves as an excellent reference source for future inspections.

The accuracy and uniformity of information are vital to the management of an agency's bridge program. QC is the enforcement tool used on a daily basis to ensure the inspection conclusions and recommendations are based on correct information. Many States are assigning the final review and signing of inventory results to the chief inspector, who should be a professional engineer or have a minimum of 10 years of experience in bridge inspection. Quality assessment is usually accomplished by

(Reproduction of the National Bridge Inventory Structure Inventory and Appraisal form, covering identification, structure type and material, age and service, geometric data, navigation data, classification, condition, load rating and posting, appraisal, proposed improvements, and inspection data items.)

Figure 5. FHWA structure inventory and appraisal sheet (FHWA, 1991).

Despite being more qualitative than quantitative, attribute-based statistical QA was judged to be the more appropriate methodology for the present time and therefore was chosen for use in the prototype QA program. Although the method of variables better captures the spirit of CQI, the vast majority of highway agencies would be unable to apply this method because of its much greater demand for resources, particularly labor-hours.

A second consideration of statistical applications in LOS field inspections was the method used to compute LOS ratings. In current maintenance rating programs, a percentage statistic is computed for each feature/characteristic by summing the number of segments in which a given feature/characteristic met the standard and then dividing it by the total number of segments. The resulting statistics for each feature/characteristic are then combined with various feature/characteristic weights and element weights to produce an overall LOS rating. In the prototype QA program, an LOS rating is computed for each sample roadway segment using the established feature/characteristic weights and element weights. An overall LOS rating is then determined by computing the statistical mean. This approach allows an agency to determine the variance and standard error in the ratings which, in turn, can be used to calculate future sampling requirements. The details of this computational approach are given in the following section titled "LOS Analysis."

A third consideration of statistics in LOS field inspections pertained to the need for a pilot field study, or a trial run of LOS inspections. A pilot study provides insight about the inherent variability of LOS ratings, which can then be taken into consideration when determining the required sample size for (future) formal LOS inspections. For a given confidence level and precision, greater variability in LOS ratings results in a higher number of roadway segments to be sampled. Pilot studies were noted as having been performed by the Virginia, Maryland, and Florida DOTs during the implementation phase of their quality assessment programs. However, a pilot study is not entirely necessary, as a statistical formula exists that allows computation of the required roadway sample size based on a specified confidence level and precision. Unfortunately, the price for guaranteeing precision and being able to do without a pilot study is an increase in sample size. Even for sizeable variability in LOS ratings (standard deviation of 3 to 4 percentage points), considerably larger sample sizes would be required by foregoing a pilot study. Since a pilot study has the makings to serve as the first formal round of LOS inspections (the number of samples taken in the pilot may satisfy the requirements for formal LOS inspections or can be supplemented with additional samples), and because considerably fewer samples are likely to be required in comparison with nonpilot-based sampling, a pilot field study was advocated in the prototype QA program. The results of the pilot inspection round can serve as a baseline of existing maintenance conditions from which future improvements in maintenance quality can be measured.
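To make the per-segment computation concrete, the following Python sketch (not taken from the report) shows one way the weighted pass/fail roll-up could look. The element names, feature names, weights, and inspection results are illustrative assumptions only; an implementing agency would substitute its own features, weights, and field data.

# Minimal sketch of the per-segment LOS computation described above:
# pass/fail feature ratings are combined with feature/characteristic
# weights and element weights, and the overall LOS is the statistical
# mean across all sampled segments.
from statistics import mean

# Hypothetical weights; an implementing agency would define these itself.
ELEMENT_WEIGHTS = {"traveled_roadway": 0.40, "shoulder": 0.25, "drainage": 0.35}
FEATURE_WEIGHTS = {
    "traveled_roadway": {"potholes": 0.6, "cracking": 0.4},
    "shoulder": {"drop_off": 1.0},
    "drainage": {"ditches": 0.5, "culverts": 0.5},
}

def segment_los(passed: dict) -> float:
    """LOS for one segment: weighted share of features meeting the standard.

    `passed` maps element -> feature -> True/False (LOS criterion met or not).
    """
    score = 0.0
    for element, e_weight in ELEMENT_WEIGHTS.items():
        features = FEATURE_WEIGHTS[element]
        # Feature weights within each element sum to 1, so a fully passing
        # element contributes its full element weight.
        element_score = sum(w for f, w in features.items() if passed[element][f])
        score += e_weight * element_score
    return 100.0 * score  # express as a 0-100 rating

# One rated segment: every feature passes except the shoulder drop-off.
example = {
    "traveled_roadway": {"potholes": True, "cracking": True},
    "shoulder": {"drop_off": False},
    "drainage": {"ditches": True, "culverts": True},
}
segment_ratings = [segment_los(example)]   # ratings for all sampled segments
overall_los = mean(segment_ratings)        # overall LOS = statistical mean
print(round(overall_los, 1))               # 75.0 for this single segment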

A fourth aspect of statistics in LOS field inspections was the sample segment selection procedures and the corresponding sampling rates. As with customer surveys, a sound sampling process was developed to help implementing agencies determine the required number of sample segments to inspect, given a desired precision and confidence level and a preliminary estimate of the variance in LOS ratings. Reviews of the sampling procedures used by Florida and Maryland indicated that sampling requirements in these agencies' programs are determined periodically using the statistical bootstrap method, a relatively sophisticated resampling procedure that uses no explicit formulas in relating population size, sample size, confidence level, and precision. This approach to determining sample size was considered impractical, as most implementing agencies would need to seek the assistance of a statistician. To keep the sampling process as simple as possible, a basic formula similar to one used by Virginia was identified which could be used by those individuals leading the QA program implementation effort. The formula was proposed in the Implementation Manual and is as follows:

n = (z^2 x s^2) / d^2     Eq. 3

where:
n = required sample size.
s = standard deviation of the ratings from the pilot study.
d = desired precision.
z = z-statistic (for 95 percent confidence, z = 1.96).

It is clear in this formula that the necessary sample size increases as the desired precision increases. That is, if one wants more precise results (smaller value of d), then a larger sample size is required.

The method recommended in the prototype QA program for selecting roadway sample segments was simple random sampling, performable through a random number generator function available in most statistical or spreadsheet computer software. This method assures each individual segment in the total roadway population the same chance of being chosen for field inspection. Recognizing the need to obtain adequate sampling representation among various roadway subsets, the option of stratifying (i.e., subdividing) the total roadway segment population was also featured in the prototype QA program. The stratification could be according to geography (district, residency, maintenance unit), facility type (functional class, highway system), or any combination thereof, with simple random sampling carried out for each stratum. It was recommended that the total number of strata be limited to 10, since the added benefit associated with more than 10 was considered to be marginal in comparison with the cost of increased sample sizes. The precedence for stratified sampling in maintenance quality assessment is well established, with the Virginia DOT stratifying by highway system (interstate, primary, secondary highways), the Maryland DOT by county (23 total), and the Florida DOT by both maintenance unit (30 total) and functional classification (urban limited access, rural limited access, urban arterial, rural arterial).
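As a rough illustration of equation 3 and the simple random selection step, the Python sketch below computes a required sample size from assumed pilot-study statistics and then draws that many segments at random. The population size, segment identifiers, and pilot values are hypothetical.

# Minimal sketch, not from the report: required sample size per equation 3
# (n = z^2 * s^2 / d^2), followed by simple random selection of segments.
import math
import random

def required_sample_size(s: float, d: float, z: float = 1.96) -> int:
    """Equation 3: s = pilot-study standard deviation, d = desired precision,
    z = z-statistic (1.96 for 95 percent confidence)."""
    return math.ceil((z ** 2) * (s ** 2) / (d ** 2))

# Worked check: s = 4 rating points, d = 1 point, 95 percent confidence
# gives n = 1.96^2 * 16 / 1 = 61.5, rounded up to 62 segments.
n = required_sample_size(s=4.0, d=1.0)

# Hypothetical roadway segment population (e.g., 0.1-mi segments).
population = [f"segment_{i:05d}" for i in range(25_000)]
sample = random.sample(population, n)   # each segment equally likely to be drawn
print(n, sample[:3])

For stratified sampling, the same two steps would simply be repeated within each stratum, subject to the recommended limit of 10 strata.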

LOS Analysis

As discussed previously, the computation of an overall LOS rating under the prototype QA program entails computing individual LOS ratings for each sample segment and then calculating the statistical mean and variance. These calculations are made using equations 4 and 5, shown below.

LOS_s = (sum of LOS_si) / n     Eq. 4

where:
LOS_s = mean segment LOS.
LOS_si = individual segment LOS values for n sample segments.
n = number of sample segments.

s^2 = [sum of (LOS_si - LOS_s)^2] / (n - 1)     Eq. 5

where:
s^2 = sample variance of segment LOS ratings.
LOS_si = individual segment LOS values for n sample segments.
LOS_s = mean segment LOS.
n = number of sample segments.

The standard deviation and the appropriate confidence interval are then computed using equations 6 and 7 given below.

s = sqrt(s^2)     Eq. 6

where:
s = standard deviation of segment LOS ratings.
s^2 = sample variance of segment LOS ratings.

LOS_s +/- (z x s / sqrt(n))     Eq. 7

where:
LOS_s = mean segment LOS.
z = z-statistic (1.96 for 95 percent confidence, 2.81 for 99.5 percent confidence).
s = standard deviation of segment LOS ratings.
n = number of sample segments.
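The following Python sketch, included here only as an illustration, applies equations 4 through 7 to a small set of made-up segment ratings; the rating values are assumptions, not data from any agency.

# Minimal sketch, not from the report: equations 4 through 7 applied to a
# list of per-segment LOS ratings (values invented for illustration).
import math
from statistics import mean, stdev

segment_ratings = [82.0, 88.5, 79.0, 91.0, 85.5, 84.0, 90.5, 86.0]

n = len(segment_ratings)
los_mean = mean(segment_ratings)    # Eq. 4: mean segment LOS
s = stdev(segment_ratings)          # Eqs. 5 and 6: sample standard deviation (n - 1)

def confidence_interval(z: float) -> tuple:
    """Eq. 7: LOS_s +/- z * s / sqrt(n)."""
    half_width = z * s / math.sqrt(n)
    return (los_mean - half_width, los_mean + half_width)

facility_ci = confidence_interval(z=1.96)   # facility as a whole, 95 percent
element_ci = confidence_interval(z=2.81)    # individual elements, 99.5 percent
print(round(los_mean, 1), facility_ci, element_ci)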

If the LOS of a highway facility, as a whole, is to be analyzed, then a 95-percent confidence coefficient (i.e., z equal to 1.96) is recommended in equation 7. However, if individual highway elements are to be examined to determine which elements cause a facility to be deficient, then a higher level of confidence (99.5 percent [z = 2.81]) is required. The higher confidence coefficient is necessary because multiple confidence intervals (one for each element) are being constructed for examination.

QA of LOS Rating Teams

Since most implementing agencies are likely to establish multiple LOS rating teams (i.e., satellite teams), a QA process for ensuring accurate (i.e., consistent and unbiased) rating results from all teams was deemed essential to the prototype QA program. Two alternatives for performing annual QA checks of satellite teams were identified, one based on analysis of variance and the other based on two-sample z-tests.

The analysis of variance approach entailed having each satellite team individually inspect a common set of randomly selected roadway segments. The variability of segment ratings, both within teams and between teams, is calculated and a statistical determination made as to whether significant differences in ratings exist among the teams. If significant differences were not found to exist, then the teams would be consistent and no further analysis would be necessary. If, on the other hand, significant differences were found to exist, then the teams whose ratings differed from the collective team rating would require adjustments in their rating process in order to bring their ratings into compliance with consensus ratings.

The two-sample z-test approach entailed having a central-office rating team (an ideal team, whose ratings are considered accurate) perform field inspections in each satellite team's domain. A common set of randomly selected roadway segments are independently inspected at the same time by both the central-office team and the satellite team. The paired ratings from all sample segments are then statistically analyzed via the z-test in order to determine if the satellite team's ratings are significantly different from the central-office team's ratings. If not, then the satellite team would be in compliance with the central-office team and no further analysis would be necessary. If so, then the satellite team would require adjustments in its rating process in order to bring its ratings into compliance with the central-office team.

The preferred method of QA checks on LOS rating teams was the two-sample z-test. This method was considered easier from a field coordination and execution standpoint, and it involved simpler statistical calculations. Just as important, however, was the fact that key personnel from the QA program administrative staff would remain active in the LOS rating system and could possibly identify areas of improvement. The complete set of steps for conducting z-test QA on LOS rating teams is detailed in the Implementation Manual.
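As a concrete illustration of the preferred check, here is a minimal Python sketch of a two-sample z-test on a common set of segments rated by the central-office and satellite teams. The rating values, the form of the test statistic, and the 1.96 critical value are illustrative assumptions; the Implementation Manual's detailed steps govern actual practice.

# Minimal sketch, not the Implementation Manual's procedure: a two-sample
# z-test comparing a satellite team's ratings with the central-office team's
# ratings on the same randomly selected segments (ratings are made up).
import math
from statistics import mean, variance

central = [84.0, 88.0, 79.5, 91.0, 86.5, 83.0, 90.0, 87.5, 82.0, 85.0]
satellite = [80.5, 85.0, 77.0, 88.5, 84.0, 79.5, 87.0, 85.5, 79.0, 82.5]

n1, n2 = len(central), len(satellite)
z = (mean(central) - mean(satellite)) / math.sqrt(
    variance(central) / n1 + variance(satellite) / n2
)

# At 95 percent confidence the critical value is 1.96; a larger |z| means the
# satellite team's ratings differ significantly and its rating process needs
# adjustment to bring it back into line with the central-office team.
if abs(z) > 1.96:
    print(f"z = {z:.2f}: ratings differ significantly; recalibrate the team")
else:
    print(f"z = {z:.2f}: ratings in compliance with the central-office team")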

Customer Input

A major part of the modern-day quality movement is recognition of the customer's needs. The Deming philosophy defines quality as "whatever the customer wants" (Miller and Krum, 1992). However, because not all customers want the same thing and because customers' wants and needs change over time, it is necessary to continuously assess and measure customer satisfaction. For maintenance agencies, of course, the primary customers are the highway users. Most everything a maintenance agency does is a service to the traveling public, whether it is of direct notice to them (like keeping the highway system safe, comfortable, and attractive) or is substantially less perceptible (like keeping the system structurally sound). Since the customers are paying a substantial part of the bill for maintenance through highway user taxes, it is only appropriate to ask for or solicit their opinions.

With more and more emphasis being placed on the customer, solicitation of customer input was made a key, albeit optional, component of the prototype QA program. The combination of knowing the level at which customers want the highway system initially kept (customer expectations) and the levels to which they are satisfied over time (customer satisfaction) allows an agency to make the proper adjustments in maintenance effort.

The inclusion of customer input into the prototype program prompted consideration of two main items. First, the method most suitable for soliciting customer opinions needed to be selected. Second, the detailed procedures for carrying out the selected method needed to be established.

As discussed in chapter 2, several methods have been or are currently being used to solicit customer input. These include focus groups, formal and informal surveys, customer panels, and formal and informal feedback forms. Though each method has advantages and disadvantages, only formal questionnaires, conducted by mail or telephone, were judged appropriate for determining highway users' expectations of service. These types of surveys, when properly constructed and administered on a statistical basis, yield the most reliable and representative customer inputs. The other techniques usually result in considerably biased, qualitative, or inadequate input. A third type of formal questionnaire survey, personal interviews, generates useful and reliable information but was found to involve extremely high costs.

Table 12 highlights some of the important facets of two recent customer surveys, one conducted by the Minnesota DOT to determine both customer expectations and customer satisfaction and the other conducted by the Pennsylvania DOT to determine customer expectations. This information reveals much about the resources required to obtain reliable customer input. It also illustrates some of the key differences between mail-in and telephone surveys, such as the cost of the surveys and the time frame and staffing needed to complete the surveys.

Table 12. Key facets of Minnesota (SMS, 1994) and Pennsylvania (Pennsylvania DOT, 1993) customer surveys.

Time Conducted
  Minnesota: November 1994.
  Pennsylvania: April 1994.

Source Listing
  Minnesota: Telephone listing.
  Pennsylvania: Pennsylvania Driver's License database.

Pretest
  Minnesota: Focus group.
  Pennsylvania: N/A.

Sampling Type
  Minnesota: Disproportionate stratified random sample (based on 1990 Census of Minnesota county populations and aggregated into eight maintenance districts).
  Pennsylvania: Random sample (using random number generator).

Survey Type
  Minnesota: Formal telephone questionnaire survey of customer expectations and customer satisfaction; 61 total questions.
  Pennsylvania: Formal mail-in questionnaire of customer expectations only; 25 rating questions, 1 page.

Sample Size
  Minnesota: 1,200 originally proposed phone interviews (300 in each of 2 districts and 100 in each of 6 districts); 1,244 actual phone interviews. 10% of households contacted refused to participate, and less than 2% of respondents terminated their participation midway.
  Pennsylvania: 4,800 questionnaires mailed out (400 in each of 12 counties); 1,018 properly completed responses, for a 21.2% overall response rate.

Cost
  Minnesota: $40,000 (approximately $32/respondent).
  Pennsylvania: $20,000 (approximately $20/respondent).a

Timeframe and Staff
  Minnesota: 2 weeks using several telephone survey staff (responses entered directly into computer at time of survey).
  Pennsylvania: 1.5 months using three in-house individuals committed part time.a

N/A = not available.
a Pennsylvania conducted a customer satisfaction survey shortly after the customer expectations survey. The back-to-back surveys were conducted over a month timeframe at a reported combined cost of $40,000.

The development of procedures for conducting mail-in and telephone surveys occurred in four key areas: statistical sampling, questionnaire development, in-house testing of the questionnaire, and formal conduct of the survey. As discussed previously in this chapter, random sampling and stratified random sampling were found to be the most conducive sampling methods. With the ideal survey population defined as "users of the highway facilities maintained by the agency," appropriate source listings to represent the survey population were identified based on the experiences of various SHAs, including Maryland, Pennsylvania, Oregon, and Minnesota. State Department of Motor Vehicles (DMV) or Driver's Licensing Bureau (DLB) agencies were considered the best sources, as the records maintained by these entities typically include the names of individuals licensed to drive or operate vehicles within the State, along with the corresponding phone numbers and mailing addresses.

Telephone listings are also considered a viable source.

This type of source listing is easier to access but is considerably more biased than the two previous lists because of the potential for households with no telephone or with an unlisted number (it should be noted that random-digit dialing [RDD] is a technique that can be used to eliminate the bias associated with unlisted telephone numbers). To reach the right customers and limit bias, the phone interviewers can ask to survey the licensed driver in the household with the most recent birthday, as was done by a survey consultant for the Minnesota DOT (SMS, 1994).

In the design of maintenance questionnaires, it was determined that the most appropriate way to measure the importance of or satisfaction with various maintenance work items is to use rating scale questions. Scales of 1-5, 1-10, and 1-100, with 1 representing "not important" and 5, 10, or 100 representing "very important," are effective means of measuring customer expectations and satisfaction, particularly when the questions are posed in simple terms to which the customer can relate (e.g., potholes, smoothness, visibility). Refraining from the use of technical questions, such as "what degree of reflectivity is acceptable for striping?" is also important. The traveling public is most valuable in defining what it expects when traveling on the highway system, but it is the job of the professionals in the agency to work out technical issues.

To maintain as high a response rate as possible and the highest degree of consideration for the questions, questionnaires must be kept short and concise. A 1- to 3-page mail-in survey or a 5- to 10-minute phone survey is usually sufficient for asking the questions pertinent to customers' expectations or satisfaction. On mail-in surveys, the survey form must be made attractive (proper text arrangement, spacing, and style) so that participants are more receptive to the survey.

As was pointed out by Kopac (1991), "Regardless of how carefully the questionnaire has been worded, it should not be assumed that it will work well until it has been tested under field conditions." Hence, informally pretesting the survey on non-professionals was emphasized, as it will provide useful feedback on the clarity, interpretation, and logical sequencing of the questions, as well as the length, receptiveness, and effectiveness of the survey.

The decision of whether to administer the questionnaire survey by mail or by telephone is largely dependent on costs and the availability of staff and other in-house resources. Again, table 12 shows the costs, time frame, and staffing that were required for conducting the Minnesota telephone and Pennsylvania mail-in surveys. This information, along with samples of each agency survey, was featured in the Implementation Manual.

Adjusting to Improvements in State of the Art

Many advancements in highway maintenance have been made in the last half century. These advancements were spawned by the desires of maintenance practitioners, researchers, and industry personnel to make operations safer and more cost-effective, and to make highway features last longer.

The advancements have come in the way of new materials (and new formulations of existing materials), new equipment (and modifications to existing equipment), and new technologies, and their acceptance into practice has been made possible through on-the-job training, demonstrations, instructional workshops, experiments, and research reports and symposiums.

Future advancements in highway maintenance are certain to occur, and training and education of employees are essential if continuous improvement is to be sustained. For this reason, employee training was made a key component in the prototype QA program. Training helps provide employees with the proper skills and knowledge to do their jobs right. And when jobs are done right, the quality of the service or product is improved.

Several important ideas about employee training were recognized in the development of this program component. First was the idea that an employee's resistance to change is often rooted in the lack of appropriate skills or resources. Although a poor attitude may be a part of the problem, it is more likely that an employee wishes to do a good job but simply lacks the knowledge or tools to accomplish it.

Another important aspect that was recognized was ensuring that employees become familiarized with the work issues so they understand the importance of their job function. Training employees to think about the work they perform, why it is performed, and ways in which it could be improved creates greater potential for technology advancements for the agency and promotes motivation and self-esteem for the employees.

A third aspect that was considered important in the educational process was communication. Because the natural tendency of most employees is to do the best job they can with very little complaining, some employees' needs (be they new equipment, improved work skills, or additional staff) may go unaddressed. Eventually, their work performance can suffer, giving rise to internal disputes. By instructing employees in the art of communicating with peers and supervisors and resolving small problems initially, a major step can be taken in avoiding major conflicts down the road.

Quantifiable and Replicable Results

Consistency and repeatability of ratings should be a significant concern of any implementing agency. Without the QC function being performed throughout the implementation process, the ratings that are produced will most likely be challenged. To prevent major issues being made of LOS ratings, formal LOS training and a pilot field study were made major components of the QA program. These programs will help familiarize LOS raters with the rating process and will provide an estimate of the inherent variability of ratings among teams (if there is more than one) once a reasonable level of consistency has been achieved through training.

Although variability must be expected among the LOS ratings, it is important to minimize it to acceptable limits and then document it for later use in determining the required sample size for formal LOS field inspections.

The objective of the LOS training component is to make sure that raters look at the same features in each case and arrive at the same basic conclusions concerning their evaluations. Having a good description or specification of when a feature/characteristic meets or exceeds desired conditions is a key factor in this phase. Success in establishing this criterion will enable an agency to begin establishing the credibility of its QA system, whereas a lack of proper descriptions may have the opposite effect and cause further efforts to be ineffective.

The objective of the pilot field study is to determine the variability of ratings among the different rating teams. With an initial estimate of the variance among teams, the required number of sample segments to be rated during the formal inspection process can be calculated for a given precision and confidence level.

The QA/QC process for LOS rating teams, which was touched upon earlier, also helps effect consistent rating results. If a team's ratings become statistically significantly different from the central-office team's ratings, then actions are immediately taken to bring that team's ratings into line with the central-office team's ratings. The actions may include helping the team determine which features/characteristics are present at a given sample segment or reinforming them of how a certain condition standard is interpreted.

Pros and Cons of Implementation

Why should any agency consider abandoning past management practices and set out on a new direction for its maintenance operations? The answer can be very straightforward; it should not if it has accomplished the following goals:

Assurance that its highway maintenance and operations meet or exceed the expectations of the traveling public.

Has a good relationship with the groups (e.g., highway commission, Governor's office, legislature, county/city commissioners) having final say over agency budget requests.

Obtained adequate funding for agency maintenance needs.

Provided equal LOS for all components of the highway system.

Provided employees with the skills and equipment to accomplish assigned tasks in a cost-effective and efficient manner.

Agencies that have not met most of these goals make prime candidates for installing the prototype QA program, the main advantages of which include the following:

Identification of customer expectations concerning the LOS at which they wish the highway system to be maintained.

Identification of the key activities involving workloads necessary to accomplish the desired LOS.

Ability to document and transmit to field forces the amount of deficiencies allowable before an activity no longer meets the desired LOS.

Ability to identify factors that reflect the relative importance of individual maintenance features/characteristics and their impact on the highway facility as a whole.

Establishment of a maintenance work priority system, showing which work will be performed first in the event of funding shortfalls or in emergency situations.

Ability to monitor the actual LOS being achieved in each category of work activity within a maintenance unit, region, or district.

Ability to identify locations that have extra resources (labor, equipment, and materials) or need additional resources in order to accomplish established LOS.

Ability to produce budget requests showing the existing LOS, the proposed/desired LOS, and the funding required to achieve and maintain the desired LOS.

Ability to measure customer satisfaction with the LOS being provided.

Establishment of a uniform LOS in all management areas within the maintenance and operations group.

At the same time, highway agencies enticed by these numerous benefits must consider the following disadvantages in implementing the prototype QA program:

Permanent change in management philosophy and attitudes toward the commitment to the agency's maintenance and operations workforce and how work is accomplished.

Cost of developing, implementing, and monitoring LOS goals.

Potential employee, union, and special-interest concerns with the development and implementation of LOS criteria.
