Friday, April 5, 2019

Value Package Introduction in COS

Abstract

VPI (Value Package Introduction) was one of the core programs in Cummins Operating System (COS). VPI was the process by which the Company defined, designed, developed and introduced high quality Value Packages for customers. One of the key processes in a VPI program was to identify part failures. When a part failure was identified, it was transported to another plant location. A delay in delivery time from one plant location to another impeded the diagnosis of a part and resulted in a delay of a critical resolution and subsequent validation. As a proven methodology, customer focused Six Sigma tools were employed for this project to quantify the performance of this process. Six Sigma was a data-driven approach designed to eliminate defects in the process. The project goal was to identify root causes of process variation and reduce the number of days it was taking for a part to move from point of failure to the component engineer for evaluation. The average number of days at the start of this project was 137. The goal was to reduce this by 50%. The benefits of performing this project were a reduction in the time it took for parts to move, which impacted the ability to analyze and fix problems in a timely manner and allowed the part to be improved or modified and put back on the engine for further testing.

VPI Failed Parts Movement Between Locations

Introduction

VPI (Value Package Introduction) was one of the core programs in Cummins Operating System (COS). VPI was the process by which the Company defined, designed, developed and introduced high quality Value Packages for customers. The complete VPI package allowed Cummins to continuously improve the product(s) delivered to customers. This project was conducted in an effort to increase the value of these packages.
By improving the process of moving parts from one location to another, Cummins has benefited in both cycle time and cost. VPI included all the elements of products which involved services and information that was delivered to the end-user customer. These products included oil, filters, generator sets, parts, business management tools/software, engines, electronic features and images, service tools, reliability, durability, packaging, safety and environmental compliance, appearance, operator friendliness, integration in the application, robust design, leak-proof components, ease of service and maintenance, fuel economy, rebuild cost, price, and diagnostic software. These were key factors of customer satisfaction that allowed Cummins to remain competitive and provide quality parts and services to the end customers. This process was essential in surviving among competitors.

Statement of the Problem

One of the key processes in a VPI program was to identify and resolve part failures. In order to do this in a timely manner, parts needed to travel quickly from the point of failure to the component plants for diagnosis. Failures were identified at the Cummins Technical Center during engine testing. The failed parts were then sent to one of two other locations, the Cummins Engine Plant (Cummins Emission Solutions) or the Fuel Systems Plant, where they were to be delivered to the appropriate engineer for diagnosis and part engineering changes. A delay in the diagnosis of a failed part meant a delay in the resolution of the problem and subsequent engine testing. The ideal situation was for a part failure to be identified by the test cell technician, delivered to the engineer, diagnosed by the engineer, and the part redesigned for further testing on the engine. When this did not occur in a timely manner, the failed part did not reach the engine again for a sufficient amount of testing.
The problem was that parts were either taking a very long time to get into the engineers' hands, or the parts were lost. Engines require a predetermined amount of testing time to identify potential engine failures and associated risks to the customer and the Company. As a result, the opportunity to continually improve parts and processes was missed. Through the use of customer focused Six Sigma tools, this process improved the ability to solve customer problems and achieve company targets. Investigation was required to determine the most efficient process for the transfer of failed parts between various sites within Cummins.

Significance of the Problem

This process was important in solving part failures. Timely transfer of parts to the correct engineer for analysis reduced the amount of time for issue correction and improved the performance of the engines that were sold to customers. This package allowed Cummins to continuously improve the process and reduce cycle time and cost. This project involved the transportation of VPI failed parts from the point of failure to the appropriate component engineer. The improvements made during this project ensured that parts were received by the engineers in a timely manner, which allowed further testing of the re-engineered failed parts.

Statement of the Purpose

The process of identifying part failures and delivering them to the appropriate component engineer was essential in diagnosing problems and correcting them. Personnel were either not trained in the problem identification area or were unaware of the impact that their work had on the entire process. Communication from the test cell engineers who identified part failures was important within two areas. First, it was critical that the engineer responsible for the part was notified, and secondly, the Failed Parts Analyst (FPA) had to be notified in order to know when to pick up the part for shipping.
The partnership between the test cell engineer and the other two areas was a vital part of this process in order for it to be successful. Other factors that contributed to the time delay in part failure identification and delivery time were vacation coverage of key employees and training of shipping and delivery personnel. The average number of days for a part to be removed from the test cell engine and delivered to the appropriate design engineer was 137 days. Based on the logistics of the locations where the parts were being delivered, this process could be improved to be accomplished in less time. The purpose of this project was to reduce the amount of time it was taking for this process to occur. The benefits of performing this project resulted in a reduction in the time it was taking for parts to move, which impacted the ability to analyze and fix problems and allowed the part to be improved or modified and put back on the engine for further testing. The improvements derived from this project can be applied to similar processes throughout the multiple business units.

Definition of Terms

VPI - Value Package Introduction was a program used by Cummins in which new products were introduced. It included all the elements of creating a new product such as design, engineering, final product production, etc.

COS - Cummins Operating System; the system of Cummins operations which was standard throughout the Company. It identified the manner in which Cummins operated.

CE Matrix - Cause & Effect matrix; a tool that was used to prioritize input variables against customer requirements.

FPA - Failed Parts Analyst; the FPA was the person responsible for retrieving failed parts from the test cells, determining the correct engineer to whom these failed parts were to be delivered, and preparing the parts for shipping to the appropriate location.

SPC - Statistical Process Control; SPC was an application of statistical methods utilized in the monitoring and control of a process.
TBE - Time Between Events; in the context of this paper, TBE represented the number of opportunities that a failure had of occurring between daily runs.

McParts - Software application program which tracked component progress through the system. It provided a time line from the time a part was entered into the system until it was closed out.

Assumptions

The assumption was made that all participants in the project were experienced with the software application program that was utilized.

Delimitations

Only failed parts associated with the Value Package Introduction program were included in the scope of this project. Additionally, only the heavy duty engine family was incorporated. The light duty diesel and mid-range engine families were excluded. This project encompassed three locations in Southern Indiana. The focus of this project was on delivery time and did not include packaging issues. It also focused on transportation and excluded database functionality. Veteran employees were selected for collecting data. The variable of interest considered was delivery time. Data collection techniques were limited to first shift only. The project focused on redesigning an existing process and did not include the possibility of developing a new theory.

Limitations

The methodology used for this project did not include automation of the process as a step. RFID was a more attractive way to resolve this problem; however, it was not economically feasible at the time. The population was limited since the parts that were observed were limited to heavy duty engines, which reduced variations in the size and volume of parts. Time constraints and resource availability were an issue. Due to team members residing at several locations, meeting scheduling was more problematic. Additionally, coordinating team meetings was a challenge because room availability was limited.
Review of Literature

Introduction

The scope of this literature review was intended to evaluate articles on failed parts within Value Package Introduction (VPI) programs. However, although quality design for customers is widely utilized, the literature on Value Package Introduction was rather scarce. VPI was a business process that companies used to define, design, develop, and introduce high quality packages for customers. VPI included all the elements of products which involved services and information that was delivered to the end-user customer. One of the key processes in a VPI program was to problem-solve part failures, which was the direction this literature review traveled.

Methods

This literature review focused on part/process failures and improvements. The methods used in gathering reading materials for this literature review involved the use of the Purdue University libraries' Academic Search Premier, Readers' Guide, and OmniFile FT Mega library. Supplementary investigation was conducted on-line, where many resources and leads to reference material were found. All of the references cited are from 2005 to present, with the exception of a Chrysler article dated 2004 which was an interesting reference discussing the use of third party logistics centers, a journal article from 1991 that explains the term "cost of quality," which is used throughout this literature review, and two reference manuals published by AIAG which contain regulations for the ISO 9001:2000 and TS 16949 standards. Keywords used during researching included terms such as scrap, rework, failed parts and logistics.

Literature Review

Benchmarking. Two articles, authored by Haftl (2007), concentrated on the mixture of metrics needed to optimize overall performance. Some of these metrics included completion rates, scrap and rework, machine uptime, machine cycle time and first pass percentages.
According to the 2006 American Machinist Benchmarking survey, leading machine shops in the United States are "producing, on average, more than four times the number of units produced by other non-benchmarked shops. Also worth noting is that they also reduced the cost of scrap and rework more than four times" (Haftl, 2007, p. 28). The benchmark shops showed greater improvement than other machine shops. The benchmark shops "cut scrap and rework costs to 4.6 percent of sales in 2006 from 6.6 percent three years ago, and all other shops went to 7.8 percent of their sales in 2006 from 9.3 percent three years ago" (Haftl, 2007, p. 28). The successful reduction of scrap and rework costs by the benchmark shops was attributed to several factors. First, training was provided to employees and leadership seminars were held. Secondly, these shops practiced lean manufacturing and, lastly, they had specific programs which directly addressed scrap and rework. Whirlpool, one of the nation's leading manufacturers of household appliances, had used benchmarking as a means of finding out how they rated in comparison to their competitors. They benchmarked their primary competitor, General Electric. As a result, they discovered what improvements they could make that could be managed at a low investment. The improvement processes were especially useful when applied to existing strengths of the company. They rolled out a new sales and operating plan based on customer requirements (Trebilcock, 2004).

Quality. An overall theme contained in all of the articles reviewed was that of quality. In Staff's review (2008), he contended that regardless of a company's size, quality was critical in maintaining a competitive advantage and retaining customers. The Quality Leadership 100 is a list of the top 100 manufacturers who demonstrated excellence in operations.
The results were based on criteria such as scrap and rework as a percentage of sales, warranty costs, rejected parts per million, the contribution of quality to profitability, and shareholder value. Over 800 manufacturers participated in this survey. The top three manufacturers for 2008 were listed as (1) Advanced Instrument Development, Inc., located in Melrose Park, IL, (2) Toyota Motor Manufacturing in Georgetown, KY, and (3) Utillmaster Corp. in Wakarusa, IN (Staff, 2008). In an article written by Cokins (2006), the author stressed that quality was an important factor in improving profitability. He informed the reader that quality management techniques help in identifying waste and generating problem solving approaches. One of the problems he cited regarding quality was that it was not often measured with the appropriate measuring tools. As a result, organizations could not easily quantify the benefits in financial terms. One obstacle to measuring quality was the use of traditional accounting practices. The financial data was not captured in a format that could easily be applied in decision making. Because quantifiable measures lacked a price base to differentiate the benefits, management often perceived process improvements as being risky. Cost of Quality (COQ) was the cost associated with identifying, avoiding and making corrections to defects and errors. It represented the difference between actual costs and reduced costs as a result of identifying and fixing defects or errors. In Chen's report (Chen & Adam, 1991), the authors broke down cost of quality into two parts, the cost of control and the cost of failure. They explained that cost of control was the most easily quantifiable because it included prevention and measures to keep defects from occurring. Cost of control had the capability to detect defects before a product was shipped to a customer. Control costs included inspection, quality control labor costs and inspection equipment costs.
Costs of failure included internal and external failures and were harder to calculate. Internal failures resulted in scrap and rework, while external failures resulted in warranty claims, liability and hidden costs such as loss of customers (Chen & Adam, 1991). Because cost of control and cost of failure were related, managing these two elements reduced part failures and lowered the costs associated with scrap and rework. Tsarouhas (2009, p. 551) reiterated in his article on engineering and system safety that "failures arising from human errors and raw material components account for 25.06% and 5.35%, respectively, which is about 1/3 of all failures." "A rule of thumb is that the nearer the failure is to the end-user, the more expensive it is to correct" (Cokins, 2006, p. 47). Identification of failed parts was a key process of Value Package Introduction and key to identifying and correcting failures before they reached the customer. A delay in the diagnosis of a defective part resulted in the delay of, or a miss to, the implementation of a critical fix and subsequent validation. When a delay occurred, the opportunity to continually improve parts and processes was not achieved. In a journal article written by Savage and Son (2009), the authors affirmed that effective design relied on quality and reliability. Quality, they noted, was the adherence to specifications required by the customer. Dependability of a process included mechanical reliability (hard failures) and performance reliability (soft failures). These two types of failures occurred when performance measures failed to meet critical specifications (Savage & Son, 2009).

Tools and specifications. The remaining articles discussed in this literature review focused on tools and specifications that were utilized across the business environment. Specifications were important aspects of fulfilling a customer's needs.
Every company had its own unique way of operating, so businesses often had "slightly different needs" (Smith, Munro & Bowen, 2004, p. 225). There were a number of tools available to help meet specific customer requirements. Quality control systems and identification of failed parts were among these tools. The application of statistical methods was used to make efforts at improvement more effective. Two common statistical methods were those associated with statistical process control and process capability analysis. The goal of a process control system was to make predictions about the current and future state of a process. A process was said to be operating in statistical control when "the only sources of variation were common causes" (Down, Cvetkovski, Kerkstra & Benham, 2005, p. 19). Common causes referred to sources of variation that over time produced a stable and repeatable distribution. When common causes yielded stable results, the output was considered to be predictable. SPC involved the use of control charts through an integrated software package. In an article by Douglas Fair (2008), he viewed product defects through the eyes of the consumer. He stated that to truly leverage SPC to create a competitive advantage, key characteristics had to be identified and monitored (Fair, 2008). The means for monitoring some of these characteristics involved the use of control charts. An article written on integrated control charts introduced control charts based on time-between-events (TBE). These charts were used in manufacturing companies to gauge the reliability of parts and service related applications. An event was defined as an occurrence of a defect, and time referred to the amount of time between the occurrence of defect events (Shamsuzzaman, Min, Ngee & Haiyun, 2008). Process capability was determined by the variation that came from common causes. It represented the best performance of a process.
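The common-cause control limits behind such charts can be sketched with an individuals (I-MR) calculation. This is a minimal illustration, not code from any cited article; the delivery-time values are hypothetical, and 2.66 is the standard individuals-chart constant (3 divided by d2 = 1.128 for a moving range of size 2).

```python
# Minimal individuals/moving-range (I-MR) control-limit sketch.
# The day values below are hypothetical illustration, not project data.

def imr_limits(values):
    """Return (center, lcl, ucl) for an individuals control chart.

    Limits are mean +/- 2.66 * average moving range, the conventional
    individuals-chart formula (2.66 = 3 / d2 with d2 = 1.128).
    """
    n = len(values)
    mean = sum(values) / n
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

days = [130, 141, 135, 152, 128, 139, 144, 133]  # hypothetical days-to-deliver
center, lcl, ucl = imr_limits(days)
print(f"center={center:.1f}  LCL={lcl:.1f}  UCL={ucl:.1f}")
# Points outside [LCL, UCL] would signal special-cause variation.
```

A point falling outside the computed limits is the chart's signal that a special cause, rather than common-cause variation, is at work.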
Other writers deemed that one way to improve quality and achieve the best performance was to reduce product deviation. The parameters they used included the process mean and production run times (Tahera, Chan & Ibrahim, 2007). Roost (2007) favored the use of Computer-Aided Manufacturing tools as a means of improving quality. According to the author, CAM allowed a company to eliminate errors that cause rework and scrap, improved delivery times, simplified operations, and identified bottlenecks, which assisted in efficient use of equipment (Roost, 2007). Other articles on optimization introduced a lot size modeling technique to identify defective products. Lot-sizing emphasized the number of units of an item that could be produced without interruption on the machinery used in the production process (Buscher & Lindner, 2007).

Conclusion

In this literature review the importance of failed part identification was presented. The impact that quality and reliability had on this process was indicative of the value that proper measuring tools provide. Through the use of customer focused tools, the identification and correction of failed parts was more easily accomplished and allowed a speedy resolution to customer problems. Benchmarking was discussed as a means of comparing outputs to those of competitors. Benchmarking was the first step in identifying areas requiring immediate attention. Haftl (2007) and Trebilcock (2004) devoted their articles to benchmarking and the impact it had on identifying areas demanding immediate improvement processes. Staff (2008), Cokins (2006), Tsarouhas (2009), and Savage and Son (2009) spent more time discussing the critical requirement of quality and the effects it had on competitive advantage. Lastly, authors Smith, Munro & Bowen (2004), Down, Cvetkovski, Kerkstra & Benham (2005), Fair (2008), Tahera, Chan & Ibrahim (2007), and Roost (2007) discussed the different specifications and tools used in improving quality and identifying failures.
The articles involving benchmarking were concise and easy to understand. A similarity among all of the articles is the consensus that quality was important in identifying and preventing failures and that competitive advantage cannot be obtained without it. Gaps identified through this literature review were the methods of making process improvements. Several of the authors had their own version of the best practice to use to improve performance. The articles on tools and specifications were very technical and discussed the different methods. In Fair's article, the author had a different perspective than any of the other articles reviewed. He wrote from the view of a consumer.

Methodology

This project built on existing research. Literature was reviewed to determine the methodology used in previous process designs. The purpose of this project was to redesign the process flow to improve capability and eliminate non-value added time. Team members were selected based on their vested interest in the project. Each team member was a key stakeholder in the actual process. A random sampling technique was used in which various components were tracked from point of failure to delivery. McParts, a software application program, was utilized to measure the amount of time that a component resided in any one area. Direct observation was also incorporated. A quantitative descriptive study was utilized in which numerical data was collected. The DMAIC method of Six Sigma was used. The steps involved in the DMAIC process were:

Define project goals and the current process.
Measure key aspects of the current process and collect relevant data.
Analyze the data to determine cause-and-effect relationships and ensure that all factors are being considered.
Improve the process based upon data analysis.
Control the process through the creation and implementation of a project control plan.
Process capability was established by conducting pilot samples from the population. In the Define stage, the Y variable objective statement was established: reduce the amount of time it takes for a failed part to go from point of failure to the hands of the evaluating engineer by 50%. Next, a data collection plan was formed. The data was collected utilizing the McParts component tracking system. Reports were run on the data to monitor part progression. In the second stage, the Measure stage, a process map was created which identified all the potential inputs that affected the key outputs of the process. It also allowed people to illustrate what happened in the process. This step was useful in clarifying the scope of the project. Once the process map was completed, a Cause & Effect matrix was developed. The Cause & Effect matrix fed off of the process map, and key customer requirements were then identified. These requirements were rank ordered and a priority factor was assigned to each output (on a 1 to 10 scale). The process steps and materials were identified and each step was evaluated based on the score it received. A low score indicated that the input variable had a smaller effect on the output variable. Conversely, a high score indicated that changes to the input variable greatly affected the output variable and needed to be monitored. The next step involved creating a Fault Tree Analysis (FTA). The FTA was used to help identify the root causes associated with particular failures. A measurement system analysis was then conducted. Measurement tools such as the McParts software application program as well as handling processes were reviewed. Next, an initial capability study was conducted to determine the current process's capability. Next, a design of experiment was established. The design of experiment entailed capturing data at various times throughout the project. Six months of data was obtained prior to the start of the project to show the current status.
Once the project was initiated, data was collected on a continuous basis. Finally, once the project was complete, data was collected to determine stability and control of the process. Once the experiment was completed and the data was analyzed, a control plan was created to reduce variation in the process and identify process ownership. All of the above steps included process stakeholders and team members who assisted in creating each output.

Data/Findings

Define. The purpose of this project was to reduce the number of days it was taking a part to move from point of failure to the component engineer for evaluation. Through the use of historical data, 2 of the 17 destination locations for parts were identified as being problematic. The average number of days it was taking parts to be delivered to the component engineer at the Fuel Systems Plant and Cummins Engine Plant (Emission Solutions) locations was 137 days. Both sites were located in the same city where the part failures were identified. Key people involved in performing the various functions in part failures and delivery were identified and interviewed.

Measure. A process map was created documenting each step in the process, including the inputs and outputs of each process (Figure 1). Once the process was documented, the sample size was determined. Of the 3,000 plus parts, those parts delivered to the two sites were extracted, resulting in a sample size of 37 parts. Parts were then tracked using a controlled database called McParts. From this point, key steps identified were utilized in creating a Cause & Effect matrix. The CE matrix prioritized input variables against customer requirements. The Cause & Effect matrix was used to understand the relationships between key process inputs and outputs. The inputs were rated by the customer in order of importance.
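The Cause & Effect matrix scoring described above amounts to a weighted sum: each customer output carries a priority, each input is rated against each output, and the weighted totals rank the inputs. The sketch below is illustrative only; the output priorities, ratings, and the 0/1/3/9 rating convention are assumptions, not the project's actual matrix.

```python
# Sketch of a Cause & Effect matrix: customer outputs get a priority (1-10),
# each process input is rated against each output, and the weighted sum
# ranks the inputs. All names and numbers below are illustrative.

output_priority = {"delivery time": 10, "part traceability": 7, "part condition": 4}

# input -> {output: correlation rating (0, 1, 3, or 9 is a common convention)}
input_ratings = {
    "incident origination": {"delivery time": 9, "part traceability": 9, "part condition": 1},
    "tagging of parts":     {"delivery time": 3, "part traceability": 9, "part condition": 3},
    "staging/receiving":    {"delivery time": 9, "part traceability": 3, "part condition": 3},
}

def ce_score(ratings):
    """Weighted sum of ratings: sum(priority * rating) across outputs."""
    return sum(output_priority[out] * r for out, r in ratings.items())

ranked = sorted(input_ratings, key=lambda i: ce_score(input_ratings[i]), reverse=True)
for name in ranked:
    print(f"{name}: {ce_score(input_ratings[name])}")
```

Inputs with the highest weighted totals are the ones whose variation most affects the customer-critical outputs and therefore get monitored first.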
The top 4 inputs identified as having the largest impact on quality were incident (part failure) origination, appropriate tagging of parts, the failed parts analyst role, and addressing the tagged part to the correct destination. The Cause & Effect matrix allowed the team to narrow down the list and weight the evaluation criteria. The team then did a Fault Tree Analysis (FTA) on possible solutions. The FTA analyzed the effects of failures. The critical Xs involved the amount of time for filing an incident report and tagging parts, the amount of time it takes for the FPA to pick up the parts from the test cells once the part failure is identified, and the staging and receiving process. Next, validation of the measurement system was conducted. An expert and 2 operators were selected to run a total of 10 queries in the McParts database using random dates. The results of the 2 operators, as shown in Figure 2, were then scored against each other (attribute agreement analysis within appraisers) and against those of the expert (appraiser versus standard). The next analytic step was to determine if there was a difference between the types of test performed and the length of time it was taking a part to be delivered to the appropriate component engineer. There were two types of tests performed, Dyno and Field tests. Figure 6 shows the median for field tests was a little better than for the Dyno tests, which came as a surprise because field test failures occur out in the field at various locations, while the Dyno tests are conducted at the Technical Center. The data drove further investigation into the outliers, which showed that out of approximately 25 of these data points, 8 were ECMs, 5 were sensors, 7 were wiring harnesses, 1 was an injector, and 4 were fuel line failures. These findings were consistent with the box plot on days to close by group name. ECMs, sensors, wiring harnesses, and fuel lines had the highest variance.
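The attribute agreement scoring used to validate the measurement system can be sketched as a simple match rate: within-appraiser agreement compares an operator's repeated runs, and appraiser-versus-standard compares against the expert. The query results below are hypothetical stand-ins, not the Figure 2 data.

```python
# Sketch of an attribute agreement analysis: an operator runs the same ten
# queries twice; "within appraiser" is the fraction answered consistently,
# and "versus standard" compares a run against the expert's answers.
# All values below are hypothetical, not the project's Figure 2 results.

expert   = [5, 12, 3, 8, 20, 7, 15, 9, 4, 11]   # expert's query results (standard)
op1_try1 = [5, 12, 3, 8, 20, 7, 15, 9, 4, 11]   # operator 1, first run
op1_try2 = [5, 12, 3, 8, 19, 7, 15, 9, 4, 11]   # operator 1, repeat run

def agreement(a, b):
    """Fraction of positions where the two result lists match."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

print(f"within appraiser: {agreement(op1_try1, op1_try2):.0%}")
print(f"versus standard:  {agreement(op1_try1, expert):.0%}")
```

Low within-appraiser agreement would indicate the measurement system itself, not the process, is a source of variation.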
The similarities and differences in the parts were reviewed and it was discovered that they were handled by different groups once they reached FSP. The Controls group handled ECMs, sensors, and wiring harnesses. The XPI group handled accumulators, fuel lines, fuel pumps, and injectors. Drilling down further, another box plot was created to graphically depict any differences in the two different tests for both sites. The boxplot showed that CES dyno tests had a much higher median and higher variability than CES's field tests and Fuel Systems' dyno and field tests (see Figure 7 below). An IMR chart was created for dyno and field tests without special causes. The data was stable but not normal. A test of equal variances was run for CES and FSP dyno and field tests. Based on Mood's Median test, there was no difference in medians. This was likely due to small sample sizes in 3 of the 4 categories; however, CES dyno tests had a lot of variation and would require further investigation. An IMR chart and box plot were run on the data for the XPI and Controls groups at the Fuel Systems Plant. The data was stable but not normal. Next, a test of equal variance was run which showed that the variances were not equal. Thus, the null hypothesis that the variability of the two groups was equal was rejected. Next, attention was directed towards the Fuel Systems Plant. A boxplot was created from the data which showed there was a statistical difference between medians for the FSP Controls group and XPI. Through the solutions derived from the DMAIC methodology of Six Sigma, the project team performed statistical analysis which proved that there would be benefits obtained by resolving the problems that were identified. The changes were implemented and a final capability study was performed on the data which showed an 84% reduction in the number of days it took a part to move from point of failure to the hands of the component engineer for evaluation. Improvements were documented and validated by the team.
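The Mood's Median comparison used above can be sketched as follows: values from both groups are pooled, each group's counts above and at-or-below the pooled median form a 2x2 table, and a chi-square statistic (df = 1, no continuity correction in this sketch) tests whether the medians differ. The days-to-close values are hypothetical, not the project's measurements.

```python
# Sketch of Mood's median test for two groups (e.g. days-to-close for the
# Controls group vs. the XPI group). Counts above / at-or-below the pooled
# median form a 2x2 table tested with a chi-square statistic (df = 1).
# The sample values are hypothetical, not the project's measurements.

def moods_median_test(g1, g2):
    pooled = sorted(g1 + g2)
    n = len(pooled)
    grand_median = (pooled[n // 2] if n % 2 else
                    (pooled[n // 2 - 1] + pooled[n // 2]) / 2)
    a = sum(1 for x in g1 if x > grand_median)   # group 1 above median
    b = sum(1 for x in g2 if x > grand_median)   # group 2 above median
    c = len(g1) - a                              # group 1 at/below median
    d = len(g2) - b                              # group 2 at/below median
    total = a + b + c + d
    # chi-square for a 2x2 table (no continuity correction)
    chi2 = total * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return grand_median, chi2

controls = [30, 45, 60, 38, 52, 41]   # hypothetical days to close
xpi      = [12, 18, 25, 15, 22, 19]
grand_median, chi2 = moods_median_test(controls, xpi)
# chi2 above 3.841 rejects equal medians at the 5% level (df = 1)
print(f"grand median={grand_median}, chi2={chi2:.2f}")
```

Mood's test is a sensible choice here precisely because the project's data was "stable but not normal": it compares medians without assuming a normal distribution.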
To ensure that the performance of the process would be continually measured and the process remained stable and in control, a control plan was created and approved by the process owner responsible for the process.

Conclusions/Recommendations

The goal of this project was to reduce the number of days it was taking to move a part from point of failure to the component engineer for evaluation. This goal was accomplished, and final capability of the process shows a reduction in time by 84%, from 137 days to 22 days. There were 4 critical problems identified during this project.
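The reported before and after figures are internally consistent with the quoted percentage, which a quick check confirms:

```python
# Consistency check of the reported result: 137 days before the project,
# 22 days after, quoted as an 84% reduction.
before, after = 137, 22
reduction = (before - after) / before
print(f"{reduction:.0%} reduction")  # 115/137 ≈ 0.839, rounds to 84%
```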
