Prosecution Insights
Last updated: April 19, 2026
Application No. 18/559,274

COMPONENT CLASSIFICATION DEVICE, METHOD FOR CLASSIFYING COMPONENTS, AND METHOD FOR TRAINING A COMPONENT CLASSIFICATION DEVICE

Final Rejection: §102, §103
Filed: Nov 06, 2023
Examiner: LIU, XIAO
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: MTU Aero Engines AG
OA Round: 2 (Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (257 granted / 290 resolved; +26.6% vs TC avg, above average)
Interview Lift: +11.5% (moderate), based on resolved cases with vs. without an interview
Typical Timeline: 2y 9m average prosecution; 44 applications currently pending
Career History: 334 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 290 resolved cases.
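The headline figures above can be reproduced from the raw career counts. A minimal sketch in Python; the multiplicative treatment of the interview lift is an assumption on our part, chosen because it matches the displayed 99% figure:

```python
# Reproduce the dashboard metrics from the career counts shown above.
granted, resolved = 257, 290

allow_rate = granted / resolved               # career allow rate
print(f"allow rate: {allow_rate:.1%}")        # 88.6%, displayed as 89%

# Interview-adjusted probability, assuming the +11.5% lift is relative
# (multiplicative): 88.6% * 1.115 is roughly 98.8%, displayed as 99%.
with_interview = allow_rate * 1.115
print(f"with interview: {with_interview:.1%}")

# The per-statute deltas imply the Tech Center baselines, e.g. for
# Section 103: 50.9% - 10.9% = 40.0%.
tc_avg_103 = 50.9 - 10.9
print(f"implied TC avg (Sec. 103): {tc_avg_103:.1f}%")
```

The same arithmetic applies to the other statute rows: subtracting each displayed delta from the examiner's rate recovers the estimated Tech Center average.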

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s amendments filed on 01/12/2026 to the specification, drawings, and claims have overcome the claim objections, specification objections, and claim rejections under 35 U.S.C. 112(b) previously set forth in the Non-Final Rejection Office Action mailed on 11/13/2025.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 12-13, 17-18, 20-21, 27-28, and 30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chu et al. (Computational Intelligence and Neuroscience, vol. 2018, article ID 5060857), hereinafter Chu.

Regarding claim 12, Chu discloses a component classification device for classifying components into predetermined component classes, the component classification device comprising (Abstract; FIGS. 1-2; Tables 1-9): a camera device configured to generate image data of a component to be classified (FIG. 2, camera, waste items); a weight scale configured to generate weight data of the component to be classified (FIG. 2, sensor; Table 3, bridge sensor; Page 2, Sec. 2.1, 2nd paragraph); an evaluator configured to generate predefined image features from the image data according to a predefined image feature extraction method (FIG. 2; Abstract; Page 3, 2nd Col., 2nd paragraph; Page 2, 1st Col., 3rd paragraph, (ii)), and to supply the image data to a pretrained first neural network (FIG. 2), and to generate bottleneck features of the image data from a predefined bottleneck layer of the first neural network (FIG. 2; Page 4, 2nd Col., 1st-3rd paragraphs); a classifier configured to assign at least one of multiple predefined component classes, the predefined component classes describing predetermined component groups or components (FIG. 2, MLP, Prediction, Output; Tables 2, 5-6), to the component according to a predetermined classification method (equations (2)-(6); Secs. 3.3-3.5), based on the weight data, the image features, and the bottleneck features (FIG. 2).

Regarding claim 13, Chu discloses the device of claim 12. Chu further discloses wherein the predetermined classification method includes an assignment to the predetermined component classes by a second neural network (equations (4)-(5); Table 6; Page 6, 2nd Col., 2nd paragraph).

Regarding claim 17, Chu discloses the device of claim 12. Chu further discloses wherein the component classification device is configured to extend a set of the image data using a predetermined data augmentation method (Table 4; Page 3, at Col., 2nd paragraph).

Regarding claim 18, Chu discloses the device of claim 12. Chu further discloses wherein the component classification device is configured to generate the image features using a machine vision method (Abstract, “uses a CNN-based algorithm to extract image features”; FIG. 2, Image preprocessing; Page 3, 1st Col., 2nd paragraph).

Regarding claim 20, Chu discloses the device of claim 13. Chu further discloses wherein the second neural network has a feedforward neural net architecture (FIG. 2, MLP).

Regarding claim 21, Chu discloses a method for classifying a component using a component classification device, the method comprising (Abstract; FIGS. 1-2; Tables 1-9): generating, via a camera device of the component classification device, image data of a component to be classified (FIG. 2, camera, waste items); generating, via a weight scale of the component classification device, weight data of the component to be classified (FIG. 2, sensor; Table 3, bridge sensor; Page 2, Sec. 2.1, 2nd paragraph); generating, via an evaluator of the component classification device, predetermined image features from the image data according to a predetermined image feature extraction method (FIG. 2; Abstract; Page 3, 2nd Col., 2nd paragraph; Page 2, 1st Col., 3rd paragraph, (ii)); supplying, via the evaluator of the component classification device, the image data to a pretrained first neural network (FIG. 2); generating bottleneck features of the image data from a predetermined bottleneck layer of the first neural network (FIG. 2; Page 4, 2nd Col., 1st-3rd paragraphs); and assigning, via a classifier of the component classification device, to the component to be classified at least one of multiple predefined component classes describing predetermined component groups or components (FIG. 2, MLP, Prediction, Output; Tables 2, 5-6), based on the weight data, the image features, and the bottleneck features (FIG. 2), according to a predetermined classification method (equations (2)-(6); Secs. 3.3-3.5).

Regarding claim 27, Chu discloses the device of claim 12. Chu further discloses a storage surface permitting the component to be situated at a predetermined position (FIG. 1; Tables 1, 4, 7-9).

Regarding claim 28, Chu discloses the device of claim 27. Chu further discloses a housing, wherein the camera device has a plurality of cameras situated at different positions within the housing and aligned with the predetermined position to allow the component to be detected from different angles (Page 2, 2nd Col., Sec. 2.1, 3rd paragraph, “the investigated waste items are placed in an enclosed box with a dark grey background. The camera is placed at the upper front-right of the experiment box to maximize the marginal angle of view. Waste objects are rotated for the camera to capture views from different angles”).

Regarding claim 30, Chu discloses the method of claim 21. Chu further discloses wherein a component size is limited to 80 mm x 80 mm x 50 mm and a component weight is limited to 10 kg (FIGS. 1-2; Tables 1-9; Chu has no limitations on component weight and size).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 14 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Chu et al. (Computational Intelligence and Neuroscience, vol. 2018, article ID 5060857), hereinafter Chu, in view of Hameed et al. (Appl. Sci. 2020, 10, 8667), hereinafter Hameed.

Regarding claim 14, Chu discloses the device of claim 13. Chu does not disclose wherein the classifier is configured to ascertain, in the predetermined classification method, a particular probability value of the at least one predetermined component class describing with what probability the component is assigned to the predetermined component classes. However, using a probability value to classify an object, part, or component is a well-known method for object classification; the recited claim limitation is not an inventive concept. In the same field of endeavor, Hameed teaches a neural network based method for classification of fruits and vegetables (Hameed: Abstract; FIGS. 1-5).
Hameed further teaches wherein the classifier is configured to ascertain, in the predetermined classification method, a particular probability value of the at least one predetermined component class describing with what probability the component is assigned to the predetermined component classes (Hameed: FIG. 4; equation (9); Page 10, Sec. 4.3). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chu with the teaching of Hameed by using a particular probability value of the at least one predetermined component class describing with what probability the component is assigned to the predetermined component classes, in order to reliably classify the components.

Regarding claim 29, Chu discloses the device of claim 12. Chu further discloses a housing (Page 2, 2nd Col., Sec. 2.1, 3rd paragraph, “an enclosed box”). Chu does not disclose a lighting device for illuminating the component in the housing. In the same field of endeavor, Hameed teaches a neural network based method for classification of fruits and vegetables (Hameed: Abstract; FIGS. 1-5). Hameed further teaches a lighting device for illuminating the component (Hameed: FIGS. 1-2). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chu with the teaching of Hameed by using a lighting device for illuminating the component, in order to improve the performance of the classifier in various challenging environments.

Claims 15-16 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Chu et al. (Computational Intelligence and Neuroscience, vol. 2018, article ID 5060857), hereinafter Chu, in view of Saptharishi et al. (US 20090244291 A1), hereinafter Saptharishi.

Regarding claim 15, Chu discloses the device of claim 12. Chu does not disclose wherein the component classification device includes a user interface, the component classification device being configured to output the at least one assigned, predetermined component class to the user interface. However, this is known from the prior art and is common practice in the field. In the same field of endeavor, Saptharishi teaches a method for dynamic object classification (Saptharishi: Abstract; 1-18). Saptharishi further teaches wherein the component classification device includes a user interface, the component classification device being configured to output the at least one assigned, predetermined component class to the user interface (Saptharishi: FIGS. 1, 10, 13-14; [0094], “presented to the user includes the classification result”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chu with the teaching of Saptharishi by using a user interface, in order to provide a good user experience and meet the user's preferences.

Regarding claim 16, Chu discloses the device of claim 15. Chu further discloses wherein the predetermined classification method includes an assignment to the predetermined component classes by a second neural network, the component classification device being configured to adapt the second neural network, as a function of the assessment data, with regard to the multiple predefined component classes or the manually specified component class (Abstract, “trained and validated against the manually labelled items”; FIGS. 2, 5; Page 3, 1st Col., 1st paragraph; Page 5, Sec. 3.5). Chu does not disclose receiving, at a user interface of the component classification device, assessment data or a manually specified component class. However, this is known from the prior art and is common practice in the field. In the same field of endeavor, Saptharishi teaches a method for dynamic object classification (Saptharishi: Abstract; 1-18). Saptharishi further teaches receiving, at a user interface of the component classification device, assessment data or a manually specified component class (Saptharishi: FIG. 1; [0078], “the user at the user interface 104 manually labels the objects, and the labeled objects are supplied to the object classifier being trained”; FIGS. 9-10, 13-14).

Regarding claim 22, Chu discloses a method for classifying a component using a component classification device, the method comprising (Abstract; FIGS. 1-2; Tables 1-9): generating, via a camera device of the component classification device, image data of a component to be classified (FIG. 2, camera, waste items); generating, via a weight scale of the component classification device, weight data of the component to be classified (FIG. 2, sensor; Table 3, bridge sensor; Page 2, Sec. 2.1, 2nd paragraph); generating, via an evaluator of the component classification device, predetermined image features from the image data according to a predetermined image feature extraction method (FIG. 2; Abstract; Page 3, 2nd Col., 2nd paragraph; Page 2, 1st Col., 3rd paragraph, (ii)); supplying, via the evaluator of the component classification device, the image data to a pretrained first neural network (FIG. 2); generating bottleneck features of the image data from a predetermined bottleneck layer of the first neural network (FIG. 2; Page 4, 2nd Col., 1st-3rd paragraphs); assigning, via a classifier of the component classification device, to the component to be classified at least one of multiple predefined component classes describing predetermined component groups or components (FIG. 2, MLP, Prediction, Output; Tables 2, 5-6), based on the weight data, the image features, and the bottleneck features (FIG. 2), according to a predetermined classification method (equations (1)-(6); Secs. 3.1-3.5); and adapting, via the classifier, the predetermined classification method according to a predetermined adaptation method in order to assign the manually specified component class to the weight data, the image features, and the bottleneck features (Abstract, “trained and validated against the manually labelled items”; FIG. 2; Table 5; Page 3, 1st Col., 1st paragraph; Page 5, Sec. 3.5). Chu does not disclose receiving, at a user interface of the component classification device, assessment data or a manually specified component class. However, this is known from the prior art and is common practice in the field. In the same field of endeavor, Saptharishi teaches a method for dynamic object classification (Saptharishi: Abstract; 1-18). Saptharishi further teaches receiving, at a user interface of the component classification device, assessment data or a manually specified component class (Saptharishi: FIG. 1; [0078], “the user at the user interface 104 manually labels the objects, and the labeled objects are supplied to the object classifier being trained”; FIGS. 9-10, 13-14). Saptharishi also teaches utilizing user feedback for training and adaptation of an object classifier (Saptharishi: [0028]; [0104]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chu with the teaching of Saptharishi by using a user interface, in order to provide a good user experience and meet the user's preferences.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Chu et al. (Computational Intelligence and Neuroscience, vol. 2018, article ID 5060857), hereinafter Chu, in view of Nilsson et al. (US 20220297706 A1), hereinafter Nilsson.

Regarding claim 19, Chu discloses the device of claim 12. Chu does not disclose wherein the predetermined classification method includes a random forest method.
However, it is well known to use a random forest method for object classification. In the same field of endeavor, Nilsson teaches a method for sensor fusion (Nilsson: Abstract; FIGS. 1-17). Nilsson further teaches wherein the predetermined classification method includes a random forest method (Nilsson: [0031], “detect/classify components … using … random forest”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chu with the teaching of Nilsson by using a random forest method for object classification, in order to improve classification accuracy and prevent machine learning model overfitting.

Claims 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Chu et al. (Computational Intelligence and Neuroscience, vol. 2018, article ID 5060857), hereinafter Chu, in view of Zhang et al. (Computational Intelligence and Neuroscience, vol. 2021, article ID 6647220), hereinafter Zhang.

Regarding claim 23, Chu discloses the device of claim 12. Chu does not disclose wherein the first neural network is a VGG16 model. In the same field of endeavor, Zhang teaches a method for classification tasks using feature fusion with guided training (Zhang: Abstract; FIGS. 1-6). Zhang further teaches wherein the first neural network is a VGG16 model (Zhang: Table 4; Page 9, 1st Col., 1st paragraph). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chu with the teaching of Zhang by using a VGG16 model as a base, in order to evaluate classification accuracy and speed.

Regarding claim 24, Chu discloses the device of claim 12. Chu does not disclose wherein the predefined bottleneck layer contains a 512-dimensional representation of the image data. In the same field of endeavor, Zhang teaches a method for classification tasks using feature fusion with guided training (Zhang: Abstract; FIGS. 1-6). Zhang further teaches wherein the first neural network is a VGG16 model and wherein the predefined bottleneck layer contains a 512-dimensional representation of the image data (Zhang: Table 4; Page 9, 1st Col., 1st paragraph; it is known that VGG16 processes images to extract a 512-dimensional feature vector). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chu with the teaching of Zhang by using a VGG16 model as a base, in order to evaluate classification accuracy and speed.

Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Chu et al. (Computational Intelligence and Neuroscience, vol. 2018, article ID 5060857), hereinafter Chu, in view of Risser (arXiv:2010.14702v1, 28 Oct 2020).

Regarding claim 26, Chu discloses the method of claim 21. Chu does not disclose wherein the bottleneck features are based on edges, textures, or homogeneous surface areas. In the same field of endeavor, Risser teaches a method for style transfer and texture mixing (Risser: Abstract; FIGS. 1-13). Risser further teaches wherein the bottleneck features are based on edges, textures, or homogeneous surface areas (Risser: Abstract; Page 3, Sec. 3, 3rd paragraph, “Our algorithm mimics the back-propagation texture optimization process through an optimal transport-based feature transformation within the bottleneck layer of a series of multi-scale auto-encoder loops”; Page 4, Sec. 3.2).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chu with the teaching of Risser by using bottleneck features based on textures, in order to provide fast and robust texture synthesis and style transfer (Risser: Page 1, 2nd Col., 2nd paragraph).

Allowable Subject Matter

Claim 25 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments filed 01/12/2026 have been fully considered but they are not persuasive. Applicant argues that Chu fails to disclose claim limitations of independent claims 12, 21, and 22 because Chu does not disclose a bottleneck layer or bottleneck features as claimed (Remarks: Page 3, 4th paragraph; Page 4, 2nd paragraph, last paragraph).

Regarding independent claims 12, 21, and 22, and in response to applicant’s argument that Chu does not disclose a bottleneck layer or bottleneck features as claimed: there is no formal definition of a bottleneck layer. Applicant’s specification only specifies that “The bottleneck layer may be a middle or inner layer of the first neural network, and may be situated between two layers of the first neural network. The bottleneck layer may differ from other layers of the first neural network by having a smaller number of neurons and/or transferred features” ([0014], page 5, 1st paragraph, emphasis added) and “extracting so-called bottleneck features 14 from a predetermined bottleneck layer 26 of first neural network 23” ([0038], page 12; FIG. 4).

Chu discloses a multilayer hybrid method (MHS) to perform waste classification using a convolutional neural network (CNN) and a multilayer perceptron (MLP) associated with a camera and a bridge sensor (Chu: FIG. 2). Chu further discloses the architecture of the CNN (Chu: Page 4, 2nd Col., 1st-3rd paragraphs). According to the description in applicant’s specification, Chu’s architecture does have bottleneck layers (for example, layer 11 has a small number of neurons, as does layer 7, etc.), and the features from these layers are bottleneck features. Thus, Chu discloses assigning at least one of multiple predefined component classes, the predefined component classes describing predetermined component groups or components (Chu: FIG. 2, MLP, Prediction, Output; Tables 2, 5-6), to the component according to a predetermined classification method (Chu: equations (2)-(6); Secs. 3.3-3.5), based on the weight data, the image features, and the bottleneck features (Chu: FIG. 2).

On the other hand, applicant claims bottleneck features or bottleneck layers because of applying VGGNet (VGG16) as a classifier (see applicant’s FIG. 4; [0038]; [0045]). However, using VGGNet as a classifier is a common practice in the field (see cited art: Hameed, Sec. 2.1; Zhang, Table 4, Page 9, 1st Col., 1st paragraph). Please note that Chu places no limitation on the type of CNN used as a classifier.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO LIU, whose telephone number is (571) 272-4539. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:30-4:30.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAO LIU/
Primary Examiner, Art Unit 2664
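The architecture disputed above in claim 12 (handcrafted image features plus bottleneck features from a pretrained CNN, fused with weight-scale data and fed to a classifier) can be sketched in a few lines. This is an illustrative NumPy mock-up, not the applicant's or Chu's implementation: the random projection stands in for a pretrained network's 512-dimensional bottleneck layer (cf. claim 24 and VGG16's 512-channel final convolutional block), and a random linear layer stands in for the trained MLP classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the device's inputs (illustrative shapes only).
image = rng.random((224, 224, 3))   # camera image data
weight_g = np.array([125.0])        # weight-scale reading, grams

def handcrafted_features(img):
    """Predefined image features, e.g. simple intensity statistics."""
    return np.array([img.mean(), img.std()])

def bottleneck_features(img, dim=512):
    """Stand-in for a pretrained CNN's bottleneck layer.

    A real device would read the 512-dimensional activations of a
    pretrained network's bottleneck layer; a fixed random projection
    keeps this sketch self-contained and dependency-free.
    """
    proj = np.random.default_rng(42).standard_normal((dim, img.size))
    return proj @ img.ravel() / img.size

# Feature fusion: weight data + image features + bottleneck features.
fused = np.concatenate([weight_g,
                        handcrafted_features(image),
                        bottleneck_features(image)])

# Toy classifier: a random linear layer in place of the trained MLP.
n_classes = 4
w = np.random.default_rng(1).standard_normal((n_classes, fused.size))
predicted_class = int(np.argmax(w @ fused))

print(fused.shape)        # (515,): 1 weight + 2 image + 512 bottleneck
print(predicted_class)    # a class index in 0..n_classes-1
```

The fused vector's length (1 + 2 + 512 = 515) makes the claim structure concrete: the classifier sees all three feature sources at once, which is the fusion point the §102 anticipation argument turns on.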

Prosecution Timeline

Nov 06, 2023
Application Filed
Nov 13, 2025
Non-Final Rejection — §102, §103
Jan 12, 2026
Response Filed
Feb 27, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603972
WIRELESS TRANSMITTER IDENTIFICATION IN VISUAL SCENES
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12592069
OBJECT RECOGNITION METHOD AND APPARATUS, AND DEVICE AND MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579834
Information Extraction Method and Apparatus for Text With Layout
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12576873
SYSTEM AND METHOD OF CAPTIONS FOR TRIGGERS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573175
TARGET TRACKING METHOD, TARGET TRACKING SYSTEM AND ELECTRONIC DEVICE
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 89%
With Interview: 99% (+11.5%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 290 resolved cases by this examiner. Grant probability derived from career allow rate.
