Prosecution Insights
Last updated: April 19, 2026
Application No. 18/275,061

METHOD FOR INSPECTING A COMPONENT OF A TURBOMACHINE

Non-Final Office Action: §101, §102, §103

Filed: Jul 31, 2023
Examiner: LEE, SANGKYUNG
Art Unit: 2858
Tech Center: 2800 (Semiconductors & Electrical Systems)
Assignee: MTU Aero Engines AG
OA Round: 1 (Non-Final)

Grant probability: 61% (Moderate)
Expected OA rounds: 1-2
Estimated time to grant: 2y 8m
Grant probability with interview: 66%

Examiner Intelligence

Career allow rate: 61% (grants 61% of resolved cases: 86 granted / 141 resolved; -7.0% vs Tech Center average)
Interview lift: +4.6% among resolved cases with interview (minimal, below +5%)
Typical timeline: 2y 8m average prosecution; 46 applications currently pending
Career history: 187 total applications across all art units
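The headline figures above fit together arithmetically. The following quick check is our own reading of the dashboard (assuming the with-interview figure is simply the career allow rate plus the interview lift, rounded); it is not the report's stated methodology:

```python
# Reproducing the dashboard's headline examiner statistics.
# Inputs are taken from the report above; the rounding is ours.

granted = 86      # applications allowed by this examiner (career)
resolved = 141    # resolved cases (allowed + abandoned)

career_allow_rate = granted / resolved     # the "61%" figure
interview_lift = 0.046                     # the "+4.6% Interview Lift" figure

# Assumed relationship: with-interview rate = career rate + lift.
rate_with_interview = career_allow_rate + interview_lift

print(f"{career_allow_rate:.1%}")      # 61.0%
print(f"{rate_with_interview:.1%}")    # 65.6%, displayed as "66%"
```

Note that 65.6% rounds to the "66%" shown in the header, consistent with this reading.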

Statute-Specific Performance

§101: 24.1% (-15.9% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 141 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/21/2023 was in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 14 is objected to because of the following informalities: in claim 14, the "-" characters should be deleted. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 32 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim fails to recite that the computer-readable storage medium is non-transitory; as drafted, it encompasses transitory signals, which are non-statutory subject matter. The rejection may be overcome by amending claim 32 to recite "A non-transitory computer-readable storage medium…".

Claims 14-33 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Specifically, representative claim 14 recites:

A method for inspecting a component comprising the steps of:
- capturing at least one image of the component using an image-capturing device;
- providing metadata about the component; and
- classifying, by a trained machine learning system, the component into a "serviceable" category or a "non-serviceable" category based on the image captured by the image-capturing device and the provided metadata.
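For readers outside the art, the three recited steps (capture an image, provide metadata, classify with a trained model) can be sketched as follows. This is a purely illustrative toy standing in for the claimed method, not the applicant's implementation: every name, feature, and the linear threshold "model" is invented for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch of the shape of claim 14 (NOT the applicant's code).
# A toy linear scorer stands in for the "trained machine learning system";
# the actual application contemplates a neural network or similar.

@dataclass
class Metadata:
    component_type: str
    running_hours: float
    remaining_life_cycles: int

def capture_image(component_id: str) -> list[float]:
    """Stand-in for the image-capturing device: returns image features."""
    # Hypothetical fixed features, e.g. normalized defect indicators.
    return [0.2, 0.7, 0.1]

def classify(image_features: list[float], meta: Metadata) -> str:
    """Toy 'trained' classifier combining image features and metadata."""
    defect_score = sum(image_features) / len(image_features)
    wear_score = meta.running_hours / (meta.running_hours + meta.remaining_life_cycles)
    score = 0.7 * defect_score + 0.3 * wear_score  # invented weights
    return "non-serviceable" if score > 0.5 else "serviceable"

meta = Metadata("turbine blade", running_hours=300.0, remaining_life_cycles=700)
category = classify(capture_image("blade-42"), meta)
print(category)  # serviceable
```

The point of the sketch is the claim's structure: the classification consumes both the captured image and the provided metadata, which is the combination the §101 and §102 analyses below address.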
The claim limitation constituting the abstract idea is the classifying step recited above; the remaining limitations are "additional elements."

Step 1: Under Step 1 of the eligibility analysis, we determine whether the claims are to a statutory category by considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter. The above claim is in a statutory category (process).

Step 2A, Prong One: Under Step 2A, Prong One, we consider whether the claim recites a judicial exception (abstract idea). In the above claim, the classifying limitation constitutes an abstract idea because, under a broadest reasonable interpretation, it recites limitations that fall within the abstract idea exception. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, it falls into the grouping of subject matter that covers mathematical concepts: mathematical relationships, mathematical formulas or equations, and mathematical calculations. For example, the limitation of "classifying, by a trained machine learning system, the component into a 'serviceable' category or a 'non-serviceable' category based on the image captured by the image-capturing device and the provided metadata" is a mathematical calculation (see paras. [0042]-[0043] of the instant application) because training and applying a machine learning system involves mathematical calculations. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as mathematical calculations, then it falls within the "Mathematical Concepts" grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Similar limitations comprise the abstract ideas of claims 26 and 31-33.
Step 2A, Prong Two: Under Step 2A, Prong Two, we consider whether the claim that recites a judicial exception integrates it into a practical application; that is, we evaluate whether the claim recites additional elements that integrate the exception into a practical application of that exception. Here, the judicial exception is not integrated into a practical application: none of the additional elements indicates a practical application. The claims are therefore directed to a judicial exception and require further analysis under Step 2B.

Step 2B: The above claims comprise the following additional elements.

In claim 14: a method for inspecting a component (preamble); the steps of capturing at least one image of the component using an image-capturing device and providing metadata about the component.

In claim 26: a method for training a machine learning system to inspect a component (preamble); providing a machine learning system; inputting an image of the component into the machine learning system; inputting metadata about the component into the machine learning system, the metadata including at least a component type, a running time of the component, a number of remaining life cycles, a repair history, data of the operator of the components, geographical data, or environmental data; outputting the determined category; and inputting correct information about the category of the component into the machine learning system to train the machine learning system.

In claim 31: a computer program product comprising instructions which are readable by a processor of a computer and which, when executed by the processor, cause the processor to execute the method (preamble).

In claim 32: a computer-readable medium on which the computer program product is stored (preamble).

In claim 33: a system for inspecting a component (preamble); an image-capturing device for capturing an image of the component.
The additional elements such as a method for inspecting a component, providing a machine learning system, a computer program product, a computer-readable medium, and a system for inspecting a component in claims 14, 26, and 31-33 are recited at a high level of generality (MPEP 2106.05(d)). Note that the steps of capturing at least one image of the component using an image-capturing device, inputting an image of the component into the machine learning system, inputting metadata about the component into the machine learning system (the metadata including at least a component type, a running time of the component, a number of remaining life cycles, a repair history, data of the operator of the components, geographical data, or environmental data), and inputting correct information about the category of the component into the machine learning system to train the machine learning system in claims 14, 26, and 31-33 are insignificant extra-solution activity (data gathering) (MPEP 2106.05(g)). Further, note that the step of outputting the determined category in claim 26 is insignificant extra-solution activity (post-solution activity) (MPEP 2106.05(g)). Therefore, none of the additional elements indicates a practical application.

Further, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because these additional elements/steps are well-understood, routine, and conventional in the relevant field based on the prior art of record (Campbell, Philipp, Arai (US 2017/0344907 A1), Zhang (US 2020/0184272 A1)). For example, Campbell, Zhang, and Philipp teach capturing at least one image of the component using an image-capturing device (paras. [0009], [0061] of Campbell; paras. [0013], [0015] of Zhang; page 5, lines 31-32 of Philipp). Campbell and Zhang further teach inputting metadata about the component into the machine learning system, the metadata including at least a component type, a running time of the component, a number of remaining life cycles, a repair history, data of the operator of the components, geographical data, or environmental data (paras. [0059], [0063] of Campbell; paras. [0026], [0033], [0055], [0070], [0080] of Zhang). Therefore, independent claims 14, 26, and 31-33 are not patent eligible.

Regarding claim 15: the additional element of "the image is a light image, an X-ray or CT image, and the metadata includes a component type, a running time of the component, a number of remaining life cycles, or a repair history" is well-understood, routine, and conventional in the relevant field based on the prior art of record (paras. [0059], [0061], [0063] of Campbell; page 5, lines 31-32 of Philipp; paras. [0015], [0026], [0033], [0043], [0055], [0070], [0080] of Zhang). Therefore, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claims 25 and 27: the additional element of "the component is of a turbomachine" is well-understood, routine, and conventional in the relevant field based on the prior art of record (page 5, line 26 of Philipp; paras. [0016]-[0017], [0023] of Haldeman (US 2015/0160097 A1)). Therefore, these claims likewise do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claims 16-24 and 28-30: all features recited in these claims are abstract ideas, as they are directed towards mathematical calculations.
The explanations for the rejections of claims 14 and 26 are therefore incorporated herein and applied to claims 16-24 and 28-30, which stand rejected for similar reasons.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 14, 16-24, 26, and 28-33 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Campbell et al. (US 2020/0034958 A1, hereinafter referred to as "Campbell") (cited in the IDS dated August 21, 2023).

Regarding claim 14: Campbell discloses a method for inspecting a component (para. [0008]: computer-based method for automatically evaluating validity and extent of a damaged object from image data) comprising the steps of: capturing at least one image of the component using an image-capturing device (para. [0009]: receive image data comprising one or more images of at least one damaged object; para. [0061]: the present invention is suitable for use with any image data captured by any suitable source, such as film cameras and digital cameras, but also any other camera, such as, for example, found on smart phones, tablets or mobile computers); providing metadata about the component (para. [0059]: checking the images' EXIF (Exchangeable Image File Format) data matches the expected criteria (i.e., camera information, geolocation (i.e., GPS data) or other suitable metadata)); and classifying, by a trained machine learning system, the component into a "serviceable" category or a "non-serviceable" category based on the image captured by the image-capturing device and the provided metadata (para. [0065]: the images are then passed through another set of machine learning algorithms; a classifier would look for any dents, scratches and other unexpected variances to an undamaged car, and determine the likelihood that a variance is present; para. [0066]: assess the severity and/or nature of the damage; note that "classifier using machine learning algorithms" and "look for unexpected variances" in para. [0065] and "assess the severity" in para. [0066] read on "classifying, by a trained machine learning system, the component into a 'serviceable' category or a 'non-serviceable' category" because severe or serious damage of the object is not serviceable).

Regarding claim 16: Campbell discloses all the limitations of claim 14. In addition, Campbell discloses that the machine learning system classifies the components classified as "non-serviceable" into either a "repairable" category or a "non-repairable" category (para. [0066]: separate collection of machine learning algorithms, e.g., convolutional neural networks, which assess the severity and/or nature of the damage; para. [0070]: the data output generated by the various algorithms is then passed to the system to generate a complete report on the damages, cost of repair and/or replacement, as well as any fraudulent activities; note that "machine learning algorithms" and "assess the severity" in para. [0066] and "cost of repair and/or replacement" in para. [0070] read on this limitation because some serious or severe damage needs replacement instead of repair).

Regarding claim 17: Campbell discloses all the limitations of claim 16. In addition, Campbell discloses that the machine learning system assigns a probability of successful repair to the components classified as "repairable" (para. [0016]: using machine learning algorithms; para. [0052]: providing an estimate for potential repair time based on automated decision making; note that "machine learning algorithm" in para. [0016] and "estimate for potential repair time" in para. [0052] read on this limitation).

Regarding claim 18: Campbell discloses all the limitations of claim 14. In addition, Campbell discloses that the machine learning system (para. [0016]: machine learning algorithms) includes a neural network or a support vector machine (para. [0022]: artificial neural networks; paras. [0063]-[0065]: convolutional neural networks).

Regarding claim 19: Campbell discloses all the limitations of claim 18. In addition, Campbell discloses that the neural network is a deep neural network (para. [0022]: any combination of Deep Learning algorithms, artificial neural networks, statistical modelling) or a convolutional neural network (paras. [0063]-[0065]: convolutional neural networks).

Regarding claim 20: Campbell discloses all the limitations of claim 14. In addition, Campbell discloses that the machine learning system (para. [0063]: machine learning algorithms, e.g., convolutional neural networks) is configured to identify or locate defects in the at least one image, and to take the identified or located defects into account in the classification of the component (para.
[0069]: the identified locations of any damage are stored, allowing the user to later visually mark the damages in various formats).

Regarding claim 21: Campbell discloses all the limitations of claim 20. In addition, Campbell discloses that the defects are cracks or pores (para. [0056]: the identified damage is structural, non-structural or a complete/severe all-round damage, the extent of damage that is detected (e.g., scratch, dent etc.); note that "structural" reads on "cracks or pores") and that a position (para. [0046]: obtain location of identified damage; para. [0069]: the identified locations of any damage are stored), number or size of the identified defects is taken into account in the classification (para. [0050]: damaged areas' location within image data).

Regarding claim 22: Campbell discloses all the limitations of claim 14. In addition, Campbell discloses that the metadata used includes at least remaining life cycles of the component, data of the operator of the components, geographical data, or environmental data (para. [0059]: checking the images' EXIF (Exchangeable Image File Format) data matches the expected criteria (i.e., camera information, geolocation (i.e., GPS data) or other suitable metadata)).

Regarding claim 23: Campbell discloses all the limitations of claim 14. In addition, Campbell discloses that the machine learning system (para. [0016]: machine learning algorithms) is configured to autonomously control the image-capturing device (para. [0062]: automatically assess and process the damaged object), after analysis of the at least one image, to capture at least one further image of the component with a varied imaging parameter if a classification criterion cannot be satisfied based on the at least one initial image (para. [0058]: automatically detect any image manipulation, for example, where the image originator has attempted to modify/manipulate any images; para. [0059]: checking the images' EXIF (Exchangeable Image File Format) data matches the expected criteria; note that "automatically detect any image manipulation" in para. [0058] and "checking the images' EXIF" in para. [0059] read on this limitation).

Regarding claim 24: Campbell discloses all the limitations of claim 23. In addition, Campbell discloses that the varied imaging parameter is a varied imaging angle (para. [0063]: the image view angle and perspective is classified utilising a collection of different convolutional neural networks; in case a set of a plurality of images is provided, the image(s) with the most suitable viewing angles is (are) selected to provide the system with a maximum of information of the object(s)).

Regarding claim 26: Campbell discloses a method for training a machine learning system to inspect a component (para. [0008]: computer-based method for automatically evaluating validity and extent of a damaged object from image data; para. [0016]: machine learning algorithms), the method comprising the following steps: providing a machine learning system (para. [0016]: machine learning algorithms); inputting an image of the component into the machine learning system (para. [0011]: detect and classify said at least one damaged object in any one of said one or more images, utilising at least one first machine learning algorithm); inputting metadata about the component into the machine learning system (para. [0059]: metadata), the metadata including at least a component type (para. [0063]: determine the main component (object) is what is expected), a running time of the component, a number of remaining life cycles, or a repair history (para. [0063]: collection of algorithms to classify whether or not any of the images have been manipulated (using EXIF data) and whether or not the claim is fraudulent (i.e., by checking the vehicle's insurance claim history); note that "checking the vehicle's insurance claim history" reads on "a running time of the component, a number of remaining life cycles, or a repair history"), or data of the operator of the components, geographical data, or environmental data (para. [0059]: checking the images' EXIF (Exchangeable Image File Format) data matches the expected criteria (i.e., camera information, geolocation (i.e., GPS data) or other suitable metadata)); classifying the component into a "serviceable" category or a "non-serviceable" category based on the input data (para. [0065]: the images are then passed through another set of machine learning algorithms; a classifier would look for any dents, scratches and other unexpected variances to an undamaged car, and determine the likelihood that a variance is present; para. [0066]: assess the severity and/or nature of the damage; note that these features read on the classifying limitation because severe or serious damage of the object is not serviceable); outputting the determined category (para. [0070]: the data output generated by the various algorithms is then passed to the system to generate a complete report on the damages, cost of repair and/or replacement, as well as any fraudulent activities); and inputting correct information about the category of the component into the machine learning system to train the machine learning system (para. [0059]: checking the images' EXIF data matches the expected criteria (i.e., camera information, geolocation (i.e., GPS data) or other suitable metadata); para. [0063]: the image view angle and perspective is classified utilising a collection of different convolutional neural networks; in case a set of a plurality of images is provided, the image(s) with the most suitable viewing angles is (are) selected to provide the system with a maximum of information of the object(s); after pre-processing, the image(s) are then indexed and stored; note that "checking the images' EXIF" in para. [0059] and the pre-processing in para. [0063] (classifying manipulated images and selecting the most suitable viewing-angle image) read on "inputting correct information").

Regarding claim 28: Campbell discloses all the limitations of claim 26. In addition, Campbell teaches that the machine learning system includes a neural network (para. [0016]: machine learning algorithms; para. [0063]: convolutional neural networks).

Regarding claim 29: Campbell discloses all the limitations of claim 26. In addition, Campbell teaches that the correct information is generated based on a human inspection of the component (para. [0059]: checking the images' EXIF data matches the expected criteria (i.e., camera information, geolocation (i.e., GPS data) or other suitable metadata); para. [0063]: images (still or video) may be captured by a user of any skill set; the user then provides the image to the computer system of the invention; note that these features read on "the correct information is generated based on a human inspection of the component").
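The training method recited in claim 26 (input an image and metadata, output a category, then feed the correct category back in) is the familiar supervised-learning loop. The following minimal sketch makes that shape concrete; the logistic model, feature encoding, and data are entirely invented for illustration and are not drawn from the application or the cited art:

```python
import math

# Minimal supervised-training loop in the shape of claim 26 (illustrative
# only; model and data are invented, not the applicant's system).

def predict(weights: list[float], features: list[float]) -> float:
    """Probability that the component is 'non-serviceable'."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs: int = 200, lr: float = 0.5) -> list[float]:
    """samples: (image+metadata feature vector, correct label) pairs,
    label 1 = 'non-serviceable', 0 = 'serviceable'."""
    weights = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for features, label in samples:
            p = predict(weights, features)   # output the determined category
            error = label - p                # compare with the correct information
            weights = [w + lr * error * x    # update the model accordingly
                       for w, x in zip(weights, features)]
    return weights

# Feature vectors: [defect score from image, normalized running time, bias].
samples = [
    ([0.9, 0.8, 1.0], 1),   # heavily damaged, high running time
    ([0.1, 0.2, 1.0], 0),   # clean, low running time
    ([0.8, 0.3, 1.0], 1),
    ([0.2, 0.7, 1.0], 0),
]
w = train(samples)
print(predict(w, [0.85, 0.6, 1.0]) > 0.5)  # flags heavy damage as non-serviceable
```

The "inputting correct information" step of the claim corresponds to the labels in `samples`; each pass through the loop nudges the weights toward agreement with those labels.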
Regarding claim 30: Campbell discloses all the limitations of claim 26. In addition, Campbell teaches that the machine learning system additionally performs, during the classification step, a classification into a "repairable" category or a "non-repairable" category (para. [0055]: a list of the parts that are damaged and either need to be repaired or replaced, and a detailed breakdown of the costs associated with the repair or replacement of the damaged parts; para. [0070]: generate a complete report on the damages, cost of repair and/or replacement; note that "generate a complete report on the damages, cost of repair and/or replacement" in para. [0070] reads on "a classification into a 'repairable' category or a 'non-repairable' category"), a determination of a probability of successful repair, or an identification of defects in the at least one image of the component (same passages of paras. [0055] and [0070]; note that these features also read on "a determination of a probability of successful repair"), and, accordingly, that a correct category or correctly identified defects is input into the machine learning system during inputting of the correct information for training purposes (para. [0059]: checking the images' EXIF (Exchangeable Image File Format) data matches the expected criteria (i.e., camera information, geolocation (i.e., GPS data) or other suitable metadata); para. [0063]: the image view angle and perspective is classified utilising a collection of different convolutional neural networks; in case a set of a plurality of images is provided, the image(s) with the most suitable viewing angles is (are) selected to provide the system with a maximum of information of the object(s); after pre-processing, the image(s) are then indexed and stored; note that "checking the images' EXIF" in para. [0059] and the pre-processing in para. [0063] read on this limitation).

Regarding claim 31: claim 31 is a computer program product claim having limitations similar to those of claim 26 above and is therefore rejected under the same rationale. The additional limitation of a computer program product comprising instructions which are readable by a processor of a computer is taught by Campbell (paras. [0033]-[0034]: a computer processor).

Regarding claim 32: claim 32 is a computer-readable medium claim having limitations similar to those of claim 24 above and is therefore rejected under the same rationale. The additional limitation of a computer-readable medium is taught by Campbell (para. [0020]: a computer-based data storage).

Regarding claim 33: Campbell discloses a system for inspecting a component (para. [0008]: computer-based method for automatically evaluating validity and extent of a damaged object from image data), the system comprising: an image-capturing device for capturing an image of the component (para. [0061]: the present invention is suitable for use with any image data captured by any suitable source, such as film cameras and digital cameras, but also any other camera, such as, for example, found on smart phones, tablets or mobile computers); and a trained machine learning system (paras. [0063]-[0066]: machine learning algorithms) configured to receive the image from the image-capturing device (para. [0009]: receive image data comprising one or more images of at least one damaged object; para. [0044]: received from suitable databases; para. [0061]) and metadata about the component (para. [0059]: checking the images' EXIF (Exchangeable Image File Format) data matches the expected criteria (i.e., camera information, geolocation (i.e., GPS data) or other suitable metadata)) and trained to classify the component into a "serviceable" category or a "non-serviceable" category based on this data (para. [0065]: the images are then passed through another set of machine learning algorithms; a classifier would look for any dents, scratches and other unexpected variances to an undamaged car, and determine the likelihood that a variance is present; para. [0066]: assess the severity and/or nature of the damage; note that these features read on the classifying limitation because severe or serious damage of the object is not serviceable).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 15, 25, and 27 are rejected under 35 U.S.C. under 35 U.S.C. 103 as being unpatentable over Campbell in view of Philipp et al. (DE 102016200779 A1, hereinafter referred to as “Philipp”). Regarding claim 15, Campbell teaches all the limitation of claim 14, in addition, Campbell discloses that the image is a light image and the metadata (para. [0059]: metadata; para. [0061]: the present invention is capable of automatically processing images captured by user of any technical and/or photographing skill, note that the above feature of “ metadata” in para. [0059] and “any technical and/or photographing” in para. [0061] reads on “the image is a light image, an X-ray or CT image, and the metadata”) includes a component type (para. [0063]: determine the main component (object) is what is expected), a running time of the component, a number of remaining life cycles, or a repair history (para. [0063]: collection of algorithms to classify, whether or not, any of the images have been manipulated (using EXIF data) and, whether or not, the claim is fraudulent (i.e. by checking the vehicles insurance claim history, note that “checking the vehicles insurance claim history” reads on “a running time of the component, a number of remaining life cycles, or a repair history”). Campbell does not specifically teach an X-ray or CT image. 
However, Philipp teaches an X-ray or CT image (page 5, lines 31-32: during its maintenance, overhaul or repair, that is not during or immediately following its manufacture, with the aid of a computed tomography device (CT) is carried out; page 5, lines 41-42: in additional set-up effort is avoided if in the context of the maintenance of the component and an additional X-ray inspection is required). Campbell and Philipp are both considered to be analogous to the claimed invention because they are in the same filed of investigation of component (or object). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the an X-ray or CT image such as is described in Philipp into Campbell, in order to permit an improved assessment of the quality of the component (Philipp, page 5, line 21). Regarding claim 25, Campbell teaches all the limitation of claim 14. Campbell does not specifically teach that the component is of a turbomachine. However, Philipp teaches that the component is of a turbomachine (page 5, line 26: first aspect of the invention relates to an investigation method for a serviceable hollow component of a turbomachine). Campbell and Philipp are both considered to be analogous to the claimed invention because they are in the same filed of investigation of component (or object). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the component such as is described in Philipp into Campbell, in order to permit an improved assessment of the quality of the component (Philipp, page 5, line 21). Regarding claim 27, Campbell teaches all the limitation of claim 26. Campbell does not specifically teach that the component is of a turbomachine. 
However, Philipp teaches that the component is of a turbomachine (page 5, line 26: the first aspect of the invention relates to an investigation method for a serviceable hollow component of a turbomachine). Campbell and Philipp are both considered to be analogous to the claimed invention because they are in the same field of investigation of a component (or object). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the component such as is described in Philipp into Campbell, in order to permit an improved assessment of the quality of the component (Philipp, page 5, line 21).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hestand et al. (US 10,878,556 B2) teaches a method of assessing damage to a component that includes displaying a sensor image of the component in a first viewing pane, displaying a reference image of the component, which is a graphical depiction of the component with accurate dimensions, in a second viewing pane, and placing a plurality of first identification markers on the sensor image of the component in the first viewing pane to correspond to a matching location with a second identification marker on the component in the reference image. Haller, Jr. et al. (US 10,380,696 B1) teaches an image processing system that automatically processes a plurality of images of a damaged vehicle and determines therefrom the replacement parts needed to repair the vehicle. Davis et al. (US 8,4771,154 B2) teaches a method of using a Graphic User Interface (GUI) for interactive virtual inspection of modeled objects. The method includes acquiring a three-dimensional model of a modeled object and displaying a first view of the modeled object for a user to identify locations of interest on a surface of the modeled object visible within the first view.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANGKYUNG LEE, whose telephone number is (571) 272-3669. The examiner can normally be reached Monday-Friday, 8:30am-4:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lee Rodak, can be reached at (571) 270-5628. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SANGKYUNG LEE/
Examiner, Art Unit 2858

/LEE E RODAK/
Supervisory Patent Examiner, Art Unit 2858

Prosecution Timeline

Jul 31, 2023
Application Filed
Jan 12, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596109
METHOD AND SYSTEM FOR CALIBRATING MEASURED VALUES FOR AMBIENT AIR PARAMETERS USING TRAINED MODELS
2y 5m to grant · Granted Apr 07, 2026
Patent 12510346
MEASUREMENT METHOD
2y 5m to grant · Granted Dec 30, 2025
Patent 12504751
INSPECTION SYSTEM AND METHOD
2y 5m to grant · Granted Dec 23, 2025
Patent 12472569
METHOD FOR PRODUCING OR MACHINING TOOTHING
2y 5m to grant · Granted Nov 18, 2025
Patent 12467979
Abnormal Cell Diagnosing Method and Battery System Applying the Same
2y 5m to grant · Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
61%
Grant Probability
66%
With Interview (+4.6%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 141 resolved cases by this examiner. Grant probability derived from career allow rate.
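The projection figures above are consistent with a simple derivation from the examiner's career record: 86 grants out of 141 resolved cases gives the 61% base rate, and the +4.6% interview lift yields the 66% with-interview figure. A minimal sketch of that arithmetic (the function name and the flat additive-lift model are illustrative assumptions, not the product's published methodology):

```python
def grant_probability(granted: int, resolved: int,
                      interview_lift: float = 0.046) -> tuple[float, float]:
    """Career allow rate, plus an assumed flat additive lift for conducting
    an examiner interview. Capped at 1.0 so the lift cannot exceed certainty."""
    base = granted / resolved
    return base, min(base + interview_lift, 1.0)

# Examiner's record from the dashboard: 86 granted / 141 resolved
base, with_interview = grant_probability(86, 141)
print(f"{base:.0%} base, {with_interview:.0%} with interview")  # 61% base, 66% with interview
```

Note that an additive lift is the simplest reading of "+4.6%"; a multiplicative or case-matched lift would give slightly different numbers.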
