Prosecution Insights
Last updated: April 19, 2026
Application No. 18/676,667

AIRCRAFT COMPONENT IDENTIFICATION SYSTEM

Final Rejection: §101, §103

Filed: May 29, 2024
Examiner: HARTMANN, ERIN MARIE
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Pratt & Whitney Canada Corp.
OA Round: 2 (Final)

Grant Probability: 62% (Moderate)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 5 granted / 8 resolved; +10.5% vs TC avg)
Interview Lift: +50.0% (strong; resolved cases with an interview vs. without)
Avg Prosecution (typical timeline): 3y 0m
Currently Pending: 28
Total Applications (career history): 36, across all art units

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 40.7% (+0.7% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 32.2% (-7.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 8 resolved cases.
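These headline figures are simple ratios. The snippet below is an illustrative recomputation only; the Tech Center average is back-derived from the stated +10.5% delta rather than reported by the tool.

```python
granted, resolved = 5, 8
allow_rate = granted / resolved                  # 0.625, displayed as 62%
tc_avg = allow_rate - 0.105                      # implied by "+10.5% vs TC avg"
print(f"Career allow rate: {allow_rate:.0%} ({allow_rate - tc_avg:+.1%} vs TC avg)")
```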

Office Action

Grounds of Rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office action is in response to application number 18/676,667 filed on 1/15/2026, in which Claims 1-20 are presented for examination. Applicant amends Claims 1, 3-9, and 14-16.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 5/29/2024 and on 10/20/2025 were received and considered by the examiner.

Response to Arguments

Applicant's arguments, pgs. 6-9, filed 1/15/2026, with respect to the objections to the drawings have been fully considered but are not fully persuasive. Applicant argues that an express illustration of the features is not required for understanding and references FIGS. 2 and 3 for support regarding the "support mechanism" and "motor" and the meaning of "rotatable about an axis being coaxial with a central axis," stating that one of ordinary skill in the art would understand the subject matter. Examiner respectfully disagrees. Examiner understands that FIG. 2 demonstrates a coaxial system and that FIG. 3 identifies the support as part of the system. However, neither figure shows how the "support mechanism" structure is designed or incorporated as part of the system. The invention claims a "support mechanism" that is fundamental to the claimed design of coaxial rotation for imaging an identifier on an airfoil of a hub assembly. Therefore, one of ordinary skill in the art would not understand the structure of the support, how it is designed to be coaxial, or how the motor engages with the support. For one of ordinary skill in the art to understand the structure and integration of the "support mechanism," an existing mechanism or mechanism type using the same design would need to be at least referenced, or defined, in the specification, and none is currently clearly identified in the specification. Further, the related claim language is identified as contributing to the uniqueness of the claimed invention (see the section on allowable subject matter), and therefore at least an example representation or implementation must be captured in a drawing to show every feature of the invention as specified in the claims. Therefore, the objections to the drawings regarding the features of Claims 2-5, set forth in the Office action of 10/16/2025, are maintained. The remaining objections to the drawings set forth in the Office action of 10/16/2025 have been withdrawn.

Applicant's arguments, pgs. 3-5 and 9, filed 1/15/2026, with respect to the objection to Claims 3 and 14 have been fully considered and are persuasive. Therefore, the objection to Claims 3 and 14 set forth in the Office action of 10/16/2025 has been withdrawn. However, in light of the amendments, Examiner makes a new objection to Claim 7. Further details are provided below.

Applicant's arguments, pgs. 3-5 and 9, filed 1/15/2026, with respect to the rejection of Claims 2-6, 8-12, and 15-17 under 35 U.S.C. 112(b) have been fully considered and are persuasive. Therefore, the rejection of Claims 2-6, 8-12, and 15-17 under 35 U.S.C. 112(b) set forth in the Office action of 10/16/2025 has been withdrawn.

Applicant's amendments and arguments, pgs. 3-5 and 9-10, with respect to the rejection of Claims 1-6 under 35 U.S.C. 101 for being directed to non-statutory subject matter have been fully considered and are persuasive.
Therefore, the rejection of Claims 1-6 under 35 U.S.C. 101 for being directed to non-statutory subject matter set forth in the Office action of 10/16/2025 has been withdrawn.

Applicant's amendments and arguments, pgs. 3-5 and 9-10, with respect to the rejection of Claims 7-9 and 11-13 under 35 U.S.C. 101 for being directed to a judicial exception have been fully considered but are not persuasive. Applicant's arguments are directed to the amended claim language regarding using a camera to capture an image, and are therefore moot. However, Applicant further argues that using a trained model improves the technological process of camera-based identification of aircraft components because the trained model uses imaging for transforming identifiers into maintenance actions, and further recites significantly more because of the combination of using the trained OCR model for engine part identifiers. Examiner respectfully disagrees. Camera-based identification, and training a model using imaging (including imaging of identifiers) for identifying subsequent activities such as requesting a maintenance action, is recited at a high level. Further, as indicated in the rejection below and supported by the prior art, these are well-understood, routine, and conventional techniques for using optical character recognition to perform component identification and request service actions. Therefore, using a camera and a trained model simply automates the process of looking at a component, or component identifier, reading it or comparing it to a list of part numbers, and determining maintenance actions. Therefore, the rejection of Claims 7-9 and 11-13 under 35 U.S.C. 101 for being directed to a judicial exception set forth in the Office action of 10/16/2025 is maintained and, in light of the amendments, Examiner provides an updated rejection. Further details are provided below.

Applicant's amendments and arguments, pgs. 3-5 and 10-12, with respect to the rejection of Claims 1-20 under 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant argues that Asai does not teach identifying a part identifier on an aircraft component and only discloses visualizing an aircraft component for inspection. Examiner respectfully disagrees. Asai is used for generally applying and teaching the use of a camera in an aircraft component identification system which can be directed towards an aircraft component, not specifically for using a camera to identify a label. Bizouarn already teaches, more generally, a component identification system using a camera with a part identifier, but does not specifically discuss an aircraft component application. As stated in the rejection, and in MPEP 2143(I)(A), it is obvious to combine the features of the systems of Bizouarn and Asai to create a combined system where the components of each system yield predictable results, with each component of the individual systems performing the same in the combined system.

Applicant further argues that Chopra only discusses using an OCR model and does not teach a trained model, trained using machine learning and training data, where the training data includes image data sets of part identifiers. Applicant argues that Chopra, instead, only discusses using a machine-learned model to identify hole plugs of a component. Examiner respectfully disagrees. Although Chopra does discuss using a trained model for hole plug validation, Chopra [pg. 3, para 0034] also discusses a component verification system comprising a component verification device that includes a machine-learning model used for the purposes of validating hole placement, but further includes an optical character recognition (OCR) model that is trained using a datastore and reference models, or component images. Chopra [pg. 4, para 0043] further explains that the OCR model is used for recognizing part labels for identification of the component, which is subsequently used for component hole validation. Chopra [pg. 5, para 0052] generally describes that an AI model uses inputs to recognize patterns and makes associations between input and output data, and [pg. 6, para 0058] explains that component images used as training data for the machine learning model include data to extract alphanumeric characters or labels for identifying the components: "component verification device 102 may pre-process the training data using, for example, an interface (e.g., a network interface, network interface circuitry, a data extraction interface, data extraction circuitry, etc.) to extract alphanumeric characters, codes, labels, etc., and/or identify the components 106 and/or one(s) of the reference model(s) 112 that correspond to the components 106." Finally, [pg. 5, paras 0059-0061] further explain the training and deployment process, and [pgs. 10-11, paras 0089-0090] again describe the machine learning model used for image processing and OCR. Therefore, the rejection of Claims 1, 6-9, 11-16, and 18-20 under 35 U.S.C. 103 set forth in the Office action of 10/16/2025 is maintained and, in light of the amendments, Examiner provides an updated rejection. Further details are provided below.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the following must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.

Claim 2 (lines 2-4): "the system having a support mechanism for supporting the assembly and the camera, the support mechanism having a movable support operable for moving one of the assembly and the camera relative to the other of the assembly and the camera,"

Claim 3 (lines 1-2): "wherein the assembly is supported by the support, the support being rotatable about an axis being coaxial with a central axis of the assembly,"

Claim 4 (lines 1-5): "a motor engaged to the support, the motor operatively connected to the controller, the computer-readable medium further having instructions executable by the processing unit to: receive a command from a user; and cause the motor to rotate the assembly in response to the command," and

Claim 5 (lines 1-7): "the support is operatively connected to the controller, the computer-readable medium further having instructions executable by the processing unit to: cause the support to move the one of the assembly and the camera; detect when the part identifier is within a line of sight of the camera; and cause the support to stop moving the one of the assembly and the camera when the part identifier is within the line of sight."
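[Editor's note: purely as an orientation aid, the scan-until-detected behavior recited in Claim 5 can be pictured as a simple control loop. The sketch below is not code from the application; the support, camera, and detector interfaces are hypothetical.]

```python
import time

def scan_for_identifier(support, camera, detector, step_deg=2.0, timeout_s=60.0):
    """Rotate the movable support until the part identifier enters the
    camera's line of sight, then stop (an illustrative sketch of the
    Claim 5 loop). `support.rotate(deg)` steps the support, `support.stop()`
    halts it, `camera.grab()` returns a frame, and `detector(frame)` reports
    whether a part identifier is visible; all are assumed interfaces.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        frame = camera.grab()          # capture the current view
        if detector(frame):            # identifier within line of sight?
            support.stop()             # stop moving the assembly/camera
            return frame
        support.rotate(step_deg)       # otherwise keep scanning
    raise TimeoutError("part identifier not found within timeout")
```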
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claim 7 is objected to because of the following informalities: Claim 7 (lines 2-3): "being capture with" should be "captured with". Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 7-9 and 11-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 7. A method of maintaining an aircraft engine, comprising: receiving image data of an image of an aircraft component of the aircraft engine, the image being capture with a camera [apply it] and including a part identifier of the aircraft component [data gathering]; performing optical character recognition on the image data to obtain a series of characters of the part identifier [pre-solution activity] by feeding the image data to a trained model, the trained model having been trained using machine learning and training data, the training data including image data sets associated with part identifier sets, the part identifier sets including partially damaged part identifiers [apply it, data gathering]; determining that the aircraft component needs servicing [mental process] by obtaining information about the aircraft component using the series of characters of the part identifier [data gathering]; and causing the aircraft component to be serviced [post-solution activity].

101 Analysis

Step 1: Statutory Category – Yes. The claim recites a method comprising receiving image data of a part identifier captured using a camera, performing optical character recognition using a trained model, determining that the part needs servicing, and causing the part to be serviced.

Step 2A Prong One Evaluation: Judicial Exception – Yes – Mental Process. The claim recites the mental processes annotated above. These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, could be performed in the human mind. For example, a person can look at an image of a part identifier, identify the part number, and correlate the part number to a maintenance schedule.

Step 2A Prong Two Evaluation: Practical Application – No. This judicial exception is not integrated into a practical application because the additional elements (indicated above) do not impose any meaningful limit on the judicial exception. Receiving image data of the part identifier and obtaining information about the part are recited at a high level and amount to mere data gathering, which is a form of insignificant extra-solution activity. Performing optical character recognition is recited at a high level and amounts to pre-solution activity, which is a form of insignificant extra-solution activity. A trained model, machine learning, and a camera act merely as a means for applying the abstract idea. Causing the component to be serviced is recited at a high level and amounts to post-solution activity, which is a form of insignificant extra-solution activity.

Step 2B Evaluation: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional claim elements, as stated for Step 2A Prong Two, do no more than apply the abstract idea and add extra-solution activity, and therefore do not provide an inventive concept. Performing optical character recognition is recited at a high level and is considered pre-solution activity, which is a form of insignificant extra-solution activity. The specification explains that the controller performs optical character recognition on image data to read an identification number; for example, specification pg. 7, para 0045 recites, "The controller 320 is configured to perform optical character recognition on image data obtained from the camera 302 to obtain a series of characters that compose the identification number 301A." The specification further explains that the optical character recognition can contain sub-steps for identifying the text, pre-processing, and post-processing, such as image adjustments and correcting common recognition errors; specification pg. 9, para 0052 recites, "The performing of the optical character recognition at 504 may include a plurality of sub-steps. For instance, a preprocessing step may be performed. This preprocessing step may include improving the quality of the image. This might include adjustments like correcting the brightness and contrast, removing noise, correcting skew (tilting of the image), and adjusting the resolution. […]. Then, a text detection step may occur. […]. Then, the controller may proceed with a character segmentation step in which the identification number is separated into individual characters. At which point, each character may be analyzed and compared to a set of predefined patterns by the trained model. A post-processing step may be used to correct common recognition errors." The specification does not provide any indication that performing optical character recognition, as recited in Claim 7, is anything other than the standard optical character recognition and image processing steps used, and known to those of ordinary skill in the art, for analyzing and reading images. Therefore, the specification indicates that performing optical character recognition is a well-understood, routine, and conventional function as it is claimed in a merely generic manner.
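[Editor's note: the sub-steps quoted from para 0052 map onto a conventional OCR pipeline. The sketch below is a minimal illustration, assuming OpenCV and the Tesseract engine (via pytesseract) as generic stand-ins for the unspecified trained model; the character whitelist is a hypothetical parameter.]

```python
import cv2
import pytesseract

def read_part_identifier(image_path: str) -> str:
    """Illustrative OCR pipeline mirroring para 0052's sub-steps:
    preprocessing -> text recognition -> post-processing."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Preprocessing: denoise, normalize contrast, binarize (rough analogues
    # of the brightness/contrast/noise corrections described in para 0052).
    img = cv2.fastNlMeansDenoising(img, h=10)
    img = cv2.equalizeHist(img)
    _, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Recognition: Tesseract stands in for the claimed trained model here;
    # the whitelist restricts output to characters typical of part numbers.
    raw = pytesseract.image_to_string(
        img,
        config="--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-",
    )

    # Post-processing: a crude stand-in for the "correct common recognition
    # errors" step (e.g., letter O read where digit 0 was stamped).
    return raw.strip().replace("O", "0").replace("I", "1")
```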
Causing the component to be serviced is recited at a high level and amounts to post-solution activity, which is a form of insignificant extra-solution activity. The specification explains that causing the component to be serviced includes scheduling a replacement, cleaning, or repair; for example, specification pgs. 10-11, para 0056 recites, "At which point, the method 600 includes causing the aircraft component 301 to be serviced. This may include, scheduling the aircraft component 301 to be replaced, cleaned, repaired, inspected, and so on." The specification does not provide any indication that causing the component to be serviced, as recited in Claim 7, is anything other than standard steps for maintaining equipment. Therefore, the specification indicates that causing the component to be serviced is a well-understood, routine, and conventional function as it is claimed in a merely generic manner.

Dependent Claims 8-9 and 11-13 do not recite any further limitations that cause the claims to be patent eligible. The limitations of the dependent claims further narrow the abstract idea, and thus can also be performed as a mental process, in the human mind. These limitations do no more than further describe additional data gathering steps, means for applying the abstract idea, and insignificant extra-solution activity. Therefore, Claims 8-9 and 11-13 are not patent eligible under the same rationale as provided for Claim 7.

Dependent Claim 10 does recite further limitations that would allow the claim to be patent eligible. The limitations of this claim further specify the details of moving the airfoil assembly and camera on a support to align with the part identifier, which is not a standard method of imaging a part identifier. Therefore, Claim 10 would be patent eligible. Claims 1-6 and 14-20 are not directed to an abstract idea, and therefore Claims 1-6 and 14-20 would be patent eligible.

Therefore, Claims 7-9 and 11-13 are rejected under 35 U.S.C. § 101 as being directed to a judicial exception, without amounting to significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 7-8, 13-15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bizouarn, DE-102012109213-A1 (herein "Bizouarn"), in view of Asai, PG Pub US-2022/0309738-A1 (herein "Asai"), Chopra et al., PG Pub US-2024/0256727-A1 (herein "Chopra"), and Ramaswamy et al., PG Pub US-2025/0157242-A1 (herein "Ramaswamy").
Regarding Claim 1, Bizouarn discloses: (Currently Amended) An […] component identification system, comprising: a camera configured for capturing an image of a part identifier on a […] component […], […].

See [Bizouarn, FIG. 1 and pg. 3, para 0025], which shows "a schematic view of a system for detecting an attachment." See also [Bizouarn, pg. 3, para 0029], which explains the system includes an optical reading device to read the information of the working device from the optically readable character: "This is characterized in order to facilitate the recognition of characterizing information of the working device 4. The system shown in FIG. 1 is made up of an optically readable character 5 assigned to the working device 4 and containing information identifying the working device 4, and by an optical reading device 7 which is operable to read out information about the working device 4 contained therein from the character 5," and [Bizouarn, pg. 4, para 0032], which explains that the optical reading device can be a smartphone camera: "In the exemplary embodiment shown, an optical reading device in the form of a so-called smartphone 7 is provided for the system, i.e. a mobile personal communication device equipped with a camera. The smartphone 7 can be used to optically capture and read out the character 5 by means of the integrated camera."

Bizouarn further discloses: obtain information about the […] component using […] the part identifier. See [Bizouarn, pg. 4, para 0030], which explains that the optically readable character can be used to store information about the device, such as maintenance details for repair and replacement: "The sign 5 contains information which identifies the working device 4. This information can comprise, for example: device-specific details for the operation of the working device 4 and/or device-specific details for maintenance, repair and/or replacement parts of the working device 4, and/or individually reproducible details for the working device 4." See again [Bizouarn, pg. 4, para 0032], which explains that the camera can read the character and extract the information: "The smartphone 7 can be used to optically capture and read out the character 5 by means of the integrated camera. For this purpose, an executable application program can be loaded into a working memory of the smartphone 7. This program supports an image evaluation of the captured camera image by first extracting the binary information contained therein from the matrix and then decrypting information characterizing the working device 4."
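[Editor's note: Bizouarn's reading step (para 0032: extract the binary information from the matrix, then decode it) is the kind of operation an off-the-shelf symbol decoder performs. The sketch below is an illustrative analogue only, not code from any cited reference; it uses pyzbar (which reads bar and QR codes) and a hypothetical PART_RECORDS lookup table.]

```python
from PIL import Image
from pyzbar.pyzbar import decode

# Hypothetical lookup table mapping identifiers to maintenance records;
# in Bizouarn the information is carried in the readable character itself.
PART_RECORDS = {"PW-12345-A": {"action": "inspect", "interval_hours": 500}}

def read_working_device_code(image_path: str) -> dict:
    """Decode a machine-readable symbol from a camera image and look up
    device details (an illustrative analogue of Bizouarn's smartphone step)."""
    symbols = decode(Image.open(image_path))   # extract encoded data from the symbol
    if not symbols:
        raise ValueError("no optically readable character found in image")
    identifier = symbols[0].data.decode("utf-8")
    return PART_RECORDS.get(identifier, {"action": "unknown identifier"})
```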
Bizouarn does not disclose: An aircraft component identification system, comprising: a camera configured for capturing an image of […] an aircraft component of an aircraft engine, the part identifier including a series of characters; and a controller operatively connected to the camera, the controller having a processing unit and a non-transitory computer-readable medium having stored thereon instructions executable by the processing unit to: perform optical character recognition on image data obtained from the image captured by the camera to identify the series of characters, including feeding the image data to a trained model, the trained model having been trained using machine learning and training data, the training data including image data sets associated with part identifier sets, the part identifier sets including partially damaged part identifiers; and […].

However, Asai teaches: An aircraft component identification system, comprising: a camera configured for capturing an image of […] an aircraft component of an aircraft engine. See [Asai, pg. 1, paras 0017-0021], which explain that the system includes imaging devices for capturing an image of maintenance inspection targets: "[0017] FIG. 1 illustrates the configuration of an airframe maintenance-inspection system 1 according to this embodiment. In this embodiment, the airframe maintenance-inspection system 1 includes a plurality of imaging devices 2 and a transmitter 3 equipped in an aircraft 10, as well as a display device 4 disposed at an airport 20. [0018] The following description relates to an example where a maintenance-inspection target A to be inspected and maintained for, […]. [0019] The plurality of imaging devices 2 of the airframe maintenance-inspection system 1 are equipped in the aircraft 10. [0020] […]. [0021] Each of the imaging devices 2 may be configured to capture a still image, or may alternatively be configured to capture a moving image," and [Asai, pg. 3, para 0053], which explains that this can include the aircraft engine: "The maintenance inspection target A may alternatively be a component, such as the engine, subjected to line maintenance."

As stated in MPEP § 2143(I)(A), it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate the imaging and identification system of Bizouarn with the imaging and inspection system of Asai into a combined system. Doing so would be technically feasible, with no inventive effort. Furthermore, the resulting imaging system would yield predictable results, where the components of Bizouarn and the components of Asai would be expected to work as intended, with each component in the combined system performing the same. Further, modifying Bizouarn with Asai to include imaging an aircraft component supports post-flight inspections and allows the mechanic to identify and repair issues with the aircraft and aircraft engine which can be incurred during flight [Asai, pg. 1, paras 0003-0006].

However, Chopra teaches: […] the part identifier including a series of characters; and a controller operatively connected to the camera, the controller having a processing unit and a computer-readable medium having stored thereon instructions executable by the processing unit to: perform optical character recognition on image data obtained from the image captured by the camera to identify the series of characters, including feeding the image data to a trained model, the trained model having been trained using machine learning and training data, the training data including image data sets associated with part identifier sets; and […].

See [Chopra, pg. 3, para 0032], which describes a machine-learning model that consumes an image of an aircraft component for analysis: "Systems, apparatus, articles of manufacture, and methods for machine-learning based hole plug validation are disclosed. Examples disclosed herein include execution of image-processing machine-learning models and implementation of natural language processing techniques for hole plug placement validations. In some disclosed examples, a machine-learning model ingests an image of a component, such as an aircraft component […], and identifies a number and/or a color of respective hole plugs in the image. In some disclosed examples, discrepancies between the number of identified hole plugs and/or color(s) thereof may be identified based on comparisons with respect to a reference model […] of the component."

Also see [Chopra, pg. 4, para 0043], which further explains that the component can be marked with a label, including text, which can be imaged and analyzed using an optical character recognition model executed by a component verification device: "In the illustrated example of FIG. 1, the component verification device 102 can execute and/or instantiate one or more models using the image of the first one of the components 106 as model input(s) to the one or more models. For example, the component verification device 102 can execute the OCR model 108 using the image as OCR model input(s) to generate OCR model output(s), which can include identifications of alphanumeric characters on the first one of the components 106 in the image. For example, the first one of the components 106 can be labeled, marked, etc., with one or more alphabetic letters, numbers, etc., and/or any combination(s) thereof, which can be used to identify the first one of the components 106, a manufacturing stage of the first one of the components 106, and/or a contract or work order (e.g., a manufacturing work order) associated with the first one of the components 106. Additionally or alternatively, the first one of the components 106 may be marked, labeled, etc., with one or more symbols or indicia, such as a bar code, a quick response (QR) code, etc., and/or any combination(s) thereof," and [Chopra, pgs. 3-4, para 0038], which explains that the images can be generated using a camera coupled to the component verification device: "In some examples, the camera 116 outputs the image, the video, etc., to the component verification device 102. For example, the component verification device 102 can store the image, the video, etc., from the camera 116 in the datastore 110 as the component image(s) 114. In the illustrated example, the camera 116 can output the image, the video, etc., to the component verification device 102 via the network 118. Alternatively, the camera 116 may output the image, the video, etc., to the component verification device 102 without utilizing the network 118. For example, the camera 116 can be in direct wired and/or wireless communication with the component verification device 102 without an intervening gateway, router, or other network interface device. Alternatively, the camera 116 may be any other type of optical sensor, such as a light detection and ranging (LIDAR) sensor, a laser, etc. […] Additionally or alternatively, the camera 116 may be configured and/or set to capture images of the components 106 using any other support structure."

See also [Chopra, pg. 1, para 0021], which explains the system includes a medium with instructions for executing the machine-learning model: "An example at least one non-transitory computer readable storage medium is disclosed that includes instructions that, when executed, cause processor circuitry to at least execute a machine-learning model based on an image of an aircraft component to generate an output representative of first identifications of first hole plugs in the aircraft component, […]," and [Chopra, pg. 11, para 0093], which explains that the device also includes a form of a processor: "Thus, for example, any of the interface circuitry 210, the data extraction circuitry 220, the machine-learning circuitry 230, the difference determination circuitry 240, the report generation circuitry 250, the operation control circuitry 260, and/or, more generally, the component verification device 102, could be implemented by processor circuitry, […], programmable processor(s), programmable microcontroller(s), […]."

Finally, see [Chopra, pg. 5, para 0052], which explains that the machine learning model can be trained to recognize patterns: "AI, including machine learning (ML), […], enables machines […] to use a model to process input data to generate an output based on patterns and/or associations previously learned by the model via a training process. For instance, the machine-learning model 104 can be trained with data to recognize patterns and/or associations and follow such patterns and/or associations when processing input data such that other input(s) result in output(s) consistent with the recognized patterns and/or associations," and [Chopra, pg. 6, para 0059], which explains that, once trained, the machine learning model can be deployed to the component verification system: "Once training is complete, the component verification device 102 may deploy the machine-learning model 104 for use as an executable construct that processes an input and provides an output based on the network of nodes and connections defined in the machine-learning model 104. The component verification device 102 may store the machine-learning model 104 in the datastore 110 and/or, more generally, in the component verification device 102," to [Chopra, pg. 6, para 0060] analyze real-time input for detection and identification by applying the trained patterns: "Once trained, the deployed machine-learning model 104 may be operated in an inference phase to process data. In the inference phase, data to be analyzed (e.g., live data) is input to the machine-learning model 104, and the machine-learning model 104 executes to create output(s) […]. This inference phase can be thought of as the AI 'thinking' to generate the output(s) based on what it learned from the training (e.g., by executing the machine-learning model 104 to apply the learned patterns and/or associations to the live data). In some examples, input data undergoes pre-processing before being used as input(s) […] to the machine-learning model 104. Moreover, in some examples, the output data may undergo post-processing after it is generated by the machine-learning model 104 to transform the output(s) into a useful result (e.g., a display of data, […], etc.)."

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Chopra to use a series of characters and optical character recognition, using machine learning, for the part identifier. Doing so allows the components to be inspected in accordance with the appropriate standards [Chopra, pg. 3, para 0035], where the execution of the image-processing machine-learning model improves efficiency and quality of the verification and validation [Chopra, pg. 3, para 0033].
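[Editor's note: Chopra's paras 0059-0060 describe the standard train, deploy, infer lifecycle with pre- and post-processing. The sketch below is a generic inference-phase illustration, not Chopra's model or API; the output shape and the `classes` alphabet are assumptions.]

```python
import torch

def infer_characters(model: torch.nn.Module, image: torch.Tensor, classes: str) -> str:
    """Inference-phase sketch (cf. Chopra paras 0059-0060): pre-process the
    input, apply the trained model's learned patterns, post-process logits
    into text. Assumes the model outputs one score per character slot per
    class, i.e. shape (1, num_slots, len(classes))."""
    model.eval()
    with torch.no_grad():
        x = (image - image.mean()) / (image.std() + 1e-6)  # pre-processing
        logits = model(x.unsqueeze(0))                     # apply learned patterns
        idx = logits.argmax(dim=-1).squeeze(0)             # best class per slot
    return "".join(classes[i] for i in idx.tolist())       # post-process to text
```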
However, Ramaswamy teaches: […] the part identifier sets including partially damaged part identifiers […].

See [Ramaswamy, para 0044], which explains that an Optical Character Recognition (OCR) engine is used to process document images and the OCR-free model is trained using a training dataset: "[…] automatically extract text data values from each document image based on using an Optical Character Recognition (OCR) engine to process a respective portion of the document image located within each pre-defined ROI bounding box included in the ROI template, wherein the OCR engine generates extracted text data values each associated with a corresponding labeled text field within the document image; generate annotation metadata for each document image, wherein the annotation metadata organizes the extracted text data values for each document image using a structured schema indicative of relationships between categories and subcategories of the labeled text fields within the document image; and train an OCR-free machine learning network using a training dataset comprising the plurality of document images and the annotation metadata generated for each document image."

See also [Ramaswamy, pg. 33, para 0350], which explains that the OCR-free model uses various types of images in its training dataset, including image and text data where the images of text include portions that are obscured or damaged: "In one illustrative example, a one-time process can be performed to fine-tune an OCR-free ML/AI model to perform data extraction of provider or provider-related information from various publicly available sources, webpages, databases, etc. In some aspects, the OCR-free ML/AI model for fine-tuning may be the same as or similar to an OCR-free ML/AI backbone model utilized in and described previously with respect to one or more of FIGS. 3-12. In some embodiments, the OCR-free model fine-tuned by the provider credential verification engine 1400 can be an OCR-free ML/AI model configured to extract information from screenshots or other digital images, for instance pix2struct (although various other model choices and/or OCR-free model implementations may also be utilized without departing from the scope of the present disclosure). For instance, in one illustrative example the OCR-free model training 1406 is performed for an OCR-free model such as pix2struct, although it is again noted that various other OCR-free models may also be utilized. In general, an OCR-free model can be obtained as a machine learning model that does not require or perform OCR to extract textual information from an input image (e.g., where OCR is the conventional process of converting images of text into machine-readable text). OCR-free models are often trained on large datasets of images and text, which enables the OCR-free models to learn the identification and extraction of text directly from document images. OCR-free models can provide advantages over the conventional OCR-based systems and techniques, including increased accuracy, increased inference speed, and improved versatility (e.g., as OCR-free models can be used to extract text from a variety of images, including scanned documents, handwritten notes, and even images of text where portions of the text is obscured or damaged, etc.). In the illustrative example wherein the OCR-free model provided to the OCR-free model training 1406 and/or stored to the model repository 1425 after training is a pix2struct model (e.g., in the illustrative example where the finetuned OCR-free model 1430 is based on pix2struct and/or a pix2struct backbone, etc.), the pix2struct model may be obtained as a pre-trained, OCR-free image-to-text machine learning model configured for visual language understanding (e.g., visual document understanding (VDU), etc.). The pix2struct model can be trained on a massive dataset of masked screenshots of web pages, and may be used to generate text descriptions of images, to translate images into text, and/or to answer questions about images. The pix2struct is an example of an OCR-free model."

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Ramaswamy to include damaged part identifiers in the training data. Using a variety of images improves the efficiency of processing [Ramaswamy, pg. 18, para 0200] and the accuracy, reliability, consistency [Ramaswamy, pg. 22, para 0233], and versatility [Ramaswamy, pg. 33, para 0350] of character recognition.
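[Editor's note: training on "partially damaged part identifiers" presupposes a corpus of degraded label images. One common way to build such a corpus is synthetic occlusion of clean label images; the sketch below is purely illustrative and is not taken from Ramaswamy or the application.]

```python
import numpy as np

def damage_identifier_image(img: np.ndarray, n_patches: int = 3,
                            patch_frac: float = 0.15,
                            rng: np.random.Generator | None = None) -> np.ndarray:
    """Synthetically 'damage' a clean part-label image by blanking random
    rectangular patches, approximating scratched or worn identifiers so a
    model can be trained on partially damaged examples."""
    rng = rng or np.random.default_rng()
    out = img.copy()
    h, w = out.shape[:2]
    ph, pw = max(1, int(h * patch_frac)), max(1, int(w * patch_frac))
    for _ in range(n_patches):
        y = rng.integers(0, h - ph + 1)
        x = rng.integers(0, w - pw + 1)
        out[y:y + ph, x:x + pw] = 0  # occlude: simulate abrasion or paint loss
    return out
```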
Regarding Claim 7, Bizouarn discloses: (Currently Amended) A method of maintaining […], comprising: receiving image data of an image of a […] component […], the image being capture with a camera and including a part identifier of the […] component; […]; determining that the […] component needs servicing by obtaining information about the […] component […] the part identifier; and causing the […] component to be serviced.

See again [Bizouarn, pg. 3, para 0029], which explains the system includes an optical reading device to read the information of the working device from the optically readable character, and [Bizouarn, pg. 4, para 0030], which explains that the optically readable character can also be used to store information about the device, such as maintenance details for repair and replacement. See again [Bizouarn, pg. 4, para 0032], which explains that the camera can read the character and extract the information.

Bizouarn does not disclose: A method of maintaining an aircraft engine, comprising: receiving image data of an image of an aircraft component of the aircraft engine […]; performing optical character recognition on the image data to obtain a series of characters of the part identifier by feeding the image data to a trained model, the trained model having been trained using machine learning and training data, the training data including image data sets associated with part identifier sets, the part identifier sets including partially damaged part identifiers; determining that the aircraft component needs servicing by obtaining information about the aircraft component using the series of characters of the part identifier; and causing the aircraft component to be serviced.

However, Asai teaches: A method of maintaining an aircraft engine, comprising: receiving image data of an image of an aircraft component of the aircraft engine […]; […]; determining that the aircraft component needs servicing by obtaining information about the aircraft component […]; and causing the aircraft component to be serviced. See again [Asai, pg. 1, paras 0017-0021], which explain that the system includes imaging devices for capturing an image of maintenance inspection targets, and [Asai, pg. 3, para 0053], which explains that this can include the aircraft engine. See also [Asai, pgs. 2-3, paras 0043-0044], which explain that, using the images, a problem can be identified and repaired: "[0043] Furthermore, when the mechanic views the VR image (i.e., the three-dimensionally viewable image of the maintenance-inspection target A) and discovers a problem in the maintenance-inspection target A, before the aircraft 10 lands, the mechanic can ascertain the status of the problem or determine whether a repair is to be performed. If a repair is to be performed, for example, tools to be used for the repair can be prepared before the aircraft 10 lands. [0044] Consequently, in the airframe maintenance-inspection system 1 according to this embodiment, a visual inspection process of the maintenance-inspection target A and a preparation process for repairing a problematic area can be performed before the aircraft 10 lands. Thus, after the landing of the aircraft 10, line maintenance (such as a thru-flight inspection and a post-flight inspection) can be performed quickly and efficiently on the airframe of the aircraft 10."

As stated in MPEP § 2143(I)(A), it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate the imaging and identification system of Bizouarn with the imaging and inspection system of Asai into a combined system. Doing so would be technically feasible, with no inventive effort. Furthermore, the resulting imaging system would yield predictable results, where the components of Bizouarn and the components of Asai would be expected to work as intended, with each component in the combined system performing the same. Further, modifying Bizouarn with Asai to include imaging an aircraft component supports post-flight inspections and allows the mechanic to identify and repair issues with the aircraft and aircraft engine which can be incurred during flight [Asai, pg. 1, paras 0003-0006]. This allows for immediately addressing issues and for post-flight inspection to be performed quickly and efficiently [Asai, pgs. 2-3, para 0044].

However, Chopra teaches: performing optical character recognition on the image data to obtain a series of characters of the part identifier by feeding the image data to a trained model, the trained model having been trained using machine learning and training data, the training data including image data sets associated with part identifier sets; […] obtaining information […] using the series of characters of the part identifier. See again [Chopra, pg. 3, para 0032], which describes a machine-learning model that consumes an image of an aircraft component for analysis. Also see again [Chopra, pg. 4, para 0043], which further explains that the component can be marked with a label, including text, which can be imaged and analyzed using an optical character recognition model executed by a component verification device, and [Chopra, pgs. 3-4, para 0038], which explains that the images can be generated using a camera coupled to the component verification device. Finally, see again [Chopra, pg. 5, para 0052], which explains that the machine learning model can be trained to recognize patterns; [Chopra, pg. 6, para 0059], which explains that, once trained, the machine learning model can be deployed to the component verification system; and [Chopra, pg. 6, para 0060], which explains that the deployed model analyzes real-time input for detection and identification by applying the trained patterns.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Chopra to use a series of characters and optical character recognition, using machine learning, for the part identifier. Doing so allows the components to be inspected in accordance with the appropriate standards [Chopra, pg. 3, para 0035], where the execution of the image-processing machine-learning model improves efficiency and quality of the verification and validation [Chopra, pg. 3, para 0033].

However, Ramaswamy teaches: […] the part identifier sets including partially damaged part identifiers […]. See again [Ramaswamy, para 0044], which explains that an Optical Character Recognition (OCR) engine is used to process document images and the OCR-free model is trained using a training dataset, and [Ramaswamy, pg. 33, para 0350], which explains that the OCR-free model uses various types of images in its training dataset, including image and text data where the images of text include portions that are obscured or damaged. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Ramaswamy to include damaged part identifiers in the training data. Using a variety of images improves the efficiency of processing [Ramaswamy, pg. 18, para 0200] and the accuracy, reliability, consistency [Ramaswamy, pg. 22, para 0233], and versatility [Ramaswamy, pg. 33, para 0350] of character recognition.

Regarding Claim 8, Bizouarn as modified discloses the limitations of Claim 7. Bizouarn does not explicitly disclose: (Currently Amended) wherein the receiving of the image data includes capturing an image of the aircraft component using a camera. However, see [Bizouarn, FIG. 1 and pg. 3, para 0025], which shows "a schematic view of a system for detecting an attachment of an agricultural device," [Bizouarn, pg. 3, para 0029], which explains the system includes an optical reading device to read the information of the working device from the optically readable character, and [Bizouarn, pg. 4, para 0032], where the optical reading device can be a smartphone camera. However, Asai teaches: (Currently Amended) wherein the receiving of the image data includes capturing an image of the aircraft component using a camera. See [Asai, pg. 1, paras 0017-0021], which explain that the system includes imaging devices for capturing an image of aircraft maintenance inspection targets, and [Asai, pg. 3, para 0053]. As stated in MPEP § 2143(I)(A), it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate the imaging system of Bizouarn with the imaging system of Asai into a combined system. Doing so would be technically feasible, with no inventive effort. Furthermore, the resulting imaging system would yield predictable results, where the components of Bizouarn and the components of Asai would be expected to work as intended, with each component in the combined system performing the same. Further, modifying Bizouarn with Asai to include imaging an aircraft component supports post-flight inspections and allows the mechanic to identify and repair issues with the aircraft and aircraft engine which can be incurred during flight [Asai, pg. 1, paras 0003-0006].

Regarding Claim 13, Bizouarn as modified discloses the limitations of Claim 7.
Bizouarn does not explicitly disclose: (Original) […] wherein the causing of the aircraft component to be serviced includes causing a replacement or a maintenance of the aircraft component. However, [Bizouarn, pg. 4, para 0030] does explain that the readable character contains information about the agricultural component to support maintenance, repair, and replacement: "The sign 5 contains information which identifies the working device 4. This information can comprise, for example: device-specific details for the operation of the working device 4 and/or device-specific details for maintenance, repair and/or replacement parts of the working device 4, and/or individually reproducible details for the working device 4."

However, Asai teaches: (Original) […] wherein the causing of the aircraft component to be serviced includes causing a replacement or a maintenance of the aircraft component. See [Asai, pg. 3, para 0058], which explains that the information can trigger replacing broken components: "With regard to components that are not to be inspected among the components of the aircraft 10, the number of components tends to increase when the components are given a break-proof protection design or when redundancy is ensured in case the components break. However, by applying the airframe maintenance-inspection system 1 according to this embodiment to such components, as mentioned above, the components can be constantly inspected for whether they are broken, and can be simply replaced if they are broken, whereby the protection design can be simplified and redundancy is not to be ensured."

As stated in MPEP § 2143(I)(A), it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate the inspection system of Bizouarn with the inspection system of Asai into a combined system. Doing so would be technically feasible, with no inventive effort. Furthermore, the resulting imaging system would yield predictable results, where the components of Bizouarn and the components of Asai would be expected to work as intended, with each component in the combined system performing the same. Further, modifying Bizouarn with Asai to include triggering a replacement of an aircraft component supports post-flight inspections and allows the mechanic to identify and repair issues with the aircraft and aircraft engine which can be incurred during flight [Asai, pg. 1, paras 0003-0006], which allows for immediately addressing issues and for post-flight inspection to be performed quickly and efficiently [Asai, pgs. 2-3, para 0044]. Doing so also allows the design of the aircraft to be simplified by minimizing protective designs and redundancy [Asai, pg. 3, para 0058].

Regarding Claim 14, Bizouarn discloses: (Currently Amended) A method of performing […] recognition of a part identifier […] component […], comprising: receiving image data of an image of [[an]] a part identifier on the […] component […]. See again [Bizouarn, FIG. 1 and pg. 3, para 0025], which shows "a schematic view of a system for detecting an attachment." Also see again [Bizouarn, pg. 3, para 0029], which explains the system includes an optical reading device to read the information of the working device from the optically readable character, and [Bizouarn, pg. 4, para 0032], which explains that the optical reading device can be a smartphone camera.
Bizouarn does not disclose: A method of performing optical character recognition of a part identifier on an aircraft component of an aircraft engine, comprising: receiving image data of an image of a part identifier on the aircraft component, the part identifier composed of a series of characters; performing optical character recognition on the image data to obtain a series of characters of the part identifier by feeding the image data to a trained model, the trained model having been trained using machine learning and training data, the training data including image data sets associated with part identifier sets, the part identifier sets including partially damaged part identifiers; and displaying the series of characters on a display.

However, Asai teaches: A method of performing […] recognition of […] an aircraft component of an aircraft engine, comprising: receiving image data of an image of […] the aircraft component. See [Asai, pg. 1, paras 0017-0021], which explain that the system includes imaging devices for capturing an image of maintenance inspection targets, and [Asai, pg. 3, para 0053], which explains that this can include the aircraft engine. As stated in MPEP § 2143(I)(A), it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate the imaging and identification system of Bizouarn with the imaging and inspection system of Asai into a combined system. Doing so would be technically feasible, with no inventive effort. Furthermore, the resulting imaging system would yield predictable results, where the components of Bizouarn and the components of Asai would be expected to work as intended, with each component in the combined system performing the same. Further, modifying Bizouarn with Asai to include imaging an aircraft component supports post-flight inspections and allows the mechanic to identify and repair issues with the aircraft and aircraft engine which can be incurred during flight [Asai, pg. 1, paras 0003-0006].

However, Chopra teaches: A method of performing optical character recognition of a part identifier […], comprising: receiving image data of an image of a part identifier […], the part identifier composed of a series of characters; performing optical character recognition on the image data to obtain a series of characters of the part identifier by feeding the image data to a trained model, the trained model having been trained using machine learning and training data, the training data including image data sets associated with part identifier sets; and displaying the series of characters on a display. See again [Chopra, pg. 3, para 0032], which describes a machine-learning model that consumes an image of an aircraft component for analysis. Also see again [Chopra, pg. 4, para 0043], which further explains that the component can be marked with a label, including text, which can be imaged and analyzed using an optical character recognition model executed by a component verification device, and [Chopra, pgs. 3-4, para 0038], which explains that the images can be generated using a camera coupled to the component verification device. Finally, see again [Chopra, pg. 5, para 0052], which explains that the machine learning model can be trained to recognize patterns; [Chopra, pg. 6, para 0059], which explains that, once trained, the machine learning model can be deployed to the component verification system; and [Chopra, pg. 6, para 0060], which explains that the deployed model analyzes real-time input for detection and identification by applying the trained patterns.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Chopra to use a series of characters and optical character recognition, using machine learning, for the part identifier. Doing so allows the components to be inspected in accordance with the appropriate standards [Chopra, pg. 3, para 0035], where the execution of the image-processing machine-learning model improves efficiency and quality of the verification and validation [Chopra, pg. 3, para 0033].

However, Ramaswamy teaches: […] the part identifier sets including partially damaged part identifiers […]. See again [Ramaswamy, para 0044], which explains that an Optical Character Recognition (OCR) engine is used to process document images and the OCR-free model is trained using a training dataset, and [Ramaswamy, pg. 33, para 0350], which explains that the OCR-free model uses various types of images in its training dataset, including image and text data where the images of text include portions that are obscured or damaged. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Ramaswamy to include damaged part identifiers in the training data. Using a variety of images improves the efficiency of processing [Ramaswamy, pg. 18, para 0200] and the accuracy, reliability, consistency [Ramaswamy, pg. 22, para 0233], and versatility [Ramaswamy, pg. 33, para 0350] of character recognition.

Regarding Claim 15, Bizouarn as modified discloses the limitations of Claim 14. Bizouarn does not explicitly disclose: (Currently Amended) […] wherein the receiving of the image data includes capturing an image of the aircraft component using a camera. However, see [Bizouarn, FIG. 1 and pg. 3, para 0025], which shows "a schematic view of a system for detecting an attachment of an agricultural device," [Bizouarn, pg. 3, para 0029], which explains the system includes an optical reading device to read the information of the working device from the optically readable character, and [Bizouarn, pg. 4, para 0032], where the optical reading device can be a smartphone camera. However, Asai teaches: (Currently Amended) […] wherein the receiving of the image data includes capturing an image of the aircraft component using a camera. See [Asai, pg. 1, paras 0017-0021], which explain that the system includes imaging devices for capturing an image of aircraft maintenance inspection targets, and [Asai, pg. 3, para 0053]. As stated in MPEP § 2143(I)(A), it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate the imaging system of Bizouarn with the imaging system of Asai into a combined system. Doing so would be technically feasible, with no inventive effort. Furthermore, the resulting imaging system would yield predictable results, where the components of Bizouarn and the components of Asai would be expected to work as intended, with each component in the combined system performing the same. Further, modifying Bizouarn with Asai to include imaging an aircraft component supports post-flight inspections and allows the mechanic to identify and repair issues with the aircraft and aircraft engine which can be incurred during flight [Asai, pg. 1, paras 0003-0006].

Regarding Claim 20, Bizouarn as modified discloses the limitations of Claim 14.
Regarding Claim 20, Bizouarn as modified discloses the limitations of Claim 14.

Bizouarn discloses: (Original) […] comprising determining that the […] component needs servicing by obtaining information about the […] component using […] the part identifier. See again [Bizouarn, pg. 3, para 0029], which explains that the system includes an optical reading device to read the information of the working device from the optically readable character, and [Bizouarn, pg. 4, para 0030], which explains that the optically readable character can also be used to store information about the device, such as maintenance details for repair and replacement. See again [Bizouarn, pg. 4, para 0032], which explains that the camera can read the character and extract the information.

However, Asai teaches: […] comprising determining that the aircraft component needs servicing by obtaining information about the aircraft component […]. See again [Asai, pg. 1, paras 0017-0021], which explain that the system includes imaging devices for capturing an image of aircraft maintenance inspection targets. See also [Asai, pgs. 2-3, paras 0043-0044], which explain that, using the images, a problem can be identified and repaired.

As stated in MPEP § 2143(I)(A), it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to incorporate the imaging and identification system of Bizouarn with the imaging and inspection system of Asai into a combined system. Doing so would be technically feasible, with no inventive effort. Furthermore, the resulting imaging system would yield predictable results, where the components of Bizouarn and the components of Asai would be expected to work as intended, with each component in the combined system performing the same function. Further, modifying Bizouarn with Asai to include imaging an aircraft component supports post-flight inspections and allows the mechanic to identify and repair issues with the aircraft and aircraft engine which can be incurred during flight [Asai, pg. 1, paras 0003-0006]. This allows for immediately addressing issues and for post-flight inspection to be performed quickly and efficiently [Asai, pgs. 2-3, para 0044].

However, Chopra teaches: […] obtaining information about […] using the series of characters of the part identifier. See again [Chopra, pg. 3, para 0032], which describes a machine-learning model that consumes an image of an aircraft component for analysis. Also see again [Chopra, pg. 4, para 0043], which further explains that the component can be marked with a label, including text, which can be imaged and analyzed using an optical character recognition model executed by a component verification device, and [Chopra, pgs. 3-4, para 0038], which explains that the images can be generated using a camera coupled to the component verification device. Finally, see again [Chopra, pg. 5, para 0052], which explains that the machine learning model can be trained to recognize patterns; [Chopra, pg. 6, para 0059], once trained, the machine learning model can be deployed to the component verification system; and [Chopra, pg. 6, para 0060], to analyze real-time input for detection and identification by applying the trained patterns.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Chopra to use a series of characters and optical character recognition, using machine learning, for the part identifier. Doing so allows the components to be inspected in accordance with the appropriate standards [Chopra, pg. 3, para 0035], where the execution of the image-processing machine-learning model improves the efficiency and quality of the verification and validation [Chopra, pg. 3, para 0033].
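The Claim 20 limitation is, functionally, a record lookup keyed by the recognized identifier followed by a servicing decision. The sketch below illustrates that pattern with invented record fields and thresholds; none of this data comes from the references.

```python
# Hypothetical servicing decision keyed by a recognized part identifier.
# The record contents and the interval rule are illustration-only assumptions.
MAINTENANCE_RECORDS = {
    "PN-3745-A17": {"hours_in_service": 512, "service_interval_hours": 500},
}

def needs_servicing(part_identifier: str) -> bool:
    """Obtain information about the component using the part identifier,
    then decide whether it needs servicing."""
    record = MAINTENANCE_RECORDS.get(part_identifier)
    if record is None:
        raise KeyError(f"unknown part identifier: {part_identifier}")
    # Servicing is due once accumulated hours reach the service interval.
    return record["hours_in_service"] >= record["service_interval_hours"]

if __name__ == "__main__":
    print(needs_servicing("PN-3745-A17"))  # True: 512 >= 500
```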
Claims 6, 11-12, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Bizouarn, in view of Asai, Chopra, and Ramaswamy, and further in view of Olwal et al., PG Pub US-2022/0165054-A1 (herein "Olwal") and Truitner, Patent No. US-11/907,841-B1 (herein "Truitner").

Regarding Claim 6, Bizouarn as modified discloses the limitations of Claim 1.

Bizouarn does not disclose: (Currently Amended) […] a light operatively connected to the controller, the non-transitory computer-readable medium further having instructions executable by the processing unit to: determine that a luminosity of the image is below a minimum threshold; and cause an increase in a power supplied to the light.

However, Olwal teaches: the non-transitory computer-readable medium further having instructions executable by the processing unit to: determine that a luminosity of the image is below a minimum threshold. See [Olwal, pg. 15, para 0132], which explains that the device includes a sensor to determine if the lighting condition is acceptable for activating the imaging sensor: "The device 1502 includes a lighting condition sensor 1544 configured to estimate a lighting condition for capturing image data. In some examples, the lighting condition sensor 1544 includes an ambient light sensor that detects the amount of ambient light that is present, which can be used to ensure that the image frame 1529a is captured with a desired signal-to-noise ratio (SNR). However, the lighting condition sensor 1544 may include other types of photometric (or colorimeter) sensors. […] The sensor trigger 1571 may receive lighting condition information from the lighting condition sensor 1544 and motion information from the motion sensor 1546, and, if the lighting condition information and the motion information indicate that the conditions are acceptable to obtain an image frame 1529a, the sensor trigger 1571 may activate the imaging sensor 1542a to capture an image frame 1529a." See also [Olwal, pg. 15, para 0134], which further explains that the image will not be captured if the lighting condition is below a threshold: "In some examples, the motion information and/or the lighting condition information is used to determine whether to transmit the image frame 1529b. […] If the lighting condition information indicates that the lighting condition is below a threshold level, the image frame 1529b may not be transmitted, and the microcontroller 1506 may activate the imaging sensor 1542b to capture another image frame."

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Olwal to determine if the lighting conditions are acceptable. Doing so allows the image to have a desired signal-to-noise ratio, or image quality [Olwal, pg. 15, para 0132], and ensures only acceptable images are transmitted [Olwal, pg. 15, para 0134].

However, Truitner teaches: a light operatively connected to the controller, […]: […]; and cause an increase in a power supplied to the light. See [Truitner, cols 9-10, lines 49-67 and 1-7], which explains that the imaging and recognition system increases light intensity if the image is dark: "After determining and assigning the general classification to an item, any needed adjustments are automatically made by the camera system 22 to improve the image sets of the item. These adjustments may include adjustments of the lights 44, […]. The following may be examples of adjustments that may be needed: a. If the item is reflective (such as eyewear), the lighting adjusts, which means commands may be given for key lights 44 to physically move (by sliding along the rails 43) from the front of the item to the sides of the item to reduce light reflection; b. If an item is dark colored (absorbs light), lighting intensity increases to extract more detail from the item. Lights 44 are preferably equipped with adjustable apertures and luminosity features; or c. Side lighting automatically adjusts for the size of the item, so that lights 44 are always positioned in front of, or on the sides of, an item, rather than directly overhead."

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Truitner to include lights whose intensity can be increased. Doing so allows adjustments when an item is dark or reflective to extract more detail from the item and maintain image quality [Truitner, cols 9-10, lines 59-67 and 104].
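The lighting limitation running through Claims 6, 11-12, and 18-19 is a simple feedback step: read a luminosity signal from a sensor, compare it against a minimum threshold, and increase the power supplied to the light when the reading falls short. The sketch below is a generic illustration of that logic only; the sensor and light interfaces, threshold value, and power step are all invented placeholders, not the claimed system or any reference's design.

```python
# Generic sketch of the claimed luminosity check (all names hypothetical).
MIN_LUMINOSITY = 120.0   # example minimum threshold, arbitrary units
POWER_STEP = 0.1         # fraction of full power added per adjustment

class LuminositySensor:
    """Stand-in for a camera sensor generating a signal indicative of luminosity."""
    def read(self) -> float:
        return 95.0  # fixed reading so the sketch runs without hardware

class Light:
    """Stand-in for a controller-connected light with adjustable power."""
    def __init__(self) -> None:
        self.power = 0.5  # current fraction of full power

    def increase_power(self, step: float) -> None:
        self.power = min(1.0, self.power + step)  # clamp at full power

def adjust_lighting(sensor: LuminositySensor, light: Light) -> bool:
    """Return True if the light's power was increased."""
    if sensor.read() < MIN_LUMINOSITY:    # luminosity below minimum threshold
        light.increase_power(POWER_STEP)  # cause an increase in supplied power
        return True
    return False

if __name__ == "__main__":
    light = Light()
    adjust_lighting(LuminositySensor(), light)
    print(light.power)  # 0.6 after one adjustment
```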
Regarding Claim 11, Bizouarn as modified discloses the limitations of Claim 8.

Bizouarn does not disclose: (Original) […] determining that a luminosity of the aircraft component is below a minimum luminosity threshold; and increasing power supplied to a light located in a vicinity of the aircraft component.

However, Olwal teaches: (Original) […] determining that a luminosity of the […] component is below a minimum luminosity threshold. See again [Olwal, pg. 15, para 0132], which explains that the device includes a sensor to determine if the lighting condition is acceptable for activating the imaging sensor, and [Olwal, pg. 15, para 0134], which further explains that the image will not be captured if the lighting condition is below a threshold.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Olwal to determine if the lighting conditions are acceptable. Doing so allows the image to have a desired signal-to-noise ratio, or image quality [Olwal, pg. 15, para 0132], and ensures only acceptable images are transmitted [Olwal, pg. 15, para 0134].

However, Truitner teaches: increasing power supplied to a light […]. See [Truitner, cols 9-10, lines 49-67 and 1-7], which explains that the imaging and recognition system increases light intensity if the image is dark.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Truitner to include lights whose intensity can be increased. Doing so allows adjustments when an item is dark or reflective to extract more detail from the item and maintain image quality [Truitner, cols 9-10, lines 59-67 and 104].

However, Asai teaches: […] luminosity of the aircraft component […]; […] a light located in a vicinity of the aircraft component. See [Asai, pg. 2, paras 0023-0024], which explain that a light is located with the imaging device, near the aircraft component being imaged: "[0023] Furthermore, it is not rare that the maintenance-inspection target A is located in a dark place inside the airframe, as in the case of the stored wheel A1. [0024] Therefore, illuminators 21, such as flash devices, may be disposed, as illustrated in FIG. 2, such that the maintenance-inspection target A is to be illuminated with the illuminators 21 when the imaging devices 2 are to capture images."

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Asai to include a light for an aircraft component. Doing so allows for lighting an aircraft component that is in a dark place for imaging, which is not rare for aircraft maintenance-inspection targets [Asai, pg. 2, paras 0023-0024].
Regarding Claim 12, Bizouarn as modified discloses the limitations of Claim 11.

Bizouarn does not disclose: (Original) […] wherein the determining that the luminosity is below the minimum luminosity threshold includes receiving a signal from a sensor of the camera, the sensor configured for generating a signal indicative of the luminosity.

However, Olwal teaches: (Original) […] wherein the determining that the luminosity is below the minimum luminosity threshold includes receiving a signal from a sensor of the camera, the sensor configured for generating a signal indicative of the luminosity. See again [Olwal, pg. 15, para 0132], which explains that the device includes a sensor to determine if the lighting condition is acceptable for activating the imaging sensor, and [Olwal, pg. 15, para 0134], which further explains that the image will not be captured if the lighting condition is below a threshold.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Olwal to use a sensor to determine if the lighting conditions are acceptable. Doing so allows the system to detect the light and determine a signal-to-noise ratio to ensure that the ratio, or quality, is desirable [Olwal, pg. 15, para 0132] and further ensures that only acceptable images are transmitted [Olwal, pg. 15, para 0134].

Regarding Claim 18, Bizouarn as modified discloses the limitations of Claim 14.

Bizouarn does not disclose: (Original) […] determining that a luminosity of the aircraft component is below a minimum luminosity threshold; and increasing power supplied to a light located in a vicinity of the aircraft component.

However, Olwal teaches: (Original) […] determining that a luminosity of the […] component is below a minimum luminosity threshold. See again [Olwal, pg. 15, para 0132], which explains that the device includes a sensor to determine if the lighting condition is acceptable for activating the imaging sensor, and [Olwal, pg. 15, para 0134], which further explains that the image will not be captured if the lighting condition is below a threshold.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Olwal to determine if the lighting conditions are acceptable. Doing so allows the image to have a desired signal-to-noise ratio, or image quality [Olwal, pg. 15, para 0132], and ensures only acceptable images are transmitted [Olwal, pg. 15, para 0134].

However, Truitner teaches: increasing power supplied to a light […]. See [Truitner, cols 9-10, lines 49-67 and 1-7], which explains that the imaging and recognition system increases light intensity if the image is dark.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Truitner to include lights whose intensity can be increased. Doing so allows adjustments when an item is dark or reflective to extract more detail from the item and maintain image quality [Truitner, cols 9-10, lines 59-67 and 104].

However, Asai teaches: […] luminosity of the aircraft component […]; […] a light located in a vicinity of the aircraft component. See again [Asai, pg. 2, paras 0023-0024], which explain that a light is located with the imaging device, near the aircraft component being imaged.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Asai to include a light for an aircraft component. Doing so allows for lighting an aircraft component that is in a dark place for imaging, which is not rare for aircraft maintenance-inspection targets [Asai, pg. 2, paras 0023-0024].
Regarding Claim 19, Bizouarn as modified discloses the limitations of Claim 18.

Bizouarn does not disclose: (Original) […] wherein the determining that the luminosity is below the minimum luminosity threshold includes receiving a signal from a sensor, the sensor configured for generating a signal indicative of the luminosity.

However, Olwal teaches: (Original) […] wherein the determining that the luminosity is below the minimum luminosity threshold includes receiving a signal from a sensor, the sensor configured for generating a signal indicative of the luminosity. See again [Olwal, pg. 15, para 0132], which explains that the device includes a sensor to determine if the lighting condition is acceptable for activating the imaging sensor, and [Olwal, pg. 15, para 0134], which further explains that the image will not be captured if the lighting condition is below a threshold.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Olwal to use a sensor to determine if the lighting conditions are acceptable. Doing so allows the system to detect the light and determine a signal-to-noise ratio to ensure that the ratio, or quality, is desirable [Olwal, pg. 15, para 0132] and further ensures that only acceptable images are transmitted [Olwal, pg. 15, para 0134].

Claims 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bizouarn, in view of Asai and Chopra, and further in view of Martet Renaud et al., FR-3098252-A1 (herein "Martet Renaud").

Regarding Claim 9, Bizouarn as modified discloses the limitations of Claim 8.

Bizouarn does not disclose: (Currently Amended) […] wherein the […] is an airfoil.

However, Martet Renaud teaches: (Currently Amended) […] wherein the […] is an airfoil. See [Martet Renaud, pg. 4, para 0016], which explains that "The invention thus makes it possible to propose a simple, lightweight and efficient inspection device for continuous 360° blade inspection," and [Martet Renaud, pgs. 5-6, para 0027], which explains that the blades are inspected using a camera: "The invention also relates to a method for inspecting blades in a turbomachine by means of a wireless inspection device according to one of the features of the invention, and said method successively comprises the steps of: a) inserting said device through an orifice of a casing arranged around said blades, b) fixing the support on the blade of a first stage of blades and following a predetermined radial position relative to a longitudinal axis of the turbomachine, c) rotating said first stage of blades and acquiring images by the camera of all the blades of a second stage of blades adjacent to said first stage, d) remotely transmitting said images acquired by the transmission system."

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Martet Renaud to include capturing imaging data of an aircraft turbine blade. Doing so provides a non-destructive method for imaging the turbine, which is a necessary part of maintenance [Martet Renaud, pg. 2, para 0006], by providing a wireless inspection method that allows inaccessible elements of the turbine to be visualized [Martet Renaud, pg. 3, para 0011].
Regarding Claim 16, Bizouarn as modified discloses the limitations of Claim 15.

Bizouarn does not disclose: (Currently Amended) […] wherein the […] is an airfoil.

However, Martet Renaud teaches: (Currently Amended) […] wherein the […] is an airfoil. See again [Martet Renaud, pg. 4, para 0016], which explains that "The invention thus makes it possible to propose a simple, lightweight and efficient inspection device for continuous 360° blade inspection," and [Martet Renaud, pgs. 5-6, para 0027], which explains that the blades are inspected using a camera.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Bizouarn with Martet Renaud to include capturing imaging data of an aircraft turbine blade. Doing so provides a non-destructive method for imaging the turbine, which is a necessary part of maintenance [Martet Renaud, pg. 2, para 0006], by providing a wireless inspection method that allows inaccessible elements of the turbine to be visualized [Martet Renaud, pg. 3, para 0011].
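Martet Renaud's step (c) — rotating one blade stage while the camera images each blade of the adjacent stage — is, in control terms, a capture loop over rotation increments. The sketch below illustrates that pattern only; the rotation and capture interfaces and the blade count are invented placeholders, not the reference's actual control scheme.

```python
# Hypothetical capture loop over a rotating blade stage (names invented).
from typing import Callable, List

def inspect_blade_stage(
    blade_count: int,
    rotate_one_blade: Callable[[], None],
    capture_image: Callable[[], bytes],
) -> List[bytes]:
    """Advance the stage one blade pitch at a time, imaging each blade."""
    images = []
    for _ in range(blade_count):
        images.append(capture_image())  # image the blade currently in view
        rotate_one_blade()              # advance the stage by one blade pitch
    return images

if __name__ == "__main__":
    # Stub callables so the sketch runs without hardware.
    frames = inspect_blade_stage(
        blade_count=24,
        rotate_one_blade=lambda: None,
        capture_image=lambda: b"<frame>",
    )
    print(len(frames))  # 24 images, one per blade
```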
Allowable Subject Matter

Claims 2-5, 10, and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. As allowable subject matter has been indicated, applicant's reply must either comply with all formal requirements or specifically traverse each requirement not complied with. See 37 CFR 1.111(b) and MPEP § 707.07(a).

The following is a statement of reasons for the indication of allowable subject matter. Claim 2 would be allowable for disclosing a support mechanism that supports the assembly and camera and, further, moves, allowing the assembly and camera to move relative to one another. The referenced prior art and a thorough search of the prior art do not explicitly disclose or teach a support mechanism for both an assembly and a camera that also allows the assembly and camera to move relative to each other. Claim 3 would be allowable because it depends on Claim 2 and for disclosing the support of Claim 2, and further disclosing that the support rotates coaxially to the central axis of the assembly. The referenced prior art and a thorough search of the prior art do not explicitly disclose or teach a support that moves, or specifically rotates, coaxially to the central axis of the assembly. Claim 4 would be allowable because it depends on Claim 3 and for disclosing the support of Claim 3, and further disclosing that the support is connected to a motor that can be controlled by a user to rotate the assembly. The referenced prior art and a thorough search of the prior art do not explicitly disclose or teach a support that moves, and further, moves by a motor that is controlled by a user. Finally, Claim 5 would be allowable because it depends on Claim 2 and for disclosing the support of Claim 2, and further disclosing that the support moves the assembly or camera, relative to one another, to place the part identifier in the line of sight of the camera. As stated above for Claim 2, the referenced prior art and a thorough search of the prior art do not explicitly disclose or teach a support that moves the assembly and camera relative to each other. Similarly, Claims 10 and 17 would be allowable for the same reasons as described above, with respect to an assembly of airfoils that are moved relative to a camera.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIN MARIE HARTMANN, whose telephone number is (571) 272-5309. The examiner can normally be reached M-F 7-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kito Robinson, can be reached at (571) 270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/E.M.H./
Examiner, Art Unit 3664

/KITO R ROBINSON/
Supervisory Patent Examiner, Art Unit 3664

Prosecution Timeline

May 29, 2024
Application Filed
Oct 11, 2025
Non-Final Rejection — §101, §103
Jan 15, 2026
Response Filed
Mar 06, 2026
Final Rejection — §101, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
99%
With Interview (+50.0%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
