Prosecution Insights
Last updated: April 19, 2026
Application No. 19/022,768

PORTABLE SYSTEMS AND METHODS FOR MONITORING MAINTENANCE EVENTS FOR AIRCRAFTS

Non-Final OA — §101, §103
Filed
Jan 15, 2025
Examiner
BUSCH, CHRISTOPHER CONRAD
Art Unit
3621
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
The Boeing Company
OA Round
1 (Non-Final)
Grant Probability: 29% (At Risk)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 50%

Examiner Intelligence

Career Allow Rate: 29% (102 granted / 353 resolved; -23.1% vs TC avg)
Interview Lift: +20.9% for resolved cases with interview
Avg Prosecution: 3y 4m (typical timeline)
Total Applications: 387 across all art units (34 currently pending)
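The headline probabilities follow directly from the examiner's career counts shown above; a minimal sanity check in Python (treating the interview lift as additive over the base rate is an assumption, inferred from how the dashboard pairs the 29% base with the 50% with-interview figure):

```python
# Career counts shown above for this examiner
granted, resolved = 102, 353

allow_rate = granted / resolved   # ≈ 0.289, displayed as 29%
interview_lift = 0.209            # "+20.9% Interview Lift"

# Assumption: the "With Interview" figure is the base rate plus the lift
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1%}")   # 28.9%
print(f"With interview:    {with_interview:.1%}")  # 49.8%, displayed as 50%
```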

Statute-Specific Performance

§101: 41.9% (+1.9% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center averages are estimates; based on career data from 353 resolved cases.
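The per-statute deltas are internally consistent: subtracting each "vs TC avg" delta from its examiner rate yields the same 40.0% in every row, suggesting the dashboard compares against a single ~40% Tech Center average estimate. A short check (figures taken from the table above):

```python
# (examiner rate, delta vs Tech Center average) per statute, from the table above
stats = {
    "§101": (41.9, +1.9),
    "§103": (35.9, -4.1),
    "§102": (6.4, -33.6),
    "§112": (8.3, -31.7),
}

# Implied TC average = examiner rate - delta; all four rows agree
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% TC average estimate
```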

Office Action

Rejections under §101 and §103
DETAILED ACTION

Status of the Claims

This office action is submitted in response to the application filed on 1/15/25. Examiner notes that this application claims priority from provisional application 63/663,446. Examiner further notes Applicant’s priority date of 6/24/24, which stems from the aforementioned provisional application. Claims 1-20 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Independent claims 1, 13, and 16, in part, describe a method comprising: accessing technical specifications associated with a maintenance event, obtaining data associated with actions of an operator or characteristics of an aircraft during the maintenance event, and monitoring the maintenance event based on that data and the technical specifications. As such, the invention is directed to the abstract idea of collecting information, analyzing it against rules or thresholds, and making a determination regarding step completion, which are activities that can be performed mentally (or with pen and paper) by a human supervisor given the same information, and therefore recite a mental process and method of organizing human activity. Therefore, under Step 2A, Prong One, the claims recite a judicial exception.
Next, the aforementioned claims recite additional elements including “a control unit including one or more processors” and “one or more sensors,” configured to perform their conventional functions of receiving, processing, and outputting information, i.e., to access the technical specifications, obtain sensor data, and monitor the maintenance event based on that information. Dependent claims 4, 8, 15, 18, and 20 further describe an “output device” for transmitting data to a user. These limitations are recited at a high level of generality, and appear to be nothing more than generic computer components. Claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 134 S. Ct. at 2358, 110 USPQ2d at 1983. See also 134 S. Ct. at 2389, 110 USPQ2d at 1984.

Furthermore, the recitation that the data includes “two or more different types of modalities” (e.g., video, text, audio, image, or touch) does not change the character of the claim, because it merely specifies the types of information being collected and evaluated, without reciting any particular improvement in how the data is sensed, processed, or used by the system.

Dependent claim 12 further describes the aforementioned control unit as an AI or ML system. Examiner notes that the use of AI or ML in this instance amounts to mere instructions to implement the abstract idea on a computer, and merely uses a computer as a tool to perform the abstract idea. See MPEP 2106.05(f).
Furthermore, looking at the elements individually and in combination, under Step 2A, Prong Two, the claims as a whole do not integrate the judicial exception into a practical application because they fail to: improve the functioning of a computer or a technical field, apply the judicial exception in the treatment or prophylaxis of a disease, apply the judicial exception with a particular machine, effect a transformation or reduction of a particular article to a different state or thing, or apply the judicial exception beyond generally linking the use of the judicial exception to a particular technological environment. Rather, the claims merely use a computer as a tool to perform the abstract idea(s), and/or add insignificant extra-solution activity to the judicial exception, and/or generally link the use of the judicial exception to a particular technological environment (e.g., a piece of equipment such as a camera, drone, headphone, etc.).

Next, under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. Simply put, as noted above, there is no indication that the combination of elements improves the functioning of a computer (or any other technology), and their collective functions are merely facilitated by generic computer implementation.

Additionally, pursuant to the requirement under Berkheimer, the following citations are provided to demonstrate that the additional elements, identified as extra-solution activity, amount to activities that are well-understood, routine, and conventional. See MPEP 2106.05(d).
Receiving or transmitting data over a network, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362; OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). Thus, taken alone and in combination, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea), and are ineligible under 35 USC 101. Claims 2–12 and 14–20 depend from claim 1 or recite the same concept in method or wearable form (claims 13 and 16–20) and merely add further limitations that represent field-of-use restrictions (e.g., aircraft maintenance or inspection), types of data (modalities), or generic post-solution activity (e.g., communicating notifications or instructions), none of which are sufficient to render the claims patent-eligible. Therefore, claims 1-20 are not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11, 13, and 16-18 are rejected under 35 USC 103 as being unpatentable over Soldani et al. (US 10049111 B2) in view of Kathirvel et al. (US 10065750 B2).
Claims 1 and 13: Soldani discloses a system and method comprising: a control unit including one or more processors configured to access one or more technical specifications associated with a maintenance event of an aircraft, the maintenance event including one or more steps configured to be completed by an operator to complete the maintenance event (Fig. 1; col. 3, ll. 45–67; col. 4, ll. 1–9; col. 5, ll. 15–24). Soldani’s display selection means 130 (including display selection module 132) is processor-based and accesses a database storing “maintenance assistance data” associated with each aircraft equipment, where the maintenance assistance data “may also comprise other data such as: instructions for adjusting or handling the equipment; [and] a list of elementary steps to be performed when a maintenance operation is to be carried out on this equipment, for example elementary steps to be performed when this equipment is to be replaced, moved, tested for correct function, etc.”); one or more sensors configured to obtain data during the maintenance event of the aircraft, the data being associated with one or more of actions of the operator or one or more characteristics of the aircraft during the maintenance event (Fig. 1; col. 4, ll. 35–55; col. 5, ll. 31–44). The portable device 120 includes a camera 121 that “is set up to capture at least one image Im of an external environment notably situated inside an aircraft,” and “in particular, the camera 121 is set up to capture images of an equipment of the aircraft” (col. 4, ll. 35–43). The captured images are transmitted to the display selection means 130, which uses shape recognition means 131 to identify the equipment and determine an equipment identifier (col. 5, ll. 31–44). 
These images are obtained during maintenance operations and inherently depict the operator’s actions relative to the equipment and the configuration/state of the aircraft equipment, i.e., data associated with actions of the operator or characteristics of the aircraft during the maintenance event.); and wherein the control unit is configured to monitor the maintenance event based at least in part on the data obtained by the one or more sensors during the maintenance event and at least one of the one or more technical specifications (Fig. 1; col. 4, ll. 56–67; col. 5, ll. 45–67; col. 6, ll. 1–7. After the camera 121 captures images and the display selection means 130 identifies the equipment, the display selection module 132 selects maintenance assistance data corresponding to the equipment from the database (col. 5, ll. 15–24; col. 5, ll. 31–44). The augmented-reality display means 122 then “display[s], in augmented reality, images … corresponding to these maintenance assistance data,” such that “a user sees a view of the identified equipment, over which the said images are overlaid,” and tracking is performed “so that an image corresponding to the maintenance assistance data remains overlaid on a view of the identified equipment, whatever the movements of the camera” (col. 5, ll. 45–67; col. 6, ll. 1–7)).

Soldani does not appear to explicitly describe a system where “the data includ[es] two or more different types of modalities.” Kathirvel, however, discloses that the data includ[es] two or more different types of modalities (col. 4, ll. 20–28; Abstract; col. 2, ll. 45–67; col. 5, ll. 1–22). Kathirvel describes a wearable device 16 with input devices 46 including “an audio recording device [and] a video recording device” (col. 4, ll. 20–28); and in the Abstract and description (col. 2, ll. 45–67; col. 5, ll.
1–22) Kathirvel describes “receiving, at the wearable device, part data from at least one of a video recording device and an audio recording device of the wearable device” and presenting maintenance information via both a display system and an audio system.

Therefore, it would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to combine this feature of Kathirvel with those of Soldani. One would have been motivated to do this in order to augment Soldani’s portable augmented-reality maintenance assistance system, which already uses image data and technical specifications to guide maintenance, with Kathirvel’s wearable multimodal sensing (video and audio) and connectivity to aircraft and ground systems, thereby providing a portable monitoring system that captures richer context (multiple data modalities) during maintenance while still accessing and applying technical specifications to the ongoing maintenance event. A person of ordinary skill in the art would recognize that incorporating known video and audio recording capabilities from Kathirvel into Soldani’s AR-based maintenance system is a predictable combination to improve hands-free operation, situational awareness, and robustness of monitoring in the aircraft maintenance environment.

Claim 2: Soldani discloses that the maintenance assistance data and guidance images presented in augmented reality can include “text, a three-dimensional animation, [and] a diagram” (col. 5, ll. 15–30; col. 5, ll. 45–67), i.e., text and image modalities, but does not explicitly describe a system wherein “the two or more different types of modalities includes one or more of video, text, audio, image, or touch.” Kathirvel, however, discloses a method wherein the two or more different types of modalities includes one or more of video, text, audio, image, or touch (col. 4, ll. 20–28; Abstract; col. 2, ll. 45–67; col. 5, ll. 1–22.
Kathirvel describes a wearable device 16 with input devices 46 including “an audio recording device [and] a video recording device” (col. 4, ll. 20–28) and describes “receiving, at the wearable device, part data from at least one of a video recording device and an audio recording device of the wearable device” and presenting maintenance information via both a display system and an audio system.). The rationale for combining Kathirvel with Soldani is articulated above and reincorporated herein.

Claim 3: Soldani discloses using image data from a camera and maintenance assistance data (including instructions and “a list of elementary steps to be performed when a maintenance operation is to be carried out on this equipment”) to guide the operator in real time via augmented-reality overlays (col. 4, ll. 35–55; col. 5, ll. 15–24; col. 5, ll. 45–67; col. 6, ll. 1–7), but does not explicitly describe a system wherein “the control unit is configured to examine completion of the one or more steps of the maintenance event by the operator based on each of the two or more different types of modalities of the data.” Kathirvel, however, discloses a method wherein the control unit is configured to examine completion of the one or more steps of the maintenance event by the operator based on each of the two or more different types of modalities of the data (col. 4, ll. 20–28; col. 7, ll. 32–67; col. 8, ll. 1–10. Kathirvel describes a wearable device 16 with input devices 46 including “an audio recording device [and] a video recording device” (col. 4, ll. 20–28) and a maintenance module 60 whose part data determination module “receiv[es] … voice and image data of the part data from at least the one or more input devices which further comprise: a video recording device and an audio recording device” and uses “image processing solutions” and “speech processing solutions” to determine and confirm part data based on aircraft and maintenance information (col. 7, ll. 32–67; col. 8, ll.
1–10). Thus, Kathirvel examines maintenance-related information (part identity/procedure context) using each of the different modalities (audio and video) before confirming and progressing.). The rationale for combining Kathirvel with Soldani is articulated above and reincorporated herein.

Claim 4: Soldani further discloses an output device operably coupled with the control unit, wherein the control unit is configured to communicate with the operator during the maintenance event via the output device (Fig. 1; col. 5, ll. 45–67; col. 6, ll. 1–7. Soldani describes augmented reality display means 122 that receive maintenance assistance data from the display selection means 130 and, in response, display in augmented reality images corresponding to these maintenance assistance data over the view of the identified equipment, thereby communicating information from the control unit to the operator during the maintenance event.).

Claim 5: Soldani further discloses wherein the control unit is configured to communicate one or more instructions to the operator for the operator to complete one or more of the one or more steps of the maintenance event (col. 5, ll. 15–24; col. 5, ll. 45–67. Soldani teaches that the maintenance assistance data include “instructions for adjusting or handling the equipment; [and] a list of elementary steps to be performed when a maintenance operation is to be carried out on this equipment” and that images corresponding to these maintenance assistance data are displayed in augmented reality over the equipment, thereby communicating instructions to the operator for completing the maintenance steps.).

Claim 6: The Soldani/Kathirvel combination discloses those limitations cited above. Kathirvel, however, further discloses a system wherein the control unit is configured to confirm the completion of the one or more steps of the maintenance event (col. 7, ll. 32–67; col. 8, ll. 1–10.
Kathirvel describes a maintenance module 60 with a part data determination module that “confirm[s] … at least a number of a replacement unit from the part data determined by either the image processing or speech processing solutions wherein the confirming is based on at least information which comprise: aircraft information and maintenance information loaded into the maintenance module,” and reports maintenance activity status to computing systems (col. 8, ll. 10–25), thereby confirming that particular maintenance steps/actions have been completed.). The rationale for combining Kathirvel with Soldani is articulated above and reincorporated herein.

Claim 7: The Soldani/Kathirvel combination discloses those limitations cited above. Kathirvel, however, further discloses a system wherein the control unit is configured to identify that at least one of the one or more steps of the maintenance event was completed incorrectly relative to a completion threshold of the at least one of the one or more technical specifications (col. 7, ll. 32–67; col. 8, ll. 1–10. Kathirvel’s maintenance module 60 (including the part data determination module and information processing module) processes voice and image data of part data using speech and image processing, determines part numbers, and then confirms those part numbers “based on at least information which comprise: aircraft information and maintenance information loaded into the maintenance module”. If the determined part data or maintenance activity status does not match the stored aircraft/maintenance information, the system has effectively identified that the corresponding maintenance action/step does not satisfy the maintenance information (i.e., a completion threshold defined by the technical specifications)). The rationale for combining Kathirvel with Soldani is articulated above and reincorporated herein.

Claim 8: The Soldani/Kathirvel combination discloses those limitations cited above.
Kathirvel, however, further discloses a system wherein the control unit is configured to communicate a notification to the operator via the output device responsive to identifying that the at least one of the one or more steps was completed incorrectly (col. 7, ll. 32–67; col. 8, ll. 10–25. Kathirvel describes an information processing module that receives part information and maintenance information from computing systems and “generates user interface data” and “audio notifications” for presentation by the display system and audio system of the wearable device; where part/maintenance data fail confirmation against aircraft and maintenance information, the resulting UI/audio messages notify the user of the discrepancy and required attention.). The rationale for combining Kathirvel with Soldani is articulated above and reincorporated herein.

Claim 9: The Soldani/Kathirvel combination discloses those limitations cited above. Kathirvel, however, further discloses a system wherein the notification is configured to include a corrective action recommendation for the operator to complete in order to remedy the at least one of the one or more steps that was completed incorrectly (col. 7, ll. 32–67; col. 8, ll. 10–25. Kathirvel describes an information processing module that presents part information and “a part option” for selection via the user interface or audio, where the set of part options comprises “to order a new part, to request a maintenance procedure for a part, and to report a maintenance activity status to one or more of the computing systems.” When an inconsistency is detected between part data/maintenance activity and the stored aircraft/maintenance information, these presented options function as corrective action recommendations instructing the operator what to do to remedy the situation.). The rationale for combining Kathirvel with Soldani is articulated above and reincorporated herein.
Claim 10: The Soldani/Kathirvel combination discloses those limitations cited above. Kathirvel, however, further discloses a system wherein the at least one of the one or more steps that is completed incorrectly is a first step, wherein the control unit is configured to communicate the notification to the operator responsive to the operator completing the first step incorrectly and prior to the operator starting a sequential second step (Fig. 3; col. 6, ll. 1–67; col. 7, ll. 1–31; col. 7, ll. 32–67; col. 8, ll. 10–25. Kathirvel describes a maintenance flow in which the wearable device performs login and configuration, loads aircraft and maintenance information, receives and processes part data via audio/video, and “confirm[s] … at least a number of a replacement unit from the part data … wherein the confirming is based on at least information which comprise: aircraft information and maintenance information loaded into the maintenance module” before proceeding, and only after such confirmation does the information processing module present maintenance procedure data and part options and report maintenance activity status. Thus, if a first action/step fails confirmation, the system notifies the user and blocks progression until the issue is resolved.). The rationale for combining Kathirvel with Soldani is articulated above and reincorporated herein.

Claim 11: Soldani further discloses a system wherein the one or more sensors and the control unit are operably coupled with a body, wherein the body is configured to be one or more of worn by or coupled to the operator during the maintenance event (Fig. 1; col. 3, ll. 45–67; col. 4, ll. 1–9.
Soldani describes a portable device 120 implemented as augmented-reality goggles or a helmet, including camera 121 and display selection means 130, that is “held in the hand or carried by the user about his person, for example in the form of goggles,” and worn by a technician inside the aircraft during maintenance, such that the sensors and control unit are operably coupled with the wearable body.).

Claim 16: Soldani discloses a wearable or portable monitoring system, comprising: a body configured to be one or more of worn by or coupled to an operator during a maintenance or inspection event of an aircraft (Fig. 1; col. 3, ll. 45–67; col. 4, ll. 35–43. Soldani discloses a maintenance assistance system 100 comprising a portable device 120 that “can be held in the hand or carried by the user about his person, for example in the form of goggles,” such that the device is worn by or coupled to a technician while performing aircraft maintenance operations.); a control unit including one or more processors operably coupled with the body, the control unit configured to wirelessly access one or more technical specifications associated with one or more of the aircraft or the maintenance or inspection event of the aircraft, wherein one or more steps of the maintenance or inspection event are included in at least one of the one or more technical specifications (Fig. 1; col. 3, ll. 45–67; col. 4, ll. 1–9; col. 5, ll. 15–24.
Soldani discloses a control unit in the form of display selection means 130, which are “calculation means notably comprising a processor” that are in communication (e.g., wireless) with the portable device 120 and access maintenance assistance data stored in a database, where the maintenance assistance data associated with each equipment include “a list of elementary steps to be performed when a maintenance operation is to be carried out on this equipment.”); one or more sensors operably coupled with the body, the one or more sensors configured to obtain data during the maintenance or inspection event, the data being associated with one or more of actions of the operator or one or more characteristics of the aircraft during the maintenance or inspection event (Fig. 1, 4A–4B, 5; col. 4, ll. 35–43; col. 5, ll. 31–44; col. 8, ll. 60–67; col. 9, ll. 1–26; col. 11, ll. 1–32. Soldani discloses that the portable device 120 includes a camera 121 set up to capture images Im of the environment inside the aircraft, “in particular… images of an equipment of the aircraft,” during maintenance operations, and a geolocation module 423 that supplies the position of the portable device relative to the aircraft; these sensors obtain data during maintenance operations that are associated with equipment configuration and with the spatial position of the technician’s device (and thus implicitly with the operator’s actions and aircraft characteristics)); and wherein the control unit is configured to monitor the maintenance or inspection event based at least in part on the data obtained by the one or more sensors during the maintenance or inspection event and the at least one of the one or more technical specifications (Fig. 1, 2A–2B, 4A–4B; col. 4, ll. 56–67; col. 5, ll. 15–24; col. 5, ll. 31–67; col. 6, ll. 1–7; col. 9, ll. 1–26; col. 11, ll. 1–32. 
Soldani discloses that, after the camera 121 captures images and the geolocation module supplies the position of the portable device, the display selection means 130 (including shape recognition means 131 and display selection module 132) identify the equipment present in the images, determine a useful identifier, retrieve maintenance assistance data and lists of elementary steps associated with the identified equipment from the database, and select and send maintenance assistance and alert/guidance data to the augmented-reality display means 122 based on the identified equipment and the current position relative to equipment of interest and positions of interest. Soldani further describes classifying maintenance operations and identifiers of interest on the basis of positions and a model of the aircraft so that a user does not begin a new maintenance operation until a previously begun operation has been completed, selecting alert data when the portable device is near an equipment of interest, and selecting guidance data to direct the user to the next equipment of interest, thereby tracking progress and status of the maintenance event relative to the planned operations and associated maintenance assistance data. (Fig. 4A–4B, 6–7; col. 9, ll. 60–67; col. 10, ll. 1–30; col. 11, ll. 1–32; col. 12, ll. 48–67.)). Soldani does not explicitly describe a system in which “the data includ[es] two or more different types of modalities” in the sense of different information modalities such as video, audio, text, and images. Kathirvel, however, discloses a system in which the data includ[es] two or more different types of modalities (Abstract; col. 2, ll. 45–67; col. 4, ll. 20–28; col. 5, ll. 1–22. 
Kathirvel discloses a wearable device 16 having input devices 46 that include “an audio recording device [and] a video recording device,” and describes “receiving, at the wearable device, part data from at least one of a video recording device and an audio recording device of the wearable device” and using such multimodal data (video and audio) while presenting maintenance information via both a display system and an audio system.). The rationale for combining Kathirvel with Soldani is articulated above and reincorporated herein.

Claim 17: Soldani discloses the wearable or portable monitoring system, wherein the control unit is configured to examine completion of the one or more steps of the maintenance or inspection event by the operator based on … the data (Fig. 2B, 4B; col. 5, lines 26–48; col. 9, lines 1–26, 48–60, 60 – col. 10, line 30; col. 11, lines 1–32; col. 12, lines 48–67. The display selection means 130/430 use data obtained from camera 121 and geolocation module 423/523 together with step-based maintenance assistance data extracted from request Re to control when guidance and assistance are provided and to prevent a user from beginning a new maintenance operation until a previously begun operation has been completed, thereby examining completion of maintenance steps based on sensor data and the technical step list.) Soldani does not explicitly disclose that completion is examined “based on each of the two or more different types of modalities of the data.” Kathirvel, however, discloses a system in which the control unit is configured to examine completion of maintenance steps based on each of two or more different types of modalities of the data (Fig. 1–3; claim 1; col. 5, line 60 – col. 7, line 25; col. 8, line 60 – col. 11, line 35.
Kathirvel discloses maintenance module MM 60 that receives part data derived from both video interactions and voice interactions via a video recording device and an audio recording device of wearable device 16, uses that multimodal data together with maintenance information to guide the user through a maintenance procedure, and advances or completes the procedure based on the recognized audio and video inputs corresponding to required steps, thereby examining step completion based on each of the audio and video modalities.). The rationale for combining Kathirvel with Soldani is articulated above and reincorporated herein.

Claim 18: Soldani further discloses a wearable or portable monitoring system of claim 16 further comprising an output device operably coupled with the body, wherein the control unit is configured to communicate with the operator during the maintenance or inspection event via the output device (Fig. 1–2B, 4A–4B; col. 4, lines 35–67; col. 5, lines 12–48; col. 9, lines 1–26, 48–60; col. 10, lines 1–30; col. 11, lines 1–32. Soldani discloses portable device 120 including augmented reality display means 122/422 operably coupled with the device, and display selection means 130/430 that, during aircraft maintenance, select and send maintenance assistance, alert, and guidance data to the augmented reality display means so that information is overlaid on the technician’s view, thereby communicating with the operator via that output device during the maintenance event.).

Claims 12, 14-15, and 19-20 are rejected under 35 USC 103 as being unpatentable over Soldani/Kathirvel in view of Cho (US 2022/0172594 A1).
Claim 12: The Soldani/Kathirvel combination discloses those limitations cited above, but does not appear to explicitly describe a system “wherein the control unit is an artificial intelligence (AI) or machine-learning system.” Cho, however, discloses a system wherein the control unit is an artificial intelligence (AI) or machine-learning system (Paragraphs 93–106; FIGS. 2A–2B, 3A–3B, 4A–4D. Cho uses a neural-network-based Long Short-Term Memory (LSTM) deep-learning model as the control logic to classify workers’ motions from sensor data for real-time monitoring and alerts.). Therefore, it would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to combine this feature of Cho with those of Soldani/Kathirvel. One would have been motivated to do this in order to improve the accuracy and robustness of recognizing worker or mechanic motions and actions and to thereby enhance real-time maintenance-related monitoring and alerting in Soldani’s maintenance-assistance system.

Claim 14: The Soldani/Kathirvel/Cho combination discloses those limitations cited above. Cho, however, further discloses a method for examining completion of one or more steps of the maintenance event based on each of the two or more different types of modalities of the data (Paragraphs 18–19, 27–28, 93–106; FIGS. 1, 5A–5B, 7, 8E, 9A–9B. Cho discloses examining workers’ motions/steps based on multiple sensor modalities from wearable IMU devices (e.g., acceleration, angular velocity, magnetic field) that are combined into input vectors and processed by a machine-learning/LSTM model to recognize the worker’s motion for task and safety monitoring.). The rationale for combining Cho with Soldani/Kathirvel is articulated above and reincorporated herein.

Claim 15: The Soldani/Kathirvel/Cho combination discloses those limitations cited above.
Cho further discloses a method for identifying that at least one of one or more steps of the maintenance event was completed incorrectly relative to a completion threshold of the at least one of the one or more technical specifications; and communicating a notification to the operator responsive to identifying that the at least one of the one or more steps was completed incorrectly (Abstract; Paragraphs 23–26, 29, 44; FIGS. 1, 5A–5F. Cho discloses automatically recognizing workers’ motions/steps from wearable sensor data using machine-learning models and generating alerts/notifications when monitored motion or proximity indicates a safety or performance concern, with such alerts communicated to workers and/or supervisors via wearable devices and graphical user interfaces.). The rationale for combining Cho with Soldani/Kathirvel is articulated above and incorporated herein.

Claim 19: Soldani discloses the wearable or portable monitoring system of claim 16, including a portable device 120 (body) with augmented-reality display means 122/422 serving as an output device, operably coupled with the device used by an operator during aircraft maintenance operations, and display selection means 130/430 (control unit) that select and send maintenance assistance, alert, and guidance data to the augmented-reality display so that visual alerts and messages are presented to the operator during the maintenance event (Figs. 1, 2A–2B, 4A–4B; col. 3, ll. 45–67; col. 4, ll. 35–43; col. 5, ll. 15–24; col. 5, ll. 45–67; col. 9, ll. 1–26; col. 11, ll. 1–32; col. 12, ll. 48–67.). Soldani does not explicitly disclose a system in which the control unit is configured to identify that at least one of the steps of the maintenance or inspection event was completed incorrectly relative to a completion threshold of a technical specification, nor that it communicates a notification to the operator responsive to identifying that incorrect completion.
Cho, however, discloses a system further comprising an output device operably coupled with the body, wherein the control unit is configured to identify that at least one of the one or more steps of the maintenance or inspection event was completed incorrectly relative to a completion threshold of the at least one of the one or more technical specifications, and wherein the control unit is configured to communicate a notification to the operator responsive to identifying that the at least one of the one or more steps was completed incorrectly (Abstract; Paragraphs 23–26, 29, 44, 93–106; FIGS. 1, 2A–2B, 5A–5F. Cho describes a wearable worker-monitoring system in which inertial measurement unit (IMU) sensors provide motion data to an AI/machine-learning classifier (e.g., a stacked LSTM network) that recognizes workers’ sequenced motions/steps, evaluates them against defined criteria or thresholds for safety and productivity, and generates alerts/notifications to the worker and/or supervisor via wearable devices and graphical user interfaces when motion patterns indicate unsafe, abnormal, or non-compliant actions.). The rationale for combining Cho with Soldani/Kathirvel is articulated above and incorporated herein.

Claim 20: Soldani discloses the wearable or portable monitoring system of claim 19 and that the display selection means may “perform a classification of the identifiers of interest on the basis of the corresponding maintenance operation, so that a user does not begin a new maintenance operation until after a maintenance operation previously begun has been completed,” and uses alert and guidance data to guide the user from one equipment of interest to another in a sequence that minimizes travel and respects the planned operation order (Figs. 4A–4B, 6–7; col. 9, ll. 60–67; col. 10, ll. 1–30; col. 11, ll. 1–32; col. 12, ll. 48–67.).
Soldani does not explicitly disclose that the control unit identifies that an incorrectly completed step is a first step and communicates a notification responsive to completion of that first step incorrectly and before the operator starts a sequential second step. Cho, however, discloses a system wherein the at least one of the one or more steps that is completed incorrectly is a first step, and wherein the control unit is configured to communicate the notification to the operator responsive to the operator completing the first step incorrectly and prior to the operator starting a sequential second step (Abstract; Paragraphs 23–26, 29, 44, 93–106; FIGS. 1, 2A–2B, 5A–5F. Cho discloses that its AI-based monitoring system classifies sequences of worker motions, detects in real time when actions or steps deviate from expected patterns or threshold criteria, and issues alerts immediately upon detection of such deviations so that unsafe or non-compliant motions can be corrected before the worker proceeds further in the task sequence.). The rationale for combining Cho with Soldani/Kathirvel is articulated above and incorporated herein.

Other Relevant Prior Art

Schwartz et al. (US 2017/0210492), directed to an augmented reality system for assessing an affected area of an aircraft.
Afrasiabi et al. (US 10,643,329), directed to automated paint quality control for aircraft.
Lake et al. (US 8,825,276), directed to maintenance systems and methods for use in analyzing maintenance data.
Shin et al. (US 2025/0174135), directed to an aircraft turnaround monitoring system.
Hochman et al. (US 2022/0343292), directed to a system and method of documenting aircraft parts condition, repair, and overhaul.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER BUSCH, whose telephone number is (571) 270-7953. The examiner can normally be reached M-F, 10-7.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Waseem Ashraf, can be reached at (571) 270-3948. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER C BUSCH/
Examiner, Art Unit 3621

/WASEEM ASHRAF/
Supervisory Patent Examiner, Art Unit 3621

Prosecution Timeline

Jan 15, 2025
Application Filed
Jan 02, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597051
Systems and Methods for the Display of Corresponding Content for User-Requested Vehicle Services Using Distributed Electronic Devices
2y 5m to grant · Granted Apr 07, 2026

Patent 12536560
ADAPTABLE IMPLEMENTATION OF ONLINE VIDEO ADVERTISING
2y 5m to grant · Granted Jan 27, 2026

Patent 12488359
Systems and Methods for Selectively Modifying Web Content
2y 5m to grant · Granted Dec 02, 2025

Patent 12423732
IMPROVED ARTIFICIAL INTELLIGENCE MODELS ADAPTED FOR ADVERTISING
2y 5m to grant · Granted Sep 23, 2025

Patent 12393962
SYSTEM INTEGRATION USING AN ABSTRACTION LAYER
2y 5m to grant · Granted Aug 19, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
29%
Grant Probability
50%
With Interview (+20.9%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 353 resolved cases by this examiner. Grant probability is derived from the career allow rate (102 granted ÷ 353 resolved ≈ 28.9%, shown as 29%); adding the +20.9% interview lift yields the ~50% with-interview figure.
