Prosecution Insights
Last updated: April 19, 2026
Application No. 18/121,067

AUTOMATIC AND GUIDED INTERIOR INSPECTION

Non-Final OA · §103, §112
Filed: Mar 14, 2023
Examiner: AZIMA, SHAGHAYEGH
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Diehl Aerospace GmbH
OA Round: 3 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (above average; 286 granted / 350 resolved; +19.7% vs TC avg)
Interview Lift: +11.4% (moderate; based on resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline); 36 applications currently pending
Total Applications: 386 (career history, across all art units)

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 350 resolved cases
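The headline figures in the cards above follow directly from the reported career counts. The snippet below is illustrative arithmetic only: the variable names are our own, and treating the interview lift as additive percentage points is an assumption about how the dashboard combines the figures.

```python
# Illustrative only: reproduce the dashboard's headline figures from the
# reported career counts. Assumes the interview lift is additive in
# percentage points; names are our own, not from any dashboard API.
granted = 286              # career grants (reported above)
resolved = 350             # career resolved cases
interview_lift_pts = 11.4  # reported interview lift, in percentage points

allow_rate = 100 * granted / resolved             # career allow rate, %
with_interview = allow_rate + interview_lift_pts  # estimated rate with interview, %

print(f"Career allow rate: {allow_rate:.1f}%")          # 81.7%, shown as 82%
print(f"With interview (est.): {with_interview:.0f}%")  # 93%
```

The same check applies to the statute table: each signed delta is the examiner's statute-specific rate minus the Tech Center average estimate.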

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the applicant's communication filed on 02/13/2026. In virtue of this communication, claims 1-10 and 12-19 filed on 02/13/2026 are currently pending in the instant application. Claims 1, 12, 13, and 14 have been amended without adding new subject matter. Claim 11 has been cancelled. New claims 16-19 have been added, without adding new subject matter.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/04/2026 has been entered.

Response to Arguments

Applicant's arguments filed 02/13/2026 have been fully considered. With regard to claim interpretation under 35 U.S.C. 112(f), the interpretation is sustained. With regard to the rejection under 35 U.S.C. 112(d), the rejection has been withdrawn in view of the amendment filed 02/13/2026. With regard to the rejection under 35 U.S.C. 112(b), a new 112(b) rejection has been raised. Regarding the prior art rejection, applicant's arguments are moot in view of the new ground of rejection necessitated by the amendment filed on 02/13/2026.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: recording unit, analysis unit, and image processing unit in claim 1 and claims 3-9. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C.
112, the applicant), regards as the invention.

Claim 1 recites “the vehicle” in line 4; there is insufficient antecedent basis for this limitation in the claim. Please clarify. Claims 4 and 5 disclose the same vehicle.

Regarding claim 12, it is not clear if the claim is dependent on claim 1 or is an independent claim. If the claim is dependent on claim 1 (as applicant stated in the argument filed on 02/13/2026), two statutory subject matters of method and device (arrangement) cannot be used in one claim. If the claim is an independent claim, all the units used in claim 12 (“the recording units and analyzing unit”) need to be introduced in, or removed from, claim 12. Please clarify.

The remaining dependent claims have been analyzed and are rejected for failing to cure the deficiencies noted above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1-2, 4, 12, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chan et al. (US 10,565,460) in view of Monroe et al. (US 2007/0130599), in view of Brittain et al. (US 2021/0041373).

As per claim 1: “An inspection arrangement for an interior, wherein the interior contains an object, - with at least one image recording unit which is configured to record an image of the interior of the vehicle with the object,” (Examiner notes the claim limitations are rejected in view of the 112 rejection stated above. Chan, Col. 1, lines 54-57 discloses a vehicle in-cabin imaging device for generating data representative of at least one skeletal diagram of at least one occupant within an associated vehicle may include a processor and a memory. Further, Col. 8, lines 28-36 discloses the current image data may be representative of images, and/or features (e.g., a vehicle occupant head location/orientation, a vehicle occupant hand location/orientation, a vehicle occupant arm location/orientation, a vehicle occupant elbow location/orientation, a vehicle occupant torso location/orientation, a seat belt location, a cellular telephone location, a vehicle occupant eye location/orientation, a vehicle seat location/orientation, etc.) extracted from a respective image, of an interior of a vehicle.)

“- with a classification database which contains classification values for at least one of the objects and assignment rules for assigning the classification values to specific image contents, depicting the objects, of images,” (Chan, Col. 1, lines 57-60 discloses classified image feature data may be stored on the memory, i.e. a database containing classification values or classified image feature data. Col. 8, lines 44-52 discloses classifying current image features by comparing the previously classified image feature data with the current image feature data. For example, the processor 315 may compare the current image feature data to previously classified image feature data, and may classify a current image the same as a previously classified image feature when the processor 315 determines that the two images are similar.)

“- with an image analysis unit which is communicatively connected to the database and is configured to automatically analyse the images in terms of their image contents and to assign the classification values to the objects represented by them according to the assignment rules,” (Chan, Col. 3, lines 15-20 discloses systems and methods for acquiring images of occupants inside a vehicle may include using a vehicle in-cabin device that automatically classifies images of an interior of a vehicle. A vehicle in-cabin device may include features (e.g., a processor, a memory and sensors) that are configured to automatically acquire and classify images of the interior of a vehicle and occupants within the vehicle. Further Col. 7, lines 15-30; further Col. 7, line 67, Col. 8, lines 1-50.)

“with at least one interface for the input and/or output of the classification values assigned to the images and/or of the images from and/or to a downstream entity.” (Chan, Col. 7, lines 15-40 discloses the network interface 330 may be configured to facilitate communications between the vehicle in-cabin device 305 and the remote. Col. 7, lines 54-61 discloses a vehicle in-cabin device 405 of a vehicle in-cabin device data collection system 400 is depicted along with method 500 of automatically classifying image features of an interior of a vehicle and, or transmitting related data to a remote server 310.)
However, Chan is silent on the following, which would have been obvious in view of Monroe from a similar field of endeavor: “An inspection arrangement for an interior of a passenger aircraft,” “wherein the inspection arrangement is a distributed arrangement which is split over at least two communicatively interconnected sub-devices, and wherein at least one interconnected split sub-device is at a remote location from the passenger aircraft.” (Monroe, ¶[0031] discloses the system of the subject invention would provide onboard and remote monitoring and reconstruction of events in such areas. The system would also permit the recording of visual information to provide a history for later review, providing yet another source of information for increasing the overall security. The system also provides real-time transmission of information to remote vehicles and personnel, and allows those vehicles to select and process information remotely. ¶[0033-0038] discloses (1) the system is adapted for monitoring an aircraft or other vehicle while in route or in flight, for collection and relay of situational awareness data relating to onboard conditions and, where desired, performance and structural data; (2) the system is adapted for monitoring situational awareness relating to the aircraft or other vehicle while in port, including conditions in the terminal or port environment; and (3) the system supplies the data to local and remote monitoring stations via both wired and wireless links, permitting access of the information by permanent monitoring stations as well as mobile units including response vehicles, intercept vehicles and handheld units for roving personnel.
¶[0075] discloses collecting, selecting and transmitting selected scene data available at a camera on the transport, the transport crew, or the terminal of the transport to a remote location, including collecting the image data on a preselected basis at the camera and defining and transmitting an original scene to the remote location. Subsequent data of the scene is compared to the data representing the scene in its original state. Each transmitted data scene may be tagged with unique identifying data. Each transmitted scene may be analyzed with automated techniques, such as facial recognition processing or automated object detection. The transmitted data is stored for archival, search and retrieval. The selection scheme of the invention also permits notification of the detected events to be sent via a network to selected monitoring stations. Further, ¶[0087-0089] discloses the system is wireless, with each component communicating via a wireless LAN (WLAN) or a wireless WAN (WWAN). A plurality of sensor units are placed strategically throughout the aircraft or other commercial transport. The data sensors/transducers, such as, by way of example, cameras, engine management sensors, panic buttons, pressure or course change sensors and the like, generate critical data which is transmitted to the cockpit display and to one or more onboard recorders. On command, or in response to certain types of events, real time data is capable of being transmitted to remote stations either fixed on the ground or mobile on the ground or in the air, permitting monitoring of events as they occur and permitting formulation of appropriate responses. ¶[0089] discloses in the preferred embodiment of the invention, multiple cameras are located in the cabin, cargo bay and cockpit of the aircraft. Further see ¶[0263].)

“the recording unit being the first sub-device on board the aircraft,” (Monroe, ¶[0265] discloses with specific reference to FIG. 2, the aircraft 10 includes a plurality of cameras C1A-C8B strategically installed in the cabin, cargo hold, and cockpit of the aircraft and at strategic locations on the exterior of the fuselage as well. The cameras provide video and image capture of strategic locations throughout the aircraft, providing a full-range view of the cockpit, cargo hold, and passenger cabin. The cameras may be continuously activated for capturing and storing the collected images on an onboard data recorder, as will be explained, and the system may be activated to send the data to a remote location on a real time basis.)

“the database and analysis unit forming a processing unit located in a stationary ground station, wherein the inspection arrangement is capable of being operational during the flight of the aircraft.” (Monroe, ¶[0260] discloses the system includes multiple cameras with continuous video and/or images stored on a hardened recorder located at the point of installation, such as onboard the aircraft. Video/Image streams are linked to the ground. Video/Image streams are also linked to mobile units such as intercept aircraft and ground security vehicles. ¶[0263] discloses, turning now to FIG. 1, the aircraft 10 is shown in flight as 10a, on the ground as 10b and at the gate as 10c. The sensors and cameras on the aircraft (see FIG. 2) provide onboard situational and event data that may be transmitted from air-to-ground to a flight control station 11 while the aircraft is in flight. While in motion, data can be transmitted continuously. All of the communication nodes at the FAA Center 11, Ground Control 13, Terminal 15, and Ground Station 18 can be interconnected to the LAN/WAN network cloud (not illustrated). This provides data interconnectivity with the aircraft at all times during its flight, taxi, and docked operations.
As is needed, such as during an emergency event, the collected data is sent over a wide area network (WAN) 23 to various recipients such as homeland security 25, the FBI 27, CIA 29, and the airlines 31, respectively. ¶[0425-0427] discloses the aircraft can stream event alarms and other aircraft event data. This can be done in a routine manner, such as sending routine navigational information, or in extraordinary cases such as streaming aircraft systems and sensor information during an emergency event. For example, the "packet-switched" data previously discussed can be transmitted from the aircraft to a monitor station and recorded in a continuous manner. If an emergency event then occurs, the additional "circuit-switched" information will be recorded as well. This information can be recorded on the same workstation or server, or different workstations and servers, then merged for analysis.)

Before the effective filing date of the claimed invention it would have been obvious to a person of ordinary skill in the art to combine Monroe's technique of an aircraft monitoring system into Chan's technique to provide the known and expected uses and benefits of Monroe's technique over Chan's technique of classifying digital images of the inside of a vehicle. The proposed combination would have constituted a mere arrangement of old elements with each performing their known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Monroe into Chan in order to 1) prevent critical and catastrophic events, 2) manage the emergency during such an event, and 3) provide investigative support after the occurrence of such an event. (Refer to Monroe paragraph [0002].)

The prior art made of record and not relied upon (because of the 112 rejection) is considered pertinent to applicant's disclosure. Meskimen et al.
(US 2021/0150235) discloses an inflight entertainment system using video object recognition to monitor seat areas and generate flight crew notifications. See ¶[0030-0033] and ¶[0049-0053].

As per claim 2: The inspection arrangement according to claim 1, wherein the classification value is one of the following: “- an object value identifying one of the objects, - a location value describing a location of the object in the interior, - a problem value describing a problem on the object, - a problem classification value classifying the problem, - a repair value correlated with the repair of the problem on the object, - an image attribute value describing an attribute of the image, - an image marking describing a marking location in the image.” (Chan, Col. 10, lines 36-40 discloses risk quantification may also be measured by weighting certain behaviors with higher severity than other behaviors, so the duration times are weighted. Risk quantification may also differentiate subcategories of behaviors based on degree of motion of hands, head, eyes, body. For example, the methods and systems of the present disclosure may distinguish texting with the phone on the steering wheel from texting with the phone in the driver's lap requiring frequent glances up and down. The latter would be quantified with greater risk in terms of severity of distraction.)

As per claim 4, in view of claim 1: Chan discloses “wherein at least one of the recording units is a recording unit which is to be fixedly attached in the vehicle as specified.” (Chan, Col. 6, lines 42-50 discloses the image sensors being attached to the driver or passenger side pillar.)

Regarding claim 12, in view of claim 1: Chan as modified by Monroe discloses “recording at least one image of the object of the interior of the vehicle with at least one of the recording units,” (Chan, Col. 1, lines 54-57 discloses a vehicle in-cabin imaging device for generating data representative of at least one skeletal diagram of at least one occupant within an associated vehicle may include a processor and a memory. Further, Col. 8, lines 28-36 discloses the current image data may be representative of images, and/or features (e.g., a vehicle occupant head location/orientation, a vehicle occupant hand location/orientation, a vehicle occupant arm location/orientation, a vehicle occupant elbow location/orientation, a vehicle occupant torso location/orientation, a seat belt location, a cellular telephone location, a vehicle occupant eye location/orientation, a vehicle seat location/orientation, etc.) extracted from a respective image, of an interior of a vehicle.)

“analyzing the image in terms of the image content automatically using the database by the analysis unit, and assigning the classification values with the aid of the assignment rules to the image content and hence to the image and to the object” (Chan, Col. 1, lines 57-60 discloses classified image feature data may be stored on the memory, i.e. a database containing classification values or classified image feature data. Col. 8, lines 44-52 discloses classifying current image features by comparing the previously classified image feature data with the current image feature data. For example, the processor 315 may compare the current image feature data to previously classified image feature data, and may classify a current image the same as a previously classified image feature when the processor 315 determines that the two images are similar. Col. 3, lines 15-20 discloses systems and methods for acquiring images of occupants inside a vehicle may include using a vehicle in-cabin device that automatically classifies images of an interior of a vehicle.
A vehicle in-cabin device may include features (e.g., a processor, a memory and sensors) that are configured to automatically acquire and classify images of the interior of a vehicle and occupants within the vehicle. Further Col. 7, lines 15-30; further Col. 7, line 67, Col. 8, lines 1-50.)

“outputting the classification values assigned to the image.” (Chan, Col. 7, lines 15-40 discloses the network interface 330 may be configured to facilitate communications between the vehicle in-cabin device 305 and the remote. Col. 7, lines 54-61 discloses a vehicle in-cabin device 405 of a vehicle in-cabin device data collection system 400 is depicted along with method 500 of automatically classifying image features of an interior of a vehicle and, or transmitting related data to a remote server 310. Further see Col. 9, lines 2-5 and 20-25.)

However, Chan is silent on the following, which would have been obvious in view of Monroe from a similar field of endeavor: “an inspection method for an interior of a passenger vehicle aircraft” (Monroe, ¶[0031] discloses the system of the subject invention would provide onboard and remote monitoring and reconstruction of events in such areas. The system would also permit the recording of visual information to provide a history for later review, providing yet another source of information for increasing the overall security. The system also provides real-time transmission of information to remote vehicles and personnel, and allows those vehicles to select and process information remotely.
¶[0033-0038] discloses (1) the system is adapted for monitoring an aircraft or other vehicle while in route or in flight, for collection and relay of situational awareness data relating to onboard conditions and, where desired, performance and structural data; (2) the system is adapted for monitoring situational awareness relating to the aircraft or other vehicle while in port, including conditions in the terminal or port environment; and (3) the system supplies the data to local and remote monitoring stations via both wired and wireless links, permitting access of the information by permanent monitoring stations as well as mobile units including response vehicles, intercept vehicles and handheld units for roving personnel.

¶[0075] discloses collecting, selecting and transmitting selected scene data available at a camera on the transport, the transport crew, or the terminal of the transport to a remote location, including collecting the image data on a preselected basis at the camera and defining and transmitting an original scene to the remote location. Subsequent data of the scene is compared to the data representing the scene in its original state. Each transmitted data scene may be tagged with unique identifying data. Each transmitted scene may be analyzed with automated techniques, such as facial recognition processing or automated object detection. The transmitted data is stored for archival, search and retrieval. The selection scheme of the invention also permits notification of the detected events to be sent via a network to selected monitoring stations.

¶[0089] discloses in the preferred embodiment of the invention, multiple cameras are located in the cabin, cargo bay and cockpit of the aircraft.)
Before the effective filing date of the claimed invention it would have been obvious to a person of ordinary skill in the art to combine Monroe's technique of an aircraft monitoring system into Chan's technique to provide the known and expected uses and benefits of Monroe's technique over Chan's technique of classifying digital images of the inside of a vehicle. The proposed combination would have constituted a mere arrangement of old elements with each performing their known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Monroe into Chan in order to 1) prevent critical and catastrophic events, 2) manage the emergency during such an event, and 3) provide investigative support after the occurrence of such an event. (Refer to Monroe paragraph [0002].)

As per claim 14, in view of claim 12: Chan discloses “wherein the inspection method is carried out during operation of the interior for passenger transport.” (Chan, Col. Lines 47-67 discloses distinguishing between Automated and Manual driving modalities for variable insurance rating, for a scenario where there are many vehicles that are capable of automatically operating the piloting functions and are capable of the driver manually operating the piloting functions. The driver can elect to switch between automated and manual driving modes at any point during a drive. Gesture recognition would be utilized to distinguish whether a driver is operating the vehicle manually, or whether the vehicle is operating automatically. This could be determined through either OEM or aftermarket hardware. The sensors and software algorithms are able to differentiate between automatic and manual driving based on hand movements, head movements, body posture, eye movements.
It can distinguish between the driver making hand contact with the steering wheel (to show that he/she is supervising) while acting as a supervisor, versus the driver providing steering input for piloting purposes. Depending on who/what is operating the vehicle would determine what real-time insurance rates the customer is charged. Examiner notes examining the driver's gestures to determine the mode of driving would be interpreted as performing the analysis of the vehicle interior while the vehicle is operating.)

As per claim 15: The inspection method of claim 12, “wherein the classification values assigned to the image are outputted via at least one of the interfaces to a downstream entity and/or by the latter.” (Chan, Col. 7, lines 15-40 discloses the network interface 330 may be configured to facilitate communications between the vehicle in-cabin device 305 and the remote. Col. 7, lines 54-61 discloses a vehicle in-cabin device 405 of a vehicle in-cabin device data collection system 400 is depicted along with method 500 of automatically classifying image features of an interior of a vehicle and, or transmitting related data to a remote server 310. Further, Col. 9, lines 2-5 discloses using the gesture recognition systems from an aftermarket/insurance device in order to provide an estimate to first responders about the severity of the crash and what kinds of resources/equipment/expertise is required in order to triage—have some idea of what emergency medical needs could be upon arrival. Lines 20-25 discloses upon crash detection the device could transmit via the driver's phone (which is already connected via Bluetooth) or perhaps transmit using an onboard transmitter that uses emergency frequencies (and therefore does not require the consumer to pay for data fees).)
As per claim 17: The inspection method according to claim 15, “wherein the classification values assigned to the image are outputted via at least one of the interfaces to a downstream entity and/or by the latter.” (Chan, Col. 7, lines 15-40 discloses the network interface 330 may be configured to facilitate communications between the vehicle in-cabin device 305 and the remote. Col. 7, lines 54-61 discloses a vehicle in-cabin device 405 of a vehicle in-cabin device data collection system 400 is depicted along with method 500 of automatically classifying image features of an interior of a vehicle and, or transmitting related data to a remote server 310.)

Claims 3, 8, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Chan et al. (US 10,565,460) in view of Monroe et al. (US 2007/0130599), in view of Brittain et al. (US 2021/0041373).

As per claim 3, in view of claim 1: Chan as modified by Monroe does not explicitly disclose the following, which would have been obvious in view of Brittain from a similar field of endeavor: “wherein the inspection arrangement contains an image processing unit which is interposed between the recording unit and the analysis unit and is configured to carry out image processing on the images generated by the recording unit before the processed images are transmitted to the analysis unit.” (Brittain, Fig. 1, ¶[0022] discloses the processing circuitry may also perform one or more pre-processing operations, such as one-dimensional or two-dimensional convolutions, ranked filtering, contrast enhancement, static flat-field correction, and/or frequency processing on the image data included in the virtual camera array before outputting the virtual camera array for inspection analysis.
Once the virtual camera array has been generated from the image output signals, and any pre-processing has been completed, a variety of types of analysis techniques may be performed on the image data included in the virtual camera array to provide inspection status(es) for the imaged portions of the web.) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Brittain's image-inspection technique with Chan as modified by Monroe to provide the known and expected uses and benefits of Brittain's technique over the vehicle-interior image classification technique of Chan as modified by Monroe. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Brittain into Chan as modified by Monroe in order to reduce the expense and lack of flexibility of the inspection system for different types of images. (Refer to Brittain, paragraph [0005].)
As per claim 8, in view of claim 1, Chan as modified by Monroe does not explicitly disclose the following, which would have been obvious in view of Brittain from a similar field of endeavor: "wherein the inspection arrangement contains: - a display unit which is connected to one of the interfaces and is configured to display the images and/or the classification values to a user, - an input unit which is connected to one of the interfaces, can be operated by the user and is configured to modify at least one of the displayed classification values or to generate an additional classification value in the inspection arrangement." (Brittain, ¶ [0031] discloses that the image processing circuitry includes a computer monitor 117A and at least one input device, such as keyboard 117B, that allows a system user, such as operator 118, to provide inputs to the image processing circuitry. Computer monitor 117A may provide a visual display of various types of information related to image capturing devices 113 and the generation of the virtual camera array. Inputs to image processing circuitry 114 provided via an input device, such as keyboard 117B, and/or programming executed by computing device 117 and/or image processing circuitry 114, may be used to provide inputs to configure and control image capturing devices 113, for example resolution settings and/or sample rates to be used by image capturing devices 113.) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Brittain's image-inspection technique with Chan as modified by Monroe to provide the known and expected uses and benefits of Brittain's technique over the vehicle-interior image classification technique of Chan as modified by Monroe.
The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Brittain into Chan as modified by Monroe in order to reduce the expense and lack of flexibility of the inspection system for different types of images. (Refer to Brittain, paragraph [0005].) Claim 16 has been analyzed and is rejected for the reasons indicated in claim 8 above. As per claim 11, in view of claim 1, Chan as modified by Monroe does not explicitly disclose the following, which would have been obvious in view of Brittain from a similar field of endeavor: "wherein the inspection arrangement is a distributed arrangement which is split over at least two communicatively interconnected sub-devices." (Brittain, Fig. 1, showing various communicatively interconnected sub-devices.) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Brittain's image-inspection technique with Chan as modified by Monroe to provide the known and expected uses and benefits of Brittain's technique over the vehicle-interior image classification technique of Chan as modified by Monroe. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Brittain into Chan as modified by Monroe in order to reduce the expense and lack of flexibility of the inspection system for different types of images. (Refer to Brittain, paragraph [0005].) Claims 5-7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Chan et al.
(US 10,565,460), in view of Monroe et al. (US 2007/0130599), in view of Senechal et al. (US 2021/0001862). As per claim 5, in view of claim 1, Chan as modified by Monroe does not explicitly disclose the following, which would have been obvious in view of Senechal from a similar field of endeavor: "wherein at least one of the recording units is a recording unit which can be deployed movably in the vehicle." (Senechal, Fig. 6, ¶ [0082] discloses capturing in-cabin images using multiple mobile imaging devices, for example a cellphone, notepad, smartwatch, etc.) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Senechal's vehicular in-cabin imaging technique with Chan as modified by Monroe to provide the known and expected uses and benefits of Senechal's technique over the vehicle-interior image classification technique of Chan as modified by Monroe. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Senechal into Chan as modified by Monroe in order to provide accurate recommendations to an autonomous or semiautonomous vehicle based on image content. (Refer to Senechal, paragraph [0025].) As per claim 6, in view of claim 5, Chan as modified by Monroe as modified by Senechal discloses "wherein at least one of the recording units is a hand-held recording unit." (Senechal, Fig. 6, ¶ [0082] discloses capturing in-cabin images using multiple mobile imaging devices, for example a cellphone, notepad, smartwatch, etc.) As per claim 7, in view of claim 5, Chan as modified by Monroe as modified by Senechal discloses "wherein at least one of the recording units is a recording unit which can be moved at least semi-autonomously." (Senechal, Fig.
6, ¶ [0082] discloses capturing in-cabin images using multiple mobile imaging devices, for example a cellphone, notepad, smartwatch, etc.) As per claim 9, in view of claim 1, Chan as modified by Monroe does not explicitly disclose the following, which would have been obvious in view of Senechal from a similar field of endeavor: "wherein the inspection arrangement contains a hand-held end-user device which contains at least one of the recording units and/or - if present - the display unit and/or the input unit." (Senechal, Fig. 6, ¶ [0082] discloses capturing in-cabin images using multiple mobile imaging devices, for example a cellphone, notepad, smartwatch, etc.) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Senechal's vehicular in-cabin imaging technique with Chan as modified by Monroe to provide the known and expected uses and benefits of Senechal's technique over the vehicle-interior image classification technique of Chan as modified by Monroe. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Senechal into Chan as modified by Monroe in order to provide accurate recommendations to an autonomous or semiautonomous vehicle based on image content. (Refer to Senechal, paragraph [0025].) As per claim 10, in view of claim 9, Chan as modified by Monroe as modified by Senechal discloses "wherein at least part of the inspection arrangement is implemented as an application on the end-user device." (Chan, Col. 3, lines 45-55 discloses a mobile application on the mobile device providing a user-friendly interface for reporting and troubleshooting vehicle in-cabin device operation.) Claims 13-14 are rejected under 35 U.S.C.
103 as being unpatentable over Chan et al. (US 10,565,460), in view of Monroe et al. (US 2007/0130599), in view of Yang et al. (US 2019/0244042). As per claim 13, in view of claim 12, Chan as modified by Monroe does not explicitly disclose the following, which would have been obvious in view of Yang from a similar field of endeavor: "wherein the classification value and/or also the image are output to a user as an entity with a request to a user to review the classification value and possibly input a corrected or additional classification value via one of the interfaces." (Yang, ¶ [0058] discloses an apparatus that detects and classifies objects associated with a vehicle 100 and is further configured to receive an input to re-label the classified object from the operator of the vehicle, and reclassify the object present in the difference between the second region and the first region based on the received input. The operator may receive a classification or identification of an object or feature that is determined by performing object detection on the second image after the difference between the images is detected. The operator may then confirm the classification or identification or revise it as necessary from a remote computer.) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Yang's autonomous-vehicle in-cabin imaging technique with Chan as modified by Monroe to provide the known and expected uses and benefits of Yang's technique over the vehicle-interior image classification technique of Chan as modified by Monroe. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement.
Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Yang into Chan as modified by Monroe in order to accurately classify a particular object. (Refer to Yang, paragraph [0015].) Claims 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chan et al. (US 10,565,460), in view of Monroe et al. (US 2007/0130599), in view of McMillin et al. (US 2009/0150022). As per claim 18, the inspection arrangement of claim 1: Chan as modified by Monroe does not explicitly disclose the following, which would have been obvious in view of McMillin from a similar field of endeavor: "wherein defects/anomalies/deficiencies identified in flight are repaired at the next landing." (McMillin, ¶ [0055] discloses that the pilot on the aircraft 24 is made aware of a fault in flight by an advisory that may be displayed by an alerting system of the aircraft 24 (FIG. 1). The pilot enters the fault data 52 (FIG. 2) and generates the PIREP 56 (FIG. 2) using the fault report generator 30 (FIG. 2). Upon storing the PIREP 56 (FIG. 2), the PIREP 56 (FIG. 2) is transmitted to the ground via the network 26 (FIG. 1). ¶ [0057] discloses that the maintenance controller selects the solution using the fault evaluator. First, the maintenance controller reviews the available resources 94 (FIG. 4), such as the technicians, their qualifications, and the available tools. If there is sufficient time to complete the work and the resources are available at the next location, the work order 100 (FIG. 4) is generated at 220 and the parts are reserved. ¶ [0076] discloses that after reviewing the flight details at 614 as discussed above, it is determined at 616 that there is sufficient time at the next destination to perform the maintenance and at 618 that there are sufficient parts available. ¶ [0077] discloses that the work order 100 (FIG. 4) is generated and the parts are reserved at 624 as discussed above.
The maintenance is scheduled at 626, the parts are retrieved at 628 and 620, and the maintenance is performed at 632 and 634 as discussed above. ¶ [0078] discloses that once the maintenance is complete and there are no existing faults at 638, the technician records her stop times and signs off on the tasks using the work manager 42 (FIG. 8) at 642. The inspector completes the inspection at 644 and 648, and the aircraft 24 (FIG. 1) is released for the next scheduled flight at 648 as discussed above.) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine McMillin's aircraft maintenance technique with Chan as modified by Monroe to provide the known and expected uses and benefits of McMillin's technique over the vehicle-interior image classification technique of Chan as modified by Monroe. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate McMillin into Chan as modified by Monroe in order to provide necessary maintenance with fewer errors. (Refer to McMillin, paragraph [0004].) Claim 19 has been analyzed and is rejected for the reasons indicated in claim 18 above. Contact Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAGHAYEGH AZIMA, whose telephone number is (571) 272-1459. The examiner can normally be reached Monday-Friday, 9:30-6:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SHAGHAYEGH AZIMA/Examiner, Art Unit 2671

Prosecution Timeline

Mar 14, 2023: Application Filed
Jul 19, 2025: Non-Final Rejection — §103, §112
Nov 24, 2025: Response Filed
Dec 12, 2025: Final Rejection — §103, §112
Feb 13, 2026: Response after Non-Final Action
Mar 04, 2026: Request for Continued Examination
Mar 06, 2026: Response after Non-Final Action
Mar 19, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586350: DETERMINING AUDIO AND VIDEO REPRESENTATIONS USING SELF-SUPERVISED LEARNING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573209: ROBUST INTERSECTION RIGHT-OF-WAY DETECTION USING ADDITIONAL FRAMES OF REFERENCE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561989: VEHICLE LOCALIZATION BASED ON LANE TEMPLATES (granted Feb 24, 2026; 2y 5m to grant)
Patent 12530867: Action Recognition System (granted Jan 20, 2026; 2y 5m to grant)
Patent 12525049: PERSON RE-IDENTIFICATION METHOD, COMPUTER-READABLE STORAGE MEDIUM, AND TERMINAL DEVICE (granted Jan 13, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 93% (+11.4%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 350 resolved cases by this examiner. Grant probability derived from career allow rate.
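The headline figures above appear to be simple arithmetic on the career data: 286 grants out of 350 resolved cases rounds to the 82% allow rate, and adding the +11.4% interview lift to that base rounds to the 93% with-interview figure. A minimal sketch of that relationship, assuming the lift is additive (the function names and the additive model are illustrative, not taken from the tool):

```python
def career_allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float) -> float:
    """Apply an additive interview lift to a base grant probability, capped at 100%."""
    return min(base_pct + lift_pct, 100.0)

base = career_allow_rate(286, 350)        # 286 granted of 350 resolved cases
print(round(base))                        # -> 82, the reported allow rate
print(round(with_interview(base, 11.4)))  # -> 93, the reported with-interview figure
```

If the lift were instead multiplicative (82% x 1.114 ≈ 91%), the reported 93% would not be reproduced, which is why the additive reading is assumed here.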
