Prosecution Insights
Last updated: April 19, 2026
Application No. 18/561,057

TARGET ACQUISITION SYSTEM FOR AN UNMANNED AIR VEHICLE

Final Rejection (§101, §103)
Filed: Nov 15, 2023
Examiner: CHENNAULT, AUSTIN ROBERT
Art Unit: 3667
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Code Planet Saver OY
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 2 granted / 4 resolved; -2.0% vs TC avg)
Interview Lift: +100.0% (strong; allow rate among resolved cases with an interview vs. without)
Typical Timeline: 2y 5m avg prosecution; 22 applications currently pending
Career History: 26 total applications across all art units

Statute-Specific Performance

§101: 31.3% (-8.7% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)
Tech Center averages are estimates; based on career data from 4 resolved cases.
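For readers who want to sanity-check the headline numbers above, here is a minimal sketch of how such metrics could be derived from disposal records. The record layout, the sample data, and the definition of interview lift (gain relative to the career rate) are assumptions for illustration, not the analytics provider's actual methodology:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # allowed vs. abandoned at disposal
    had_interview: bool  # at least one examiner interview on record

# Hypothetical disposal data consistent with the card above
# (2 granted / 4 resolved, interviews associated with the grants).
cases = [
    ResolvedCase(granted=True,  had_interview=True),
    ResolvedCase(granted=True,  had_interview=True),
    ResolvedCase(granted=False, had_interview=False),
    ResolvedCase(granted=False, had_interview=False),
]

def allow_rate(subset):
    """Fraction of a set of resolved cases that were granted."""
    return sum(c.granted for c in subset) / len(subset) if subset else 0.0

career = allow_rate(cases)                                          # 0.50
with_iv = allow_rate([c for c in cases if c.had_interview])         # 1.00
without_iv = allow_rate([c for c in cases if not c.had_interview])  # 0.00

# One plausible reading of "interview lift": relative gain over the
# career allow rate for cases that had an interview.
lift = (with_iv - career) / career if career else float("inf")      # +100%

print(f"career {career:.1%} | with interview {with_iv:.1%} | "
      f"without {without_iv:.1%} | lift {lift:+.1%}")
```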

Office Action

Grounds: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to the preliminary amendment filed on 11/14/2025. Claims 1, 6-7, 9, 12-14, and 18 are amended. Claims 15-17 are cancelled. Claims 1-14 and 18 are presently pending and are presented for examination.

Claim Objections

Claim 18 is objected to under 37 CFR 1.75(c) as being in improper form because a multiple dependent claim can only refer to the claims it depends on in the alternative. See MPEP § 608.01(n). For purposes of further examination on the merits, Examiner will interpret the goggles to execute the functionality of claim 13 or 14.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 9-10, 12-14, and 18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claim 1 is directed to a machine, independent claim 13 is directed to a machine, and independent claim 14 is directed to a method. Therefore, each of independent claims 1 and 13-14, along with the corresponding dependent claims 2-12 and 18, is directed to a statutory category of invention under Step 1.

Under Step 2A, Prong 1, the claims are analyzed to determine whether one or more of the claims recites subject matter that falls within one of the following groups of abstract ideas: (1) mental processes, (2) certain methods of organizing human activity, and/or (3) mathematical concepts. In this case, independent claims 1 and 13-14 are directed to an abstract idea without significantly more. Specifically, the claims, under their broadest reasonable interpretation, cover certain mental processes. The language of independent claim 1 is used for illustration: A target acquisition system (This is a mental process. A human could, e.g. visually, determine a target.) for an unmanned aircraft comprising: goggles, the unmanned aircraft, which unmanned aircraft, equipped with a camera and a measuring unit, is configured to transmit location data (DS, KA, DE) relating to a location (MS) of a target to the goggles, which goggles are configured to form an augmented reality user interface (LK) by means of at least one goggle lens for controlling the unmanned aircraft, and which goggles, equipped with an orientation detector, are configured to determine a location of an augmented reality target object, which indicates the location of the target, on a user interface view of the goggles and to present to a wearer of the goggles the location of the target as the augmented reality target object (MB) in the user interface view of the augmented reality user interface based on the target location data received from the unmanned aircraft and an orientation (SA) of the goggles as detected by the orientation detector so that the wearer of the goggles is able to see the target regardless of whether there is a direct line of sight to the target's location (This is a mental process. A human given target location data and data indicative of a subject’s field of view could estimate whether the target object should be visible and where in the field of view the target object should be visible.).

As explained above, independent claim 1 recites at least one abstract idea. The other independent claims, claims 13 and 14, which are similar in scope to claim 1, likewise recite at least one abstract idea under Step 2A, Prong 1.

Under Step 2A, Prong 2, the claims are analyzed to determine whether each claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements such as merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application"; see at least MPEP 2106.04(d). In this case, the mental processes are not integrated into a practical application. For example, independent claims 1 and 13-14 and dependent claim 18 recite additional elements. These limitations amount to implementing the abstract idea on a computer, adding insignificant extra-solution activity, and/or generally linking use of the judicial exception to a particular technological environment or field of use; see at least MPEP 2106.04(d). More specifically: an unmanned aircraft… found in independent claims 1 and 13-14. This limitation amounts to generally linking the use of the abstract idea to a particular technological environment or field of use. goggles… found in claims 1, 13-14, and 18. This limitation amounts to generally linking the use of the abstract idea to a particular technological environment or field of use. receive/transmit location data… found in independent claims 1 and 13-14. This limitation amounts to insignificant extra-solution activity. present to a wearer of the goggles the location of the target [by means of the orientation detector]… found in independent claims 1 and 13-14. This limitation amounts to insignificant extra-solution activity. A tangible non-volatile computer readable medium… found in dependent claim 18. This limitation amounts to implementing the abstract idea on a computer. Therefore, taken alone, the additional elements do not integrate the abstract idea into a practical application. Furthermore, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing significant that is not already present when looking at the elements taken individually. Because the additional elements do not integrate the abstract idea into a practical application by imposing meaningful limits on practicing the abstract idea, independent claims 1 and 13-14, as well as dependent claim 18, are directed to an abstract idea.

Under Step 2B, the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application in Step 2A, Prong 2, the additional element of limiting the use of the idea to one particular environment employs generic computer functions to execute an abstract idea and, therefore, does not add significantly more. Mere instruction to apply an exception using generic computer components and limiting the use of the abstract idea to a particular environment or field of use cannot provide an inventive concept. Additionally, as discussed above, the additional limitations recited above are considered insignificant extra-solution activity. A conclusion that an additional element is insignificant extra-solution activity in Step 2A must be re-evaluated in Step 2B to determine if the element is more than what is well-understood, routine, and conventional in the field. In this case, the additional limitation of a tangible non-volatile computer readable medium… is well-understood, routine, and conventional activity, because the specification does not provide any indication that the tangible non-volatile computer readable medium… is anything more than a conventional computer. Additionally, the remaining elements have been deemed insignificant extra-solution activity by one or more courts; see at least MPEP 2106.05(d) and MPEP 2106.05(g): receive/transmit location data… is considered well-understood, routine, and conventional activity under buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). present to a wearer of the goggles the location of the target [by means of the orientation detector]… is considered well-understood, routine, and conventional activity under TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48 (gathering and analyzing information using conventional techniques and displaying the result). Because the claims fail to recite anything sufficient to amount to significantly more than the judicial exception, independent claims 1 and 13-14 are patent ineligible under 35 U.S.C. 101.

Dependent claims 2-12 and 18 have been given the full two-part analysis, including analyzing the additional limitations, both individually and in combination. Dependent claims 2-7, 9-10, 12, and 18, when analyzed both individually and in combination, are also patent ineligible under 35 U.S.C. 101 based on the same analysis as above. The additional limitations recited in the dependent claims fail to establish that the dependent claims are not directed to an abstract idea. The additional limitations of the dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea. Accordingly, claims 2-7, 9-10, 12, and 18 are patent ineligible under 35 U.S.C. 101. If the independent claims were amended to include all of the limitations from one or more of dependent claims 8 or 11, the rejections under 35 U.S.C. 101 toward the pending claims would be withdrawn.
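The §101 analysis repeatedly characterizes the claimed display logic as a mental process: given a target location and the goggles' orientation, decide whether and where the target marker appears in the view. For concreteness, here is a minimal sketch of that computation, assuming a local east/north/up frame, a flat angular-to-pixel mapping, and hypothetical names throughout; it illustrates the general technique, not the application's disclosed implementation:

```python
import math

def target_marker_px(target, wearer, yaw_deg, pitch_deg,
                     fov_h_deg=90.0, fov_v_deg=60.0, width=1920, height=1080):
    """Project a world-space target into headset screen coordinates.

    target / wearer: (east, north, up) positions in metres in a shared
    local frame; yaw/pitch: goggles orientation from its detector.
    Returns (x_px, y_px) or None when the target is outside the view.
    All frame and FOV conventions here are illustrative assumptions.
    """
    de, dn, du = (t - w for t, w in zip(target, wearer))
    horiz = math.hypot(de, dn)
    bearing = math.degrees(math.atan2(de, dn))      # 0 deg = due north
    elevation = math.degrees(math.atan2(du, horiz))
    # Angular offset of the target from the goggles' boresight.
    d_yaw = (bearing - yaw_deg + 180.0) % 360.0 - 180.0
    d_pitch = elevation - pitch_deg
    if abs(d_yaw) > fov_h_deg / 2 or abs(d_pitch) > fov_v_deg / 2:
        return None                                  # outside the view
    # Linear angle-to-pixel mapping; occlusion is deliberately ignored,
    # so the marker is drawn even without a direct line of sight.
    x = width / 2 + (d_yaw / (fov_h_deg / 2)) * (width / 2)
    y = height / 2 - (d_pitch / (fov_v_deg / 2)) * (height / 2)
    return x, y

# Target 100 m north of the wearer and 20 m up, wearer facing north:
print(target_marker_px((0, 100, 20), (0, 0, 0), yaw_deg=0, pitch_deg=0))
```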
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 6-7, 14, and 18 are rejected under 35 U.S.C. 103 as being obvious over EP 3376152 A1, hereinafter “Ger”, in view of US 20170277180 A1, hereinafter “Baer”, and NPL document “Oculus Quest: The Best Standalone VR Headset”, hereinafter “Rogers”.

Regarding claim 1, Ger, in the same field of endeavor and solving a related problem, discloses A target acquisition system for an unmanned aircraft (See Figs. 1 and 2, page 5 paragraphs 12-13, and page 6 paragraph 6 through page 7, the system comprises a drone, i.e. an unmanned aircraft, that detects relevant objects, i.e. acquires targets, the target in this case being a person.) comprising: goggles (See page 6 paragraph 12, the display may be an augmented reality display which displays information directly in the field of vision of the soldier. Examiner asserts that displaying augmented reality information directly in the soldier’s field of view is only possible with some sort of head-mounted device with displays directly in front of the soldier’s eyes, i.e. goggles.), the unmanned aircraft, which unmanned aircraft, equipped with a camera and a measuring unit, is configured to transmit location data (DS, KA, DE) relating to a location (MS) of a target to the goggles (See page 6 paragraph 6, the drone, i.e. unmanned aircraft, comprises a detection device that outputs detection results. See page 3 paragraph 2, the detection means comprises a GPS sensor. See page 6 paragraph 11, the detection device comprises a camera. The detection device further detects the position of the person, i.e. the target. The subsystem responsible for determining the position of the target is therefore a measuring unit. See page 6 paragraph 7, the soldier carries an output device which receives the detection results, which are necessarily related to the target, and displays them on the display device, i.e. the goggles. See page 4 paragraph 4, the display device shows the user the position of the relevant object, i.e. target. This means that the location information of the target was necessarily sent to the display device.), which goggles are configured to form an augmented reality user interface (LK) by means of at least one goggle lens (See page 4 paragraph 4, the display device can be an augmented reality display that superimposes the detection results on the user’s field of view. This inherently takes place by at least one lens of the device, i.e. goggles.); and which goggles, equipped with an orientation detector, are configured to present to a wearer of the goggles the location of the target as the augmented reality target object (MB) in the augmented reality user interface based on the target location data and an orientation (SA) of the goggles as detected by the orientation detector so that the wearer of the goggles is able to see the target regardless of whether there is a direct line of sight to the target's location (See page 4 paragraph 4, the display device can be an augmented reality display that superimposes the detection results on the user’s field of view, i.e. goggles. The display device determines whether the user is looking in the direction of the relevant object, i.e. target, in order to display it. This indicates that the display device comprises an orientation sensor and displays the target based on the orientation of the device, indicating the display device comprises an orientation detector. This also indicates that the display device was necessarily sent the location of the target and displays the target based on this data. See Fig. 1 and page 6 paragraphs 6-12, the device shows the wearer of the goggles a soldier crouching behind a wall based on detection information from the drone, i.e. so the wearer can see the target regardless of the direct line of sight.).

Ger does not explicitly disclose for controlling the unmanned aircraft or configured to determine a location of an augmented reality target object, which indicates the location of the target, on a user interface view of the goggles. Baer, in the same field of endeavor and solving a related problem, discloses for controlling the unmanned aircraft (See [0022], the headset, i.e. goggles, displays images and videos and comprises the controller. See Abstract, the controller controls the surveyor. See [0031], the surveyor is an unmanned aerial vehicle, i.e. an unmanned aircraft.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detecting relevant objects from unmanned vehicle camera data disclosed by Ger to include using the display interfaces to control the vehicle of Baer. One of ordinary skill in the art would have been motivated to make this modification in order to allow the user to maneuver the drone to the area of interest for data collection, as suggested by Baer at [0020]. Rogers renders obvious configured to determine a location of an augmented reality target object, which indicates the location of the target, on a user interface view of the goggles (See page 1 paragraph 2 and page 3 paragraphs 1-2, the headset does not require a PC for use, i.e. uses an internal processor for VR capabilities. This includes determining the location of objects to draw. It would be obvious to use a standalone VR headset in order to obtain better mobility and portability.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detecting relevant objects from unmanned vehicle camera data disclosed by Ger and Baer to include the use of a standalone headset, i.e. one that comprises the computing means necessary to perform image analysis and rendering of the AR images, as suggested by Rogers. One of ordinary skill in the art would have been motivated to make this modification to improve affordability, ease of setup, and mobility for the user, as suggested by Rogers at page 1 paragraph 2.

Regarding claim 2, Ger combined with Baer and Rogers renders obvious the limitations of claim 1. Ger discloses wherein the unmanned aircraft is further configured to identify the target from video captured by the camera and to obtain, by the measuring unit, the location data related to the location of the target after identifying the target (See page 6 paragraph 11, the detection device comprises a camera. The detection device also determines the speed of the person it recognizes. Examiner asserts that determining speed is only possible if the camera is taking images in rapid succession, i.e. is a video camera. See page 7 paragraph 6, the imagery captured is analyzed with object recognition algorithms, i.e. on a computer. This indicates that the camera data and therefore the camera itself is digital. See page 6 paragraph 6, the drone, i.e. unmanned aircraft, comprises a detection device that outputs detection results. See page 3 paragraph 2, the detection means comprises a GPS sensor, i.e. a measuring unit. See page 6 paragraph 11, the detection device further detects the position of the person, i.e. the target. See page 4 paragraph 4, the display device shows the user the position of the relevant object, i.e. target. This means that the location information of the target was necessarily obtained before being sent to the display device.). Baer discloses digital video (VD) (See [0021], the camera may capture the video in a number of digital video formats. This indicates that the camera is capturing digital video.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detection of objects in sensor data and control of a drone disclosed by Ger and Rogers to include capturing digital video data of Baer. One of ordinary skill in the art would have been motivated to make this modification in order to allow flexibility in the captured video format and integration with different object detection algorithms, as suggested by Baer at [0021] and [0079].

Regarding claim 6, Ger combined with Baer and Rogers renders obvious the limitations of claim 1. Ger further discloses wherein the unmanned aircraft is configured to transmit location data related to a change of a location of the target in response to a change in target location and/or the orientation detector is configured to detect a change in the orientation of the goggles in response to a change in the orientation of the goggles, wherein the goggles are configured to re-create the location of the target to be presented as the target object in the user interface based on at least one change that has occurred (See page 6 paragraph 11, the detection device determines the speed of a recognized person, i.e. target. Speed is inherently related to a change of location of the target and can only be calculated in response to an identified change of position in the target. See page 6 paragraph 6, the detection device is part of the drone, i.e. unmanned aircraft. See page 8 paragraph 2, the speed information is transmitted to the output device.).

Regarding claim 7, Ger combined with Baer and Rogers renders obvious the limitations of claim 1.
Baer discloses wherein the unmanned aircraft is further configured to transmit video (VD, VA) captured by it to the goggles and the user interface in the goggles is configured to display the received video (VD, VA) (See Abstract, the surveyor captures video of the environment. The controller displays video captured by the surveyor. See [0022], the headset, i.e. goggles, displays images and videos and comprises the controller. See [0031], the surveyor is an unmanned aerial vehicle, i.e. an unmanned aircraft.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detecting relevant objects from unmanned vehicle camera data and displaying relevant data on goggles disclosed by Ger and Rogers to include displaying the video data itself on the goggles of Baer. One of ordinary skill in the art would have been motivated to make this modification in order to allow the user to maneuver the drone to the area of interest for data collection, as suggested by Baer at [0020].

Regarding claim 14, Ger discloses A target acquisition method on an unmanned aircraft (See Figs. 1 and 2, page 5 paragraphs 12-13, and page 6 paragraph 6 through page 7, the system comprises a drone, i.e. an unmanned aircraft, that detects relevant objects, i.e. acquires targets, the target in this case being a person. See page 2 paragraph 9, the detection devices detect relevant objects and output the corresponding detection results. See page 6 paragraph 6, the drone has a detection device. The method is therefore on a drone, i.e. on an unmanned aircraft.) comprising at least the following steps of: receiving, by a data transceiver, location data (DS, KA, DE) relating to a location (MS) of a target transmitted by the unmanned aircraft equipped with a camera and a measuring unit (See page 6 paragraph 6, the drone, i.e. unmanned aircraft, comprises a detection device that outputs detection results. See page 3 paragraph 2, the detection means comprises a GPS sensor. See page 6 paragraph 11, the detection device comprises a camera. The detection device further detects the position of the person, i.e. the target. The subsystem responsible for determining the position of the target is therefore a measuring unit. See page 6 paragraph 7, the soldier carries an output device which receives the detection results, which are necessarily related to the target, and displays them on the display device, i.e. the goggles. See page 4 paragraph 4, the display device shows the user the position of the relevant object, i.e. target. This means that the location information of the target was necessarily sent to the display device. See page 6 paragraph 7, the output device is the processing system driving the display device. See page 7 paragraph 3, the output device and the drone directly communicate with each other. This indicates transmission and reception of data. Examiner asserts that the communication device, which is part of the augmented reality goggle apparatus, therefore comprises a transceiver.), forming, by a processor, an augmented reality user interface (LK) using at least one goggle lens of the goggles (See page 4 paragraph 4, the display device can be an augmented reality display that superimposes the detection results on the user’s field of view. This inherently takes place by at least one lens of the device, i.e. goggles. Computing the augmented reality display inherently requires implementation on a processor.), and wherein the augmented reality user interface determines a location of an augmented reality target object, which indicates the location of the target, on a user interface view of the goggles and presents, by means of an orientation detector of the goggles, to a wearer of the goggles the location of the target as the augmented reality target object (MB) in the augmented reality user interface based on the target location data received from the unmanned aircraft and an orientation (SA) of the goggles as detected by the orientation detector so that the wearer of the goggles is able to see the target regardless of whether there is a direct line of sight to the target's location (See page 4 paragraph 4, the display device can be an augmented reality display that superimposes the detection results on the user’s field of view, i.e. goggles. The display device determines whether the user is looking in the direction of the relevant object, i.e. target, in order to display it. This indicates that the display device comprises an orientation sensor and displays the target based on the orientation of the device, indicating the display device comprises an orientation detector. This also indicates that the display device was necessarily sent the location of the target and displays the target based on this data. See Fig. 1 and page 6 paragraphs 6-12, the device shows the wearer of the goggles a soldier crouching behind a wall based on detection information from the drone, i.e. so the wearer can see the target regardless of the direct line of sight.).

Ger does not explicitly disclose a data transceiver of goggles, by a processor of the goggles, or for controlling the unmanned aircraft. Baer, in the same field of endeavor and solving a related problem, discloses for controlling the unmanned aircraft (See [0022], the headset displays images and videos and comprises the controller. See Abstract, the controller controls the surveyor. See [0031], the surveyor is an unmanned aerial vehicle, i.e. an unmanned aircraft.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detecting relevant objects from unmanned vehicle camera data disclosed by Ger to include using the display interfaces to control the vehicle of Baer. One of ordinary skill in the art would have been motivated to make this modification in order to allow the user to maneuver the drone to the area of interest for data collection, as suggested by Baer at [0020]. Rogers renders obvious a data transceiver of goggles (See page 1 paragraph 2 and page 3 paragraphs 1-2, the headset does not require a PC for use, i.e. uses an internal processor for VR capabilities. This includes determining the location of objects to draw. It would be obvious to use a standalone VR headset in order to obtain better mobility and portability and send the relevant location data to the headset.); and by a processor of the goggles (See page 1 paragraph 2 and page 3 paragraphs 1-2, the headset does not require a PC for use, i.e. uses an internal processor for VR capabilities. This includes determining the location of objects to draw.
It would be obvious to use a standalone VR headset in order to obtain better mobility and portability.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detecting relevant objects from unmanned vehicle camera data disclosed by Ger and Baer to include the use of a standalone headset, i.e. one that comprises the computing means necessary to perform image analysis and rendering of the AR images, as suggested by Rogers. One of ordinary skill in the art would have been motivated to make this modification to improve affordability, ease of setup, and mobility for the user, as suggested by Rogers at page 1 paragraph 2.

Regarding claim 18, Ger combined with Baer and Rogers renders obvious the limitations of claim 14. Ger renders obvious A tangible non-volatile computer readable medium storing a computer program that comprises instructions, which, when the computer program is executed by the goggles according to claim 13, cause the goggles to carry out at least the steps of the target acquisition method according to claim 14 (See Figs. 1 and 2, page 5 paragraphs 12-13, and page 6 paragraph 6 through page 7, the system comprises a drone, i.e. an unmanned aircraft, that detects relevant objects, i.e. acquires targets, the target in this case being a person. See page 2 paragraph 9, the detection devices detect relevant objects and output the corresponding detection results. See page 6 paragraph 6, the drone has a detection device. The method is therefore on a drone, i.e. on an unmanned aircraft. See page 6 paragraph 7, the detection device comprises a camera whose data is analyzed with object recognition algorithms to detect a relevant object, i.e. target. Examiner asserts that detecting objects in the camera data necessarily takes place on a computer. This also requires instructions causing the computer to execute code corresponding to the described functionality. It would be obvious to store the corresponding instructions on a non-volatile computer readable medium so that the system would not need to be programmed each time the power cycles.).

Claims 3 and 5 are rejected under 35 U.S.C. 103 as being obvious over Ger, Baer, and Rogers in view of WO 2020019175 A1, hereinafter “Lin”.

Regarding claim 3, Ger combined with Baer and Rogers renders obvious the limitations of claim 2. Ger discloses wherein the measuring unit is configured to obtain a distance (DE) of the camera to the target after target acquisition (See page 5 paragraph 9, a relevant object, i.e. target, in this case a tree, is identified and its distance to the user is calculated. See page 6 paragraph 6, the user is the drone, i.e. unmanned aircraft. See page 8 paragraph 1, the drone comprises a camera. Distance to the user is therefore distance to the camera.). Ger combined with Baer and Rogers does not explicitly disclose wherein the measuring unit is configured to obtain an orientation (KA) of the camera. Lin, in the same field of endeavor and solving a related problem, renders obvious wherein the measuring unit is configured to obtain an orientation (KA) of the camera and a distance (DE) of the camera to the target after target acquisition (See page 2 paragraphs 6-7, the method acquires the relative distance, i.e. distance from the camera, to the point corresponding to pixels in the image using the posture, i.e. orientation, of the camera at the time each of two pictures is taken. The subunits of the camera apparatus measuring orientation are a measuring unit.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for display of camera data, object recognition in camera data, and control of the drone using camera data disclosed by Ger, Baer, and Rogers to include use of the camera data and orientation of the camera to determine distance to the relevant objects. One of ordinary skill in the art would have been motivated to make this modification in order to enhance the drone’s environmental awareness capabilities and reduce the calculation required for these tasks, as suggested by Lin at page 2 paragraphs 2-4.

Regarding claim 5, Ger combined with Baer, Rogers, and Lin renders obvious the limitations of claim 3. Further disclosure from Ger renders obvious wherein the unmanned aircraft is configured to obtain its location (DS), determine the location of the identified target based on its location and camera orientation and distance (KA, DE), and transmit the location of the target as the location data (DS, KA, DE) to the goggles that are configured to generate the determined location to be presented as the target object in the user interface based on the location data received (See page 6 paragraph 6, the drone, i.e. unmanned aircraft, comprises a detection device that outputs detection results. See page 3 paragraph 2, the detection means comprises a GPS sensor, indicating the drone obtains its own location. See page 6 paragraph 11, the detection device comprises a camera. The detection device further detects the position of the person, i.e. the target. See page 6 paragraph 7, the soldier carries an output device which receives the detection results, which are necessarily related to the target, and displays them on the display device, i.e. the goggles. See page 4 paragraph 4, the display device shows the user the position of the relevant object, i.e. target. This means that the location information of the target was necessarily sent to the display device.).
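Claims 3 and 5 turn on deriving the target's location from the drone's own location (DS) together with the camera orientation (KA) and the camera-to-target distance (DE). A minimal sketch of that geometry, assuming a local east/north/up frame and hypothetical names; this is an illustration, not the application's or Ger's actual method:

```python
import math

def locate_target(drone_enu, cam_azimuth_deg, cam_elevation_deg, distance_m):
    """Place the target from the drone's location (DS), the camera
    orientation (KA), and the measured camera-to-target distance (DE).

    drone_enu: (east, north, up) in metres in a local frame; azimuth is
    measured clockwise from north, elevation above the horizon (negative
    when the camera looks down). Frames and names are assumptions.
    """
    az = math.radians(cam_azimuth_deg)
    el = math.radians(cam_elevation_deg)
    horiz = distance_m * math.cos(el)               # ground-plane range
    return (drone_enu[0] + horiz * math.sin(az),    # east
            drone_enu[1] + horiz * math.cos(az),    # north
            drone_enu[2] + distance_m * math.sin(el))  # up

# Drone at 120 m altitude looking due east, 30 deg below the horizon,
# ranging the target at 240 m; the result lands at ground level:
print(locate_target((0.0, 0.0, 120.0), 90.0, -30.0, 240.0))
```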
Claim 4 is rejected under 35 U.S.C. 103 as being obvious over Ger, Baer, Rogers, and Lin, in view of US 20230045581 A1, hereinafter “Verheylewegen”.

Regarding claim 4, Ger combined with Baer, Rogers, and Lin renders obvious the limitations of claim 3. Ger renders obvious wherein the unmanned aircraft is configured to obtain its location (DS) and transmit it together with the camera orientation and distance (KA, DE) as the location data (DS, KA, DE) to determine the location of the identified target based on the location data and to generate the determined location to be presented as the target object in the user interface (See page 7 paragraphs 5-10 and Fig. 2, the central server performs image and object recognition on camera data, indicating the camera data was sent to it from the drone, i.e. unmanned aircraft. Further, the information processing system is configured to make use of the central server for processing the image data. The server performs friend/enemy identification. The identified soldier is displayed in the field of view corresponding to its location in the AR headset of the soldier. This indicates that the server determined the position of the soldier from the provided camera data. This indicates further that the server was sent all data necessary to calculate the location.). Ger combined with Baer, Rogers, and Lin does not explicitly disclose to the goggles. Verheylewegen, in the same field of endeavor and solving a related problem, renders obvious to the virtual reality system comprising goggles (See [0058]-[0060] and Fig. 1, the control unit is a laptop connected to the headset, i.e. goggles. See [0060]-[0065], the control unit receives input data from the image sensors, i.e. cameras. See [0074]-[0075], the control unit determines potential targets using the image data using object recognition algorithms. Further, the data can come from drones, i.e. unmanned aircraft, indicating the camera data can be remotely sent to the control unit for processing. See [0031], the control unit computes an augmented view, i.e. augmented reality view, for the headset, i.e. goggles. See [0039], the system causes graphics in the augmented view to correspond to the coordinates of the potential target, indicating that the location of the target was identified.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detecting relevant objects from unmanned vehicle camera data and using the camera data to control the vehicle disclosed by Ger, Baer, and Rogers to include transferring the camera data to the virtual reality system for processing of Verheylewegen. One of ordinary skill in the art would have been motivated to make this modification in order to allow the system to perform analysis of camera data without requiring that the object sending the camera data perform the specific analysis itself, as suggested by Verheylewegen at [0075]. Rogers renders obvious to the goggles (See page 1 paragraph 2 and page 3 paragraphs 1-2, the headset does not require a PC for use, i.e. uses an internal processor for VR capabilities. This includes determining the location of objects to draw. It would be obvious to use a standalone VR headset in order to obtain better mobility and portability.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detecting relevant objects from unmanned vehicle camera data disclosed by Ger, Baer, and Verheylewegen to include the use of a standalone headset, i.e. one that comprises the computing means necessary to perform image analysis and rendering of the AR images, as suggested by Rogers. One of ordinary skill in the art would have been motivated to make this modification to improve affordability, ease of setup, and mobility for the user, as suggested by Rogers at page 1 paragraph 2.

Claim 8 is rejected under 35 U.S.C. 103 as being obvious over Ger, Baer, and Rogers in view of US 11003186 B1, hereinafter “Neal”.

Regarding claim 8, Ger combined with Baer and Rogers renders obvious the limitations of claim 1. Ger combined with Baer and Rogers does not explicitly disclose wherein the goggles are configured to transmit to the unmanned aircraft a location (OS) of the wearer of the goggles and the unmanned aircraft is configured to move autonomously based on predetermined flight control and the received location of the wearer. Neal, in the same field of endeavor and solving a related problem, renders obvious wherein the goggles are configured to transmit to the unmanned aircraft a location (OS) of the wearer of the goggles and the unmanned aircraft is configured to move autonomously based on predetermined flight control and the received location of the wearer (See column 1 paragraph 4, the user summons a drone, i.e. unmanned aircraft, to his or her location. See column 4 paragraph 2, the drone navigates to the summoning individual’s location using navigation programming on the drone, i.e. autonomously. Examiner asserts that summoning necessarily comprises a predetermined flight control, i.e. flying to the specified location. See paragraph 6, the summoning individual’s location is transmitted directly to the drone.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detection of objects and control of unmanned aircraft disclosed by Ger, Baer, and Rogers to include summoning to the location of the user of Neal. One of ordinary skill in the art would have been motivated to make this modification in order to allow for the drone to provide video and audio feeds or safety measures, as suggested by Neal at column 1 paragraph 4.

Claim 9 is rejected under 35 U.S.C. 103 as being obvious over Ger, Baer, and Rogers in view of US 20180203470 A1, hereinafter “Pattison”.

Regarding claim 9, Ger combined with Baer and Rogers renders obvious the limitations of claim 1. Ger combined with Baer and Rogers does not explicitly disclose wherein the goggles are configured to determine a distance (OE) of the wearer of the goggles from the target based on a location (OS) of the wearer of the goggles and the location of the target, and to present the determined distance of the wearer from the target in the user interface. Pattison, in the same field of endeavor and solving a related problem, renders obvious wherein the goggles are configured to determine a distance (OE) of the wearer of the goggles from the target based on a location of the wearer of the goggles and the location of the target, and to present the determined distance of the wearer from the target in the user interface (See [0092], the drone detects a moving object or animal, i.e. a target. The drone transmits the coordinates of the target to the user. The distance from the user to the object is displayed on the remote activation unit. Determining the distance of the object to the user necessarily uses the location of the user and the target. Drone video is transmitted to the user. Examiner asserts that the coordinates, i.e. location of the target, and video are transmitted to the user’s remote activation unit and not the user itself, since the user is a person, see Fig. 4 and [0064]. See [0061], the remote activation unit is a digital device such as a smart phone or controller used to control the drone.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detection of objects and control of unmanned aircraft disclosed by Ger, Baer, and Rogers to include determining and displaying the distance of the user to the detected object of Pattison. One of ordinary skill in the art would have been motivated to make this modification in order to assist the user in assessing the significance of any threat relating to the object, as suggested by Pattison at [0092].
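Claim 9's wearer-to-target distance (OE) is a straightforward function of the two locations. A sketch assuming GPS-style latitude/longitude inputs and the spherical-Earth haversine formula; the formula choice and names are assumptions, not anything disclosed by Pattison or the application:

```python
import math

def wearer_target_distance_m(wearer_lat, wearer_lon, target_lat, target_lon):
    """Great-circle distance (OE) between the wearer's location (OS) and
    the target location, suitable for display in the user interface.
    Haversine on a spherical Earth; an illustrative sketch only."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(wearer_lat), math.radians(target_lat)
    dp = p2 - p1
    dl = math.radians(target_lon - wearer_lon)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Roughly 111 m per 0.001 degree of latitude:
print(round(wearer_target_distance_m(60.1699, 24.9384, 60.1709, 24.9384)))
```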
Claim 10 is rejected under 35 U.S.C. 103 as being obvious over Ger, Baer, and Rogers in view of US 20180039271 A1, hereinafter “Rimoux”, and US 20130253733 A1, hereinafter “Lee”.

Regarding claim 10, Ger combined with Baer and Rogers renders obvious the limitations of claim 1. Ger combined with Baer and Rogers does not explicitly disclose wherein the goggles or a mobile terminal comprised in the goggles are configured to receive a switch command (VK) for the control of the unmanned aircraft issued by the user using physical control in order to switch the control of the unmanned aircraft from autonomous control to eye control (SO) or motion control (LO), and to send the switch command to the unmanned aircraft. Rimoux, in the same field of endeavor and solving a related problem, discloses configured to receive a switch command (VK) for the control of the unmanned aircraft issued by the user using physical control in order to switch the control of the unmanned aircraft from autonomous control to manual control, and to send the switch command to the unmanned aircraft (Fig. 6 and [0119]-[0122], the drone can loiter by circling a central point under autonomous control. The drone passes to manual piloting mode, i.e. switches from autonomous to manual control, when the user issues a command from the controls. Any control when the drone is in loiter mode is therefore a switch command.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detection of objects and control of unmanned aircraft disclosed by Ger, Baer, and Rogers to include a mode for autonomous flight that switches to manual flight when receiving commands. One of ordinary skill in the art would have been motivated to make this modification to allow the drone to continue operating when the drone stops receiving inputs from the operator, as suggested by Rimoux at [0126]. Ger combined with Baer, Rogers, and Rimoux does not explicitly disclose eye control (SO) or motion control (LO). Lee, in the same field of endeavor and solving a related problem, discloses eye control (SO) or motion control (LO) (See [0013], the UAV control system generates flight control commands based on the gestures of a user controlling the UAV. This is motion control.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detection of objects and control of unmanned aircraft disclosed by Ger, Baer, Rogers, and Rimoux to include control of the drone by gesture as disclosed by Lee. One of ordinary skill in the art would have been motivated to make this modification to allow for more effective control of the drone, as suggested by Lee at [0004].

Claim 11 is rejected under 35 U.S.C. 103 as being obvious over Ger, Baer, Rogers, Rimoux, and Lee, in view of NPL document “Digital FPV”, hereinafter “GetFPV”.

Regarding claim 11, Ger combined with Baer, Rogers, Rimoux, and Lee renders obvious the limitations of claim 10. Ger combined with Baer, Rogers, Rimoux, and Lee does not explicitly disclose wherein, after receiving the switch command, the unmanned aircraft is configured to transmit analogue video (VA) to the goggles that are configured to display the received analogue video in the user interface. GetFPV renders obvious wherein, after receiving the switch command, the unmanned aircraft is configured to transmit analogue video (VA) to the goggles that are configured to display the received analogue video in the user interface (See page 2 paragraph 4, digital drone FPV video systems can provide much higher image quality. See page 5 paragraph 2 through page 6 paragraph 1, analog video systems provide lower latency during image transmission. See further page 7 paragraph 1, analog video tolerates image breakup better than digital video in piloting situations.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detection of objects and control of unmanned aircraft, including a switch between autonomous and manual control mode, disclosed by Ger, Baer, Rogers, Rimoux, and Lee to include switching between analog video for manual mode and digital video for autonomous control mode, as suggested by GetFPV. One of ordinary skill in the art would have been motivated to make this modification in order to allow the use of analog video, which has lower latency and tolerates breakup better and is therefore more suitable for piloting control, in manual mode, and digital video, which provides clearer images to the user, in autonomous mode, as suggested by GetFPV at page 2 paragraph 4, page 5 paragraph 2 through page 6 paragraph 1, and page 7 paragraph 1.
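Claims 10 and 11 together describe a switch command (VK) that takes the drone out of autonomous control and, per the GetFPV rationale, flips the downlink from digital video (clearer images) to analogue video (lower latency for hands-on piloting). A minimal state-machine sketch of that coupling, with all names hypothetical and the pairing of control mode to video format taken from the motivation stated above:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ControlMode(Enum):
    AUTONOMOUS = auto()
    EYE = auto()       # eye control (SO)
    MOTION = auto()    # motion control (LO)

class VideoLink(Enum):
    DIGITAL = auto()   # higher image quality while autonomous
    ANALOG = auto()    # lower latency for hands-on piloting

@dataclass
class DroneLink:
    mode: ControlMode = ControlMode.AUTONOMOUS
    video: VideoLink = VideoLink.DIGITAL

    def switch_command(self, target: ControlMode) -> None:
        """Handle a switch command (VK) sent from the goggles: leaving
        autonomous control also drops to low-latency analog video."""
        if target is ControlMode.AUTONOMOUS:
            self.mode, self.video = ControlMode.AUTONOMOUS, VideoLink.DIGITAL
        else:
            self.mode, self.video = target, VideoLink.ANALOG

link = DroneLink()
link.switch_command(ControlMode.MOTION)
print(link.mode.name, link.video.name)  # MOTION ANALOG
```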
Claims 12 and 13 are rejected under 35 U.S.C. 103 as being obvious over Ger, Baer, and Rogers in view of Verheylewegen.

Regarding claim 12, Ger combined with Baer and Rogers renders obvious the limitations of claim 1. Verheylewegen, in the same field of endeavor and solving a related problem, discloses wherein a mobile terminal is configured to relay at least one of the following communications between the goggles and the unmanned aircraft: a location (OS) of the wearer transmitted by the goggles, the location data transmitted by the unmanned aircraft, video transmitted by the unmanned aircraft, and a switch command (VK) indicating changes of the transmission format of the video transmitted by the goggles and control mode (See [0060], the control unit is a laptop, which is a mobile terminal. See [0025], the image sensors record sequences of images. A sequence of images is a video. See [0026]-[0032], the control unit takes the sequence of images as input and computes an augmented view based on the images on a headset, i.e. goggles. Examiner asserts that this is relaying the video transmitted from the camera to the goggles. See [0075], the control unit can obtain the data from outside information sources, such as drones, i.e. unmanned aircraft.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detection of objects and control of unmanned aircraft disclosed by Ger, Baer, and Rogers to include relaying the video data through a mobile terminal of Verheylewegen. One of ordinary skill in the art would have been motivated to make this modification in order to allow the mobile terminal to process the video data before displaying it to the user, as suggested by Verheylewegen at [0031].

Regarding claim 13, Ger discloses Goggles for an unmanned aircraft (See page 6 paragraph 12, the display may be an augmented reality display which displays information directly in the field of vision of the soldier. Examiner asserts that displaying augmented reality information directly in the soldier’s field of view is only possible with some sort of head-mounted device with displays directly in front of the soldier’s eyes, i.e. goggles.), comprising a data transceiver configured to receive location data (DS, KA, DE) relating to a location (MS) of the target transmitted by an unmanned aircraft equipped with a camera and a measuring unit (See page 6 paragraph 6, the drone, i.e. unmanned aircraft, comprises a detection device that outputs detection results. See page 3 paragraph 2, the detection means comprises a GPS sensor. See page 6 paragraph 11, the detection device comprises a camera. The detection device further detects the position of the person, i.e. the target. The subsystem responsible for determining the position of the target is therefore a measuring unit. See page 6 paragraph 7, the soldier carries an output device which receives the detection results, which are necessarily related to the target, and displays them on the display device, i.e. the goggles. See page 4 paragraph 4, the display device shows the user the position of the relevant object, i.e. target. This means that the location information of the target was necessarily sent to the display device. Receiving the data indicates that the headset receives data. See page 6 paragraph 7, the output device is the processing system driving the display device. See page 7 paragraph 3, the output device and the drone directly communicate with each other. This indicates transmission and reception of data. Examiner asserts that the communication device, which is part of the augmented reality goggle apparatus, therefore comprises a transceiver.), and a processor configured to form an augmented reality user interface (LK) by means of at least one goggle lens of the goggles (See page 4 paragraph 4, the display device can be an augmented reality display that superimposes the detection results on the user’s field of view. This inherently takes place by at least one lens of the device, i.e. goggles. Computing the augmented reality display inherently requires implementation on a processor.), and wherein the goggles comprise an orientation detector, whereupon the augmented reality user interface is configured to determine a location of an augmented reality target object, which indicates the location of the target, on a user interface view of the goggles and to present to a wearer of the goggles, by means of the orientation detector, the location of the target as the augmented reality target object in the augmented reality user interface based on the target location data received from the unmanned aircraft and an orientation (SA) of the goggles as detected by the orientation detector so that the wearer of the goggles is able to see the target regardless of whether there is a direct line of sight to the target's location (See page 4 paragraph 4, the display device can be an augmented reality display that superimposes the detection results on the user’s field of view, i.e. goggles. The display device determines whether the user is looking in the direction of the relevant object, i.e. target, in order to display it. This indicates that the display device comprises an orientation sensor and displays the target based on the orientation of the device, indicating the display device comprises an orientation detector. This also indicates that the display device was necessarily sent the location of the target and displays the target based on this data. See Fig. 1 and page 6 paragraphs 6-12, the device shows the wearer of the goggles a soldier crouching behind a wall based on detection information from the drone, i.e. so the wearer can see the target regardless of the direct line of sight.).

Ger does not explicitly disclose Goggles for acquiring a target or for controlling the unmanned aircraft. Baer, in the same field of endeavor and solving a related problem, discloses for controlling the unmanned aircraft (See [0022], the headset displays images and videos and comprises the controller. See Abstract, the controller controls the surveyor. See [0031], the surveyor is an unmanned aerial vehicle, i.e. an unmanned aircraft.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detecting relevant objects and targets from unmanned vehicle camera data disclosed by Ger and Rogers to include using the display interfaces to control the vehicle of Baer. One of ordinary skill in the art would have been motivated to make this modification in order to allow the user to maneuver the drone to the area of interest for data collection, as suggested by Baer at [0020]. Ger combined with Baer does not explicitly disclose Goggles for acquiring a target. Verheylewegen, in the same field of endeavor and solving a related problem, discloses goggles for acquiring a target (See [0024]-[0033], the system comprises a headset, i.e. goggles, that takes eye and head movement from the operator as input data. The headset is driven by the control unit. The control unit further uses the eye data of the operator to determine a target selected by the operator. Firing parameters are then passed to a weapon controller. This is use of goggles for acquiring a target. See [0075], the control unit can interact with and obtain data from drones, i.e. unmanned aircraft.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for detecting relevant objects and acquiring targets from unmanned vehicle camera data disclosed by Ger, Baer, and Rogers to include using the goggles to acquire targets as suggested by Verheylewegen. One of ordinary skill in the art would have been motivated to make this modification in order to allow the mobile terminal to process the video data before displaying it to the user, as suggested by Verheylewegen at [0031].
In addition, the amendments include additional features that are more than a judicial exception. For at least this reason, the rejection should be withdrawn. Similarly, claim 13 recites "...wherein presenting, by the augmented reality user interface determines a location of an augmented reality target object, which indicates the location of the target, on a user interface view of the goggles and presents, by means of an orientation detector of the goggles, to a wearer of the goggles, by means of an orientation detector, the location of the target as thean augmented reality target object (MB) in the augmented reality user interface based on the target location data received from the unmanned aircraft and an orientation (SA) of the goggles as detected by the orientation detector so that the wearer of the goggles is able to see the target regardless of whether there is a direct line of sight to the target's location." The combination of these amendments is clearly directed to more than an abstract idea as they integrate the idea into a practical application. In addition, the amendments include additional features that are more than a judicial exception. For at least this reason, the rejection should be withdrawn. Claims dependent upon claims 1 and 13, respectively, also overcome the 35 U.S.C. 101 rejection since they include the features of claims 1 and 13, respectively.” As to (A), Examiner does not find the argument persuasive. Display of an object on a display device, e.g. goggles, is considered well-understood, routine, and conventional activity under TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48 (Gathering and analyzing information using conventional techniques and displaying the result). Determining the location of the object amounts to a computer implementation of a mental process. A human given target location data and data indicative of a subject’s field of view could estimate whether the target object should be visible and where in the field of view the target object should be visible. (B) Applicant argues “35 U.S.C. 103 rejections Claims 1, 13, and 14 have been rejection under 35 U.S.C. 103 in view of Ger (EP 3376152) and Baer (US 2017/0277180), and Verheylewegen (US 2023/0045581). Applicants traverse the rejection. Ger does not teach a target acquisition system that comprises goggles to form an AR user interface by means of at least one goggle lens for controlling the unmanned aircraft as claimed in amended claims 1, 13, and 14. Ger does not disclose that display device 106 is goggles, which form an AR user interface used for controlling drone 101 by means of its lens. Display device 106 can be a head-up or AR display according to par. 52. The head-up and AR displays however are not examples of goggles, which use a goggle lens for forming an AR interface, as a skilled person in the art very well knows. Ger also lacks how the goggles determine a location of an AR target object, which indicates the target's location, on a user interface view of the goggles and present to a wearer of the goggles the target's location as the determined AR target object in the user interface view of the AR user interface on the basis of the target's location data received from the unmanned aircraft and an orientation of the goggles detected by an orientation detector in the goggles so that the target is possible to see regardless of whether the wearer has a direct line of sight to the target's location as claimed in amended claims 1, 13, and 14. 
(B) Applicant argues: "35 U.S.C. 103 rejections. Claims 1, 13, and 14 have been rejected under 35 U.S.C. 103 in view of Ger (EP 3376152), Baer (US 2017/0277180), and Verheylewegen (US 2023/0045581). Applicants traverse the rejection.

Ger does not teach a target acquisition system that comprises goggles to form an AR user interface by means of at least one goggle lens for controlling the unmanned aircraft as claimed in amended claims 1, 13, and 14. Ger does not disclose that display device 106 is goggles, which form an AR user interface used for controlling drone 101 by means of its lens. Display device 106 can be a head-up or AR display according to par. 52. The head-up and AR displays, however, are not examples of goggles, which use a goggle lens for forming an AR interface, as a skilled person in the art very well knows. Ger also lacks how the goggles determine a location of an AR target object, which indicates the target's location, on a user interface view of the goggles and present to a wearer of the goggles the target's location as the determined AR target object in the user interface view of the AR user interface on the basis of the target's location data received from the unmanned aircraft and an orientation of the goggles detected by an orientation detector in the goggles so that the target is possible to see regardless of whether the wearer has a direct line of sight to the target's location, as claimed in amended claims 1, 13, and 14.

In addition, Ger also does not disclose how display device 106 determines a location of an AR object, which indicates a location of person 121, on some display view and how display device 106 presents to soldier 104 the location of person 121 as an AR object in said display view through display device 106 on the basis of some location data from drone 101 and some orientation data from display device 106 so that soldier 104 can see person 121 through display device 106 even if soldier 104 lacks a direct line of sight behind wall 120. Detection device 102, e.g., camera, radar, or IR sensor, can detect person 121 and recognize, e.g., a person's orientation, speed, or armament by means of recognition algorithms according to par. 50, 51. Display device 106 can display detection results 103 graphically and visualize a picture of person 121 according to par. 53. Drone 101 and display device 106 can detect their surroundings so that some unknown device can create a 3D model of the detected environment according to par. 54. For at least the foregoing reasons, the rejection of claims 1 and 13 should be withdrawn.

Baer (US 2017/0277180) also lacks at least the same features as Ger. Baer does not disclose that headset display 161 is goggles, which form an AR user interface used for controlling unmanned surveyor 101 by means of its lens. Controller 151 with display 161 controls surveyor 101 according to par. 22. Display 161 is not an example of goggles, which use a goggle lens for forming an AR interface, but a display that is attached on a head, as a skilled person very well knows. Baer also does not disclose how display 161 determines a location of an AR object, which indicates a target's location, on some display view and how display 161 presents to a user the target's location as an AR object in said display view through display 161 on the basis of position data from surveyor 101 and orientation data from display 161 so that the user can see the target through display 161 even if the user lacks a direct line of sight to the target's location. Surveyor 101 can transmit an image or video captured by camera 131 as well as position and orientation data of surveyor 101 and a target via link 155. Controller 151 can display the image or video and render it into 3D for displaying in an AR environment on display 161 according to par. Display 161 thus displays only a view from camera 131, not a user's view. For at least the foregoing reasons, Baer, as well as the other references, do not cure the deficiencies of Ger, so the rejection of claims 1, 13, and 14 should be withdrawn.

Since Ger does not disclose all features of the subject matter of amended claims 1, 13, and 14, and Baer and the other references do not cure the deficiencies of Ger, claims 1, 13, and 14 are not obvious in view of the cited references. The above-discussed missing features provide target acquisition system 100, which enables the target's safe detection and destruction without a direct line of sight. The cited prior art is silent on these missing features, as explained above. The cited prior art thus does not disclose any hint or motivation that could lead a skilled person, who knows Ger's solution and attempts to improve safety of the target's detection and destruction in view of Baer's teachings and the other cited art, to modify said solution so that the resulting modification would comprise all claimed features, especially the discussed missing features.
For at least these reasons, the rejection of claims 1, 13, and 14 should be withdrawn.

Claims 2-12 and 18. Applicants traverse the §103 rejections in the Office Action. Applicants respectfully submit that pending dependent claims 2-12 and 18 include every feature of independent claims 1 and 13 and that the cited references fail to teach, disclose, or suggest at least the features of claims 1 and 13. Thus, pending dependent claims 2-12 and 18 are also allowable over the prior art of record. In re Fine, 5 USPQ2d 1596, 1600 (Fed. Cir. 1988)."

As to (B), Examiner does not find the argument persuasive. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Ger discloses so that the wearer of the goggles is able to see the target regardless of whether there is a direct line of sight to the target's location (See Fig. 1 and page 6, paragraphs 6-12, the device shows the wearer of the goggles a soldier crouching behind a wall based on detection information from the drone, i.e. so the wearer can see the target regardless of the direct line of sight.).

Applicant's remaining arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20220130015 A1, which relates to processing video streams for presentation to a user with an AR headset.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AUSTIN ROBERT CHENNAULT, whose telephone number is (571) 272-4606. The examiner can normally be reached Monday - Friday, 9:00am - 5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hitesh Patel, can be reached at (571) 270-5442. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN ROBERT CHENNAULT/
Examiner, Art Unit 3667

/Hitesh Patel/
Supervisory Patent Examiner, Art Unit 3667

2/20/26

Prosecution Timeline

Nov 15, 2023
Application Filed
Aug 13, 2025
Non-Final Rejection — §101, §103
Nov 14, 2025
Response Filed
Feb 20, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12576752
VEHICLE SEAT CONTROL APPARATUS AND METHOD THEREOF
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+100.0%)
2y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
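The headline figures above combine in a straightforward way. The sketch below shows one plausible reading in Python; the exact formula and the 99% cap are assumptions about how the tool derives its numbers, not anything documented on this page.

# One plausible reading of how the projection figures combine.
# The formula and the 99% cap are assumptions, not documented by the tool.
granted, resolved = 2, 4
base = granted / resolved                 # 0.50 -> "50% Grant Probability"
interview_lift = 1.00                     # "+100.0% Interview Lift"
with_interview = min(base * (1 + interview_lift), 0.99)  # -> "99% With Interview"
print(f"base {base:.0%}, with interview {with_interview:.0%}")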
