Prosecution Insights
Last updated: April 19, 2026
Application No. 18/246,483

DEVICE AND SYSTEM FOR AUTONOMOUS VEHICLE CONTROL

Final Rejection — §103, §DP
Filed: Mar 23, 2023
Examiner: LEE, HANA
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Academy Of Robotics
OA Round: 2 (Final)
Grant Probability: 60% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 60% — grants 60% of resolved cases (84 granted / 141 resolved; +7.6% vs TC avg)
Interview Lift: +36.6% — strong lift, comparing resolved cases with an interview to those without
Typical Timeline: 3y 0m average prosecution; 36 applications currently pending
Career History: 177 total applications across all art units
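
For readers who want to reproduce the headline figures, here is a minimal sketch of how a career allow rate and an interview lift could be computed from resolved-case records. The `ResolvedCase` record shape and the sample interview flags are hypothetical; only the 84/141 grant split comes from the data above.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:          # hypothetical record shape
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow rate for cases with an interview minus those without."""
    with_iv = [c for c in cases if c.had_interview]
    without = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without)

# 84 grants out of 141 resolved cases -> 59.6%, displayed as 60%.
cases = [ResolvedCase(granted=i < 84, had_interview=bool(i % 3))
         for i in range(141)]
print(f"career allow rate: {allow_rate(cases):.1%}")
```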

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 141 resolved cases
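
The "vs TC avg" deltas let you back out the estimated Tech Center baselines. A quick check, assuming each delta is a simple percentage-point difference (examiner rate minus TC average):

```python
# Back out the implied Tech Center baselines from the statute table,
# assuming "vs TC avg" means examiner rate minus TC average in points.
examiner = {"§101": 12.6, "§103": 48.8, "§102": 14.2, "§112": 22.1}
delta    = {"§101": -27.4, "§103": 8.8, "§102": -25.8, "§112": -17.9}

for statute, rate in examiner.items():
    tc_avg = rate - delta[statute]  # implied Tech Center baseline
    print(f"{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

Run as written, every statute implies the same ~40.0% baseline, which suggests the tool compares against a single TC-wide estimate rather than per-statute averages.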

Office Action

§103, §DP
DETAILED ACTION

The amendments filed 9/15/2025 have been entered. Claims 1, 6-7, and 9-10 have been amended. Claims 1-10 remain pending in the application and are discussed on the merits below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 9/25/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant's arguments filed 9/15/2025 have been fully considered but they are not persuasive.

Applicant asserts "Hotson does not disclose a neural network with specific feedback outputs for actively selecting pixels and color channels from the input image" on pages 8-9 of Applicant's Remarks. However, as outlined below, Hotson discloses a plurality of recurrent connections and that a pixel point may be received for a current sensor frame, which reads on the "selecting pixels" and "iteratively" limitations of Applicant's claim 1. The color channels are not relied upon in Hotson and are taught by Toyama.

Applicant further asserts "introduction of feedback outputs for actively selecting pixels and color channels via Applicant's device is to provide an adaptive approach, and one that is not contemplated by Hotson" on page 9 of Applicant's Remarks. In response to Applicant's argument, the fact that the inventor has recognized another advantage which would flow naturally from following the suggestion of the prior art cannot be the basis for patentability when the differences would otherwise be obvious. See Ex parte Obiaya, 227 USPQ 58, 60 (Bd. Pat. App. & Inter. 1985). The claims do not explicitly recite an "adaptive approach," and as currently claimed, the combination of Hotson and Toyama reads on Applicant's claims as outlined below.

Applicant further asserts "the skilled artisan would be neither drawn nor motivated to look to Toyama to further enhance such selection via further feedback output" on page 9 of Applicant's Remarks. In response to Applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, as stated in the previous Office Action dated 3/13/2025, one of ordinary skill in the art would have been motivated to modify the classification neural network using pixel values of images as disclosed by Hotson by adding the color-based feedback taught by Toyama to allow a model to account for color when locating target objects.

Response to Amendment

Regarding the objection to the specification, Applicant has submitted an abstract to overcome the objection. The objection to the specification has been withdrawn. Regarding the objection to the claims, Applicant has amended the claims to overcome the objection.
The objection to the claims has been withdrawn. Regarding the rejections under 35 USC §112, Applicant has amended the claims to overcome the rejections. The rejections under 35 USC §112 have been withdrawn. Regarding the rejections under 35 USC §103, the amendments made to the claims fail to overcome the rejections. The rejections under 35 USC §103 are maintained as outlined below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 3-10 are rejected under 35 U.S.C. 103 as being unpatentable over Hotson et al. (U.S. Patent Application Publication No. 2018/0211128 A1; hereinafter Hotson) in view of Toyama (U.S. Patent Application Publication No. 2005/0008193 A1).

Regarding claim 1, Hotson discloses: A computer device comprising a memory and a processor (computing device 800 includes processors and memory devices, see at least [0058]), the computer device configured to be fitted to a vehicle (computing device 800 can function as vehicle control system 100, see at least [0057]) and to communicate with a camera or sensor (vehicle control system includes camera systems 110, radar systems 106, and lidar systems 108, see at least [0025]), the processor being configured to: pre-process an image received from the camera or sensor data from the sensor to produce an input image (use output from previous image frame as input to current image frame, see at least [0017]; processing performed to transform point cloud into depth map which can be fed into convolutional network, see at least [0019]-[0020]); present the input image to a neural network stored in the memory of the computer device (use output from previous image frame as input to current image frame, see at least [0017]; processing performed to transform point cloud into depth map which can be fed into convolutional network, see at least [0019]-[0020]); wherein the neural network is trained to classify a feature in an image presented to it (classification or feature detection using deep neural network, see at least [0031]), the neural network having an input layer (input node, see at least [0031]), a hidden layer (one or more hidden layers, see at least [0031]) and an output layer (output node, see at least [0031]), the output layer including three outputs: a first feedback output for iteratively selecting pixels from the input image to input at the input layer at each iteration of the neural network (output nodes 210 may be fed back through delays 212 to one or more input nodes, see at least [0035];
output or information determined from previous set of pixel/frame data, see at least [0037]); a third output for outputting an output value indicative of a classification result from the neural network (output indicating presence of pedestrian may be fed into input node, see at least [0035]); the processor further configured to obtain the output value from the neural network (at the end of computation, output nodes 210 yield values that correspond to the class inferred by the neural network, see at least [0031]-[0032]); and post-process the output value from the neural network to identify a feature of the environment of the vehicle (outputs of neural network may be fed forward so that features detected at a specific location in an image may be fed forward and objects or features in a series of images may be consistently detected and/or tracked, see at least [0040]).

Hotson does not explicitly disclose: a second feedback output for iteratively selecting a color channel of the selected pixels to input at the input layer at each iteration.

However, Toyama teaches: a second feedback output for iteratively selecting a color channel of the selected pixels to input at the input layer at each iteration (learned color based object model may be iteratively fed back into learning function to replace initial preliminary object model, see at least [0028]).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the object detection neural network disclosed by Hotson by adding the color based model taught by Toyama with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification in order "to allow the learning function to learn an increasingly accurate color-based model" (see [0028]).

Regarding claim 3, the combination of Hotson and Toyama teaches the elements above, and Hotson further discloses: the computer device is further configured to communicate with a control computer of the vehicle (vehicle control system 100 may include vehicle control actuators to control driving of the vehicle, see at least [0026]).

Regarding claim 4, the combination of Hotson and Toyama teaches the elements above, and Hotson further discloses: the neural network stored in the memory is configured to perform one or more specific tasks including image classification, object detection (system 100 used to automatically detect, classify, and localize objects, see at least [0024]) and road segmentation.

Regarding claim 5, the combination of Hotson and Toyama teaches the elements above, and Hotson further discloses: the memory includes multiple neural networks (plurality of neural networks may be used, see at least [0041]), the processor being configured to present the input image to the multiple neural networks and to further process the output value of each of the multiple neural networks to identify the feature of the environment of the vehicle (a plurality of different recurrent neural networks may be used to generate each feature map and a plurality of different feature maps may be generated for the single image, each neural network trained to detect an object such as pedestrian and vehicle, see at least [0041]).
Regarding claim 6, the combination of Hotson and Toyama teaches the elements above, and Hotson further discloses: the computer device is configured to perform pre-processing, post-processing and presenting to the neural network locally at the computer device, such that connection to an external network outside of the vehicle is not necessary to identify the feature of the environment (vehicle control system 100 may be used to automatically detect, classify, and localize objects; driving/assistance system 102 may use neural network to detect and localize objects based on perception data gathered by one or more sensors, see at least [0024]).

Regarding claim 7, the combination of Hotson and Toyama teaches the elements above, and Hotson further discloses: A vehicle control system for fitting in or on a vehicle (vehicle control system 100, see at least [0024]), the system comprising: a sensor or camera (vehicle control system includes camera systems 110, radar systems 106, and lidar systems 108, see at least [0025]); a control computer (automated driving/assistance system 102, see at least [0024] and Fig. 1); and the computer device of claim 1 (computing device 800 includes processors and memory devices, see at least [0058]; computing device 800 can function as vehicle control system 100, see at least [0057]); wherein the computer device is configured to receive sensor data or an original image from the sensor or camera, and send information related to the feature of the environment of the vehicle to the control computer (sensor systems may be used to obtain real-time sensor data so that automated driving/assistance system 102 can drive a vehicle in real-time, see at least [0028]); the control computer being configured to control one or more components of the vehicle based on the information received from the computer device (automated driving/assistance system 102 may be used to automate control operation of a vehicle based on perception data, see at least [0024] and [0028]).

Regarding claim 8, the combination of Hotson and Toyama teaches the elements above, and Hotson further discloses: the control computer is configured to autonomously control the vehicle (automated driving/assistance system 102 may be used to control operation of vehicle, see at least [0024] and [0012]).
Regarding claim 9, the combination of Hotson and Toyama teaches the elements above, and Hotson further teaches: a plurality of computer devices according to claim 1 (one or more data processing devices, see at least [0089]), each of the plurality of computer devices being configured to perform a different specific task (a plurality of different recurrent neural networks may be used to generate each feature map, "a feature map for pedestrian detection may be generated using a neural network trained for pedestrian detection while a feature map for vehicle detection may be generated using a neural network trained for vehicle detection", see at least [0041]); comprising a memory and a processor (computing device 800 includes processors and memory devices, see at least [0058]); and being configured to be fitted to a vehicle (computing device 800 can function as vehicle control system 100, see at least [0057]) and to communicate with a camera or sensor (vehicle control system includes camera systems 110, radar systems 106, and lidar systems 108, see at least [0025]), the processor being configured to: pre-process an image received from the camera or sensor data from the sensor to produce an input image (use output from previous image frame as input to current image frame, see at least [0017]; processing performed to transform point cloud into depth map which can be fed into convolutional network, see at least [0019]-[0020]); and present the input image to a neural network stored in the memory of the computer device (use output from previous image frame as input to current image frame, see at least [0017]; processing performed to transform point cloud into depth map which can be fed into convolutional network, see at least [0019]-[0020]), wherein the neural network is trained to classify a feature in an image presented thereto (classification or feature detection using deep neural network, see at least [0031]), the neural network having an input layer, a hidden layer and an output layer (input node, one or more hidden layers, and output node, see at least [0031]), the output layer including three outputs: a first feedback output for selecting pixels from the input image to input at the input layer at each iteration of the neural network (output nodes 210 may be fed back through delays 212 to one or more input nodes, see at least [0035]; output or information determined from previous set of pixel/frame data, see at least [0037]); a third output for outputting an output value indicative of a classification result from the neural network (output indicating presence of pedestrian may be fed into input node, see at least [0035]); obtain the output value from the neural network (at the end of computation, output nodes 210 yield values that correspond to the class inferred by the neural network, see at least [0031]-[0032]); and post-process the output value from the neural network to identify a feature of the environment of the vehicle (outputs of neural network may be fed forward so that features detected at a specific location in an image may be fed forward and objects or features in a series of images may be consistently detected and/or tracked, see at least [0040]).

Hotson discloses "the disclosure is not to be limited to the specific forms or arrangements of parts so described and illustrated" (in [0090]-[0091]).
Therefore, one of ordinary skill in the art would be able to modify the computing device disclosed by Hotson to include multiple processing devices each with a memory and neural network as taught above.

Hotson does not explicitly disclose: a second feedback output for selecting a color channel of the selected pixels to input at the input layer at each iteration.

However, Toyama teaches: a second feedback output for selecting a color channel of the selected pixels to input at the input layer at each iteration (learned color based object model may be iteratively fed back into learning function to replace initial preliminary object model, see at least [0028]).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the object detection neural network disclosed by Hotson by adding the color based model taught by Toyama with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification in order "to allow the learning function to learn an increasingly accurate color-based model" (see [0028]).

Regarding claim 10, the combination of Hotson and Toyama teaches the elements above, and Hotson further discloses: A vehicle comprising the control system of claim 7 (control operation of a vehicle, see at least [0024]).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Hotson in view of Toyama as applied to claim 1 above, and further in view of Jiang et al. (U.S. Patent Application Publication No. 2022/0165045 A1; hereinafter Jiang).

Regarding claim 2, the combination of Hotson and Toyama teaches the elements above but does not teach: the computer device is a system on a chip (SoC).

However, Jiang teaches: the computer device is a system on a chip (SoC) (hardware structure of a chip, chip includes neural processing unit, algorithms may be implemented in the chip, see at least [0143]).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the object detection neural network disclosed by Hotson and the color based model taught by Toyama by adding the chip taught by Jiang with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for easy implementation of a plurality of functions (see [0345]).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claim 1 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of copending Application No. 18/246,476 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the invention as claimed in application 18/246,476 encompasses the same limitations and elements as the instant application. Claims 2-10 are dependent on claim 1 and inherit the deficiency above. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Instant Application, claim 1:

A computer device comprising a memory and a processor, the computer device configured to be fitted to a vehicle and to communicate with a camera or sensor, the processor being configured to: pre-process an image received from the camera or sensor data from the sensor to produce an input image; present the input image to a neural network stored in the memory of the computer device; wherein the neural network is trained to classify a feature in an image presented to it, the neural network having an input layer, a hidden layer and an output layer, the output layer including three outputs: a first feedback output for iteratively selecting pixels from the input image to input at the input layer at each iteration of the neural network; a second feedback output for iteratively selecting a color channel of the selected pixels to input at the input layer at each iteration; and a third output for outputting an output value indicative of a classification result from the neural network; the processor further configured to obtain the output value from the neural network; and post-process the output value from the neural network to identify a feature of the environment of the vehicle.

Application No. 18/246,476, claim 1:

A computer-implemented method for use in a vehicle for identifying a feature of the environment of the vehicle, the method comprising: receiving an original image from a sensor or camera; pre-processing the original image to produce an input image; presenting the input image to a neural network; wherein the neural network is trained to classify a feature in an image presented to it, the neural network having an input layer, a hidden layer and an output layer, the output layer including three outputs: a first feedback output for selecting pixels from the input image to input at the input layer at each iteration of the neural network; a second feedback output for selecting a colour channel of the selected pixels to input at the input layer at each iteration; and a third output for outputting an output value indicative of a classification result from the neural network; obtaining the output value from the neural network; and post-processing the output value from the neural network to identify a feature of the environment of a vehicle.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HANA LEE whose telephone number is (571) 272-5277. The examiner can normally be reached Monday-Friday, 7:30 AM-4:30 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jelani Smith, can be reached at (571) 270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/H.L./ Examiner, Art Unit 3662
/DALE W HILGENDORF/ Primary Examiner, Art Unit 3662
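
To make the disputed limitation concrete, here is a minimal sketch of the three-output architecture recited in claim 1: a recurrent loop whose first feedback output selects pixels, whose second selects a color channel, and whose third emits the classification value. This illustrates the claim language only, not Applicant's, Hotson's, or Toyama's actual implementation; the layer sizes, weights, selection rules, and numpy realization are all assumptions.

```python
import numpy as np

# Illustrative sketch of the three-output network recited in claim 1.
# Layer sizes, weights, and selection rules are hypothetical.
rng = np.random.default_rng(0)

H, W, C, K, HIDDEN = 32, 32, 3, 16, 64    # image dims, channels, pixels kept, hidden units
W_in = rng.normal(size=(K, HIDDEN))        # input layer -> hidden layer
W_pix = rng.normal(size=(HIDDEN, H * W))   # first feedback output: scores every pixel
W_chan = rng.normal(size=(HIDDEN, C))      # second feedback output: scores each channel
W_cls = rng.normal(size=(HIDDEN, 1))       # third output: classification value

def iterate(image, pixel_idx, channel, steps=5):
    """Each iteration, the two feedback outputs choose which pixels and
    which color channel are presented at the input layer next time."""
    for _ in range(steps):
        x = image[:, :, channel].reshape(-1)[pixel_idx]  # selected pixels, one channel
        h = np.tanh(x @ W_in)                            # hidden layer
        pixel_idx = np.argsort(h @ W_pix)[-K:]           # first feedback: top-K pixels
        channel = int(np.argmax(h @ W_chan))             # second feedback: next channel
        score = (h @ W_cls).item()                       # third output: class score
    return score, pixel_idx, channel

score, _, _ = iterate(rng.random((H, W, C)), pixel_idx=np.arange(K), channel=0)
print(f"classification output after 5 iterations: {score:.3f}")
```

The examiner's position, in these terms, is that Hotson's recurrent feedback supplies the pixel-selection loop while Toyama supplies the color-channel feedback.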
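The after-final reply timing quoted in the Conclusion is mechanical enough to compute. A sketch using the Nov 26, 2025 mailing date from the timeline below; the reply and advisory-action dates are hypothetical, and real USPTO date math (weekends, federal holidays) is ignored here.

```python
# Sketch of the after-final reply rule from the Conclusion, assuming
# simple calendar-month arithmetic via dateutil.
from datetime import date
from dateutil.relativedelta import relativedelta

def reply_period(mailed: date, first_reply: date, advisory: date):
    ssp = mailed + relativedelta(months=3)             # shortened statutory period
    statutory_max = mailed + relativedelta(months=6)   # absolute six-month cap
    # If the first reply came within two months and the advisory action
    # issued after the SSP ended, extension fees run from the advisory date.
    if first_reply <= mailed + relativedelta(months=2) and advisory > ssp:
        ssp = advisory
    return min(ssp, statutory_max), statutory_max

# Mailed Nov 26, 2025 (per the timeline); reply/advisory dates hypothetical.
ssp, cap = reply_period(date(2025, 11, 26), date(2026, 1, 20), date(2026, 3, 5))
print(f"extension fees run from {ssp}; absolute deadline {cap}")
```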

Prosecution Timeline

Mar 23, 2023
Application Filed
Mar 06, 2025
Non-Final Rejection — §103, §DP
Sep 15, 2025
Response Filed
Nov 26, 2025
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this examiner with similar technology

Patent 12534067 — SYSTEM AND METHOD FOR VEHICLE NAVIGATION
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12509078 — VEHICLE CONTROL DEVICE
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12485990 — DRIVER ASSISTANCE SYSTEM
Granted Dec 02, 2025 (2y 5m to grant)

Patent 12453305 — MOBILE ROBOT SYSTEM AND BOUNDARY INFORMATION GENERATION METHOD FOR MOBILE ROBOT SYSTEM
Granted Oct 28, 2025 (2y 5m to grant)

Patent 12442161 — WORK MACHINE
Granted Oct 14, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 60%
With Interview: 96% (+36.6%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 141 resolved cases by this examiner. Grant probability derived from career allow rate.
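
The projection arithmetic is simple enough to verify by hand: the baseline is the examiner's career allow rate, and the interview figure adds the +36.6-point lift. A sketch, assuming the lift is additive in percentage points (the page does not state how the tool combines them internally):

```python
# Reproducing the projection figures, assuming the interview lift is an
# additive percentage-point adjustment (an assumption, not documented).
granted, resolved = 84, 141
interview_lift = 36.6  # percentage points, from the examiner stats above

base = 100 * granted / resolved            # 59.6 -> displayed as 60%
with_interview = min(base + interview_lift, 100.0)

print(f"grant probability: {base:.0f}%")             # 60%
print(f"with interview:    {with_interview:.0f}%")   # 96%
```

Both printed values match the dashboard's 60% and 96%, which supports the additive reading.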
