Prosecution Insights
Last updated: April 19, 2026
Application No. 18/221,814

SYSTEMS AND METHODS FOR AUTONOMOUS HORN ACTIVATION AND KIDNAPPING DETECTION

Non-Final OA: §101, §103, §112, §DP
Filed
Jul 13, 2023
Examiner
ORANGE, DAVID BENJAMIN
Art Unit
2663
Tech Center
2600 — Communications
Assignee
Torc Robotics, Inc.
OA Round
1 (Non-Final)
Grant Probability: 34% (At Risk)
OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 63%

Examiner Intelligence

Career Allow Rate: 34% (51 granted / 151 resolved), -28.2% vs TC avg
Interview Lift: +29.4% among resolved cases with interview (strong)
Avg Prosecution (typical timeline): 3y 7m, with 51 applications currently pending
Total Applications (career history): 202 across all art units
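
These card values hang together as simple ratios. Below is a minimal Python sketch of the arithmetic, assuming the panel's counts (51 granted, 151 resolved, 51 pending, 202 total) are the inputs and that the "vs TC avg" figure is a percentage-point delta:

    # Sketch reproducing the Examiner Intelligence figures above.
    # Assumes displayed percentages are simple ratios of the career counts,
    # rounded for display, and that deltas are percentage points.
    granted, resolved = 51, 151
    pending, total = 51, 202

    career_allow_rate = granted / resolved         # 0.3377... -> shown as 34%
    assert resolved + pending == total             # 151 resolved + 51 pending = 202 filed

    tc_avg_allow_rate = career_allow_rate + 0.282  # back out the "-28.2% vs TC avg" delta
    print(f"allow rate {career_allow_rate:.1%}, implied TC avg ~{tc_avg_allow_rate:.1%}")

The check that resolved plus pending equals the career total is only a consistency test of the displayed numbers; the page does not state that relationship explicitly.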

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§103: 29.0% (-11.0% vs TC avg)
§102: 20.2% (-19.8% vs TC avg)
§112: 32.0% (-8.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 151 resolved cases.
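
Treating each row as this examiner's allowance rate for cases that drew that rejection type (an assumption; the panel does not define the metric) and each "vs TC avg" figure as a percentage-point delta, the implied Tech Center baselines can be backed out:

    # Back out the implied Tech Center averages from the statute-specific rows.
    # Assumes each "vs TC avg" figure is a percentage-point delta.
    examiner_rate = {"101": 0.131, "103": 0.290, "102": 0.202, "112": 0.320}
    delta_vs_tc   = {"101": -0.269, "103": -0.110, "102": -0.198, "112": -0.080}

    for statute, rate in examiner_rate.items():
        tc_avg = rate - delta_vs_tc[statute]   # e.g., 13.1% - (-26.9%) = 40.0%
        print(f"§{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")

On this reading every implied baseline lands at 40.0%, which suggests the chart compares against a single estimated Tech Center figure rather than per-statute averages.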

Office Action

§101 §103 §112 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election with traverse of species I in the reply filed on August 19, 2025 is acknowledged. The traversal is on the ground(s) that there is not a search burden. This is persuasive because the art that the examiner identified discloses identifying gestures generically (i.e., it equally teaches both the kidnapping and horn honking species). Therefore, the restriction requirement as set forth in the Office action mailed on July 11, 2025 is hereby withdrawn.

In view of the withdrawal of the restriction requirement as to the rejoined inventions, applicant(s) are advised that if any claim presented in a divisional application is anticipated by, or includes all the limitations of, a claim that is allowable in the present application, such claim may be subject to provisional statutory and/or nonstatutory double patenting rejections over the claims of the instant application. Once the restriction requirement is withdrawn, the provisions of 35 U.S.C. 121 are no longer applicable. See In re Ziegler, 443 F.2d 1211, 1215, 170 USPQ 129, 131-32 (CCPA 1971). See also MPEP § 804.01.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

This is a provisional nonstatutory double patenting rejection. Claims 1-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of U.S. Pat. App. No. 18/221,825 in view of the prior art as applied below. Both the pending claims and the conflicting claims are directed to autonomous vehicles detecting gestures. Further, any differences between the present claims and the conflicting claims are obvious in view of the prior art as applied below. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the below prior art with the conflicting claims for implementation details (especially as those claims lack implementation details). Based on the findings herein, this is an example of "(A) Combining prior art elements according to known methods to yield predictable results." MPEP 2143.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.

Step 1: Claim 1 (and its dependents) recites a vehicle, and machines are eligible subject matter. Claim 11 (and its dependents) recites a method, and processes are eligible subject matter.

Step 2A, prong one: All of the elements of claims 1-20 are a mental process because a person can look around and determine that someone else is waving their arms in a certain way. Further, the various models are also mental processes; see example 47, claim 2, element (d) (from the July 2024 AI subject matter eligibility examples). MPEP 2106.04(a)(2)(III)(C) explains that use of a generic computer or a computer environment is still a mental process. In particular, this section begins by citing Gottschalk v. Benson, 409 US 63 (1972): "The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer's shift register was an abstract idea." In Benson the Supreme Court did not separately analyze the computer hardware at issue; the specifics of what hardware was claimed are included only in an appendix to the decision.

Because there are no additional elements, no further analysis is required for Step 2A, prong two or Step 2B.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1 and 11 recite executing "a machine learning model using the sequence of images as input to detect a human inside a second vehicle depicted within the sequence of images." However, the specification does not reasonably convey possession of this feature. In particular, specification [0064] simply states "The machine learning model may be a passenger detection machine learning model." This is not a known type of AI model, nor has Applicant submitted an IDS demonstrating that this is a known type of AI model. Further, the only guidance that the specification provides regarding architecture is a passing reference to a convolutional neural network ([0064] and [0087]), and the statement regarding which data is used for training provides no guidance on the amount of training, where the data was sourced, how it was prepared, or other details that would have been determined in the process of reducing this to practice. The details provided in specification [0065] and [0066] do not demonstrate reduction to practice because they do not evidence lessons learned through experimentation. For example, [0066] discloses using back propagation for training, but this is too generic and widely used to show actual or constructive reduction to practice. In contrast, Ramaraj, Nitish, Girish Murugan, and Rajeshkannan Regunathan, "Neural network-powered conductorless ticketing for public transportation," 2024 4th International Conference on Pervasive Computing and Social Networking (ICPCSN), IEEE, 2024 (attached) details a process of identifying people in vehicles and provides examples of the sorts of architectural and training details that arise during reduction to practice. The examiner notes that identifying passengers for ticketing appears to be an easier problem than what is currently claimed due to the lack of vehicle motion and unobstructed views of people.
Claims 1 and 11 also recite determining, "based on the detection of the human inside the second vehicle within the sequence of images, the human is depicted performing a defined arm gesture within the sequence of images." This raises the same issue as the previous claim element. The lack of reduction to practice is evidenced in the difference between claims 2 and 4, where claim 2 recites that the machine learning model used to detect the person is also used to indicate the arm gesture, whereas claim 4 specifies that a second model is instead used. The specification does not resolve which of these two was intended (or propose how both approaches are feasible).

The lack of possession for detecting the defined arm gesture is shown in specification [0053]'s "Such training images can be captured, for example, when simulating situations in which a human is being kidnapped." The examiner believes that a large amount of training data is needed to distinguish video of a child raising and lowering their arm to request a horn honk from a person waving their hand to signify that they are being kidnapped (according to the specification, but the specification does not address why this would not be detected by the kidnapper). There is no discussion of the results of the training efforts or specific technical techniques that were employed. This recitation is also an example of unlimited functional claiming. MPEP 2173.05(g). Dependent claims are likewise rejected.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 1 and 11 recite "images captured by the sensor as the autonomous vehicle was moving." First, it is not clear if this is intended to be a method step (i.e., the vehicle must move and capture images) or if this should instead be interpreted as a product-by-process claim. MPEP 2113. However, if this is interpreted as a product-by-process claim, it is not clear what structure is implied (i.e., is the idea that the picture is a little blurry due to the vehicle's movement?). Claims 9 and 16 recite corresponding language for the second sequence of images and are similarly rejected.

Claims 1 and 11 recite "defined arm gesture," but this is subjective. MPEP 2173.05(b)(IV). In other words, the claim does not specify who or what defines the gesture.

Claims 3 and 14 recite determining a gesture based on identifying an output indicating the gesture. This is unclear because it would appear that the output itself is the claimed determination (in other words, the claim language is circular).

Claims 5 and 15 recite a "defined pattern," but this is subjective. MPEP 2173.05(b)(IV).
Claims 7 and 18 recite "capture images in a 360 degree rotation," but it is not clear whether this means 360 degrees of images or that the camera itself must rotate (as opposed to a series of cameras pointing in different directions). Dependent claims are likewise rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US20160167648A1 ("James") in view of US20210380099A1 ("Lee").

1. (Original) An autonomous vehicle, comprising:
a horn; (James, Fig. 5, speaker 152. To the extent that James [0082] is not a "horn," a horn is an obvious substitute. MPEP 2144.06(II).)
a sensor configured to capture images; and (James, Fig. 1, camera system 127)
one or more processors configured to: (James, Fig. 1, processor 110)
receive a sequence of images from the sensor, the sequence of images captured by the sensor as the autonomous vehicle was moving; (James, Fig. 1, camera system 127)
execute a (James, abstract, "As another example, the external environment of the autonomous vehicle can be detected to identify a person (e.g. … a human driver or occupant of another vehicle")
determine, based on the detection of the human inside the second vehicle within the sequence of images, the human is depicted performing a defined arm gesture within the sequence of images; and (James, Fig. 6. See also [0116], "Alternatively or in addition, such a determination can be made based on a verbal gesture and/or a non-verbal human gesture received from a person in the external environment.")
activate the horn responsive to the determination that the human is depicted performing the defined arm gesture within the sequence of images. (James, Fig. 6. See also [0064], "The external communication system 145 includes a visual communication system 146 and/or an audial communication system 150.")
However, James is not relied on for: the model is a machine learning model.
(Lee, [0052] "With respect to actors or obstacles in the environment, the size of an actor or obstacles may be determined using one or more sensors and sensor data therefrom related to vehicle 800, and/or one or more machine learning models (e.g., convolutional neural networks).")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Lee to the teachings of James such that Lee's machine learning models are used and Lee's truck and sensors are used for James' vehicle for the purpose of providing implementation details on James' detections. Based on the above, this is an example of "combining prior art elements according to known methods to yield predictable results." MPEP 2143.

2. (Original) The autonomous vehicle of claim 1, wherein the one or more processors are configured to execute the machine learning model using the sequence of images as input by: executing the machine learning model using the sequence of images as input to output an indication that the human is depicted performing the defined arm gesture within the sequence of images. (James, [0116] "For instance, the person 310 may say "go ahead" or "proceed" and/or make such a hand gesture.")

3. (Original) The autonomous vehicle of claim 2, wherein the one or more processors are configured to determine the human is depicted performing the defined arm gesture within the sequence of images by identifying the output indication that the human is depicted performing the defined arm gesture within the sequence of images. (James, [0116] "For instance, the person 310 may say "go ahead" or "proceed" and/or make such a hand gesture.")

4. (Original) The autonomous vehicle of claim 1, wherein the one or more processors are configured to determine the human is depicted performing the defined arm gesture within the sequence of images by: responsive to the detection of the human inside the second vehicle depicted within the sequence of images, execute a second machine learning model using the sequence of images as input to output an indication that the human is depicted performing the defined arm gesture within the sequence of images. (Lee, [0122] "The DLA may be used to run any type of network to enhance control and driving safety, including for example, a neural network that outputs a measure of confidence for each object detection. … The neural network may take as its input at least some subset of parameters, such as bounding box dimensions, ground plane estimate obtained (e.g. from another subsystem), inertial measurement unit (IMU) sensor 866 output that correlates with the vehicle 800 orientation, distance, 3D location estimates of the object obtained from the neural network and/or other sensors (e.g., LIDAR sensor(s) 864 or RADAR sensor(s) 860), among others.")

5. (Original) The autonomous vehicle of claim 1, wherein the defined arm gesture comprises the human moving an arm up and down in a defined pattern. (James, [0116] "For instance, the person 310 may say "go ahead" or "proceed" and/or make such a hand gesture.")

6. (Original) The autonomous vehicle of claim 1, wherein the autonomous vehicle comprises: a tractor; and (Lee, abstract, "tractor trailer truck") a trailer pulled by the tractor, (Lee, abstract, "tractor trailer truck") wherein the sensor is mounted to a top surface of the tractor. (Lee, Fig. 8B, surround cameras 874. Lee's surround cameras teach the claimed "top" because the cameras are mounted on top to get a 360 degree view.
Lee, [0081]. See also Lee, [0026], "vehicle 800—such as a tractor and/or trailer of a tractor trailer truck")

7. (Original) The autonomous vehicle of claim 6, wherein the sensor is configured to capture images in a 360 degree rotation. (James, [0048] "For instance, the cameras 128 can be rotatable about one or more axes, pivotable, slidable and/or extendable, just to name a few possibilities.")

8. (Original) The autonomous vehicle of claim 1, wherein the autonomous vehicle comprises: a tractor; (Lee, abstract, "tractor trailer truck") a trailer pulled by the tractor, (Lee, abstract, "tractor trailer truck") wherein the sensor is mounted to the tractor at a first location; (Lee, Fig. 8B, surround cameras 874. See also Lee, [0026], "vehicle 800—such as a tractor and/or trailer of a tractor trailer truck") and a second sensor mounted to the tractor at a second location. (Lee, Fig. 8B, stereo camera 868. See also Lee, [0026], "vehicle 800—such as a tractor and/or trailer of a tractor trailer truck")

Claim 9 is rejected as per claim 8. Additionally, James [0018] teaches the claimed "second": "Alternatively or in addition, such interaction can include the autonomous vehicle determining one or more future driving maneuvers based on, at least in part, one or more non-verbal human gestures detected in the external environment."

10. (Original) The autonomous vehicle of claim 1, wherein the one or more processors are further configured to transmitting an indication to activate the horn to a controller of the autonomous vehicle. (James, Fig. 1)

Claims 11-20 are rejected as per claims 1-10.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US10528837B1 – "Training of vehicles to improve autonomous capabilities" – teaches honking to warn pedestrians.
US10272839B2 – "Rear seat occupant monitoring system for vehicle."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE whose telephone number is (571) 270-1799. The examiner can normally be reached Mon-Fri, 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID ORANGE/
Primary Examiner, Art Unit 2663

Prosecution Timeline

Jul 13, 2023
Application Filed
Oct 07, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567126
INFRASTRUCTURE-SUPPORTED PERCEPTION SYSTEM FOR CONNECTED VEHICLE APPLICATIONS
2y 5m to grant Granted Mar 03, 2026
Patent 11300964
METHOD AND SYSTEM FOR UPDATING OCCUPANCY MAP FOR A ROBOTIC SYSTEM
2y 5m to grant Granted Apr 12, 2022
Patent 10816794
METHOD FOR DESIGNING ILLUMINATION SYSTEM WITH FREEFORM SURFACE
2y 5m to grant Granted Oct 27, 2020
Patent 10433126
METHOD AND APPARATUS FOR SUPPORTING PUBLIC TRANSPORTATION BY USING V2X SERVICES IN A WIRELESS ACCESS SYSTEM
2y 5m to grant Granted Oct 01, 2019
Patent 10285010
ADAPTIVE TRIGGERING OF RTT RANGING FOR ENHANCED POSITION ACCURACY
2y 5m to grant Granted May 07, 2019
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 34%
With Interview: 63% (+29.4%)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 151 resolved cases by this examiner. Grant probability derived from career allow rate.
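
A minimal sketch of how the projection figures relate, assuming the grant probability is taken directly from the career allow rate and the interview scenario simply adds the +29.4% lift as percentage points (the footnote states the derivation only at that level):

    # Sketch of the Prosecution Projections panel.
    # Assumes grant probability = career allow rate, and that the interview
    # scenario adds the +29.4% lift as percentage points.
    career_allow_rate = 51 / 151          # ~33.8%, displayed as 34%
    interview_lift = 0.294

    grant_probability = career_allow_rate
    with_interview = grant_probability + interview_lift   # ~63.2%, displayed as 63%
    print(f"baseline {grant_probability:.0%}, with interview {with_interview:.0%}")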
