Prosecution Insights
Last updated: April 19, 2026
Application No. 18/895,081

METHOD AND SYSTEM FOR PRODUCING AN ENVIRONMENTAL AWARENESS FOR ALERTING AN OPERATOR OF A VEHICLE

Non-Final OA: §103, §112, §DP
Filed
Sep 24, 2024
Examiner
NGUYEN, TAI T
Art Unit
2685
Tech Center
2600 — Communications
Assignee
Roadio Inc.
OA Round
1 (Non-Final)
84%
Grant Probability
Favorable
1-2
OA Rounds
2y 2m
To Grant
99%
With Interview

Examiner Intelligence

Grants 84% — above average
84%
Career Allow Rate
919 granted / 1087 resolved
+22.5% vs TC avg
+17.4%
Interview Lift
Strong: allowance rate among resolved cases with an interview vs. without
Typical timeline
2y 2m
Avg Prosecution
27 currently pending
Career history
1114
Total Applications
across all art units
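The headline figures in this panel can be reproduced from the raw counts shown above; a minimal sketch (assuming the displayed 84% is the allow rate truncated to a whole percent):

```python
# Examiner career statistics, taken from the figures shown above.
granted = 919
resolved = 1087
pending = 27
total_applications = 1114

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 84.5%, displayed as 84%

# Sanity check: resolved cases plus currently pending filings
# should account for the examiner's total applications.
assert resolved + pending == total_applications
```

The counts are internally consistent: 1087 resolved + 27 pending = 1114 total applications.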

Statute-Specific Performance

§101
2.8%
-37.2% vs TC avg
§103
27.1%
-12.9% vs TC avg
§102
26.5%
-13.5% vs TC avg
§112
28.5%
-11.5% vs TC avg
"vs TC avg" deltas are measured against a Tech Center average estimate • Based on career data from 1087 resolved cases

Office Action

§103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on December 04, 2024 is being considered by the examiner.

Specification

The disclosure is objected to because of the following informalities: Applicant is required to insert Patent No. 12,128,920 into the specification. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 11-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 11 recites the limitation "the set of images" in lines 7-8. There is insufficient antecedent basis for this limitation in the claim.

Claim 15 recites the limitation "the user alert" in line 1. There is insufficient antecedent basis for this limitation in the claim.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,128,920. Although the claims at issue are not identical, they are not patentably distinct from each other because they all disclose the same subject matter. The corresponding claims that contain the same subject matter are mapped below:

Patent No. 12,128,920, claim 1:
A method for providing risk-based alerts to an operator of a 2-wheeled vehicle during a trip, the method comprising: at each of a set of times during the trip, collecting a sensor dataset, the sensor dataset comprising a set of images from a set of monocular cameras; for each of the set of times: producing a set of bounding boxes representing a set of objects in the set of images, each of the set of bounding boxes associated with a height; assigning an object type classification to each of the set of bounding boxes; determining a depth metric for each object of the set of objects based on: a set of intrinsic parameters associated with the set of monocular cameras; a height of the associated bounding box; and the object type classification of the associated bounding box; for each object, producing a predicted trajectory based on the depth metric determined for the object; for each object, calculating a future time and a future distance with which the object will pass the 2-wheeled vehicle based on the predicted trajectory; in an event that, for at least one object of the set, the future time is below a time threshold and the future distance is below a distance threshold, triggering an alert to the operator.

Application No. 18/895,081, claim 1:
A method comprising, during a trip of a 2-wheeled vehicle: receiving a sensor dataset comprising a set of images from a set of monocular camera mounted to the 2-wheeled vehicle; producing a set of bounding boxes representing a set of objects in the set of images, wherein each of the set of bounding box is associated with a height and an object type classification; and determining a depth metric for each object of the set of objects based on: the height of the bounding box associated with the object; and the object type classification associated with the bounding box.

Patent No. 12,128,920, claim 7:
The method of claim 1, wherein the depth metric for each object is determined based on information consisting essentially of: the set of intrinsic parameters associated with the set of monocular cameras; the height of the associated bounding box; and the object type classification of the associated bounding box.

Application No. 18/895,081, claim 2:
The method of claim 1, wherein the depth metric is further determined based on a set of intrinsic parameters associated with the monocular camera.

Patent No. 12,128,920, claim 13:
A system for providing risk-based alerts to an operator of a 2-wheeled vehicle during a trip, the system comprising: a set of sensors coupled to the 2-wheeled vehicle, the set of sensors comprising a set of monocular cameras; a processing subsystem in communication with the set of sensors, configured to: at each of a set of times during the trip, collect a sensor dataset from the set of sensors, the sensor dataset comprising a set of images from the set of monocular cameras; for each of the set of times: produce a set of bounding boxes representing a set of objects in the set of images, each of the set of bounding boxes associated with a height; assign an object type classification to each of the set of bounding boxes; determine a depth metric for each object of the set of objects based on: a set of intrinsic parameters associated with the set of monocular cameras; a height of the associated bounding box; and the object type classification of the associated bounding box; for each object, produce a predicted trajectory based on the depth metric determined for the object; for each object, calculate a future time and a future distance with which the object will pass the 2-wheeled vehicle based on the predicted trajectory; in an event that, for at least one object of the set, the future time is below a time threshold and the future distance is below a distance threshold, trigger an alert to the operator at a user interface.

Application No. 18/895,081, claim 11:
A system comprising, during a trip of a 2-wheeled vehicle: a set of sensors, comprising a monocular camera, mounted to the 2-wheeled vehicle; and a processing subsystem mounted to the 2-wheeled vehicle and in communication with the set of sensors, the processing subsystem configured to: receive a sensor dataset from the monocular camera; determine a set of bounding boxes representing a set of objects in the set of images, wherein each bounding box has a height and is associated with an object type classification; and determine a depth metric for each object of the set of objects based on: a height of the associated bounding box; and the object type classification of the associated bounding box.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 11-13, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nezhadara et al. (US 2020/0082560) in view of Oestterling et al. (US 2022/0222475).

As per claim 11, Nezhadara et al. disclose a system (figure 2), during a trip of a vehicle (100, paragraph 0022), comprising: a set of sensors (sensor system, 110), comprising a camera (116, figure 2), mounted to the vehicle (paragraph 0028); and a processing subsystem (data analysis system, 120) mounted to the vehicle and in communication with the set of sensors (paragraphs 0029-0031), the processing subsystem configured to: receive a sensor dataset from the monocular camera (paragraph 0030); determine a set of bounding boxes representing a set of objects (see abstract), wherein each bounding box has a height and is associated with an object type classification; and determine a depth metric for each object of the set of objects (figures 1-2) based on: a height of the associated bounding box (bounding box estimator, 124, paragraphs 0032-0033 and 0040); and the object type classification (object classification module, 126) of the associated bounding box (paragraph 0039).

Nezhadara et al. disclose the instant claimed invention except for the system being used in a two-wheeled vehicle. Oestterling et al. disclose a motorcycle (100, figure 1) including a forward-looking camera (210) and a rearward-looking camera (212), the cameras (210, 212) being monocular cameras (paragraph 0033). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to utilize the monocular camera (202) mounted to the bicycle (100) as taught by Oestterling et al. in a system as disclosed by Nezhadara et al. for the purpose of capturing environmental images surrounding the bicycle during the trip in order to provide a safe riding trip.

As per claim 12, Nezhadara et al. disclose the processing subsystem being further configured to determine the depth metric based on a set of intrinsic parameters (distinct "3D point clouds") associated with the monocular camera (paragraphs 0032-0033 and 0042).

As per claim 13, Nezhadara et al. further disclose the processing subsystem configured to determine a predicted trajectory based on the depth metric determined for an object of the set of objects (paragraph 0034).

As per claim 18, Nezhadara et al. disclose that the set of bounding boxes is detected using a neural network, wherein the depth metric is not determined using a neural network (paragraphs 0014-0015 and 0033).

As per claims 19-20, Nezhadara et al. disclose the instant claimed invention except for the set of sensors further comprising a rear-facing monocular camera, wherein the processing system is further configured to determine a second set of bounding boxes for each of a second set of objects, each associated with a height and an object type classification, from a second sensor dataset from the rear-facing monocular camera, and the processing system is further configured to determine a depth metric for each of the second set of objects based on the height and the object type classification associated with the respective bounding box. Oestterling et al. disclose a motorcycle (100, figure 1) including a forward-looking camera (210) and a rearward-looking camera (212, paragraph 0033). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to utilize the rearward-looking camera as taught by Oestterling et al. in a system as disclosed by Nezhadara et al. for the purpose of ensuring that all surrounding vehicles are monitored in order to alert the rider to potential objects around him/her.

As per claims 1-5 and 10, method claims 1-5 and 10 are essentially the same in scope as system claims 11-13 and 18-20 above and are rejected similarly.

Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Nezhadara et al., as modified, as applied to claim 11 above, and further in view of Oboril et al. (US 2021/0009121).
As per claim 14, Nezhadara et al., as modified, disclose the instant claimed invention except for the processing system being further configured to: determine a risk of collision between the object and the 2-wheeled vehicle based on the predicted trajectory; and trigger a user alert when the risk of collision exceeds a threshold. Oboril et al. disclose a predictive risk-aware driving system (figures 1-3) for a vehicle (100) comprising a vehicle control system (200) configured to: obtain data for one or more objects in a vehicle environment, wherein the data comprises motion data associated with at least one of the one or more objects; determine a predicted motion trajectory for at least one of the one or more objects; determine whether a risk indicating a risk of a collision between the vehicle and the at least one or more objects exceeds a predefined risk threshold; and generate an indication that the risk exceeds the predefined risk threshold, to be considered in determining the safety driving state for the vehicle (see abstract). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to utilize the system as taught by Oboril et al. in a system as disclosed by Nezhadara et al., as modified, for the purpose of alerting the driver to surrounding objects in order to prevent collision between the host vehicle and surrounding objects.

As per claim 15, Oboril et al. disclose the user alert being determined based on the object type classification (paragraph 0079) and the risk of collision (see abstract).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAI T. NGUYEN, whose telephone number is (571) 272-2961. The examiner can normally be reached Mon-Fri, 9am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Quan-Zhen Wang, can be reached at 571-272-3114. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAI T NGUYEN/
Primary Examiner, Art Unit 2685
December 11, 2025
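Both the patent and application claims mapped above turn on determining a depth metric from a monocular camera's intrinsic parameters, the bounding-box height, and the object type classification. One common way such a step is realized (a sketch under the usual pinhole-camera assumption with per-class height priors; the priors and function names below are illustrative, not taken from the application) is:

```python
# Monocular depth from bounding-box height: under the pinhole model,
# an object of real-world height H at depth Z projects to roughly
# h = f_y * H / Z pixels, so Z ≈ f_y * H / h.  The real height H comes
# from a prior keyed on the object type classification.
# (Hypothetical height priors, for illustration only.)
HEIGHT_PRIORS_M = {
    "pedestrian": 1.7,
    "car": 1.5,
    "truck": 3.0,
    "cyclist": 1.8,
}

def depth_metric(bbox_height_px: float, object_class: str,
                 focal_length_y_px: float) -> float:
    """Estimate depth (meters) from the bounding-box height, a
    class-based height prior, and the camera's vertical focal
    length (an intrinsic parameter)."""
    real_height_m = HEIGHT_PRIORS_M[object_class]
    return focal_length_y_px * real_height_m / bbox_height_px

# A car whose bounding box is 100 px tall, seen by a camera with
# f_y = 1000 px, is estimated at 15 m.
print(depth_metric(100.0, "car", 1000.0))  # 15.0
```

This is why the examiner's §103 combination leans on a bounding-box estimator plus an object classification module: given both, the depth computation itself is a closed-form projection, not a learned step (consistent with claim 18's "the depth metric is not determined using a neural network").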

Prosecution Timeline

Sep 24, 2024
Application Filed
Dec 11, 2025
Non-Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602978
FIREARM MONITORING SYSTEM
2y 5m to grant Granted Apr 14, 2026
Patent 12602971
VISUAL SENSOR OF A TACTICAL GEAR TO USE FACIAL RECOGNITION TECHNOLOGY TO IDENTIFY A PERSON OF INTEREST AND TO CAUSE A RESPONSIVE DEVICE ON THE TACTICAL GEAR TO NOTIFY A WEARER
2y 5m to grant Granted Apr 14, 2026
Patent 12602979
INFORMATION PROCESSING DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12597335
WEARABLE DEVICE FOR EMERGENCY EVENT EVACUATION AND RESCUE
2y 5m to grant Granted Apr 07, 2026
Patent 12593195
SYSTEMS AND METHODS FOR DETECTING ILLEGITIMATE LOCATION DATA FOR A MONITORED INDIVIDUAL
2y 5m to grant Granted Mar 31, 2026
Review these grants to see what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+17.4%)
2y 2m
Median Time to Grant
Low
PTA Risk
Based on 1087 resolved cases by this examiner. Grant probability derived from career allow rate.
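The "With Interview" figure appears to add the +17.4 percentage-point lift to the base grant probability and cap the displayed result; a sketch, assuming a 99% display cap (the capping rule is an inference from the numbers shown, not stated on the page):

```python
# Projection inputs shown above (all values in percent).
base_grant_probability = 84.0  # career allow rate
interview_lift = 17.4          # percentage-point lift from an interview
DISPLAY_CAP = 99.0             # assumed cap on the displayed probability

# 84.0 + 17.4 = 101.4, which exceeds 100%, so the page caps it.
with_interview = min(base_grant_probability + interview_lift, DISPLAY_CAP)
print(f"Grant probability with interview: {with_interview:.0f}%")  # 99%
```

Treated as a rough heuristic rather than a true probability, this matches the 99% "With Interview" tile.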
