Prosecution Insights
Last updated: April 19, 2026
Application No. 18/967,933

SYSTEM AND METHOD TO IMPROVE INTERACTIVE GUIDANCE OF HEAVY VEHICLES IN URBAN AREAS

Non-Final OA (§102, §112)
Filed: Dec 04, 2024
Examiner: DO, TRUC M
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Niosense Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 12m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 82% (544 granted / 660 resolved), +30.4% vs TC avg (above average)
Interview Lift: +7.2% (moderate), measured across resolved cases with interview
Typical Timeline: 2y 12m average prosecution; 37 applications currently pending
Career History: 697 total applications across all art units

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 22.9% (-17.1% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 660 resolved cases.
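These per-examiner figures are simple ratios, so the headline rate can be recomputed from the raw counts shown above. A minimal sanity-check sketch (the Tech Center average is back-solved from the "+30.4% vs TC avg" delta, so treat that value as an inference, not published data):

```python
# Career allow rate from the raw counts shown on the dashboard.
granted, resolved = 544, 660
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # → Career allow rate: 82.4%

# The "+30.4% vs TC avg" delta implies a Tech Center average of about:
tc_avg = allow_rate - 0.304
print(f"Implied TC average: {tc_avg:.1%}")  # → Implied TC average: 52.0%
```

The displayed 82% is the same ratio rounded down to a whole percentage.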

Office Action

Rejections under §102 and §112.
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This is a non-final Office Action on the merits in response to communications filed by Applicant on December 04, 2024. Claims 1-20 are currently pending and examined below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. The term “heavy vehicle” in claim 1 is a relative term which renders the claim indefinite. The term “heavy” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Lewis et al., US 2019/0325746 (“Lewis”).

Regarding claim(s) 1, 11: Lewis discloses a computer-implemented method of using an artificial intelligence (AI) model to detect road constraints for heavy vehicles comprising:

a) training, by one or more processors, the AI model based on labeled constraint data and a selected training algorithm to generate a trained AI model (para. 3, para. 35-36: FIG. 2 illustrates an example implementation of a system 200 in accordance with certain aspects of the present disclosure. As illustrated in FIG. 2, the system 200 may have one or more local processing units 202 that may perform various operations of methods described herein. Each local processing unit 202 may include a local state memory 204 and a local parameter memory 206 that may store parameters of a neural network.);

b) detecting one or more road constraints in road and constraints information related to a defined geographical area comprising an origin and a destination using the trained AI model (para. 106-110: FIG. 10 is a call flow diagram illustrating a method 1000 for operating at least one neural network 1002. The at least one neural network 1002 may include one or more DNNs and/or other machine-learning models. The at least one neural network 1002 may be an aspect of one or more components 902, 904, 906, 908, 910, 912, 914, and/or 920 described with respect to the example system architecture 900 of FIG. 9.); and

c) determining one or more routes between the origin and destination based at least in part on the detected one or more road constraints, and optimized with respect to one or more optimization parameters (para. 120-125: At operation 1104, the at least one neural network may obtain first image data through a first camera that is oriented toward the route. In an aspect, the first image data may depict a first scene associated with the route. For example, the at least one neural network may request first image data from a first camera that is oriented away from a vehicle and toward the route (e.g., road). The at least one neural network may obtain the first image data (e.g., based on the request), and the at least one neural network may store the first image data.).

Regarding claim(s) 2, 12: Lewis discloses wherein the AI model comprises an Artificial Neural Network (ANN), and wherein the selected training algorithm includes a backpropagation algorithm and a gradient descent algorithm ([0042] A deep convolutional network (DCN) may be a network of convolutional network(s), configured with additional pooling and normalization layers. DCNs have achieved state-of-the-art performance on many tasks. DCNs can be trained using supervised learning in which both the input and output targets are known for many exemplars and are used to modify the weights of the network by use of gradient descent methods.).

Regarding claim(s) 3, 13:
Lewis discloses wherein the labeled constraint data comprises GPS data, and wherein the AI model comprises, at least in part, a gradient boosting classifier model trained to detect the one or more road constraints based on patterns in speed, acceleration, and/or bearing changes in the GPS data ([0066] The example system architecture 900 may include and/or may be communicatively coupled with a navigation planner 920. The navigation planner 920 may be configured to generate a set of navigational instructions 700. The navigation planner 920 may include a global navigation satellite system (GNSS)-based navigation system, such as a program at least partially implemented as software and including and/or communicatively coupled with a GNSS-positioning component. An example of a GNSS-based navigation system may include a global-position system (GPS) navigation system that includes a GPS-positioning component.).

Regarding claim(s) 4, 14: Lewis discloses said labeled constraint data comprises GPS data, and wherein the ANN is a Long Short-Term Memory (LSTM) neural network trained to detect the one or more road constraints in the GPS data ([0035] FIG. 2 illustrates an example implementation of a system 200 in accordance with certain aspects of the present disclosure. As illustrated in FIG. 2, the system 200 may have one or more local processing units 202 that may perform various operations of methods described herein. Each local processing unit 202 may include a local state memory 204 and a local parameter memory 206 that may store parameters of a neural network. In addition, the local processing unit 202 may have a local (neuron) model program (LMP) memory 208 for storing a local model program, a local learning program (LLP) memory 210 for storing a local learning program, and a local connection memory 212. Furthermore, as illustrated in FIG. 2, each local processing unit 202 may interface with a configuration processor unit 214 for providing configurations for local memories of the local processing unit, and with a routing connection processing unit 216 that provides routing between the local processing units 202.).

Regarding claim(s) 5, 15: Lewis discloses wherein the labeled constraint data comprises satellite image data, and wherein the ANN is configured to implement a You Only Look Once (YOLO) object detection model trained to detect the one or more road constraints via one or more visual patterns in the satellite image data associated with heavy vehicle-specific road constraints ([0076] In various aspects, the object detector 902 may include or may be communicatively coupled with a DNN and/or other machine-learning model. For example, the object detector 902 may include one or more reinforcement-learning models, a CNN, an RNN, or another object-detection system. In one aspect, the object detector 902 may include one or more neural networks that implement a Single Shot Multibox Detector (SSD) and/or You Only Look Once (YOLO) for object detection (e.g., real-time object detection). In one aspect, object detector 902 may identify each object of the set of objects 804, 806, 808, 812 disposed along the route by processing the first image data 930 using a DNN or other model/neural network.).

Regarding claim(s) 6, 16: Lewis discloses wherein the labeled constraint data is divided into hard constraint data, and soft constraints data comprising weighted segments constraints ([0097] In various aspects, the salient objects selection component 910 may adjust one or more weights associated with one or more neurons of the associated DNN and/or other machine-learning model. By adjusting the weights, the salient objects selection component 910 may influence the saliency of objects 804, 806, 808, 812 detected in the first scene represented in the first image data 930.
In an aspect, the salient objects selection component 910 may adjust the one or more weights based on the field-of-view information 946 indicated in the saliency information 940. In an aspect, the salient objects selection component 910 may adjust the one or more weights based on the profile information 942 indicated in the saliency information 940.).

Regarding claim(s) 7, 17: Lewis discloses further comprising the steps of:

d) translating, by the one or more processors, said route into a plurality of waypoints; and

e) communicating, by the one or more processors, the plurality of waypoints, via a network, to a turn-by-turn application installed on a user device of a driver ([0059] FIG. 7 illustrates an example of a set of navigational instructions 700. The set of navigational instructions 700 may be provided by a navigation system. For example, a navigation system may allow a user to input a destination (e.g., a street address of a destination, a specific location, a general location, etc.). The navigational system may calculate a route between a beginning point (e.g., the user's current location) and a destination. The navigational system may separate the route into discrete steps, as shown by each navigational instruction 702a-e. Each navigational instruction 702a-e may indicate a distance to a next navigational point (e.g., turn, bearing, heading, etc.).).

Regarding claim(s) 8, 18: Lewis discloses f) tracking, by the one or more processors via tracking data received from one or more vehicle telematics devices positionally coupled with a vehicle travelling along said route, a position of the vehicle; and g) sending, by the one or more processors, one or more control signals to one or more traffic management subsystems and/or traffic signals to avoid one or more stops by the vehicle ([0041] Locally connected neural networks may be well suited to problems in which the spatial location of inputs is meaningful. For instance, a network 300 designed to recognize visual features from a car-mounted camera may develop high layer neurons with different properties depending on their association with the lower versus the upper portion of the image. Neurons associated with the lower portion of the image may learn to recognize lane markings, for example, while neurons associated with the upper portion of the image may learn to recognize traffic lights, traffic signs, and the like.).

Regarding claim(s) 9, 19: Lewis discloses f) tracking, by the one or more processors via tracking data received from one or more vehicle telematics devices positionally coupled with a vehicle travelling along said route, a position of the vehicle; and g) upon detecting, by the one or more processors, that the vehicle has left said route, generating one or more new routes incorporating the position of the vehicle ([0066] The example system architecture 900 may include and/or may be communicatively coupled with a navigation planner 920. The navigation planner 920 may be configured to generate a set of navigational instructions 700. The navigation planner 920 may include a global navigation satellite system (GNSS)-based navigation system, such as a program at least partially implemented as software and including and/or communicatively coupled with a GNSS-positioning component. An example of a GNSS-based navigation system may include a global-position system (GPS) navigation system that includes a GPS-positioning component.).

Regarding claim(s) 10, 20:
Lewis discloses f) tracking, by the one or more processors via tracking data received from one or more vehicle telematics devices positionally coupled with a vehicle travelling along said route, a position of the vehicle; and g) establish, by the one or more processors, based, at least in part, on said tracking data and said position of the vehicle, one or more performance metrics ([0050] The performance of deep learning architectures may increase as more labeled data points become available or as computational power increases. Modern DNNs are routinely trained with computing resources that are thousands of times greater than what was available to a typical researcher just fifteen years ago. New architectures and training paradigms may further boost the performance of deep learning. Rectified linear units may reduce a training issue known as vanishing gradients. New training techniques may reduce over-fitting and thus enable larger models to achieve better generalization. Encapsulation techniques may abstract data in a given receptive field and further boost overall performance.).

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRUC M DO, whose telephone number is (571) 270-5962. The examiner can normally be reached 9AM-6PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramón Mercado, Ph.D., can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TRUC M DO/
Primary Examiner, Art Unit 3658

Prosecution Timeline

Dec 04, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12570273
PARKING ASSISTANCE METHOD AND PARKING ASSISTANCE DEVICE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12548440
DISTRIBUTED OPTICAL FIBER SENSING (DFOS) SYSTEM AND METHOD OF USING THE SAME
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12546648
DISTRIBUTED OPTICAL FIBER SENSING (DFOS) SYSTEM AND METHOD OF USING THE SAME
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12542053
INFORMATION PROVISION SYSTEM, METHOD FOR PROVIDING PASSENGER VEHICLE INFORMATION, AND RECORDED PROGRAM MEDIUM
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12534096
METHODS AND SYSTEMS AND NON-TRANSITORY COMPUTERS FOR MONITORING DRIVING BEHAVIOR OF A VEHICLE
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 90% (+7.2%)
Median Time to Grant: 2y 12m
PTA Risk: Low

Based on 660 resolved cases by this examiner. Grant probability is derived from the career allow rate.
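The three headline numbers are consistent with simply adding the interview lift to the career allow rate. A minimal sketch under that assumption (the additive model is inferred from the displayed values, not a documented methodology):

```python
# Base grant probability taken as the career allow rate (544 granted / 660 resolved),
# with the observed +7.2% interview lift added on top.
base = 544 / 660                # ≈ 82.4%, displayed as 82%
interview_lift = 0.072          # +7.2% among resolved cases with interview
with_interview = base + interview_lift
print(f"With interview: {with_interview:.0%}")  # → With interview: 90%
```

This reproduces the displayed 90%: 82.4% + 7.2% = 89.6%, which rounds to 90%.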
