Prosecution Insights
Last updated: April 19, 2026
Application No. 18/431,230

SYSTEMS AND METHODS FOR PERSONALIZED GAP PREFERENCE PREDICTION

Non-Final OA (§103, §112)
Filed: Feb 02, 2024
Examiner: MALKOWSKI, KENNETH J
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 75% (480 granted / 642 resolved), +22.8% vs TC avg (above average)
Interview Lift: +19.1% (strong), measured across resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 22 applications currently pending
Career History: 664 total applications across all art units
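
The tiles above are internally consistent. The short sketch below (Python; the variable names and the simple additive treatment of the interview lift are assumptions for illustration, not the dashboard's documented methodology) shows how the headline figures can be reproduced from the raw counts:

```python
# Reproduce the headline examiner figures from the raw counts shown above.
# Assumed methodology: allow rate = granted / resolved, and the "with
# interview" estimate simply adds the observed +19.1% lift to the base rate.
granted = 480
resolved = 642
total_applications = 664

career_allow_rate = granted / resolved               # 0.7477... -> ~75%
currently_pending = total_applications - resolved    # 664 - 642 = 22
interview_lift = 0.191                               # +19.1% with an interview
with_interview_estimate = career_allow_rate + interview_lift  # ~0.939 -> ~94%

print(f"Career allow rate:       {career_allow_rate:.1%}")        # 74.8%
print(f"Currently pending:       {currently_pending}")            # 22
print(f"Est. grant w/ interview: {with_interview_estimate:.1%}")  # 93.9%
```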

Statute-Specific Performance

§101: 8.3%   (-31.7% vs TC avg)
§103: 40.7%  (+0.7% vs TC avg)
§102: 20.4%  (-19.6% vs TC avg)
§112: 27.7%  (-12.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 642 resolved cases.
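
As a quick consistency check, the Tech Center average implied by each row can be backed out from the examiner's rate and its delta. This is a rough sketch under the assumption that each delta equals the examiner rate minus the TC average; the dashboard does not publish its exact formula:

```python
# Back out the implied Tech Center average for each statute from the examiner
# rate and the "vs TC avg" delta. statute: (examiner_rate_pct, delta_pct)
statute_stats = {
    "§101": (8.3, -31.7),
    "§103": (40.7, +0.7),
    "§102": (20.4, -19.6),
    "§112": (27.7, -12.3),
}

for statute, (rate, delta) in statute_stats.items():
    tc_avg = rate - delta  # assumes delta = examiner rate - TC average
    print(f"{statute}: examiner {rate:.1f}%, implied TC average {tc_avg:.1f}%")

# Every row backs out to a 40.0% TC average, which suggests the deltas are
# measured against a single Tech Center-wide baseline estimate rather than
# separate per-statute averages.
```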

Office Action

§103 §112
DETAILED ACTION Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. Claims 7 and 17 rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The metes and bounds of what is and is not included in the limitation “convert the scene graph to a standard graph representation, generate a description of the scene based on the standard graph representation, wherein the description comprises one or more sentences in natural language, feed the description to a natural language processing model to generate the scene embedding” is unclear and indefinite under a broadest reasonable interpretation in view of the remaining claim language and guidance provided in the specification for several reasons. First, it is unclear what “standard graph representation” is referring to. The term is not provided with a limiting definition in the specification and does not appear to have a well known or standard definition in the art, especially as it pertains to graphs that are not “scene” graphs. The metes and bounds of what can constitute the overlap, if any, between a scene graph and a standard graph is also unclear. Can a standard graph be a scene graph? How is one “converted” to another? What would a standard graph specifically depict in the context of the disclosed invention? In addition, the term “standard” represents a relative term rendering the claim further unclear. In addition, “the scene embedding” refers back to the claim 1 recitation “generate a scene embedding based on the scene graph”. How can the same scene embedding be generated from different graphs? Accordingly, regarding claims 7 and 17, a great degree of uncertainty and confusion exists regarding the proper interpretation of the claim in light of the multiplicity and scope of rejections set forth under 35 USC §112(b) above and their interrelation with one another. Due to the multiplicity of unclear interrelated limitations as well as logical inconsistencies, considerable speculation is required to interpret the intended meaning of the claim and what the claim is intended to encompass. As such, the examiner is unable to interpret the meaning and scope of this claim with substantial certainty that would be required to attempt to apply prior art to reject the claim. Therefore, the examiner will not attempt to apply prior art to reject this claim because unreasonable and speculative assumptions as to the proper interpretation of claimed limitations that would be required to reject the claim on the basis of prior art would be improper. (See In re Steele, 305 F.2d 859, 134 USPQ 292 (CCPA 1962), MPEP §2143.03(I), MPEP §2173.06(II)¶2; “it is improper to rely on speculative assumptions regarding the meaning of a claim and then base a rejection under 35 U.S.C. 103 on these assumptions”; “a rejection under 35 U.S.C. 103 should not be based on considerable speculation about the meaning of terms employed in a claim or assumptions that must be made as to the scope of the claims.”). Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 
112(d): (d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers. Claims 7 and 17 are rejected under 35 U.S.C. 112(d) as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends, or for reciting a limitation that replaces or omits a limitation in a parent claim, even though it placed further limitations on the remaining elements or added still other elements. See MPEP 608.01(n), section III “test for proper dependency” (“a claim in dependent form shall contain . . . (i) a reference to a claim previously set forth, and (ii) then specify a further limitation of the subject matter claimed . . . if claim 1 recites the combination of elements A, B, C, and D, a claim reciting the structure of claim 1 in which D was omitted or replaced by E would not be a proper dependent claim, even though it placed further limitations on the remaining elements or added still other elements.”) Claims 7 and 17 recite a limitation that replaces a limitation in a parent claim, i.e., “the scene embedding” recited therein refers back to the claim 1 recitation “generate a scene embedding based on the scene graph” such that claims 7 and 17 replace the scene graph embedding of claim 1 with “standard graph representation” of “sentences in natural language”. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-2, 4-6, 8-13, 15-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 20230230484 to Al Faruque et al. (Far) in view of U.S. 20230035228 to Gupta et al. (Gupta) With respect to claims 1 and 12, Far discloses a system for personalized gap preference prediction of an ego vehicle driving on a road comprising: one or more vision sensors operable to capture one or more images of a surrounding scene; and (¶¶ 140 “image captured by the on-board camera at time n”; 145; 191; 20 extract scene graphs from camera data; 21 camera data . . . AV perception architectures utilizing sensor fusion”; FIG. 17; Fig. 
8-9; 78; 124; 145; 204; abstract “present invention is directed to a Spatiotemporal scene-graph embedding methodology that models scene-graphs and resolves safety-focused tasks for autonomous vehicles . . . accepting the one or more images, extracting one or more objects from each image . . . generating a scene-graph for each image”) one or more processors operable to: generate a scene graph based on the captured images; generate a scene embedding based on the scene graph; generate vehicle data embedding based on vehicle data of the ego vehicle; concatenate the scene embedding and the vehicle data embedding to generate a time-stamped state; (¶¶ 19-28 “present invention to provide systems and methods that allow for Spatiotemporal scene-graph embedding to model scene-graphs and resolve safety-focused tasks for autonomous vehicles . . . a tool for systematically extracting and embedding road scene-graphs . . . quickly and easily extract scene graphs from camera data . . . user-friendly scene-graph extraction framework; allowing researchers to explore various spatio-temporal graph embedding methods . . . condensing the one or more scene-graphs into a spatial graph embedding, generating a spatia-temporal graph embedding from the spatial graph embedding, and calculating a confidence value for whether or not a collision will occur. The system may further comprise a risk assessment module for processing the spatio-temporal graph embedding through a temporal attention layer of the LSTM network to generate a context vector, processing the context vector through an LSTM decoder to generate a final spatio-temporal graph embedding, and calculating a confidence value for whether or not the one or more images contain a risky driving maneuver”; 100-105 temporal model of the present invention uses an LSTM for converting the sequence of scene-graph embeddings h, to the combined spatia-temporal embedding Z. For each timestamp t, the LSTM updates the hidden state p.sub.t and cell state c, as follows, p , c =ISTM; 140; 144-145) In addition, Far at least suggests generating a preferred gap related to surrounding vehicles via the time stamped mlm cited above (FIG. 10 “proximity thresholds” defines the set of enabled distance relations and their thresholds in feet; 80 assessing the risk of driving behaviors, traffic participants' relations that are considered to be useful are the distance relations and the directional relations. The assumption made here is that the local proximity and positional information of one object will influence the other's motion only if they are within a certain distance. Therefore, only the location information is extracted for each object and a simple rule is adopted to determine the relations between the objects using their attributes (e.g., relative location to the ego car), as shown in FIG. 4. For distance relations, two objects are assumed to be related by one of the relations r e {Near Collision (4 ft.), Super Near (7 ft.), Very Near (10 ft.), Near (16 ft.), Visible (25 ft.)} if the objects are physically separated by a distance that is within that relation's threshold; 148; 24 intervehicle distance determines confidence of risky behavior/ collision; 69) However, Far fails to explicitly disclose the gap is a preferred gap, i.e., as described in the specification, a user preferred gap such that the ego vehicle operates to keep a gap from the corresponding surrounding vehicle at the preferred gap. 
Gupta, from the same field of endeavor, discloses generating a preferred gap related to one of one or more surrounding vehicles at least in part by inputting a spatiotemporal state into a machine learning model and operates an ego vehicle to keep a gap from the corresponding surrounding vehicle at the preferred gap. (i.e., FIG. 1 time gap between surrounding vehicle 104 relative to ego vehicle 102 input to machine learning model 116 with historical context data 114 to create a preferred gap to keep gap from the corresponding surrounding vehicle at the preferred gap at trip (n+1) wherein the mlm is used 302 to control the vehicle 304 “ego vehicle control” to result in 310, “actual distance between vehicles” as shown in FIG. 3 and corresponding descriptions; ¶¶ 27 As the ego vehicle 102 continues to do more trips, the ego vehicle 102 continues to calculate new parameters (e.g., gap preference, acceleration profile) through incremental learning and an updated STP ML model representing the parameters is uploaded to the cloud server 106. The cloud server 106 may update its aggregated STP ML model if there is a change from new data; 34 STP model module 207 outputs a target driving parameter, such as a target acceleration or a target gap between the ego vehicle and the lead vehicle; 52 update a cloud STP ML model associated with the driver of the ego vehicle 102 based on the initial STP ML model and the updated STP ML model to improve accuracy of personalized parameters for the driver of the ego vehicle 102. The historical data storage 114 of the cloud server 106 may store historical data related to the initial STP ML model and the updated STP ML model . . . cloud server 106 may guide the ego vehicle 102 of what the gap preferences, the acceleration profile to use in new situations by transmitting parameters based on the global STP ML model.; 53-54 personalized parameters for the vehicle may include a desired acceleration, a desired gap, and the like. In some embodiments, the personalized parameters for the ego vehicle 102 may be parameters for the STP ML model of the ego vehicle 102 such that the STP ML model of the ego vehicle 102 is updated. The parameters for the ego vehicle 102 may be used as guidance for the ego vehicle 102 . . . then the ego vehicle 102 may update the personalized time gap to be longer when driving under similar conditions. The ego vehicle 102 may update personalized parameters or the updated STP ML model to the cloud server 106. The updating process may repeat as the ego vehicle 102 continues to travel.; claims 1-8 personalized driving setting for the driver is a personalized adaptive cruise control setting for the driver . . . update the personalized driving setting based on driving preferences by the driver . . . 
controller is configured to determine a target gap between the vehicle and a leading vehicle based on the personalized driving setting, a current gap between the vehicle and the leading vehicle, and a relative velocity between the vehicle and the leading vehicle; claim 9 operate the vehicle based on the personalized driving setting) Accordingly, it would have been obvious to one of ordinary skill in the art at the time of effective filing date for the machine learning model of Far to generate a preferred gap related to a surrounding vehicle by operating the vehicle to keep the preferred gap output from the machine learning model, as taught by Gupta, in order to improve the automatic cruise control so that it is personalized for a particular driver and used more effectively since ACC’s which do not know the personal preferences of the user will be shut off by manual user intervention, personalization reduces ACC shut-off (Gupta, ¶ 19, 21). With respect to claims 2 and 13, Far in view of Gupta disclose the one or more processors are operable to extract visual information of the one or more surrounding vehicles and the road; and generate the scene graph based on the extracted visual information (Far, ¶¶ 140 “image captured by the on-board camera at time n”; 145; 191; 20 extract scene graphs from camera data; 21 camera data . . . AV perception architectures utilizing sensor fusion”; FIG. 17; Fig. 8-9; 78; 124; 145; 204; abstract “accepting the one or more images, extracting one or more objects from each image . . . calculating relations between each object for each image, and generating a scene-graph for each image based on the aforementioned calculations. The system may further comprise instructions for calculating a confidence value for whether or not a collision will occur through the generation of a spatio-temporal graph embedding based on a spatial graph embedding and a temporal model”) With respect to claims 4 and 15, Far in view of Gupta disclose the scene graph comprises: an ego vehicle node corresponding to the ego vehicle, one or more surrounding vehicle nodes corresponding to the surrounding vehicles and edges between the ego vehicle node and the one or more surrounding vehicle nodes. (Far, FIG. 2, FIG. 3 depicting scene graph with ego vehicle / surrounding vehicle nodes; Fig. 4, 8 and respective corresponding descriptions; ¶¶ 79 The nodes of a scene-graph, denoted as 0, represent the objects in a scene such as lanes, roads, traffic signs, vehicles, pedestrians, etc. The edges of a scene-graph are represented by the corresponding adjacency matrix A,, where each value in A, represents the type of the edges. The edges between two nodes represent the different kinds of relations between them (e.g., near, Front Left, isln, etc.); 23, 72 Each scene-graph may comprise one or more nodes representing the corresponding ego-object and the corresponding object dataset). With respect to claims 5-6 and 16, Far in view of Gupta disclose the edges are vectors based on relative directions and distances between the ego vehicle and the one or more surrounding vehicles and a weight of each edge is a function of Euclidean distance between the ego vehicle and the corresponding vehicle (Far, i.e., distance between ego vehicle and corresponding vehicle is a straight line Euclidean distance, ¶ 67 BEV representation includes “identifying a proximity relation between the ego-object and each object of the object dataset for each image by measuring a distance between the ego object and each object . . . 
directional relation . . . relative orientation . . . right lane middle lane left lane”, as distinguished from “horizontal displacement” also measured . . . generating a scene-graph for each image based on the BEV representation”; i.e., objects can be surrounding cars, Car_0, Car_1, FIG. 4; 79 collecting the list of objects in each image and their attributes, the corresponding scene-graphs are constructed . . . multiple types of edges connect nodes. The nodes of a scene-graph, denoted as 0, represent the objects . . . vehicles . . . edges of a scene-graph are represented by the corresponding adjacency matrix A, where each value in A, represents the type of the edges. The edges between two nodes represent the different kinds of relations between them (e.g., near, Front Left, isln, etc.)”, i.e., including Euclidean distance as discussed above) (Far, ¶ 123 “Each node is assigned its type label from the set of actor names and its corresponding attributes (e.g., position, angle, velocity, current lane, light status, etc.) for relation extraction. Once all nodes are added to the scene-graph, the present invention extracts relations between each pair of objects in the scene.”; 148 extraction pipeline identifies three kinds of pairwise relations: proximity relations (e.g. visible, near, very near, etc.), directional (e.g. Front Left, Rear Right, etc.) relations, and belonging (e.g. car 1 isln left lane) relations. Two objects are assigned the proximity relation, r {Near Collision (4 ft.), Super Near (7 ft.), Very Near (10 ft.), Near (16 ft.), Visible (25 ft.)} provided the objects are physically separated by a distance that is within that relation's threshold. The directional relation, r e {Front Left, Left Front, Left Rear, Rear Left, Rear Right, Right Rear, Right Front, Front Right}, is assigned to a pair of objects . . . each vehicle's horizontal displacement is used relative to the ego vehicle to assign vehicles to either the Left Lane, Middle Lane, or Right Lane using the known lane width. The abstraction only considers three-lane areas, and, as such, vehicles in all left lanes and all right lanes are mapped to the same Left Lane node and Right Lane node respectively. If a vehicle overlaps two lanes (i.e., during a lane change), it is mapped to both lanes; claim 1 “ E. identifying a proximity relation between the ego-object and each object of the object dataset for each image by measuring a distance between the ego-object and each object; F. identifying a directional relation between the ego-object and each object of the object dataset for each image by determining a relative orientation of the ego-object and each object; and G. generating a scene-graph for each image based on the BEV representation”) With respect to claims 8-9 and 18, Far in view of Gupta disclose the machine learning model is a temporal encoder wherein the one or more processors are further operable to: obtain multiple time-stamped states in sequential time stamps based on images of the surrounding scene captured at different times and driving data obtained at the different times; and input the multiple time-stamped states in sequential time stamps to the temporal encoder to generate the preferred gap. (Far, i.e., FIG. 2 “generating a spatio-temporal graph”; FIG. 10 “temporal attention”, “sequence classification”; ¶¶ 19-28 “Spatiotemporal scene-graph embedding to model scene-graphs and resolve safety-focused tasks for autonomous vehicles . . . a tool for systematically extracting and embedding road scene-graphs . . . 
quickly and easily extract scene graphs from camera data . . . user-friendly scene-graph extraction framework; allowing researchers to explore various spatio-temporal graph embedding methods . . . condensing the one or more scene-graphs into a spatial graph embedding, generating a spatio-temporal graph embedding from the spatial graph embedding, and calculating a confidence value for whether or not a collision will occur. The system may further comprise a risk assessment module for processing the spatio-temporal graph embedding through a temporal attention layer of the LSTM network to generate a context vector, processing the context vector through an LSTM decoder to generate a final spatio-temporal graph embedding, and calculating a confidence value for whether or not the one or more images contain a risky driving maneuver”; 100-105 temporal model of the present invention uses an LSTM for converting the sequence of scene-graph embeddings h, to the combined spatio-temporal embedding Z. For each timestamp t, the LSTM updates the hidden state p.sub.t and cell state c, as follows, p , c =ISTM; 140; 144-145) (Gupta, i.e., FIG. 1 time gap between surrounding vehicle 104 relative to ego vehicle 102 input to machine learning model 116 with historical context data 114 to create a preferred gap to keep gap from the corresponding surrounding vehicle at the preferred gap at trip (n+1) wherein the mlm is used 302 to control the vehicle 304 “ego vehicle control” to result in 310, “actual distance between vehicles” as shown in FIG. 3 and corresponding descriptions; ¶¶ 27 As the ego vehicle 102 continues to do more trips, the ego vehicle 102 continues to calculate new parameters (e.g., gap preference, acceleration profile) through incremental learning and an updated STP ML model representing the parameters is uploaded to the cloud server 106. The cloud server 106 may update its aggregated STP ML model if there is a change from new data; 34 STP model module 207 outputs a target driving parameter, such as a target acceleration or a target gap between the ego vehicle and the lead vehicle; 52 update a cloud STP ML model associated with the driver of the ego vehicle 102 based on the initial STP ML model and the updated STP ML model to improve accuracy of personalized parameters for the driver of the ego vehicle 102. The historical data storage 114 of the cloud server 106 may store historical data related to the initial STP ML model and the updated STP ML model . . . cloud server 106 may guide the ego vehicle 102 of what the gap preferences, the acceleration profile to use in new situations by transmitting parameters based on the global STP ML model.; 53-54 personalized parameters for the vehicle may include a desired acceleration, a desired gap, and the like. In some embodiments, the personalized parameters for the ego vehicle 102 may be parameters for the STP ML model of the ego vehicle 102 such that the STP ML model of the ego vehicle 102 is updated. The parameters for the ego vehicle 102 may be used as guidance for the ego vehicle 102 . . . then the ego vehicle 102 may update the personalized time gap to be longer when driving under similar conditions. The ego vehicle 102 may update personalized parameters or the updated STP ML model to the cloud server 106. The updating process may repeat as the ego vehicle 102 continues to travel.; claims 1-8 personalized driving setting for the driver is a personalized adaptive cruise control setting for the driver . . . 
update the personalized driving setting based on driving preferences by the driver . . . controller is configured to determine a target gap between the vehicle and a leading vehicle based on the personalized driving setting, a current gap between the vehicle and the leading vehicle, and a relative velocity between the vehicle and the leading vehicle; claim 9 operate the vehicle based on the personalized driving setting) With respect to claims 10 and 19, Far in view of Gupta disclose the surrounding vehicles are a lead vehicle, a rear vehicle, one or more adjacent-lane vehicles, or a combination thereof (Far, FIG. 3 “car-1” – “car-4”; FIG. 4, car_0, Car_1 shown relative to surrounding lanes; Fig. 8-10) With respect to claims 11 and 20, Far in view of Gupta disclose the one or more vision sensors comprise one or more front-view vision sensors, one or more rearview vision sensors, one or more side-view vision sensors, or a combination thereof. (Far, ¶191 on-board dashboard cameras; FIG. 4 and 9 depicting front view vision sensor capture) Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 20230230484 to Al Faruque et al. (Far) in view of U.S. 20230035228 to Gupta et al. (Gupta) and further in view of US 20230177850 to Ambrus et al. (Ambrus) With respect to claims 3 and 14, Far in view of Gupta disclose wherein one or more processors are operable to: detect the one or more surrounding vehicles from the captured images using an object detection module; (Far, ¶¶ 140 “image captured by the on-board camera at time n”; 145; 191; 20 extract scene graphs from camera data; 21 camera data . . . AV perception architectures utilizing sensor fusion”; FIG. 17; Fig. 8-9; 78; 124; 145; 204; abstract “present invention is directed to a Spatiotemporal scene-graph embedding methodology that models scene-graphs and resolves safety-focused tasks for autonomous vehicles . . . accepting the one or more images, extracting one or more objects from each image . . . generating a scene-graph for each image”); generate lane masks using a lane segmentation module, and (Far, ¶¶ 11 lane data extracted from image; 17; 67; 70; 77; 79; 81; 121-124; 147-148; claim 7) the visual information comprises the detected one or more surrounding vehicles, the depth map, and the lane masks. (Far, ¶¶ 11 lane data extracted from image; 17; 67; 70; 77; 79; 81; 121-124; 147-148; claim 7; FIG. 8 object detection detects objects including the surrounding vehicles and lane masks as visual information; Far fails to disclose generating a depth map of the one or more surrounding vehicles using a monocular depth perception module such that visual information comprises the depth map. Ambrus, from the same field of endeavor, discloses generating a depth map of the one or more surrounding vehicles using a monocular depth perception module such that visual information comprises the depth map (i.e., 404, 416, FIG. 4; FIG. 6-10 and corresponding descriptions, i.e., 1002 “depth map of a monocular image”; abstract, ¶¶ 6-8, 55-58, 61, 76-80, 82-92, claims 1-12) Accordingly, it would have been obvious to one of ordinary skill in the art at the time of effective filing date to generate depth or three dimensional information from images as taught by Ambrus in the system of Far in view of Gupta in order to provide a low cost dimensional image extraction technique using an inexpensive camera/ sensing system to reduce cost (Ambrus, ¶¶ 1-5, 24-27), i.e., relative to other more expensive methods of extracting depth data, i.e., Lidar. 
Prior Art The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Patent No. 12158762 (see application 18/244,770 Aurora) is cited to disclose the subject matter of claims 7 and 17, as best understood, as including transforming a vehicle scene into a text embedding (Fig. 2-5 and corresponding description). U.S. 20230252795 to Tong is cited to disclose the subject matter of claims 7 and 17, as best understood, in Fig. 2-7 and corresponding description. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH J MALKOWSKI whose telephone number is (313)446-4854. The examiner can normally be reached 8:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris Almatrahi can be reached at 313-446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KENNETH J MALKOWSKI/Primary Examiner, Art Unit 3667
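
For orientation, the data flow that the rejection maps onto Far and Gupta, namely claim 1's scene embedding and vehicle-data embedding concatenated into time-stamped states, with the temporal encoder of claims 8-9 and 18 producing the preferred gap, can be sketched as follows. This is a minimal PyTorch sketch; every module choice, dimension, and name is an assumption made for readability, not the applicant's or the cited references' actual implementation:

```python
# Illustrative sketch of the claimed data flow as characterized in the Office
# Action: scene embedding + vehicle-data embedding -> time-stamped state ->
# temporal encoder -> preferred gap. Modules, sizes, and names are assumed.
import torch
import torch.nn as nn

SCENE_DIM, VEHICLE_DIM = 64, 16
STATE_DIM = SCENE_DIM + VEHICLE_DIM  # concatenated time-stamped state

class GapPreferencePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # Stand-in for a graph encoder over the scene graph (ego node,
        # surrounding-vehicle nodes, distance/direction edges).
        self.scene_encoder = nn.Linear(128, SCENE_DIM)
        # Embeds ego-vehicle driving data (e.g., speed, acceleration).
        self.vehicle_encoder = nn.Linear(8, VEHICLE_DIM)
        # Temporal encoder over sequential time-stamped states; the cited Far
        # reference uses an LSTM for its spatio-temporal embedding.
        self.temporal_encoder = nn.LSTM(STATE_DIM, 32, batch_first=True)
        self.gap_head = nn.Linear(32, 1)  # regresses the preferred gap

    def forward(self, scene_graph_feats, vehicle_data):
        # scene_graph_feats: (batch, time, 128); vehicle_data: (batch, time, 8)
        scene_emb = self.scene_encoder(scene_graph_feats)
        vehicle_emb = self.vehicle_encoder(vehicle_data)
        states = torch.cat([scene_emb, vehicle_emb], dim=-1)  # per-time-stamp states
        _, (h_n, _) = self.temporal_encoder(states)
        return self.gap_head(h_n[-1])  # one preferred gap per sequence

model = GapPreferencePredictor()
preferred_gap = model(torch.randn(2, 10, 128), torch.randn(2, 10, 8))
print(preferred_gap.shape)  # torch.Size([2, 1])
```

The linear scene_encoder above simply marks where a graph encoder over the scene graph would sit; the dispute over claims 7 and 17 concerns an alternative route to that same scene embedding via a natural-language description of the graph.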

Prosecution Timeline

Feb 02, 2024: Application Filed
Jan 02, 2026: Non-Final Rejection (§103, §112)
Apr 08, 2026: Examiner Interview Summary
Apr 08, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589745: VISUAL GUIDANCE METHOD FOR IMPROVING AUTONOMOUS NAVIGATION WITH ROW FOLLOWING CORRECTIONS IN STEREO CAMERA SYSTEMS
Granted Mar 31, 2026; 2y 5m to grant
Patent 12583443: MOVING BODY CONTROL DEVICE, MOVING BODY CONTROL METHOD, AND MOVING BODY CONTROL PROGRAM
Granted Mar 24, 2026; 2y 5m to grant
Patent 12571636: METHOD AND DEVICE WITH LANE DETECTION
Granted Mar 10, 2026; 2y 5m to grant
Patent 12553733: COMPUTER-IMPLEMENTED METHOD FOR BEHAVIOR PLANNING OF AN AT LEAST PARTIALLY AUTOMATED EGO VEHICLE WITH A SPECIFIED NAVIGATION DESTINATION
Granted Feb 17, 2026; 2y 5m to grant
Patent 12546621: TRAVELING TRACK GENERATION DEVICE AND TRAVELING TRACK GENERATION METHOD
Granted Feb 10, 2026; 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview (+19.1%): 94%
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 642 resolved cases by this examiner. Grant probability derived from career allow rate.
