Prosecution Insights
Last updated: April 19, 2026
Application No. 19/012,313

METHOD AND DEVICE WITH AUTONOMOUS DRIVING

Non-Final OA: §102 rejection (§101 eligibility affirmed)
Filed: Jan 07, 2025
Examiner: BUKSA, CHRISTOPHER ALLEN
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 73% (99 granted / 136 resolved; +20.8% vs TC avg; above average)
Interview Lift: +20.8% (strong), based on resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 38 applications currently pending
Career History: 174 total applications across all art units

Statute-Specific Performance

§101: 13.8% (-26.2% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)

Comparisons are against the Tech Center average estimate. Based on career data from 136 resolved cases.
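The headline projections on this page follow directly from the career data above. A minimal sketch of the arithmetic (hypothetical variable names; it assumes, consistent with the footnote in Prosecution Projections below, that the with-interview figure is simply the career allow rate plus the interview lift):

```python
# Reproduce the dashboard's headline figures from the examiner's career data.
# Assumption (suggested but not stated by the page): the "with interview"
# probability is the career allow rate plus the interview lift, capped at 100%.

granted, resolved = 99, 136        # career outcomes shown above
interview_lift = 20.8              # percentage points

allow_rate = 100 * granted / resolved                # 72.79... -> displayed as 73%
with_interview = min(allow_rate + interview_lift, 100.0)

print(f"Career allow rate: {allow_rate:.0f}%")       # 73%
print(f"With interview:    {with_interview:.0f}%")   # 94%
```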

Office Action

Statutes addressed: §101, §102

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Joint Inventors

This application currently names joint inventors. In considering patentability of the claims, the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. The Examiner has checked and verified that the subject matter of the foreign priority document supports the instant application, and as such, the earlier filing date of 08/02/2024 is granted.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 01/07/2025 (1), 01/07/2025 (2), and 08/19/2025 were filed before the mailing of a first Office Action on the merits. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Status of Claims

This action is in response to Applicant’s filing on 01/07/2025. Claims 1-20 are pending and examined below.

Claim Rejections - 35 USC § 101

The Examiner notes that although the independent claims recite several determining steps that could be construed as mental processes, the independent claims clearly recite the utilization of encoded data in a generative neural network, which is something that the human mind cannot do. As such, the claims are not rejected under 35 U.S.C. 101, as they recite patent-eligible subject matter.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zheng et al., “GenAD: Generative End-to-End Autonomous Driving” (available 04/07/2024), herein referred to as Zheng.
Regarding claim 1, Zheng discloses the following:
- acquiring pieces of sensor data (3.1 Instance-Centric Scene Representation, Page 3): sensor inputs may be obtained.
- determining a first path of the moving object based on the pieces of sensor data (3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4): future predicted trajectories for the vehicle may be generated; these predicted trajectories can be considered first paths; the predicted trajectories are based on the BEV semantic map/space, which is constructed from sensor data.
- inputting at least one piece of sensor data among the pieces of sensor data into an encoder that encodes the at least one piece of sensor data (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5): the generated future trajectories are mapped to a latent space through an encoding process; the encoding process utilizes the future predicted trajectories (which include sensor data) in the BEV semantic map to generate a latent trajectory space.
- inputting the encoded at least one piece of sensor data into a generative neural network model that generates guide information on a path of the moving object (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5): the encoded data in the latent space is passed into a gated recurrent unit (GRU), which is modeled after a recursive neural network and can be considered a generative neural network; the GRUs output next future trajectories in the latent space, which can be considered guide information.
- determining the final path of the moving object based on the first path and the guide information (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): a reconstructed trajectory may be generated by decoding the GRU outputs; this reconstructed trajectory is based on the initial predicted future trajectory and the next future trajectories in the latent space.

(A schematic code sketch of this encode/GRU/decode pipeline appears after the Office Action text below.)

Regarding claim 2, Zheng discloses all the limitations of claim 1. Zheng further discloses the following:
- determining a global path of the moving object based on the at least one piece of sensor data among the pieces of sensor data (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): map elements within the BEV map/space may have an associated ground-truth trajectory (which can be considered a global path); this data within the BEV map/space is based on obtained sensor data.
- recognizing and tracking a moving element of a surrounding environment of the moving object based on the at least one piece of sensor data, and generating map information (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): the BEV map/space may have semantic elements representing agents in the environment, which can be considered recognizing and tracking a moving object in the environment.
- determining a local path of the moving object based on the moving element, the map information, and the global path (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): agent tokens may be utilized; tokens are specific to each semantic map element and are instanced; instanced tokens represent each semantic element and its associated information (position, trajectory, etc.), which can be considered a local path.

Regarding claim 3, Zheng discloses all the limitations of claim 2. Zheng further discloses the following:
- generating path modification information based on the global path and the guide information (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): guide information generated from the GRUs may further be integrated to obtain temporal waypoint data for the reconstructed trajectories.
- modifying the local path based on the path modification information to determine the final path (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): the temporal waypoints for the reconstructed trajectories may be utilized for evolving the latent space trajectories (local paths, agent tokens; see claim 2) to acquire actual reconstructed trajectories (final path; see claim 1).

Regarding claim 4, Zheng discloses all the limitations of claim 1. Zheng further discloses the following:
- a default prompt is set for the generative neural network model to output the guide information (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): the GRUs may operate by gradually outputting temporal evolutions of a latent space (guide information); this model behavior may be considered a default prompt, as the GRUs always operate based off of their model.

Regarding claim 5, Zheng discloses all the limitations of claim 1. Zheng further discloses the following:
- repeatedly determining the first path with a first frequency, and repeatedly generating the guide information with a second frequency, wherein the first frequency is greater than the second frequency (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): predicted future trajectories may be generated constantly as the ego vehicle moves through an environment; the guide information (output by the GRUs) may be output gradually, meaning that it occurs at a lower frequency than the predicted future trajectories.

Regarding claim 6, Zheng discloses all the limitations of claim 1. Zheng further discloses the following:
- acquiring encoded results corresponding respectively to the pieces of sensor data (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): the encoded data corresponds to the pieces of sensor data input.
- concatenating the encoded results to generate the encoded at least one piece of sensor data (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): the encoded data is input into a latent space; the encoded data as a whole is concatenated as it is input collectively into the latent space.

Regarding claim 7, Zheng discloses all the limitations of claim 1. Zheng further discloses the following:
- the encoder is trained to generate the guide information by the generative neural network model receiving the encoded at least one piece of sensor data (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): the encoding of sensor data may be performed through a variational autoencoder, which is trained to map encoded data to a latent space.

Regarding claim 8, Zheng discloses all the limitations of claim 1. Zheng further discloses the following:
- determining a query based on the encoded at least one piece of sensor data (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): BEV token queries may be utilized for encoded BEV features.
- acquiring experience data corresponding to the query from a memory (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): past frame data may be utilized for aligning features in the BEV map; past data may only be utilized if some form of memory is present.

Regarding claim 9, Zheng discloses all the limitations of claim 8. Zheng further discloses the following:
- the memory stores past driving experience information comprising driving situation information, behavior information corresponding to the driving situation information, and reasoning information for the behavior information (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): past frames may be utilized for the BEV map encoding; past frames can indicate past driving experience (agent and/or ego), behavior information (actions and positions of agent/ego vehicles), and reasoning information (agent/ego vehicle trajectories can be considered reasoning information).
- comparing the query with the driving situation information stored by the memory and acquiring reference driving situation information corresponding to a current driving situation (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): past frames may be aligned into the current BEV semantic map; this alignment may be considered comparing the query (BEV token query; see claim 8) with the past driving experience information (which includes the driving situation information); the aligned data can be considered reference data for future processes.
- acquiring reference prediction behavior information and reference reasoning information corresponding to the reference driving situation information (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): past frame data and the data it represents (driving situation information, behavior information, etc.) may be considered reference data for future processes (new alignment based on new data, etc.).

Regarding claim 10, Zheng discloses all the limitations of claim 8. Zheng further discloses the following:
- inputting the encoded at least one piece of sensor data and the experience data into the generative neural network model, which infers the guide information therefrom (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): the GRUs may utilize the output encoded information and latent space information to output the guide information.

Regarding claim 11, Zheng discloses all the limitations of claim 8. Zheng further discloses the following:
- the memory stores the experience data by dividing the experience data into components (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): frame data (past and aligned) may have multiple components (positional, trajectory, etc.).
- acquiring element-specific feature vectors based on the encoded at least one piece of sensor data (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): encoded elements (features) within the BEV map/space may be mapped through vectoring by way of a Gaussian distribution; this can be considered acquiring feature vectors, as the vectors are based on semantic elements.
- acquiring component feature data by converting the element-specific feature vectors into a feature space (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): the feature vectors output through the encoded mapping result in a latent space, which can be considered a feature space since each semantic element is represented; the latent data present may be considered component feature data.
- determining element-specific query data based on the component feature data (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): BEV token queries may be utilized for the BEV mapping; the BEV mapping into the latent space may result in latent space data (component feature data) that is based on query data.
- transmitting the element-specific query data to the memory and acquiring element-specific experience data corresponding to the element-specific query data (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): data present in the latent space is utilized for future processes (GRU, decoding, etc.) and would require storing the data in some fashion.

(A minimal sketch of such a query/memory lookup also appears after the Office Action text below.)

Regarding claim 12, Zheng discloses all the limitations of claim 11.
Zheng further discloses the following:
- inputting the component feature data and the element-specific experience data into the generative neural network model, which infers the guide information therefrom, wherein the generative neural network model has learned a causal relationship between the element-specific feature vectors (Fig. 3; 3.1 Instance-Centric Scene Representation, Page 3; 3.2 Trajectory Prior Modelling, Page 4; 3.3 Latent Future Trajectory Generation, Page 5; 3.4 Latent Future Trajectory Generation, Page 5): the component feature data and the element-specific experience data present in the latent space (see claim 11) may be input into the GRUs, which output the guide information; the GRUs are trained based on learned relationships between latent space data.

Regarding claim 13, the claim limitations are similar to a portion of those in claim 9 and are rejected using the same rationale as seen above in claim 9.
Regarding claim 14, the claim limitations are similar to those in claim 1 and are rejected using the same rationale as seen above in claim 1.
Regarding claim 15, a portion of the claim limitations are similar to those in claim 1 and are rejected using the same rationale as seen above in claim 1. Additionally, Zheng discloses one or more processors (Table 1 summary: data is computed using a GPU, which includes processors and memory; additionally, neural networks require processors in order to function).
Regarding claim 16, the claim limitations are similar to those in claim 2 and are rejected using the same rationale as seen above in claim 2.
Regarding claim 17, the claim limitations are similar to those in claim 3 and are rejected using the same rationale as seen above in claim 3.
Regarding claim 18, the claim limitations are similar to those in claim 4 and are rejected using the same rationale as seen above in claim 4.
Regarding claim 19, the claim limitations are similar to those in claim 5 and are rejected using the same rationale as seen above in claim 5.
Regarding claim 20, the claim limitations are similar to those in claim 6 and are rejected using the same rationale as seen above in claim 6.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER ALLEN BUKSA, whose telephone number is (571) 272-5346. The examiner can normally be reached M-F 7:30 AM-4:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Worden, can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER A BUKSA/
Examiner, Art Unit 3658
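For orientation on the §102 mapping above: the pipeline the examiner attributes to Zheng for claim 1 (encode BEV/trajectory features into a latent space, roll a GRU forward to generate future latent states, the "guide information", then decode a reconstructed trajectory) can be sketched as follows. This is a minimal illustration with hypothetical shapes and module names, not Zheng's GenAD implementation and not the application's claimed method.

```python
# Minimal sketch of the encode -> GRU rollout -> decode pipeline that the
# Office Action maps onto claim 1. All shapes and names are hypothetical;
# this is NOT the GenAD reference code and NOT the application's method.
import torch
import torch.nn as nn

class TrajectoryGenerator(nn.Module):
    def __init__(self, feat_dim=256, latent_dim=128, horizon=6):
        super().__init__()
        self.horizon = horizon
        # "encoder that encodes the at least one piece of sensor data":
        # maps BEV/trajectory features into a latent trajectory space.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # "generative neural network model": a GRU cell that evolves the
        # latent state step by step ("next future trajectories").
        self.gru = nn.GRUCell(latent_dim, latent_dim)
        # Decoder: turns each latent state into a 2D waypoint (x, y).
        self.decoder = nn.Linear(latent_dim, 2)

    def forward(self, bev_feats):              # bev_feats: (batch, feat_dim)
        z = self.encoder(bev_feats)            # encode into the latent space
        h = torch.zeros_like(z)                # initial GRU hidden state
        waypoints = []
        for _ in range(self.horizon):          # gradual temporal rollout
            h = self.gru(z, h)                 # next latent future state
            waypoints.append(self.decoder(h))  # decode a waypoint
            z = h                              # feed the evolution back in
        return torch.stack(waypoints, dim=1)   # (batch, horizon, 2)

model = TrajectoryGenerator()
feats = torch.randn(4, 256)                    # stand-in for encoded sensor data
print(model(feats).shape)                      # torch.Size([4, 6, 2])
```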
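The claims 8-11 mapping likewise reads Zheng's use of past frames as a query-against-memory scheme (BEV token queries retrieving stored "experience data"). Below is a minimal sketch of such a lookup, with a plain dictionary standing in for the memory; every name here is hypothetical and nothing is taken from Zheng's actual implementation.

```python
# Minimal sketch of the query -> memory -> experience-data lookup that the
# Office Action reads onto claims 8-11. Hypothetical design: a dict keyed by
# a coarsely quantized feature vector stands in for the "memory".
import numpy as np

class ExperienceMemory:
    def __init__(self, grid=0.5):
        self.grid = grid   # quantization step so similar situations match
        self.store = {}    # key -> list of past-frame experience records

    def _key(self, feats):
        # Quantize encoded features so nearby driving situations collide.
        return tuple(np.round(np.asarray(feats) / self.grid).astype(int))

    def add(self, feats, experience):
        self.store.setdefault(self._key(feats), []).append(experience)

    def query(self, feats):
        # "comparing the query with the driving situation information
        # stored by the memory" -- return records for the matching key.
        return self.store.get(self._key(feats), [])

mem = ExperienceMemory()
mem.add([0.9, 2.1], {"behavior": "slow down", "reasoning": "agent cutting in"})
print(mem.query([1.0, 2.0]))   # retrieves the stored experience record
```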

Prosecution Timeline

Jan 07, 2025
Application Filed
Mar 13, 2026
Non-Final Rejection — §102 (current; §101 discussed, no rejection entered)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578725
SELF-MAINTAINING, SOLAR POWERED, AUTONOMOUS ROBOTICS SYSTEM AND ASSOCIATED METHODS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12576524
CONTROL DEVICE, CONTROL METHOD, AND RECORDING MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12570428
SYSTEM AND METHOD FOR MOVING AND UNBUNDLING A CARTON STACK
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12554024
MAP-AIDED SATELLITE SELECTION
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12534223
UNMANNED ROBOT FOR URBAN AIR MOBILITY VEHICLE AND URBAN AIR MOBILITY VEHICLE
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 94% (+20.8%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 136 resolved cases by this examiner. Grant probability is derived from the career allow rate.
