Prosecution Insights
Last updated: April 19, 2026
Application No. 17/101,831

Dynamic Scene Representation

Non-Final OA (§103)
Filed: Nov 23, 2020
Examiner: STRYKER, NICHOLAS F
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Lyft Inc.
OA Round: 3 (Non-Final)
Grant Probability: 40% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
With Interview: 67%

Examiner Intelligence

Grants only 40% of cases: Career Allow Rate 40% (15 granted / 38 resolved; -12.5% vs TC avg)
Strong interview lift: +27.6% for resolved cases with interview
Typical timeline: 3y 6m avg prosecution; 40 currently pending
Career history: 78 total applications across all art units

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 56.9% (+16.9% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 38 resolved cases
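The vs-TC deltas above are simple differences between the examiner's per-statute allowance rate and the Tech Center average estimate. A quick sketch of the arithmetic (rates from the table above; the 40% TC average is reconstructed here from the published deltas, e.g. 15.8 + 24.2 = 40.0, and is not an official USPTO figure):

```python
# Per-statute allowance rates for this examiner (from the table above).
examiner_rates = {"101": 15.8, "103": 56.9, "102": 14.1, "112": 12.7}

# Tech Center average estimate, reconstructed from the published deltas;
# an assumption for illustration, not an official figure.
TC_AVG = 40.0

deltas = {statute: round(rate - TC_AVG, 1) for statute, rate in examiner_rates.items()}
for statute, delta in deltas.items():
    print(f"§{statute}: {delta:+.1f}% vs TC avg")
```

Running this reproduces the four deltas shown in the table, which is a useful sanity check that a single TC-wide baseline underlies all four comparisons.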

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/14/2025 has been entered. Claim(s) 1-3, 7-9, 11-13, 17-20, and 23 have been amended. Claim(s) 4-6, 10, and 14-16 have been cancelled. Claim(s) 25 has been added. Claim(s) 1-3, 7-9, 11-13, and 17-25 are pending examination and rejected as detailed below.

Response to Arguments

Applicant presents the following argument(s) regarding the previous office action: Applicant asserts that the 103 rejections of claims 1-3, 7-9, 11-13, and 17-25 are improper. Applicant asserts that at least the independent claims 1, 11, and 20 are allowable over the prior art in light of the amendments. Applicant's arguments with respect to claim(s) 1-3, 7-9, 11-13, and 17-25 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Regarding applicant's argument A, the examiner finds it moot. The applicant's amendments to independent claims 1, 11, and 20 do not overcome the rejections under the prior art of Wray in combination with Fergusson. Applicant's arguments rely on newly amended limitations to point to how their claim overcomes the prior art. After further search and consideration, the examiner would rely on newly cited portions of Wray to teach the amended limitations.
These newly cited portions would teach the new limitations as amended. Therefore claims 1, 11, and 20 would remain rejected as detailed below. The dependent claims would remain rejected at least due to their dependence on rejected independent claims. A more detailed explanation and mapping can be found below in the section titled "Claim Rejections - 35 USC 103."

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Claim(s) 1-3, 7-9, 11-13, and 17-25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wray (US PG Pub 2019/0329771) in view of Fergusson (US Pat. 10,899,345).

Regarding claim 1, Wray teaches a computer-implemented method comprising: ([0037] teaches a computer method) receiving sensor data associated with a period of operation in an environment by at least one sensor of a vehicle; ([0196] teaches receiving sensor data from a vehicle) processing the sensor data and thereby deriving (i) vehicle information that indicates a past and future trajectory of the vehicle during the period of operation, ([0184] teaches the system deriving the past and future locations of the vehicle. [0237] further teaches determining the future trajectory of the vehicle) (ii) respective agent information for each agent object detected in the vehicle's surrounding environment during the period of operation that indicates ([0209] teaches the system identifying the current spatiotemporal information for an external moving object, i.e. agent. [0198] teaches the system may categorize the agent information to make a determination of how it may interact with the vehicle, i.e. a scenario.
[0240] further teaches the system determining the future trajectory of an agent vehicle) and (iii) respective non-agent information for each non-agent object detected in the vehicle's surrounding environment during the period of operation; ([0097] teaches the system identifying any object around the vehicle) segmenting the period of operation into a sequence of contiguous decision units of the vehicle using the vehicle information, the respective agent information, the respective non-agent information, and at least one interaction prediction model that is configured to predict a likelihood of the vehicle's decision-making being impacted by a detected agent or non-agent object during a future time horizon, ([0141]-[0148] teaches the use of a Markov decision process to evaluate the current and future actions of a vehicle based on the vehicle's state and the respective environmental state. The vehicle states are determined as discrete spatial-temporal locations that are differentiated by the exact action that a vehicle took to achieve the new state. [0225]-[0228] teaches the system determining the likelihood that an agent will impact the vehicle's decision making process) wherein each respective decision unit in the sequence of contiguous decision units comprises a respective unit of time during which there is no change in which agents and non-agent objects are considered to be relevant to the vehicle's decision-making, ([0147]-[0148], [0166], and [0223] teach the system monitoring the "temporal location"; these locations include environmental information at each time period. As each temporal location is a slice in time, there is no change to what is relevant to a vehicle at a given time period. [0235] further teaches that at each temporal location information about the relevant objects is recorded) and wherein the segmenting involves evaluating whether each respective time of a series of times during the period of operation comprises a boundary point between a pair of contiguous decision units ([0141] teaches the system identifying an action that moves it from one state to another) by: evaluating whether there has been a change to which agent or non-agent objects are relevant to the vehicle's decision-making at the respective time by: identifying a respective set of objects that were detected within the vehicle's surrounding environment at the respective time; ([0235]-[0240] teaches the system sensing and identifying a respective set of objects around the ego vehicle at a given time period) determining whether any individual agent object and any individual non-agent object in the respective set of objects is considered to be relevant to the vehicle's decision-making ([0225]-[0226] teaches the system determining that a detected object at a respective time period is relevant to the vehicle operation) using the at least one interaction prediction model along with (i) a portion of the vehicle information associated with the respective time, (ii) respective agent information for any agent object detected in the vehicle's surrounding environment at the respective time, and (iii) respective non-agent information for any non-agent object detected in the vehicle's surrounding environment at the respective time; ([0225]-[0231] teaches that the system is constantly evaluating its surroundings and that it may determine that an external object, or itself, has changed in a way that would impact the decision making process) based on the determining, identifying a respective subset of objects considered to be relevant to the vehicle's decision-making at the respective time (Fig. 7 and [0232]-[0241] teaches the system determining which of a detected subset of objects, i.e.
"pedestrians," are considered relevant to the vehicle's travels and thus its decision making); comparing the respective subset of objects considered to be relevant to the vehicle's decision-making at the respective time to a subset of objects considered to be relevant to the vehicle's decision-making at a preceding time in the series of times; ([0133], [0209], [0236], and [0256] teach the system continuously updating the observed data regarding the surroundings, considering which objects are relevant at each respective time period, and comparing this to a previous time period to determine how much the relevancy has changed) and based on the evaluating, determining either that (i) the respective time defines a change in decision unit of the vehicle and comprises a boundary point between a pair of contiguous decision units if there has been a change to which agent or non-agent objects are relevant to the vehicle's decision-making at the respective time or (ii) the respective time does not define a change in decision unit of the vehicle and does not comprise a boundary point between a pair of contiguous decision units if there has not been a change to which agent or non-agent objects are relevant to the vehicle's decision-making at the respective time; ([0235]-[0240] teaches the system may determine for each temporal period whether or not the environment around the vehicle has changed and, based on the changes, update to the next temporal location which includes a new vehicle state and new vehicle action relevant to the environment) and generating a respective representation of each respective decision unit in the sequence of contiguous decision units comprising a searchable data structure that encodes the vehicle's interaction with (i) any agent object determined to be relevant to the vehicle's decision-making during the respective decision unit and (ii) any non-agent object determined to be relevant to the vehicle's decision-making during the respective
decision unit. ([0075]-[0079] teach the system storing the series of scenarios; each scenario is determined to have the operational environment and the information relevant to the vehicle. The scenarios are stored in a way that the computer can access them when needed. [0137] teaches the further idea of generating state specific scenarios that model the exact environment and surroundings of a vehicle. These states are generated and stored in a way that the vehicle can then access them by using present information to find a matching scenario, i.e. a searchable data structure) Wray does not teach respective agent information for each agent object detected in the vehicle's surrounding environment during the period of operation that indicates a respective past. However, Fergusson teaches "respective agent information for each agent object detected in the vehicle's surrounding environment during the period of operation that indicates a respective past." (Col. 18, lines 38-41 teaches determining the past trajectory of an external agent vehicle) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Wray in view of Fergusson, and to have a reasonable expectation of success. Both teach vehicle perception and control systems. Understanding where an agent comes from can allow the system to understand what may happen next. If a vehicle knows that an agent comes from one direction and can understand the route it takes, the vehicle can then determine what may be an unlikely way for the agent to go. This allows for optimal route planning. As Fergusson teaches in Col. 1, Background, a vehicle with a better perception system allows for safer decision making. This keeps drivers safe. Claims 11 and 20 are substantially similar and would be rejected for the same rationale as above.
Regarding claim 2, Wray teaches the computer-implemented method of claim 1, wherein the respective representation of each respective decision unit comprises a searchable data structure that includes (i) vehicle information that indicates a past and future trajectory of the vehicle during the respective decision unit (Fig. 7 and [0235]-[0237] teaches the system can have a model of a scenario; the scenario can have the trajectory information of the vehicle) and (ii) one or both of (a) agent information that indicates a past and future trajectory of at least one agent object that is determined to be relevant to the vehicle's decision-making during the respective decision unit, or (b) non-agent information for at least one static object that is determined to be relevant to the vehicle's decision-making during the respective decision unit. (Fig. 7 and [0235]-[0239] teaches the system will have the state information of the agent and non-agents around the vehicle including their respective trajectory information at different temporal locations) Claim 12 is substantially similar and would be rejected for the same rationale as above. Regarding claim 3, Wray teaches the computer-implemented method of claim 2, wherein one or both of (i) the vehicle information that indicates the past and future trajectory of the vehicle during the respective decision unit or (ii) the agent information that indicates the past and future trajectory of the at least one agent object that is determined to be relevant to the vehicle's decision-making during the respective decision unit comprises confidence information indicating an estimated accuracy of the past and future trajectory. ([0126]-[0131], [0154] and [0158] teach the system having determined a probability, or confidence, that the modeled information is accurate and that the trajectories of the vehicle and/or agents are correct) Claim 13 is substantially similar and would be rejected for the same rationale as above.
Regarding claim 7, Wray teaches the computer-implemented method of claim 1, further comprising: based on a selected decision unit included in the sequence of contiguous decision units, predicting one or more alternative versions of the selected decision unit. (Fig. 8 and [0261] teach the system determining alternative versions of each scene) Claim 17 is substantially similar and would be rejected for the same rationale as above. Regarding claim 8, Wray teaches the computer-implemented method of claim 7, wherein predicting one or more alternative versions of the selected decision unit comprises: generating, for the selected decision unit, one or more alternative versions of one or both of (i) vehicle information that indicates a past and future trajectory of the vehicle during the selected decision unit (Fig. 8 and [0261] teach the system generating alternative trajectory data for the vehicle during a temporal location) or (ii) agent information that indicates a past and future trajectory of at least one agent object determined to be relevant to the vehicle's decision-making during the selected decision unit. (Fig. 8 and [0268] teach the system generating alternative trajectories for each of the agent trajectories at various temporal locations) Claim 18 is substantially similar and would be rejected for the same rationale as above. Regarding claim 9, Wray teaches the computer-implemented method of claim 1, further comprising: based on (i) a first decision unit included in the sequence of contiguous decision units and (ii) a second decision unit included in the sequence of contiguous decision units, generating a representation of a new decision unit ([0250]-[0256] teaches the system taking a series of decision units, i.e.
scenarios, and instantiating them at the same time which would be analogous to a representation of a new scenario) comprising: at least one of (i) vehicle information that indicates a past and future trajectory of the vehicle during the first decision unit or (ii) agent information that indicates a past and future trajectory of at least one agent object determined to be relevant to the vehicle's decision-making during the first decision unit; ([0254]-[0259] teaches the system determining the trajectory of a vehicle and/or agents for the first temporal location, i.e. decision unit) and at least one of (i) vehicle information that indicates a past and future trajectory of the vehicle during the second decision unit or (ii) agent information that indicates a past and future trajectory of at least one agent object determined to be relevant to the vehicle's decision-making during the first decision unit. ([0254]-[0259] teach the system having vehicle and/or agent information including trajectory data for a subsequent temporal location) Claim 19 is substantially similar and would be rejected for the same rationale as above. 
Regarding claim 21, Wray teaches the computer-implemented method of claim 1, wherein a change to which agent or non-agent objects are relevant to the vehicle's decision-making at the respective time comprises one of: a change from (i) no agent or non-agent object determined to be relevant to the vehicle's decision-making at a prior time to (ii) at least one agent or non-agent object determined to be relevant to the vehicle's decision-making at the respective time; ([0260]-[0270] teaches the system monitoring potential agents to determine the relevance of the agents; this includes updating the environmental information to include changes to the number of agents detected around the vehicle) a change from (i) at least one agent or non-agent object determined to be relevant to the vehicle's decision-making at a prior time to (ii) no agent or non-agent object determined to be relevant to the vehicle's decision-making at the respective time; ([0260]-[0270] teaches the system monitoring potential agents to determine the relevance of the agents; this includes updating the environmental information to include changes to the number of agents detected around the vehicle) or a change from (i) a first set of one or more agent or non-agent objects determined to be relevant to the vehicle's decision-making at a prior time to (ii) a second set of one or more agent or non-agent objects determined to be relevant to the vehicle's decision-making at the respective time, wherein the first set of one or more agent or non-agent objects differs from the second set of one or more agent or non-agent objects. ([0260]-[0270] teaches the system monitoring potential agents to determine the relevance of the agents; this includes updating the environmental information to include changes to the number of agents detected around the vehicle) Regarding claim 22, Wray teaches the computer-implemented method of claim 1, wherein at least one given decision unit included in the sequence of contiguous decision units comprises a timeframe during which no agent or non-agent object is determined to be relevant to the vehicle's decision-making, and wherein the respective representation of the given decision unit comprises a searchable data structure indicating that the vehicle did not interact with any agent or non-agent object determined to be relevant to the vehicle's decision-making during the given decision unit. ([0147] teaches that the changes in vehicle state at each temporal location are used as an analogous structure to the decision unit in the current application. [0150] teaches that changes in state information can be reflective of no interaction between a vehicle and agents, the given example being a change in state based on the vehicle arriving at an intersection.
This is a change in state without the requisite interaction with an agent) Regarding claim 23, Wray teaches the computer-implemented method of claim 1, wherein determining whether any individual agent object and any individual non-agent object in the respective set of objects is considered to be relevant to the vehicle’s decision-making comprises: inputting, into the at least one interaction prediction model, (i) the portion of the vehicle information associated with the respective time, (ii) the respective agent information for any agent object detected in the vehicle's surrounding environment at the respective time, and (iii) respective non-agent information for any non-agent object detected in the vehicle's surrounding environment at the respective time; ([0149]-[0151] teaches the system determining the probability that a vehicle’s future decision making will be impacted by changes in the operating environment of the area. This prediction is further explained in the example given in [0226]-[0230] in which the vehicle system inputs information regarding location and trajectory of the vehicle and agents/non-agents to determine the likelihood of interference between the two) for any agent object detected in the vehicle's surrounding environment, using the at least one interaction prediction model to predict a respective likelihood of the vehicle's decision-making being impacted by the agent object during the future time horizon; ([0149]-[0151] teaches the system determining the probability that a vehicle’s future decision making will be impacted by changes in the operating environment of the area) and for any non-agent object detected in the vehicle's surrounding environment, using the at least one interaction prediction model to predict a respective likelihood of the vehicle's decision- making being impacted by the non-agent object during the future time horizon. 
([0149]-[0151] teaches the system determining the probability that a vehicle's future decision making will be impacted by changes in the operating environment of the area) Regarding claim 24, Wray teaches the computer-implemented method of claim 1, wherein the at least one interaction prediction model utilizes a confidence level associated with sensor data for the detected agent or non-agent object to predict the likelihood of the vehicle's decision-making being impacted by the detected agent or non-agent object during the future time horizon. ([0149]-[0151] teaches the system determining the probability that a vehicle's future decision making will be impacted by changes in the operating environment of the area) Regarding claim 25, Wray teaches the computer-implemented method of claim 1, further comprising: storing the respective representation of each respective decision unit in an indexed and searchable repository of scenes. ([0075]-[0079] teach the system storing the series of scenarios; each scenario is determined to have the operational environment and the information relevant to the vehicle. The scenarios are stored in a way that the computer can access them when needed. [0195] further teaches that the system can store data in a memory. The system can access each of these respective stored scenarios; therefore it is implicit that the data is indexed and searchable in the repository.)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Jojo-Verge (US PG Pub 2020/0363800) teaches methods and systems for decision making in an autonomous vehicle (AV). A probabilistic explorer reduces the breadth and depth of the potentially infinite actions being explored, allowing for an accurate prediction on a future scene to a defined time horizon and an appropriate selection of a goal state anywhere within that time horizon.
The probabilistic explorer uses a neural network (NN) to suggest best (probabilistically speaking) actions for the AV and scene values, and a modified Monte Carlo Tree Search to identify a sequence of actions, where exploration is guided by the NN. The probabilistic explorer processes the suggested actions and driving scene(s) to provide estimated trajectories of all scene actors and an estimated trajectory for the AV at every time step for every action explored. A virtual driving scene is generated, which is iteratively processed to determine a vehicle goal state or vehicle low-level control actions. Isele (US PG Pub 2020/0391738) teaches autonomous vehicle interactive decision making may include identifying two or more traffic participants and gaps between the traffic participants, selecting a gap and identifying a traffic participant based on a coarse probability of a successful merge between the autonomous vehicle and a corresponding traffic participant, generating an intention prediction associated with the identified traffic participant based on vehicle dynamics of the identified traffic participant, predicted behavior of the identified traffic participant in the absence of the autonomous vehicle, and predicted behavior of the identified traffic participant in the presence of the autonomous vehicle making a maneuver creating an interaction between the identified traffic participant and the autonomous vehicle, generating an intention prediction associated with the autonomous vehicle, calculating an updated probability of a successful interaction between the identified traffic participant and the autonomous vehicle based on the intention prediction associated with the identified traffic participant and the autonomous vehicle. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS STRYKER whose telephone number is (571)272-4659. The examiner can normally be reached Monday-Friday 7:30-5:00. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace can be reached at (571) 272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /N.S./Examiner, Art Unit 3665 /CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665
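Stripped of the legal mapping, the boundary-point test recited in claim 1 is a change-point detection over sets: a time step begins a new decision unit exactly when the subset of objects deemed relevant to the vehicle's decision-making differs from the subset at the preceding step. A minimal sketch of that logic, assuming hypothetical inputs (the `relevance_scores` dict and the 0.5 threshold stand in for the claimed interaction prediction model's likelihood output; none of these names appear in the application):

```python
def segment_decision_units(times, detections, relevance_scores, threshold=0.5):
    """Split a period of operation into contiguous decision units.

    A new unit begins whenever the subset of detected objects deemed
    relevant to the vehicle's decision-making changes (the claim 1
    boundary-point test). `relevance_scores[t][obj]` stands in for the
    interaction prediction model's likelihood output; the threshold is
    an illustrative assumption, not a value from the application.
    """
    units, current = [], []
    prev_relevant = None
    for t in times:
        # Objects detected at time t whose predicted likelihood of
        # impacting decision-making clears the threshold.
        relevant = {o for o in detections[t]
                    if relevance_scores[t].get(o, 0.0) >= threshold}
        if prev_relevant is not None and relevant != prev_relevant:
            units.append(current)  # boundary point: close the current unit
            current = []
        current.append(t)
        prev_relevant = relevant
    if current:
        units.append(current)
    return units
```

For example, with a single pedestrian whose predicted relevance rises above the threshold at t=2 and falls below it at t=4, the five-step period splits into three contiguous units, with boundaries exactly where the relevant set changes.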

Prosecution Timeline

Nov 23, 2020
Application Filed
Jan 13, 2022
Response after Non-Final Action
Jan 05, 2024
Non-Final Rejection — §103
Apr 16, 2024
Examiner Interview (Telephonic)
Apr 16, 2024
Examiner Interview Summary
May 10, 2024
Response Filed
May 10, 2024
Response after Non-Final Action
Aug 21, 2024
Response Filed
Jul 03, 2025
Final Rejection — §103
Oct 14, 2025
Request for Continued Examination
Oct 22, 2025
Response after Non-Final Action
Jan 08, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12524021
FAULT TOLERANT MOTION PLANNER
2y 5m to grant · Granted Jan 13, 2026
Patent 12492903
NAVIGATION DEVICE AND METHOD OF MANUFACTURING NAVIGATION DEVICE
2y 5m to grant · Granted Dec 09, 2025
Patent 12475526
COMPUTING SYSTEM WITH A MAP AUTO-ZOOM MECHANISM AND METHOD OF OPERATION THEREOF
2y 5m to grant · Granted Nov 18, 2025
Patent 12455576
INFORMATION DISPLAY SYSTEM AND INFORMATION DISPLAY METHOD
2y 5m to grant · Granted Oct 28, 2025
Patent 12449822
GROUND CLUTTER AVOIDANCE FOR A MOBILE ROBOT
2y 5m to grant · Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 40%
With Interview: 67% (+27.6%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
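The headline numbers are straightforward to reproduce from the stated counts: the career allow rate is grants divided by resolved cases, and the interview lift is the percentage-point gap between the with-interview allow rate and the baseline. A sketch of that arithmetic (the with-interview case counts below are hypothetical, chosen only to land near the published figures; the report gives the lift, not the underlying split):

```python
# Career counts stated in the report.
granted, resolved = 15, 38

# Hypothetical split of the 38 resolved cases by whether an examiner
# interview was held (illustrative assumption, not data from the report).
interview_cases, interview_grants = 12, 8

allow_rate = 100 * granted / resolved                 # career allow rate, ~39.5%
with_rate = 100 * interview_grants / interview_cases  # with-interview rate, ~66.7%
lift = with_rate - allow_rate                         # lift in percentage points

print(f"Career allow rate: {allow_rate:.1f}%")
print(f"With interview: {with_rate:.1f}% (lift {lift:+.1f} pts)")
```

Note the lift is expressed in percentage points (67% vs 40%), not as a relative increase, which is why "40% grant probability" and "67% with interview" differ by roughly the stated +27.6%.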
