DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to Amendments and Remarks filed on 12/22/2025 for application number 18/775,789 filed on 07/17/2024, in which claims 21-40 were originally presented for examination. Claims 21, 26, 31, 33 & 40 are currently amended, claim 25 has been cancelled, and claim 41 has been added as a new claim depending on claim 40. Accordingly, claims 21-24 & 26-41 are currently pending.
Priority
Acknowledgment is made of applicant’s claim for priority of provisional patent application No. 62/589,951 filed on 11/22/2017, and parent applications 15/848,564 and 16/792,725 filed on 12/20/2017 and 02/17/2020, respectively.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/17/2024 has been received and considered.
Examiner Notes
Examiner cites particular paragraphs or columns and lines in the references as applied to Applicant’s claims for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. The prompt development of a clear issue requires that the replies of the Applicant meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP §2163.06. Applicant is reminded that the Examiner is entitled to give the claims their Broadest Reasonable Interpretation (BRI). Furthermore, the Examiner is not bound by any definition advanced by Applicant that is not specifically set forth in the claims. See MPEP §2111.01.
Response to Arguments
Arguments filed on 12/22/2025 have been fully considered and are addressed as follows:
Regarding the Interview Summary: Examiner refers to the Interview Summary mailed on 12/23/2025 for any agreements reached with the Applicant during the interview.
Regarding the Claim Objections: The claim objections are withdrawn, as the amended claims filed on 12/22/2025 have properly addressed the claim informality objections recited in the Non-Final Office Action mailed on 10/02/2025.
Regarding the claim rejections under 35 USC §112(b): The rejections of claims 26 & 34 for lack of antecedent basis are withdrawn, as the amended claims filed on 12/22/2025 recite proper antecedent basis.
Regarding the claim rejections under 35 USC §102(a)(1): Applicant’s arguments regarding the rejections of claims as being clearly anticipated by the prior art of Zhu (US-9,381,916-B2) have been fully considered. However, those arguments are not persuasive.
Applicant asserts that:
“the ‘object-centric’ prediction disclosed by Zhu is the opposite of ‘generating, based on the data, a model from the perspective of the autonomous vehicle,’ as recited by independent claim 21. Moreover, Zhu fails to contemplate processing of object data from the perspective of the autonomous vehicle which may preclude the perception/prediction data from the perspective of the autonomous vehicle itself”
(see Remarks pages 8 & 9; emphasis added)
The examiner respectfully disagrees. Examiner notes that Applicant’s arguments all focus on new limitations added to the amended base claims 21, 31 & 40, apparently to overcome the anticipation rejection under §102(a)(1) recited in the Non-Final Office Action mailed on 10/02/2025. Those arguments are rendered moot in light of the new grounds of rejection outlined below, which were necessitated by Applicant’s amendment; i.e., Applicant’s arguments and amendments have been addressed in the new rejection outlined below.
For at least the foregoing reasons, and the rejections outlined below, the prior art rejections are maintained.
Claim Rejections - 35 USC §102
In the event the determination of the status of the application as subject to AIA 35 USC §102 and §103 (or as subject to pre-AIA 35 USC §102 and §103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 USC §102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21-24, 26, 27, 29-35 & 38-41 are rejected under 35 USC §102(a)(1) as being clearly anticipated by Patent No. US-9,381,916-B2 to Zhu et al. (hereinafter “Zhu”).
As per claim 21, Zhu discloses a computer-implemented method (Zhu, in at least Abstract, Fig. 8 and Col(s). 1-2 & 10, discloses a method for predicting behaviors of detected objects through environment representation, by performing a behavior analysis on mobile objects in the vicinity of an autonomous vehicle), comprising:
[Image: Zhu’s Fig. 6, reproduced here for convenience]
obtaining, from a perspective of an autonomous vehicle, data associated with a first object and a second object within a surrounding environment of the autonomous vehicle (Zhu, in at least Abstract and Col(s). 1-2, discloses an autonomous vehicle that detects nearby objects [i.e., from a perspective of an autonomous vehicle], such as vehicles and pedestrians. Zhu further discloses sensors that are used to detect a plurality of objects external to the vehicle [i.e., from a perspective of an autonomous vehicle], and data corresponding to the objects is sent to a processor, wherein the processor analyzes the data corresponding to the objects to identify the objects as mobile objects, e.g., automobiles, trucks, pedestrians, bicycles, etc.);
generating, based on the data, a model from the perspective of the autonomous vehicle, wherein the model is indicative of a first trajectory of the first object, a second trajectory of the second object, and a dependency between the first trajectory and the second trajectory (Zhu, in at least Col. 3, discloses the data 134 is retrieved, stored or modified by processor 120 in accordance with the instructions 132, e.g., although the system and method are not limited by any particular data structure, the data is stored in computer registers, in a relational database as a table having a plurality of different fields and records, XML documents or flat files, formatted in any computer readable format, wherein image data is stored as bitmaps comprised of grids of pixels that are stored in accordance with formats that are compressed or uncompressed, lossless (e.g., BMP) or lossy (e.g., JPEG), and bitmap or vector-based (e.g., SVG), as well as computer instructions for drawing graphics. Zhu, in at least Fig(s). 1 & 6 [reproduced here for convenience] and Col(s). 8-11, further discloses the said data 134 includes detailed map information 136, e.g., highly detailed maps identifying the shape and elevation of roadways, lane lines, intersections, crosswalks, speed limits, traffic signals, buildings, signs, real time traffic information, or other such objects and information, wherein the said map information includes three-dimensional terrain maps incorporating one or more of the objects listed above, e.g., the vehicle may determine that another car is expected to turn based on real-time data, e.g., using its sensors to determine the current GPS position of another car, and other data, e.g., comparing the GPS position with previously-stored lane-specific map data to determine whether the other car is within a turn lane. Zhu’s vehicle’s computer 110 calculates a route using a map, its current location, and the destination, wherein based on the route or as part of the route generation, the vehicle determines a control strategy for controlling the vehicle along the route to the destination. Zhu further discloses that observation and learning are accomplished by tools and techniques of machine learning, wherein autonomous driving system 100 uses object-centric behavior models for numerous types, or classifications, of objects, including automobiles, bicycles, or pedestrians, wherein database 137 includes different object-centric behavior models for different classifications of objects, including autonomous or non-autonomous vehicles);
generating, based at least in part on the model, first data from the perspective of the autonomous vehicle indicating that the first object is predicted to merge into a lane of the second object and a potential intersection of the first trajectory of the first object and the second trajectory of the second object in the lane (Zhu, in at least Fig(s). 1, 5 & 6, Abstract and Col(s). 1-2, 8-13 & 16, discloses the processor is capable of executing computer instructions to determine the position and movement of a first mobile object relative to one or more of the other detected objects, wherein the predicted behavior of other objects is based not only on the type of object and its current trajectory, but also on some likelihood that the object may obey traffic rules. Furthermore, Zhu discloses the vehicle’s computer 110 calculates a route using a map, its current location, and the destination, wherein based on the route or as part of the route generation, the vehicle determines a control strategy for controlling the vehicle along the route to the destination. Zhu’s disclosed vehicle determines its location to within a few inches based on a combination of the GPS receiver data, the sensor data, as well as the detailed map information; in response, the navigation system generates a route between the present location of the vehicle and the destination. Zhu further discloses the vehicle uses the map data to determine where traffic signals or other objects should appear and takes actions, for example, by signaling turns or changing lanes, and performing a behavior analysis on mobile objects in the vicinity of an autonomous vehicle to determine how the detected vehicles and pedestrians perceive their surroundings. As shown in Fig. 6, autonomous vehicle 101 determines that vehicle 620 intends to turn right, as indicated by arrow C [i.e., the first trajectory of the first object], and predicts that pedestrians 610 are waiting to cross the street along path B [i.e., the second trajectory of the second object in the lane], then determines whether pedestrians 610 will cross the street before or after vehicle 620 has completed its right-hand turn [i.e., potential intersection]. Therefore, autonomous vehicle 101 implements an object-centric view by determining how pedestrians 610 perceive their surroundings, and determining how they will react to those surroundings. Zhu also discloses, based on vehicle 510's predicted lane change [i.e., first data indicating that the first object is predicted to merge into a lane of the second object], autonomous vehicle 101 then determines that it is safer to continue along path C1, instead of changing lanes in accordance with path C2, in that a lane change could create a potential collision with vehicle 510 [i.e., potential intersection of the first trajectory of the first object and the second trajectory of the second object in the lane]);
based at least in part on the first data, generating second data from the perspective of the autonomous vehicle indicating a predicted movement of the first object or a predicted movement of the second object that avoids the potential intersection (Zhu, in at least Abstract and Col(s). 1-2 & 10, discloses based on how the first object perceives its surroundings, the processor predicts the likely behavior of the first object, then predicts the likely behavior of the second object based on the determined position and movement of the plurality of objects and based on the predicted likely behavior of the first object. Zhu further discloses the processor adjusts the predicted likely behavior of the first object based on the predicted likely behavior of the second object. As shown in Zhu’s Fig. 6, the autonomous vehicle 101 will continue on path A1 if it is determined that pedestrians 610 will cross the street after vehicle 620 has completed its right-hand turn. Alternatively, vehicle 101 will take path A2, if it is determined that pedestrians 610 will cross the street before vehicle 620 travels along path C); and
controlling motion for the autonomous vehicle based at least in part on the second data (Zhu, in at least Abstract and Col(s). 1-2 & 10, discloses the autonomous vehicle uses this information to safely maneuver around all nearby objects, wherein the vehicle is then capable of orienting itself autonomously in an intended position and velocity based at least in part on the likely behavior of the objects, wherein the processor then provides a command to orient the vehicle relative to the second object based on the likely behavior of the second object, which is based on the predicted likely behavior of the first object. Zhu further discloses the processor adjusts the predicted likely behavior of the first object based on the predicted likely behavior of the second object and provides a command to orient the vehicle relative to the first object based on the adjusted likely behavior of the first object).
As per claim 22, Zhu discloses the computer-implemented method of claim 21; accordingly, the rejection of claim 21 above is incorporated. Zhu further discloses:
determining the potential intersection of the first trajectory and the second trajectory based at least in part on a traffic rule (Zhu, in at least Fig(s). 1 & 6, Abstract and Col(s). 1-2 & 8, discloses the processor is capable of executing computer instructions to determine the position and movement of a first mobile object relative to one or more of the other detected objects, wherein the predicted behavior of other objects is based not only on the type of object and its current trajectory, but also on some likelihood that the object may obey traffic rules. Zhu, in at least Col(s). 11-13 & 16, further discloses the vehicle’s computer 110 calculates a route using a map, its current location, and the destination, wherein based on the route or as part of the route generation, the vehicle determines a control strategy for controlling the vehicle along the route to the destination. Zhu’s disclosed vehicle determines its location to within a few inches based on a combination of the GPS receiver data, the sensor data, as well as the detailed map information; in response, the navigation system generates a route between the present location of the vehicle and the destination. Zhu also discloses the vehicle uses the map data to determine where traffic signals or other objects should appear and takes actions, for example, by signaling turns or changing lanes).
As per claim 23, Zhu discloses the computer-implemented method of claim 22; accordingly, the rejection of claim 22 above is incorporated.
Zhu further discloses wherein the traffic rule is associated with a merge area (Zhu, in at least Fig(s). 1, 5 & 6 and Col(s). 8-11, discloses the predicted behavior of other objects is based not only on the type of object and its current trajectory, but also on some likelihood that the object may obey traffic rules, wherein the system includes a library of rules about what objects will do in various situations, e.g., a car in a left-most lane that has a left-turn arrow mounted on the light will very likely turn left when the arrow turns green. Zhu further discloses that, based on vehicle 510's predicted lane change, autonomous vehicle 101 then determines that it is safer to continue along path C1, instead of changing lanes in accordance with path C2, in that a lane change could create a potential collision [implies the traffic rule is associated with a merge area] with vehicle 510. Zhu also discloses an adjustment in predicted behavior is required due to the likely behavior of other detected objects, e.g., in Fig. 5, autonomous vehicle 101 does not predict a lane change by vehicle 510 until after it has predicted that vehicle 520 is making a left-hand turn. In addition, the lane change by vehicle 510 will be made more likely, given that vehicle 520 will need to wait for vehicles 540 and 550 to pass before making its turn [implies the traffic rule is associated with a merge area]. Zhu further discloses the computer causes the vehicle to take particular actions in response to the predicted actions of the surrounding objects, e.g., if the computer 110 determines that the other car is turning at the next intersection as noted above, the computer may slow the vehicle down as it approaches the intersection [implies the traffic rule is associated with a merge area]. Zhu also discloses data 134 includes detailed map information 136, e.g., highly detailed maps identifying the shape and elevation of roadways, lane lines, intersections, crosswalks, speed limits, traffic signals, buildings, signs, real time traffic information, or other such objects and information).
As per claim 24, Zhu discloses the computer-implemented method of claim 21; accordingly, the rejection of claim 21 above is incorporated. Zhu further discloses:
determining the potential intersection of the first trajectory and the second trajectory based at least in part on map data, the map data indicating one or more lane boundaries associated with the lane (Zhu, in at least Fig(s). 1, 5 & 6 and Col. 8, discloses the said data 134 includes detailed map information 136, e.g., highly detailed maps identifying the shape and elevation of roadways, lane lines [i.e., one or more lane boundaries associated with the lane], intersections, crosswalks, speed limits, traffic signals, buildings, signs, real time traffic information, or other such objects and information, wherein the said map information includes three-dimensional terrain maps incorporating one or more of the objects listed above, e.g., the vehicle may determine that another car is expected to turn based on real-time data, e.g., using its sensors to determine the current GPS position of another car, and other data, e.g., comparing the GPS position with previously-stored lane-specific map data to determine whether the other car is within a turn lane).
As per claim 25, the claim has been cancelled.
As per claim 26, Zhu discloses the computer-implemented method of claim 21; accordingly, the rejection of claim 21 above is incorporated. Zhu further discloses:
determining the potential intersection of the first trajectory and the second trajectory based at least in part on sensor data, the sensor data indicating one or more lane boundaries associated with the lane (Zhu, in at least Fig(s). 1, 5 & 6 and Col(s). 7-8, discloses sensors that are used to identify, track and predict the movements of pedestrians, bicycles, other vehicles, or objects in the roadway, e.g., the sensors may provide the location and shape information of objects surrounding the vehicle to computer 110, wherein the said data 134 includes detailed map information 136, e.g., highly detailed maps identifying the shape and elevation of roadways, lane lines [i.e., one or more lane boundaries associated with the lane], intersections, crosswalks, speed limits, traffic signals, buildings, signs, real time traffic information, or other such objects and information, wherein the said map information includes three-dimensional terrain maps incorporating one or more of the objects listed above, e.g., the vehicle may determine that another car is expected to turn based on real-time data, e.g., using its sensors to determine the current GPS position of another car, and other data, e.g., comparing the GPS position with previously-stored lane-specific map data to determine whether the other car is within a turn lane).
As per claim 27, Zhu discloses the computer-implemented method of claim 21; accordingly, the rejection of claim 21 above is incorporated. Zhu further discloses:
generating, based at least in part on the second data, a motion plan for the autonomous vehicle based on the second data; and
providing, to at least one controller, one or more signals indicating instructions to control the autonomous vehicle in accordance with the motion plan (Zhu, in at least Abstract & Col(s). 1-2 & 4, discloses the processor adjusts the predicted likely behavior of the first object based on the predicted likely behavior of the second object and provides a command to orient the vehicle relative to the first object based on the adjusted likely behavior of the first object, wherein the processor issues a navigation command, where a navigation command comprises a command to the steering device relating to the intended direction of the vehicle, e.g., a command to turn the front wheels of a car 10 degrees to the left, or to the engine relating to the intended velocity of the vehicle, e.g., a command to accelerate. Navigation commands also include commands to brakes to slow the vehicle down, as well as other commands affecting the movement of the vehicle. Zhu further discloses the computer 110 controls the direction and speed of the vehicle by controlling various components, e.g., if the vehicle is operating in a completely autonomous mode, computer 110 causes the vehicle to accelerate by increasing fuel or other energy provided to the engine, decelerate by decreasing the fuel supplied to the engine or by applying brakes and change direction by turning the front two wheels).
As per claim 29, Zhu discloses the computer-implemented method of claim 21; accordingly, the rejection of claim 21 above is incorporated.
[Image: Zhu’s Fig. 5, reproduced here for convenience]
Zhu further discloses wherein the predicted movement of the first object comprises the first object travelling behind the second object within the lane (Zhu, in at least Fig. 5 [reproduced here for convenience] and Col(s). 1-2, 9-10 & 15, discloses an autonomous vehicle that detects nearby objects, such as vehicles and pedestrians, wherein sensors are used to detect a plurality of objects external to the vehicle, and data corresponding to the objects is sent to a processor that analyzes the data corresponding to the objects to identify the objects as mobile objects, e.g., automobiles, trucks, pedestrians, bicycles, etc. Zhu further discloses autonomous vehicle 101 determines how it would react if placed in vehicle 510's position behind the turning vehicle 520, and performs a similar, object-centric behavior prediction for each of the other nearby vehicles. For example, autonomous vehicle 101 may figuratively place itself in the position of vehicle 520, and determine that, given the presence of vehicles 540 and 550, vehicle 520 will need to come to a complete stop before making the left-hand turn along path A2. In turn, autonomous vehicle 101 will increase the probability of vehicle 510 changing lanes along path B2, as it will be directly behind a stopped vehicle. Zhu also discloses that if the second autonomous vehicle is behind the first vehicle, it may use the information to determine how to maneuver the vehicle).
As per claim 30, Zhu discloses the computer-implemented method of claim 21; accordingly, the rejection of claim 21 above is incorporated.
Zhu further discloses wherein the predicted movement of the first object comprises the first object travelling in front of the second object within the lane (Zhu, in at least Fig. 5 and Col. 7, discloses if the vehicle determines that another object is a bicycle that is beginning to ascend a steep hill in front of the vehicle, the computer may predict that the bicycle will soon slow down).
As per claims 31-35, the claims are directed towards computing systems that recite similar limitations performed by the computer-implemented methods of claims 21-24, 26 & 27. The cited portions of Zhu in the rejections of claims 21-24, 26 & 27 teach the same steps performed by the computing systems of claims 31-35. Therefore, claims 31-35 are rejected under the same rationales used in the rejections of claims 21-24, 26 & 27 as outlined above.
As per claim 38, Zhu discloses the computer system of claim 31; accordingly, the rejection of claim 31 [i.e., claim 21 rejection] above is incorporated.
Zhu further discloses wherein the model comprises a model trained by one or more machine-learning training techniques (Zhu, in at least Fig. 1 and Col(s). 8-10, discloses that observation and learning are accomplished by tools and techniques of machine learning, wherein autonomous driving system 100 uses object-centric behavior models for numerous types, or classifications, of objects, including automobiles, bicycles, or pedestrians. Zhu further discloses database 137 includes different object-centric behavior models for different classifications of objects, including autonomous or non-autonomous vehicles).
As per claim 39, Zhu discloses the computer system of claim 31; accordingly, the rejection of claim 31 [i.e., claim 21 rejection] above is incorporated.
Zhu further discloses:
generating at least one of the first trajectory or the second trajectory based at least in part on a policy associated with at least one of the predicted movement of the first object or the predicted movement of the second object, wherein the policy is associated with a scenario that comprises yielding (Zhu, in at least Fig(s). 1 & 6 & Col(s). 8-10 & 15, discloses the system includes a library of rules [i.e., a policy] about what objects will do in various situations, e.g., a car in a left-most lane that has a left-turn arrow mounted on the light will very likely turn left when the arrow turns green. Zhu further discloses the computer causes the vehicle to take particular actions [implies policy] in response to the predicted actions of the surrounding objects, e.g., if the computer 110 determines that the other car is turning at the next intersection as noted above, the computer may slow the vehicle down [i.e., a scenario that comprises yielding] as it approaches the intersection. Zhu also discloses if the second vehicle determines that the object is moving towards the second vehicle’s path, the second vehicle may slow down [i.e., policy is associated with a scenario that comprises yielding]).
As per claims 40 & 41, the claims are directed towards computer-readable media storing instructions that are executable by one or more processors to perform operations similar to the steps performed by the computer-implemented methods of claims 21 & 26. The cited portions of Zhu in the rejections of claims 21 & 26 teach the same steps performed by the instructions of claims 40 & 41. Therefore, claims 40 & 41 are rejected under the same rationales used in the rejections of claims 21 & 26 as outlined above.
Allowable Subject Matter
Claims 28, 36 & 37 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten to include all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See previously mailed PTO-892 form(s).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tarek Elarabi whose telephone number is (313)446-4911. The examiner can normally be reached Monday through Thursday, 6:00 AM - 4:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Nolan can be reached on (571)270-7016. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or (571)272-1000.
/Tarek Elarabi, Ph.D./Primary Examiner, Art Unit 3661