DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-2 and 8-9 have been amended. Claims 15-20 have been added.
Claims 1-20 are pending.
Response to Arguments
Applicant’s arguments, see page 7, filed 10/22/2025, with respect to the objections to claims 1 and 8 have been fully considered and are persuasive. The objections to claims 1 and 8 have been withdrawn.
Applicant’s arguments with respect to the rejections of claims 1-14 under 35 U.S.C. 101 have been fully considered and are persuasive. The 35 U.S.C. 101 rejections of claims 1-14 have been withdrawn.
Applicant’s arguments with respect to the rejections of claims 1-14 under 35 U.S.C. 102(a)(1) have been fully considered. In regard to Applicant’s argument that Ma fails to disclose obtaining information, by a processing circuit, about locations that are associated with multi-domain identifiers (MDIs) statistics, the Examiner respectfully disagrees, as these limitations are disclosed in at least paragraphs 18, 22, 23, 24, 28, and 35, where Ma discloses collecting data pertaining to elements affecting the vehicle at various locations. However, Applicant’s arguments pertaining to the newly amended limitation of “selecting narrow artificial intelligence agents for generating driving related decisions” are persuasive. Therefore, the rejection has been withdrawn. Upon further consideration, however, a new ground of rejection is made under 35 U.S.C. 103 over Ma et al. (20200324794; hereinafter Ma, already of record) in view of Palanisamy et al. (20200033869; hereinafter Palanisamy), as Applicant’s arguments pertain to newly amended limitations not addressed in the prior Office Action of record.
A detailed rejection follows below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (20200324794; hereinafter Ma, already of record) in view of Palanisamy et al. (20200033869; hereinafter Palanisamy).
Regarding claim 1, Ma teaches a method for localized driving, the method comprises:
obtaining, by a processing circuit of a vehicle, information about locations that are associated with multi-domain identifiers (MDIs) statistics (Ma: “The perception module 102 and the environmental module 104 may collect perceptual features via sensors (e.g. lidar, radar, camera, and location information) and process them to get localization and kinematic information pertaining to relevant agents and objects in the ego vehicle's environment” ¶ 18), MDIs of each location are indicative of elements affecting the vehicle at the location (Ma: “The input processed vehicle and object data 240 may be obtained from sensor data (such as, for example, cameras, radar, lidar, etc.), map data, and other data providing information about vehicles and other objects in the vicinity of the ego vehicle, and may be received via a sensor interface 245. In some embodiments, the input processed vehicle and object data 240 may be obtained from a perception module (e.g., via perception module 102 and/or environmental module 104 as shown in FIG. 1, already discussed)” ¶ 22); wherein each MDI is generated by (i) providing different domains of multi-domain information to different perception modules (“The graph extraction module 210, as further described with reference to FIG. 3 herein, may generate a series of time-stamped object graphs based on input processed vehicle and object data 240 ... local conditions data 250 may also be input to the graph extraction module 210 and encompassed, along with the processed vehicle and object data, in the generated time-stamped object graphs” ¶ 22), each perception module is associated with a dedicated domain of the different domains (Ma: Fig. 4 Element 410, “a graph attention network is a neural network that operates on graph-structured data, by stacking neural network layers in which nodes are able to attend to their neighborhoods' features ... 
A set of time-stamped object graphs 420 provides a set of node features (i.e., the coordinate values for each traffic agent) as input to the graph attention network 410. Each traffic agent is represented as a node in a graph and the edges denote a meaningful relationship between two agents” ¶ 28), (ii) generating class signatures by the multiple perception modules (Ma: “After processing via the M layers of the graph attention network 410, a resulting set of relational object representations 430 may be obtained. The relational object representations 430 may provide a feature matrix for each time stamp in time window” ¶ 35), the class signatures are indicative of classes related to the multiple domains, the multiple domains comprise at least three of (a) elements sensed by one or more sensors of a vehicle (Ma: “a relational reasoning system 200 may include a framework comprising a graph extraction module 210, a first neural network 220, and a second neural network 230 ... The input processed vehicle and object data 240 may be obtained from sensor data (such as, for example, cameras, radar, lidar, etc.), map data, and other data providing information about vehicles and other objects in the vicinity of the ego vehicle, and may be received via a sensor interface 245” ¶ 22, see also ¶ 23), (b) legal limitations applicable to driving the vehicle, (c) vehicle status (Ma: “indirect interactions between vehicles (e.g., flashing headlights) or between vehicle/pedestrian/biker (e.g., manual turn signal) and other indicators (e.g., turn signals, brake lights, horns, emergency vehicle lights or sirens) may also be included in the input vehicle and object data 240” ¶ 22), and (d) ambient conditions (Ma: “The local conditions data 250 may include, for example, one or more of weather conditions, time of day, day of week, day of year, fixed obstacles, etc.” ¶ 22); and (iii) combining the class signatures to provide the MDI; wherein the multiple perception modules are configured to partially process the multi-domain information (Ma: “The second neural network 230 ... may receive as input the series of relational object representations to determine predicted object trajectories for the ego vehicle and other external objects (including other vehicles) ... this framework leverages both the benefits of relational reasoning and that of the temporal sequence learning with neural networks targeted at encoding driving norms to improve trajectory prediction” ¶ 23); wherein the MDI is used for (Ma: “The predicted vehicle trajectories 260 (i.e., prediction of future trajectories of the vehicles) resulting from the second neural network 230 may be provided as input to a vehicle navigation actuator subsystem 270 for use in navigating and controlling the autonomous vehicle” ¶ 24) selecting narrow artificial intelligence agents for generating driving related decisions (see obviousness discussion below pertaining to Palanisamy);
obtaining, by the processing circuit, an expected local path of a vehicle (Ma: “Given the output collection of time-stamped object graphs 340 and the coordinate values ... for each node i at each timestamp s, trajectory prediction may be based on predicting the coordinate values” ¶ 26);
identifying path related locations, by a processing circuit, based on the expected local path and the information about the locations (Ma: “the vehicle and object coordinate data 320 may represent vehicle and object trajectory histories over the time window of measurement. In some embodiments, the vehicle and object coordinate data 320 may comprise input processed vehicle and object data 240 (FIG. 2), already discussed. In an embodiment, local conditions data 330, which may be a vector, may also be input to the graph extraction module 310” ¶ 25);
determining, by the processing circuit, expected local path MDIs statistics for use in at least partially autonomous driving of a vehicle through the expected local path (Ma: “The predicted vehicle trajectories 260 (i.e., prediction of future trajectories of the vehicles) resulting from the second neural network 230 may be provided as input to a vehicle navigation actuator subsystem 270 for use in navigating and controlling the autonomous vehicle” ¶ 24); and
responding to the expected local path MDIs statistics; wherein the responding comprises triggering a determining of the at least partially autonomous driving of the vehicle through the expected local path (Ma: “the predicted vehicle trajectories 260 (i.e., prediction of future trajectories of the vehicles) resulting from the second neural network 230 may be provided as input to a vehicle navigation actuator subsystem 270 for use in navigating and controlling the autonomous vehicle” ¶ 24).
While Ma remains silent regarding selecting narrow artificial intelligence agents for generating driving related decisions, in a similar field of endeavor, Palanisamy teaches the claim limitation of selecting narrow artificial intelligence agents for generating driving related decisions (Palanisamy: “an AI driver agent system 110 that includes a set of n of driving environment processors 114, and a set of n of artificial intelligence (AI) based autonomous driver agent modules ... each driver agent 116-1 . . . 116-n follows a policy 118-1 . . . 118-n to drive a vehicle in a particular driving environment as observed by a corresponding driving environment processor” ¶ 81, see also ¶ 83). As such, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified the autonomous driving system of Ma so that it also includes the element of narrow artificial intelligence agents, as taught by Palanisamy, in order to improve autonomous driving based on specific scenarios (Palanisamy: ¶ 73).
Regarding claim 2, Ma in view of Palanisamy teaches the method according to claim 1, wherein the responding further comprises executing of the at least partially autonomous driving of the vehicle through the expected local path (Ma: “Additionally, route planning input 280 from a route planning module and safety criteria input 285 from a safety module may also be applied by the vehicle navigation actuator subsystem 270 in navigating and controlling the autonomous vehicle” ¶ 24).
Regarding claim 3, Ma in view of Palanisamy teaches the method according to claim 1, wherein the determining of expected local path MDIs statistics triggers an execution of the at least partially autonomous driving of the vehicle through the expected local path (Ma: “modifying the vehicle behavior based on the predicted object trajectories and real-time perceptual error information. Modifying vehicle behavior may include issuing actuation commands to navigate the vehicle” ¶ 45).
Regarding claim 4, Ma in view of Palanisamy teaches the method according to claim 1, wherein the determining of the expected local path MDIs statistics comprising identifying most popular MDIs per path related location of the path related locations (Ma: “The relational object representations 540, the learned relational representations of each traffic agent at each time point together with their temporal features ..., via the encoder LSTM 520, the temporal location changes of each traffic agent or object ... the coordinate values of each agent at the history time points may, in turn, be fed into the decoder LSTM 530 to predict the future trajectories (i.e., object behaviors) of each traffic agent or object” ¶ 37).
Regarding claim 5, Ma in view of Palanisamy teaches the method according to claim 4, wherein the identifying most popular MDIs identifiers per path related location triggers the determining of the at least partially autonomous driving of the vehicle through the expected local path (Ma: “the coordinate values of each agent at the history time points may, in turn, be fed into the decoder LSTM 530 to predict the future trajectories (i.e., object behaviors) of each traffic agent or object ... The predicted vehicle trajectories 550 (i.e., prediction of future trajectories of the vehicles) may be output from the LSTM network 510 and utilized in connection with the autonomous vehicle actuation, e.g., the vehicle navigation actuator subsystem 270 (FIG. 2), already discussed” ¶ 37).
Regarding claim 6, Ma in view of Palanisamy teaches the method according to claim 1, further comprising predicting, based on the expected local path MDIs statistics, selected perception modules out of multiple perception modules, to be utilized during future points in times associated with the expected local path and in relation to the path related points (Ma: “the input to the relational reasoning system may be the output of a perception module at particular times, and the system would be trained based on the accurate prediction of sequential trajectories given the input data” ¶ 38).
Regarding claim 7, Ma in view of Palanisamy teaches the method according to claim 6, further comprising pre-fetching the selected perception modules to a cache memory (Ma: “a process 600 for operating an example of a relational reasoning system for an autonomous vehicle according to one or more embodiments ... the process 600 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium” ¶ 40, see also ¶ 38, 39, 59, 63).
Regarding claim 8, Ma teaches a non-transitory computer readable medium for localized driving, the non-transitory computer readable medium stores instructions for (Ma: “includes at least one non-transitory computer readable storage medium comprising a set of instructions” ¶ 85):
...
In regard to the remainder of claim 8, the claim recites limitations analogous to those of claim 1, and is therefore rejected on the same basis.
In regard to claims 9-14, the claims recite limitations analogous to those of claims 2-7, and are therefore rejected on the same basis.
Regarding claim 18, Ma in view of Palanisamy teaches the method according to claim 1, wherein the multiple perception modules comprises at least three perception modules out of (a) a road setting perception module that is configured to generate a class signature indicative of one or more classes of one or more static objects within an environment of the vehicle (Ma: “A set of time-stamped object graphs 420 provides a set of node features (i.e., the coordinate values for each traffic agent) as input to the graph attention network 410” ¶ 28), (b) a road user perception module that is configured to generate a class signature that is indicative of a class related to movable road users (Ma: “The relational object representations 540, the learned relational representations of each traffic agent at each time point together with their temporal features (i.e., information pertaining to local driving norms as output by graph attention network 410) ... Prediction of object behaviors may include predicting object coordinates (position), orientation (heading) and/or speed attributes (e.g., velocity)” ¶ 37), (c) a traffic rule perception module that is configured to generate a class signature that is indicative of a class of traffic rules indicated by traffic signs or other elements captured in the image, (d) a regulation perception module that is configured to generate a class signature that is indicative of a class of legal constraints, (e) an ambient condition perception module that is configured to generate a class signature that is indicative of a class of ambient conditions within the environment of the vehicle (Ma: “The relational reasoning system (specifically, the graph attention network 410 along with the LSTM network 510) may be trained using data representing a variety of situations and locations—thus making the relational reasoning system robust and capable of generalizing to changing and variable conditions with geo-location changes and local normative changes” ¶ 38), and (f) a vehicle state perception module that is configured to generate a class signature that is indicative of a class of vehicle state.
Regarding claim 19, Ma in view of Palanisamy teaches the method according to claim 1. Ma, however, fails to teach wherein each narrow AI agent is trained to respond to a specific fraction of driving scenarios.
While Ma remains silent regarding wherein each narrow AI agent is trained to respond to a specific fraction of driving scenarios, in a similar field of endeavor, Palanisamy teaches the claim limitation wherein each narrow AI agent is trained to respond to a specific fraction of driving scenarios (Palanisamy: “each driver agent 116-1 . . . 116-n follows a policy 118-1 . . . 118-n to drive a vehicle in a particular driving environment as observed by a corresponding driving environment processor 114-1 . . . 114-n. Each policy 118 can process state (S) of the driving environment (as observed by a corresponding driving environment processor 114), and generate actions (A) that are used to control a particular AV that is operating in that state (S) of the driving environment” ¶ 81). As such, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified the autonomous driving system of Ma so that it also includes the element of narrow artificial intelligence agents, as taught by Palanisamy, in order to improve autonomous driving based on specific scenarios (Palanisamy: ¶ 73).
Regarding claim 20, Ma in view of Palanisamy teaches the method according to claim 1, wherein the multiple perception modules are multiple perception routers or multiple perception sub-routers associated with a perception router (Ma: “The graph attention network 410 is designed to capture the relational interactions among the nodes in the graphs, i.e., the spatial interactions between the traffic agents, which encode information about the driving norm in that geo-location. A set of time-stamped object graphs 420 provides a set of node features (i.e., the coordinate values for each traffic agent) as input to the graph attention network 410” ¶ 28, see also ¶ 29, 30).
In regard to claims 15-17, the claims recite limitations analogous to those of claims 18-20, and are therefore rejected on the same basis.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Levinson et al. (20180196439) is in a similar field of endeavor as the claimed invention, namely autonomous driving assistance.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLINT V PHAM whose telephone number is (571)272-4543. The examiner can normally be reached M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Flynn, can be reached at 571-272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.P./ Examiner, Art Unit 3663
/ABBY J FLYNN/ Supervisory Patent Examiner, Art Unit 3663