Prosecution Insights
Last updated: April 19, 2026
Application No. 18/400,823

DRIVING POLICY VISUALIZATION

Non-Final OA: §101, §103, §112
Filed: Dec 29, 2023
Examiner: ARTIMEZ, DANA FERREN
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Autobrains Technologies Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 58% (Moderate)
OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 99%
Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases: 46 granted / 80 resolved; +5.5% vs TC avg)
Interview Lift: +43.9% (strong), comparing resolved cases with an interview vs. without
Typical Timeline: 3y 2m avg prosecution; 42 applications currently pending
Career History: 122 total applications across all art units

Statute-Specific Performance

§101: 19.0% (-21.0% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 7.3% (-32.7% vs TC avg)
§112: 24.6% (-15.4% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 80 resolved cases.
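As an editorial aside (not part of the report's tooling), the headline examiner metrics above can be recomputed directly from the raw career figures shown. A minimal sketch; all variable names are hypothetical, and only the numbers come from the report:

```python
# Recompute the dashboard's headline numbers from the raw career data.
# Figures taken from the report above; everything else is illustrative.

granted, resolved = 46, 80                 # "46 granted / 80 resolved"
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")   # 57.5%, displayed rounded as 58%

tc_delta = 5.5                             # "+5.5% vs TC avg"
tc_avg = allow_rate - tc_delta
print(f"Implied TC average: {tc_avg:.1f}%")      # 52.0%

# Statute-specific rates and deltas, read straight off the chart above.
statute = {"101": (19.0, -21.0), "103": (46.2, 6.2),
           "102": (7.3, -32.7), "112": (24.6, -15.4)}
for s, (rate, delta) in statute.items():
    # The implied TC average for each statute is rate minus its delta.
    print(f"S{s}: {rate}% (TC avg ~ {rate - delta:.1f}%)")
```

The same arithmetic explains why the §101 delta matters: this examiner's 19.0% is measured against an implied Tech Center average of 40.0%.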

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a Non-Final rejection on the merits of this application. Claims 1-20 are currently pending, as discussed below.

The Examiner notes that the rejections are based on the broadest reasonable interpretation of the claim language. Applicant is kindly invited to consider each reference as a whole. References are to be interpreted as by one of ordinary skill in the art rather than as by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face, but what the reference would teach or suggest to one of ordinary skill in the art.

Claim Objections

Claim 12 is objected to because of the following informality: in claim 12, line 15, "the content." should read --the content;--. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1 and 12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Applicant's specification does not provide adequate written description for the following limitations:

"a multidimensional virtual force field representation of a driving policy applicable to the vehicle" of claim 1 (similarly claim 12): while the specification (published specification [0119]) states that a function may be used as an example of a representation of a multidimensional virtual force field, it is unclear what "multidimensional virtual force field (MVFF)" means, since the specification does not define what constitutes an MVFF. The specification does not provide a working example of such a function (in particular, how it is constructed) or identify what variables are involved to reflect a driving policy applicable to the vehicle (in particular, what constitutes a driving policy and how such a driving policy is incorporated in the claimed MVFF). Lastly, the specification (published specification [0119]) also states that the representation may differ from a function, but it does not provide examples of what else this MVFF representation could be.
That is, the claim covers any possible representation, and one skilled in the relevant art cannot be sure what the scope or nature of the MVFF is.

"reducing a dimension of the multidimensional virtual force field representation…a reduced dimensional virtual force field representation that conforms with a driving of the vehicle" of claim 1 (similarly claim 12): the specification does not define what "reducing a dimension" of the MVFF means (e.g., flattening a 3D representation into 2D or 1D, or simplifying output displays) and does not sufficiently describe what algorithm(s) or procedure(s) is/are used to reduce the dimensionality. While the published specification ([0066-0074]) mentions that the reduction involves (or is responsive to) "integrating the MVFF with respect to a marginal variable", "weighted information", or "a summing operation on a marginal variable of the MVFF", the specification fails to define which variables are considered marginal and/or non-marginal (and, further, how perception information is mapped with respect to that), and the specification does not provide any working example of how any MVFF is reduced/marginalized into a 3D or 2D representation.

"receiving content from an information…wherein the at least one step of the obtaining, the estimating and the determining is impacted by the content" of claim 12: the specification does not mention anything about "receiving content from an information source located outside of a vehicle" and does not mention anything about "at least one step of the obtaining, estimating and the determining is impacted by the content" (in particular, what this content is and how it is defined). The specification also does not adequately describe how a "virtual force" and/or "virtual field" is calculated, or how they interact with a vehicle control system for applying a driving related operation based on the received content. See MPEP 2161.01, I. and LizardTech Inc. v. Earth Resource Mapping Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005) cited therein ("Whether the flaw in the specification is regarded as a failure to demonstrate that the applicant possessed the full scope of the invention recited in [the claim] or a failure to enable the full breadth of that claim, the specification provides inadequate support for the claim under [§ 112(a)]").

Accordingly, the Examiner believes that Applicant has not demonstrated to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 2-11 and 13-20, which depend from independent claims 1 and 12, are also rejected under § 112, first paragraph, because they depend from the rejected independent claims.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
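As an editorial aside, not part of the Office Action: the "summing operation on a marginal variable" and "integrating the MVFF with respect to a marginal variable" language quoted in the written description discussion above resembles ordinary marginalization of a discretized field. A minimal NumPy sketch under that reading; every array, shape, and name here is hypothetical (the application itself defines none of this, which is the examiner's point):

```python
import numpy as np

# Hypothetical discretized "force field" over (x, y, heading): one scalar
# per grid cell. This is an editorial guess at what a "multidimensional
# virtual force field" could look like, not a definition from the
# specification under examination.
rng = np.random.default_rng(0)
field_3d = rng.random((20, 20, 8))      # x bins, y bins, heading bins

# One reading of "summing operation on a marginal variable": collapse the
# heading axis to obtain a reduced, 2D representation over (x, y).
field_2d = field_3d.sum(axis=2)

# A weighted variant (cf. "weighted information"): integrate against a
# distribution over the marginal variable instead of summing uniformly.
weights = np.full(8, 1 / 8)
field_2d_weighted = np.tensordot(field_3d, weights, axes=([2], [0]))

print(field_3d.shape, field_2d.shape)   # (20, 20, 8) (20, 20)
```

With uniform weights the weighted reduction is just the mean over the marginal axis; non-uniform weights would let perception information bias the reduction, though nothing in the quoted record says the application works this way.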
Regarding claim 1 (similarly claim 12), the limitation "multidimensional virtual force field representation of a driving policy" is indefinite because it is not reasonably certain, from the teachings of the specification, which "driving policy" (any or all) may be covered by the claim, leaving the metes and bounds of the claim indeterminate, and it is unclear what "multidimensional virtual force field representation" means because of the lack of teachings in the specification. Further, it is also unclear to the Examiner how the MVFF is structured/computed, and from what. Hence, this limitation renders the claim indefinite. For purposes of examination, the Examiner interprets this claim limitation (in view of the limited guidance from the filed specification) as "receiving observation vectors regarding surrounding information that affects a vehicle's operation as input".

Regarding claim 1 (similarly claim 12), the limitation "reducing a dimension of the multidimensional virtual force field…conforms with a driving of the vehicle" is indefinite because it is not reasonably certain, from the teachings of the specification, what constitutes a reduction in dimension of the MVFF representation, leaving the metes and bounds of the claim indeterminate. Hence, this limitation renders the claim indefinite. For purposes of examination, the Examiner interprets this claim limitation (in view of the limited guidance from the filed specification) as "generating an output action for a vehicle based on input observation vectors".

Regarding claim 1 (similarly claim 12), the limitation "dynamically visualizing" is indefinite because it is unclear what or who is visualizing this driving policy and whether "dynamically" adds temporal, interactive, or any other constraints to the visualization. Hence, this limitation renders the claim indefinite. For purposes of examination, the Examiner interprets this claim limitation (in view of the limited guidance from the filed specification) as "displaying a trajectory of a vehicle in real time".

Regarding claim 12, the limitation "receiving content from an information…wherein the at least one step of the obtaining, the estimating and the determining is impacted by the content" is indefinite because it is not reasonably certain, from the teachings of the specification, which "physical model" (any or all) may be covered by the claim (in particular, what the model is, how it is defined/calculated, and how it relates to the virtual force fields and virtual forces), what the "physical model" represents/covers, and, further, how the virtual fields relate to a virtual force and are associated with a physical model. Hence, this claim limitation renders the claim indefinite.

Claim 12 is also indefinite in its entirety because it is unclear what the relationship is between "one or more virtual fields", "virtual force", and "multidimensional virtual force field", and whether the "one or more virtual fields" are different from the "multidimensional virtual force field" (if so, how?; and if not, why are they recited separately?). Further, it is also unclear to the Examiner what the relationship is between "content from an information source", "object information regarding one or more objects", and "perception information". Hence, this claim is indefinite.

Claims 2-11 and 13-20, which depend from independent claims 1 and 12, are also rejected under § 112, second paragraph, because they depend from the rejected independent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1 – YES

Claim 1 is directed to a method, and claim 12 is directed to a non-transitory storage medium. Therefore, claims 1 and 12 are within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.

Independent claim 12 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. The analogous claim 1 is rejected for the same reasons as representative claim 12, as discussed here. Claim 12 recites:

A non-transitory computer readable medium for driving policy visualization, the non-transitory computer readable medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations for field related driving, comprising: receiving content from an information source located outside of a vehicle; obtaining object information regarding one or more objects located within an environment of the vehicle; estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects; and determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and representing an impact of the one or more objects on a behavior of the vehicle; wherein at least one step of the obtaining, the estimating and the determining is impacted by the content.

Claim 1 similarly recites:

receiving, by a processing circuit, perception information that comprises environmental information about an environment of a vehicle and kinematic information regarding a movement of the vehicle; receiving, by the processing circuit, a multidimensional virtual force field representation of a driving policy applicable to the vehicle; reducing a dimension of the multidimensional virtual force field representation, based on the received perception information, to produce a reduced dimensional virtual force field representation that conforms with a driving of the vehicle; and dynamically visualizing, by applying the reduced dimensional virtual force field representation, the driving policy in the driving of the vehicle.

The Examiner submits that the foregoing bolded limitations constitute a "mental process" because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, the "estimating…one or more virtual fields of the one or more objects" and "determining…a behavior of the vehicle" limitations, in the context of this claim, encompass a rider/user of a vehicle observing the vehicle's surroundings (e.g., analyzing positions of other road users/objects) and forming a simple judgment either mentally or using pen and paper (e.g., by estimating/determining their influence on his/her vehicle). The "reducing a dimension…" and "dynamically visualizing…" limitations, in the context of the claim, encompass a rider/user of a vehicle observing multiple surrounding vehicles' behaviors and forming a simple judgment either mentally or using pen and paper on what feasible paths there are among the surrounding vehicles and how he/she should navigate his/her vehicle to maintain safety.
Accordingly, the claim recites an abstract idea, as described in the published specification (of application 17/823,069, [0028]), reproduced below:

We suspect that humans employ an internal representation of surrounding objects in the form of virtual force fields that immediately imply action, thus circumventing the need for kinematics estimation. Consider a scenario in which the ego vehicle drives in one lane and a vehicle diagonally in front in an adjacent lane starts swerving into the ego lane. The human response to brake or veer off would be immediate and instinctive and can be experienced as a virtual force repelling the ego from the swerving vehicle. This virtual force representation is learned and associated with the specific road object.

The Examiner would also note MPEP 2106.04(a)(2)(III): The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas—the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.
Here, the determination is a form of making an evaluation and judgment based on observation (driver behavior). Accordingly, the claim recites at least one abstract idea.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."

In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"):

A non-transitory computer readable medium for driving policy visualization, the non-transitory computer readable medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations for field related driving, comprising: receiving content from an information source located outside of a vehicle; obtaining object information regarding one or more objects located within an environment of the vehicle; estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects; and determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and representing an impact of the one or more objects on a behavior of the vehicle; wherein at least one step of the obtaining, the estimating and the determining is impacted by the content.

receiving, by a processing circuit, perception information that comprises environmental information about an environment of a vehicle and kinematic information regarding a movement of the vehicle; receiving, by the processing circuit, a multidimensional virtual force field representation of a driving policy applicable to the vehicle; reducing a dimension of the multidimensional virtual force field representation, based on the received perception information, to produce a reduced dimensional virtual force field representation that conforms with a driving of the vehicle; and dynamically visualizing, by applying the reduced dimensional virtual force field representation, the driving policy in the driving of the vehicle.

For the following reasons, the Examiner submits that the above-identified limitations do not integrate the above-noted abstract idea into a practical application.

Regarding the additional limitations of "receiving content…", "obtaining object information…", "receiving…perception information…", and "receiving…multidimensional…", the Examiner submits that these limitations are insignificant extra-solution activities. In particular, the receiving and obtaining steps are recited at a high level of generality (i.e., as a general means of acquiring data) and amount to mere data gathering, which is a form of insignificant extra-solution activity. Lastly, the recitations of "neural network" and "processing circuit" merely describe how to generally "apply" the otherwise abstract ideas and/or additional limitations in a generic or general-purpose computer environment, where the processor is recited as a generic processor performing the generic computer function of acquiring data.

This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component and merely automates the steps. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field; apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim; effect a transformation or reduction of a particular article to a different state or thing; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is no more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, representative independent claim 12 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application.
As discussed above with respect to integration of the abstract idea into a practical application, regarding the additional limitation of a "neural network", the Examiner submits that the processor is recited at a high level of generality (i.e., as a generic computer component performing generic calculations) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept. And, as discussed above, the additional limitations discussed above are insignificant extra-solution activities. As explained, the additional elements are recited at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. See, e.g., MPEP § 2106.05; Alice Corp. v. CLS Bank, 573 U.S. 208, 223 ("[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention"); Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016) (selecting information for collection, analysis, and display constitutes insignificant extra-solution activity); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016) (generating a second menu from a first menu and sending the second menu to another location as performed by generic computer components). Hence, the claims are not patent eligible.

Dependent Claims

Dependent claims 2-11 and 13-20 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-11 and 13-20 are not patent eligible under the same rationale as provided for the rejection of claim 12.

As such, claims 1-20 are rejected under 35 USC § 101 as being drawn to an abstract idea without significantly more, and thus are ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Alvarez et al. (US 2021/0001884 A1, hereinafter Alvarez) in view of Kawahara et al. (US 2016/0272199 A1, hereinafter Kawahara).
Regarding claim 1, Alvarez teaches:

"A method for driving policy" (see at least Abstract), the method comprising:

"receiving, by a processing circuit, perception information that comprises environmental information about an environment of a vehicle and kinematic information regarding a movement of the vehicle;" (see at least Figs. 1-14, [0020-0085]: automated driving systems may operate via a perception, planning, and actuation cycle where a system composed of a range of sensors (e.g., cameras, radar, lidar, IMU, GPS, etc.) creates a virtual representation of the world and a decision logic system, i.e., driving policy, plans a general route from origin to destination and determines short-term trajectories to navigate obstacles, preserving the vehicle integrity while respecting traffic rules. The autonomous vehicle system may include several modules or subsystems, including a perception module, environmental module, driving policy module, and actuation module. The driving policy architecture may receive as input vehicle observation vectors. The vehicle observation vectors may be obtained from sensor data (e.g., cameras, radar, lidar, etc.), map data, and other data providing information about vehicles and other obstacles in the vicinity of the ego vehicle, and may include such other information as road geometry and local environmental conditions (e.g., weather, time-of-day, etc.) and data relating to the ego vehicle (e.g., current vehicle pose and velocity).)

"receiving, by the processing circuit, a multidimensional virtual force field representation of a driving policy applicable to the vehicle;" (see at least Figs. 1-14, [0020-0085]: automated driving systems may operate via a perception, planning, and actuation cycle where a system composed of a range of sensors (e.g., cameras, radar, lidar, IMU, GPS, etc.) creates a virtual representation of the world. A plurality of neural networks may obtain observation vector data for a vehicle and for one or more external obstacles, and generate a first vector representing a future behavior of the vehicle based on current vehicle position and velocity and a second vector representing a prediction of future behavior of external obstacles based on current obstacle position and velocity. Given all these inputs and the trajectory output, the hidden units in the place network LSTM should contain a representation of the actual spatial occupancy of the ego vehicle across a range of time points, i.e., operating to form a representation analogous to place cells.)

"reducing a dimension of the multidimensional virtual force field representation, based on the received perception information, to produce a reduced dimensional virtual force field representation that conforms with a driving of the vehicle;" (see at least Figs. 1-14, [0020-0085]: given all these inputs and the trajectory output, the hidden units in the place network LSTM should contain a representation of the actual spatial occupancy of the ego vehicle across a range of time points, i.e., operating to form a representation analogous to place cells. The LSTM of the place network may be architected as an LSTM Actor Critic (A3C) network. As an A3C network, the LSTM 710 may implement a policy function π(αt+h | st−h, θ) (for a neural network parameterized by θ) which, given a historic state from the negative time horizon to the present, provides an action sequence (trajectory prediction or sequence of planned future behaviors).)

It may be alleged that Alvarez does not explicitly teach "A method for driving policy visualization" and "dynamically visualizing, by applying the reduced dimensional virtual force field representation, the driving policy in the driving of the vehicle."

Kawahara is directed to a driver assistance system for collision avoidance. Kawahara teaches "A method for driving policy visualization" (see at least Figs. 5A-5B) and "dynamically visualizing, by applying the reduced dimensional virtual force field representation, the driving policy in the driving of the vehicle" (see at least Figs. 1-11, [0033-0097]: the visualized potential distribution image is capable of being displayed for the driver to easily understand the route of travel to be taken, so as to reduce the psychological discomfort caused by peripheral vehicles).

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Alvarez's system and method for generating a safe driving experience for autonomous vehicles to incorporate the technique of dynamically visualizing the driving policy in the driving of the vehicle by applying the reduced dimensional virtual force field representation, as taught by Kawahara, with a reasonable expectation of success; doing so would reduce the psychological discomfort that is experienced by a driver of the vehicle due to a peripheral vehicle (Kawahara [0007]).

Regarding claim 12, Alvarez teaches:

"A non-transitory computer-readable medium for driving policy" (see at least Abstract), "the non-transitory computer readable medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations for field related driving, comprising: obtaining object information regarding one or more objects located within an environment of the vehicle;" (see at least Figs. 1-14, [0020-0085]: automated driving systems may operate via a perception, planning, and actuation cycle where a system composed of a range of sensors (e.g., cameras, radar, lidar, IMU, GPS, etc.) creates a virtual representation of the world and a decision logic system, i.e., driving policy, plans a general route from origin to destination and determines short-term trajectories to navigate obstacles, preserving the vehicle integrity while respecting traffic rules.)
estimating, by using a neural network (NN), and based on the object information, one or more virtual fields of the one or more objects; (see at least Fig. 1-14 [0020-0085]: Automated driving systems may operate via a perception, planning, and actuation cycle in which a system composed of a range of sensors (e.g. cameras, radar, lidar, IMU, GPS, etc.) creates a virtual representation of the world and a decision logic system, i.e. the driving policy, plans a general route from origin to destination and determines short term trajectories to navigate obstacles, preserving the vehicle integrity while respecting traffic rules.)

receiving, by a processing circuit, perception information that comprises environmental information about an environment of a vehicle and kinematic information regarding a movement of the vehicle; (see at least Fig. 1-14 [0020-0085]: Automated driving systems may operate via a perception, planning, and actuation cycle in which a system composed of a range of sensors (e.g. cameras, radar, lidar, IMU, GPS, etc.) creates a virtual representation of the world and a decision logic system, i.e. the driving policy, plans a general route from origin to destination and determines short term trajectories to navigate obstacles, preserving the vehicle integrity while respecting traffic rules. The autonomous vehicle system may include several modules or subsystems, including a perception module, environmental module, driving policy module and actuation module. The driving policy architecture may receive as input vehicle observation vectors. The vehicle observation vectors may be obtained from sensor data (e.g., cameras, radar, lidar, etc.), map data, and other data providing information about vehicles and other obstacles in the vicinity of the ego vehicle, and may include such other information as road geometry and local environmental conditions (e.g., weather, time-of-day, etc.) and data relating to the ego vehicle (e.g. current vehicle pose and velocity).)
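The observation vector described above (ego pose and velocity, per-obstacle states, and environmental context flattened into one input for the driving policy) can be sketched as follows. This is a minimal illustrative sketch only: the field names, dimensions, and encoding order are assumptions for exposition, not Alvarez's actual data layout.

```python
from dataclasses import dataclass

import numpy as np

# Hypothetical observation vector: ego state plus a variable number of
# obstacle states and a coarse environmental condition, flattened into
# one vector for the policy network. All field names are illustrative.
@dataclass
class Observation:
    ego_pose: np.ndarray      # (x, y, heading)
    ego_velocity: float
    obstacles: list           # each entry: (x, y, vx, vy)
    weather: float = 0.0      # e.g. 0 = clear, 1 = heavy rain

    def encode(self) -> np.ndarray:
        """Flatten all fields into the vector form a policy network consumes."""
        parts = [self.ego_pose, [self.ego_velocity], [self.weather]]
        parts += [np.asarray(o, float) for o in self.obstacles]
        return np.concatenate([np.asarray(p, float) for p in parts])

obs = Observation(
    ego_pose=np.array([0.0, 0.0, 0.0]),
    ego_velocity=12.5,
    obstacles=[np.array([20.0, 3.5, 10.0, 0.0])],  # one lead vehicle
)
print(obs.encode().shape)  # (9,)
```

Note the encoded length grows with the obstacle count, which is exactly why the place-network input discussed later sums the per-obstacle vectors into a fixed-size aggregate.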
receiving, by the processing circuit, a multidimensional virtual force field representation of a driving policy applicable to the vehicle; (see at least Fig. 1-14 [0020-0085]: Automated driving systems may operate via a perception, planning, and actuation cycle in which a system composed of a range of sensors (e.g. cameras, radar, lidar, IMU, GPS, etc.) creates a virtual representation of the world. Plurality of neural network(s) may obtain observation vector data for a vehicle and for one or more external obstacles; and generate a first vector representing a future behavior of the vehicle based on current vehicle position and velocity and a second vector representing a prediction of future behavior of external obstacles based on current obstacle position and velocity. Given all these inputs and the trajectory output, the hidden units in the place network LSTM should contain a representation of the actual spatial occupancy of the ego vehicle across a range of time points—i.e. operating to form a representation analogous to place cells.)

reducing a dimension of the multidimensional virtual force field representation, based on the received perception information, to produce a reduced dimensional virtual force field representation that conforms with a driving of the vehicle; (see at least Fig. 1-14 [0020-0085]: Given all these inputs and the trajectory output, the hidden units in the place network LSTM should contain a representation of the actual spatial occupancy of the ego vehicle across a range of time points—i.e. operating to form a representation analogous to place cells. The LSTM of the place network may be architected as an LSTM Actor Critic (A3C) network. As an A3C network, the LSTM 710 may implement a policy function π(αt+h | st−h, θ) (for a neural network parameterized by θ) which, given a historic state from the negative time horizon to the present, provides an action sequence (trajectory prediction or sequence of planned future behaviors).)
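The policy function π(αt+h | st−h, θ) quoted above — a recurrent network that consumes a history of states and emits a planned future action sequence — can be sketched minimally as below. This is a hedged illustration only: the single tanh recurrence, the dimensions, and the deterministic rollout are assumptions for exposition, not the A3C architecture of the Alvarez reference.

```python
import numpy as np

# Illustrative stand-in for pi(a_{t+h} | s_{t-h}, theta): a recurrent cell
# encodes the historic states (the negative time horizon), then rolls the
# hidden state forward to emit one action vector per future step.
rng = np.random.default_rng(0)
STATE_DIM, HIDDEN_DIM, ACTION_DIM, HORIZON = 8, 16, 2, 5

theta = {  # network parameters (randomly initialized for the sketch)
    "W_in": rng.normal(size=(HIDDEN_DIM, STATE_DIM)) * 0.1,
    "W_rec": rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM)) * 0.1,
    "W_out": rng.normal(size=(ACTION_DIM, HIDDEN_DIM)) * 0.1,
}

def policy(state_history, theta):
    """Map states s_{t-h}..s_t to a planned action sequence a_{t+1}..a_{t+H}."""
    h = np.zeros(HIDDEN_DIM)
    for s in state_history:                 # encode the historic states
        h = np.tanh(theta["W_in"] @ s + theta["W_rec"] @ h)
    actions = []
    for _ in range(HORIZON):                # roll out the future horizon
        h = np.tanh(theta["W_rec"] @ h)
        actions.append(theta["W_out"] @ h)  # hypothetical (steering, accel) pair
    return np.stack(actions)

history = [rng.normal(size=STATE_DIM) for _ in range(4)]
plan = policy(history, theta)
print(plan.shape)  # (5, 2): one action vector per future time step
```

The point of the sketch is the interface, not the internals: a trajectory prediction over the horizon is what the actuation system (and the safety supervisor discussed below for claim 2) consumes.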
It may be alleged that Alvarez does not explicitly teach driving policy visualization, receiving content from an information source located outside of a vehicle; determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and representing an impact of the one or more objects on a behavior of the vehicle; and wherein at least one step of the obtaining, the estimating and the determining is impacted by the content; and dynamically visualizing, by applying the reduced dimensional virtual force field representation, the driving policy in the driving of the vehicle.

Kawahara is directed to a driver assistance system for collision avoidance. Kawahara teaches A method for driving policy visualization (see at least Fig. 5A-5B), receiving content from an information source located outside of a vehicle; (see at least Fig. 1-11 [0011, 0029-0102]: the communication unit performs road-to-vehicle communication between the host vehicle and a radio communication station on a road side or vehicle-to-vehicle communication between the host vehicle and a communicator mounted in a peripheral vehicle, or communicates with a server via a base station of a wide area network. The communication unit may receive weather information and traffic congestion information.) obtaining object information regarding one or more objects located within an environment of the vehicle; (see at least Fig. 1-11 [0029-0102]: the detection unit is configured to include sensors which detect the positions, the relative speeds, and the sizes of peripheral vehicles which travel in a lane in which a host vehicle travels, and in adjacent lanes.) estimating, based on the object information, one or more virtual fields of the one or more objects; (see at least Fig.
1-11 [0029-0102]: the control unit assigns a potential field to each peripheral vehicle in the bird’s eye view image and prepares a distribution illustrating a state in which the potential fields of the peripheral vehicles are integrally distributed across the entire region of the bird’s eye view image. The potential field conceptually represents the degree of psychological discomfort that is given to the driver of the host vehicle due to the presence of a peripheral vehicle, with the degree of psychological discomfort being associated with the position of the peripheral vehicle on the road.) and determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and representing an impact of the one or more objects on a behavior of the vehicle; (see at least Fig. 1-11 [0029-0102]: the control unit determines whether the position of the host vehicle has a high potential in the potential distribution. When the host vehicle is positioned in a low potential field, the control unit may command the engine control ECU of the vehicle to maintain a current speed. When it is determined that the host vehicle is positioned in a high potential field, the control unit determines whether the host vehicle is capable of escaping from the high potential field to a low potential field by changing the position of the host vehicle relative to the peripheral vehicles in the forward and rearward direction while maintaining a current travel lane or by changing to an adjacent lane from the travel lane. The control unit commands the engine control or brake control ECU and/or steering angle control ECU to adjust the speed of the host vehicle and/or change lanes such that the host vehicle enters the low potential field.) wherein at least one step of the obtaining, the estimating and the determining is impacted by the content. (see at least Fig.
1-11 [0029-0102]: the potential field value may be set to be increased to the extent that the speed of the host vehicle or the speed of the peripheral vehicle is high. The value or range of a potential field may be adjusted according to the size of the peripheral vehicle and may be adjusted according to road conditions such as the width, curve, or slope of a road. Under bad weather conditions such as rainy, stormy, or foggy weather, a potential field may be assigned to a wide range, or the value of the potential field may be set to be greater than under normal conditions, based on weather information acquired by the communication unit. Speed, the size of a peripheral vehicle, road conditions, and weather conditions are capable of being reflected in the calculation of a potential field.) dynamically visualizing, by applying the reduced dimensional virtual force field representation, the driving policy in the driving of the vehicle. (see at least Fig. 1-11 [0033-0097]: the visualized potential distribution image is capable of being displayed for the driver to easily understand the route of travel to be taken so as to reduce the psychological discomfort caused by peripheral vehicles.)

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Alvarez’s system and method for generating a safe driving experience for autonomous vehicles to incorporate the technique of receiving content from an information source located outside of a vehicle; determining, based on the one or more virtual fields, a virtual force for use in applying a driving related operation of the vehicle, wherein the virtual force is associated with a physical model and representing an impact of the one or more objects on a behavior of the vehicle; and wherein at least one step of the obtaining, the estimating and the determining is impacted by the content; and dynamically visualizing, by applying the reduced dimensional
virtual force field representation, the driving policy in the driving of the vehicle, as taught by Kawahara, with a reasonable expectation of success; doing so would reduce the psychological discomfort that is experienced by a driver of the vehicle due to a peripheral vehicle.

Regarding claim 2 (similarly claim 13), the combination of Alvarez in view of Kawahara teaches The method according to claim 1 (similarly claim 12), further comprising: Alvarez further teaches determining, by the processing circuit, weight information based on the perception information. (see at least Fig. 1-14 [0020-0085]: Sensor and other inputs may be encoded into an observation vector that includes road geometry, sequences of previous actions for other road agents up to a given time horizon, and environmental conditions (e.g. raining, sunny). Embodiments of the driving policy architecture may provide a future action sequence for the ego vehicle within a specified time horizon, which is passed to the actuation system for command input. Besides the vehicle goal provided, e.g., by a route planner, a safety supervisor component may be incorporated with the driving policy that reads the sequence of commands and, if necessary, applies restrictions on the commands to provide for safety guarantees. Embodiments may integrate with the Responsibility-Sensitive Safety (RSS) mathematical framework, introduced by Intel® and Mobileye for autonomous vehicle operation, to perform safety checks with surrounding agents and provide active restrictions to the driving policy. A safety reward value may be applied to the driving policy, e.g. via RSS, based on the ego vehicle's trajectory generated by the driving policy. In some embodiments, a similar safety reward could be obtained by measuring the minimum distance to surrounding vehicles and maximizing reward for longer distances, or by providing some traffic rules heuristics and monitoring adherence to these rules.
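The distance-based safety reward mentioned above — measure the minimum distance from the ego trajectory to surrounding vehicles and reward longer distances — can be sketched as follows. The exponential shaping and the scale parameter are illustrative assumptions, not the RSS formulation or anything disclosed in Alvarez.

```python
import numpy as np

def safety_reward(ego_traj, obstacle_trajs, scale=10.0):
    """Reward in (0, 1) that grows as the closest approach to any obstacle grows."""
    d_min = min(
        np.linalg.norm(ego_traj - obs, axis=1).min()  # per-timestep distances
        for obs in obstacle_trajs
    )
    return float(1.0 - np.exp(-d_min / scale))

# Straight-line ego trajectory and two hypothetical neighbor trajectories.
ego = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
near = [np.array([[0.5, 1.0], [1.5, 1.0], [2.5, 1.0]])]    # ~1 m lateral gap
far = [np.array([[0.5, 30.0], [1.5, 30.0], [2.5, 30.0]])]  # ~30 m lateral gap
r_near, r_far = safety_reward(ego, near), safety_reward(ego, far)
print(r_near < r_far)  # True: longer distances earn a higher reward
```

A driving policy trained against such a reward is penalized for trajectories that pass close to other agents, which is the weighting role the rejection attributes to the safety supervisor.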
Safety criteria may generally be understood to include rules or guidelines for collision avoidance, for example by establishing a minimum distance metric during a particular situation. Safety criteria may also include local rules of the road such as maximum speed in the road segment, respecting signals, and/or allowing—or prohibiting—certain maneuvers (e.g., at intersections). Local rules of the road are not necessarily limited to safety and may include local behaviors. In some embodiments, local rules of the road may be provided as a separate list or other data input. That is, RSS is perception dependent and modifies an ego vehicle’s trajectory choices via safety weighting (e.g. reward/penalization).

Regarding claim 3 (similarly claim 14), the combination of Alvarez in view of Kawahara teaches The method according to claim 2 (similarly claim 13). Alvarez further teaches wherein the reducing the dimension is based on the weight information. (see at least Fig. 1-14 [0020-0085]: the ego vehicle’s trajectory outputs (as illustrated in the Figures, summarized in Fig. 7) are the results of a weighted decision-making process based on, e.g., the ego vehicle’s predicted behavior, a plurality of obstacles’ behavior predictions, the environmental context of the vehicle’s surroundings, the route planning goal, and safety constraints and/or reward.)

Regarding claim 4 (similarly claim 15), the combination of Alvarez in view of Kawahara teaches The method according to claim 2 (similarly claim 13). Alvarez further teaches determining the weight information is based on estimated future directions of progress of the vehicle (see at least Fig. 1-14 [0020-0085]: the ego vehicle’s trajectory outputs (as illustrated in the Figures, summarized in Fig. 7) are the results of a weighted decision-making process based on, e.g., the ego vehicle’s predicted behavior, a plurality of obstacles’ behavior predictions, the environmental context of the vehicle’s surroundings, the route planning goal, and safety constraints and/or reward.
The input to the place network concatenates the output of the grid network of the ego vehicle, wherein the output represents the expected future behavior of the ego vehicle. Further, the safety reward/feedback system uses the actual/predicted trajectory of the ego vehicle to assign a reward (weight) to it, wherein the weight/reward is based on an RSS framework.)

Regarding claim 5 (similarly claim 16), the combination of Alvarez in view of Kawahara teaches The method according to claim 1 (similarly claim 12), comprising: Alvarez further teaches reducing the dimension by integrating the multidimensional virtual force field representation with respect to a marginal variable. (see at least Fig. 1-14 [0020-0085]: the input to the place network 700 concatenates the output of the grid network y_ego(t+h) (label 730), as well as the output of a variable number of obstacle networks, Σ_{i=0}^{N} y_o_i(t+h) (if they exist) (label 725), a place vector pt that represents the current driving situation from the ego vehicle perspective (label 720), the goal location gt (label 735) provided by the route planning system, and the reward Rt (label 740) given by the safety monitoring system based on the last trajectory output αt−h. That is, each obstacle is passed through a neural network that predicts its respective behavior vector, and these vectors are summed to form a single combined vector that is fed into the place network. The summation is a form of marginalization that aggregates over a variable number of entities/obstacles and reflects how they collectively influence the ego vehicle’s trajectory toward a goal point.)

Regarding claim 6 (similarly claim 17), the combination of Alvarez in view of Kawahara teaches The method according to claim 5 (similarly claim 16). Alvarez further teaches wherein the integrating is further responsive to weight information. (see at least Fig.
1-14 [0020-0085]: the input to the place network 700 concatenates the output of the grid network y_ego(t+h) (label 730), as well as the output of a variable number of obstacle networks, Σ_{i=0}^{N} y_o_i(t+h) (if they exist) (label 725), a place vector pt that represents the current driving situation from the ego vehicle perspective (label 720), the goal location gt (label 735) provided by the route planning system, and the reward Rt (label 740) given by the safety monitoring system based on the last trajectory output αt−h. That is, each obstacle is passed through a neural network that predicts its respective behavior vector, and these vectors are summed to form a single combined vector that is fed into the place network. All inputs are concatenated into a joint representation, and the neural network learns weights (e.g. based on predictions, safety checks/requirements, rule adherence, and goals) that determine how much each input matters for generating a final output, where the driving policy integrates the scores/weights to select the best trajectory.)

Regarding claim 7 (similarly claim 18), the combination of Alvarez in view of Kawahara teaches The method according to claim 5 (similarly claim 16). Alvarez further teaches further comprising selecting the marginal variable based on the perception information. (see at least Fig. 1-14 [0020-0085]: the input to the place network 700 concatenates the output of the grid network y_ego(t+h) (label 730), as well as the output of a variable number of obstacle networks, Σ_{i=0}^{N} y_o_i(t+h) (if they exist) (label 725), a place vector pt that represents the current driving situation from the ego vehicle perspective (label 720), the goal location gt (label 735) provided by the route planning system, and the reward Rt (label 740) given by the safety monitoring system based on the last trajectory output αt−h.
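The marginalization step described above — each obstacle's predicted behavior vector is summed into one fixed-size vector, so the place network's input has the same shape no matter how many obstacles exist — can be sketched as follows. Vector sizes and the concatenation order are illustrative assumptions, not Alvarez's actual layout of labels 720-740.

```python
import numpy as np

def marginalize_obstacles(obstacle_vectors, dim=4):
    """Aggregate a variable number of per-obstacle behavior vectors by summation."""
    if not obstacle_vectors:
        return np.zeros(dim)            # no obstacles: neutral contribution
    return np.sum(obstacle_vectors, axis=0)

def place_input(y_ego, obstacle_vectors, p_t, g_t, r_t):
    """Concatenate ego prediction, summed obstacle vector, place vector, goal, reward."""
    return np.concatenate([
        y_ego,
        marginalize_obstacles(obstacle_vectors, dim=y_ego.shape[0]),
        p_t,
        g_t,
        [r_t],
    ])

y_ego = np.ones(4)                                  # hypothetical ego prediction
obstacles = [np.full(4, 0.5), np.full(4, 0.25)]     # two obstacle predictions
x = place_input(y_ego, obstacles,
                p_t=np.zeros(3), g_t=np.array([10.0, 5.0]), r_t=0.8)
print(x.shape)  # (14,): fixed size regardless of obstacle count
```

The design point the rejection leans on is exactly this shape invariance: zero obstacles and ten obstacles produce the same input dimensionality, which is what lets one network handle a variable number of entities.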
The perception system feeds in information regarding surrounding obstacles, environmental information (road geometry, traffic conditions, weather conditions) […]
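The Kawahara-style potential-field logic the rejection relies on — assign a potential field to each peripheral vehicle, combine the fields into one distribution, and maneuver the host vehicle out of high-potential regions — can be sketched as below. The Gaussian field shape, threshold, and numbers are illustrative assumptions, not Kawahara's formulation.

```python
import numpy as np

def potential_at(point, peripheral_vehicles, spread=5.0):
    """Summed Gaussian potential at `point` from all peripheral vehicle positions."""
    point = np.asarray(point, float)
    return sum(
        np.exp(-np.sum((point - v) ** 2) / (2 * spread ** 2))
        for v in peripheral_vehicles
    )

def needs_maneuver(host_pos, peripheral_vehicles, threshold=0.5):
    """True when the host vehicle sits in a high-potential (discomfort) region."""
    return potential_at(host_pos, peripheral_vehicles) > threshold

# Host vehicle boxed in between two peripheral vehicles (positions in meters).
vehicles = [np.array([0.0, 3.0]), np.array([0.0, -3.0])]
print(needs_maneuver([0.0, 0.0], vehicles))   # True: escape to a low-potential field
print(needs_maneuver([30.0, 0.0], vehicles))  # False: hold current speed
```

Per the quoted passages, a production version would also widen or raise each field with vehicle size, speed, road geometry, and weather, and would render the summed distribution as the visualization shown to the driver.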

Prosecution Timeline

Dec 29, 2023
Application Filed
Aug 09, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596371
SYSTEM AND METHOD FOR INTERCEPTION AND COUNTERING UNMANNED AERIAL VEHICLES (UAVS)
2y 5m to grant Granted Apr 07, 2026
Patent 12573078
METHOD AND APPARATUS FOR DETERMINING VEHICLE LOCATION BASED ON OPTICAL CAMERA COMMUNICATION
2y 5m to grant Granted Mar 10, 2026
Patent 12571646
Automated Discovery and Monitoring of Uncrewed Aerial Vehicle Ground-Support Infrastructure
2y 5m to grant Granted Mar 10, 2026
Patent 12560441
METHOD AND APPARATUS FOR OPTIMIZING A MULTI-STOP TOUR WITH FLEXIBLE MEETING LOCATIONS
2y 5m to grant Granted Feb 24, 2026
Patent 12560936
SYSTEMS AND METHODS FOR OBJECT DETECTION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+43.9%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 80 resolved cases by this examiner. Grant probability derived from career allow rate.
