Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The Examiner notes that the fundamentals of the rejections are based on the broadest reasonable interpretation of the claim language. Applicant is kindly invited to consider the references as a whole. References are to be interpreted as by one of ordinary skill in the art rather than as by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face but what the reference would teach or suggest to one of ordinary skill in the art.
Status of the Claims
This is a Final Office Action in response to Applicant’s amendment of 16 November 2025. Claims 1-4, 6, 8, 12-14 and 22-32 are pending and have been considered as follows.
Response to Amendment and/or Arguments
Applicant’s amendments and/or arguments with respect to the rejections of Claims 1-4, 6, 8, 12-14 and 22-30 under 35 U.S.C. 112(a), as set forth in the Office Action mailed 28 August 2025, have been considered and are NOT persuasive. Specifically, Applicant argues (pages 6-8 of Applicant’s Remarks filed on 11/16/2025) the following:
(I.) Multiple applications by the same inventors and/or co-inventors that recite terms similar to those of the instant application were not rejected under 35 U.S.C. 112(a), and two of them have issued as patents.
(II.) The Office cited prior art that uses physical models; therefore, physical models are known in the art and no detailed disclosure is needed.
(III.) “The current application is not based on a specific model – and models are known in the art – the fact that the office repetitively ignores this argument is unreasonable.”
(IV.) Unknown objects inherently mean classification failure, and the concept of applying repellant virtual force fields to unknown objects necessarily encompasses scenarios where classification fails, as unknown objects are by definition objects that cannot be properly classified.
(V.) The specification’s statement that a virtual physical model may virtually apply rules of physics shows that the inventors possessed the claimed virtual physical model concept.
(VI.) Regarding Claims 25-29, the specification explicitly states that the NNs are trained using RL and/or BC; these techniques are well known and, therefore, the written description requirement is satisfied.
The Examiner’s Response:
The examiner has carefully considered Applicant’s arguments and respectfully disagrees for the following reasons:
Regarding Argument (I.):
The Examiner has carefully considered Applicant’s arguments and respectfully disagrees. Compliance with the written description requirement of 35 U.S.C. 112(a) must be determined based on the disclosure of the present application as originally filed, and not on disclosures in other applications or on the fact that related applications by the same inventors/co-inventors have issued as patents. Each application stands on its own, and Applicant has not shown that the present specification reasonably conveys possession of the claimed subject matter at issue.
Regarding Argument (II.):
Applicant’s reliance on various articles and prior art cited by the Examiner (in the 103 rejections) to demonstrate that a person of ordinary skill in the art would understand the scope of the claims is misplaced: written description requires that the specification itself demonstrate possession of the claimed invention, NOT that a POSITA could supply missing details/information not disclosed in the specification. Knowledge in the art may be used to interpret what is disclosed, but it cannot replace disclosure of claim limitations that are absent from the specification. The fact that physical or electrostatic models are known in the art does not establish that Applicant’s specification demonstrates possession of, or enables, the broadly claimed genus of virtual physical models encompassing mechanical, electromagnetic, and optical rules. Written description requires that the inventors convey with reasonable clarity that they were in possession of the claimed invention, and reliance on general knowledge in the art cannot substitute for a lack of disclosure in the specification itself.
Regarding Argument (III.):
Applicant argues that because physical models are generally known and the invention is not limited to any specific one, additional details should not be required. However, the claim covers all types of “virtual physical model” (e.g., mechanical, electromagnetic, optical, etc.), which makes it very broad without describing even one concrete example in detail. To explain simply using an analogy: this is like claiming “a method for interplanetary travel using any physical model” but not explaining whether the travel uses a fuel-efficient transfer orbit, a fastest-arrival high-energy trajectory, a gravitational slingshot maneuver, or something else, each of which uses different equations, constraints, and objectives. Even though interplanetary travel and orbital mechanics are known fields, an inventor must still explain which type of trajectory is used, or at least describe what all covered approaches have in common and how they are implemented. Similarly here, merely stating that rules of physics may be applied does not explain how any specific model is represented, computed, or integrated with the neural network and force determination process. Because the claim covers many fundamentally different types of physical modeling approaches with different behaviors and objectives, but the specification does not describe at least one in sufficient detail, the written description is not commensurate with the breadth of the claim.
Regarding Argument (IV.):
Although Applicant contends that “unknown objects” inherently encompasses classification failure, the claim affirmatively recites that determining virtual forces is performed even when classification fails, which constitutes an operational limitation requiring supporting disclosure. The specification merely restates the claim language but does not describe how classification failure is detected, what inputs are used in the absence of classification, how the neural network determines forces without semantic information, or whether the system responds to different types of unknown objects differently or in the same manner. Conceptual motivation or intended results do not substitute for disclosure of the claimed functionality.
Regarding Argument (V.):
The statement that a virtual physical model applies rules of physics is a broad and generalized description that does not identify any specific rule, modeling framework, computational implementation, or structural relationship to the claimed virtual forces and neural network operations. Written description requires more than a conceptual reference; it requires disclosure sufficient to show that the inventors had possession of the claimed subject matter. Absent representative examples, defined modeling approaches, or a structural explanation of how such rules are implemented within the claimed system, the disclosure does not reasonably convey possession of the full scope of the claimed virtual physical model.
Regarding Argument (VI.):
Applicant’s argument that the specification satisfies the 35 U.S.C. 112(a) written description requirement because it generally states that the neural networks are trained using reinforcement learning or behavior cloning, and that these techniques were well known, is NOT persuasive. The written description requirement is not satisfied merely by reciting known training methods; the specification must demonstrate the inventors’ possession of the claimed mapping from object information to virtual forces within the virtual physical model framework. The specification does not define the form or representation of the virtual forces, the structure or parameters of the physical model, the inputs used when classification fails, the architecture of the neural networks, the reward function for reinforcement learning, or the demonstration data or loss structure for behavior cloning. Simply stating that RL or BC is used does not disclose how the claimed mapping is implemented or learned, and does not show that the inventors were in possession of the invention as claimed.
Accordingly, Applicant’s arguments (I.)-(VI.) under 35 U.S.C. 112(a) are NOT persuasive; the specification continues to lack adequate written description support for the claimed limitations, and the 35 U.S.C. 112(a) rejections are maintained. See the 35 U.S.C. 112(a) rejections below for details.
Applicant’s amendments and/or arguments with respect to the rejections of Claims 1-4, 6, 8, 12-14 and 22-30 under 35 U.S.C. 112(b), as set forth in the Office Action mailed 28 August 2025, have been considered and are NOT persuasive. Specifically, Applicant argues (page 9 of Applicant’s Remarks filed on 11/16/2025):
[Applicant’s argument reproduced as image media_image1.png in the Remarks]
The Examiner’s Response:
The Examiner has carefully considered Applicant’s arguments and respectfully disagrees. The claim language permits any model built on any rules of physics, without specifying which rules, how they are represented, or how they generate virtual forces, creating an open-ended and extremely broad claim. Providing illustrative examples of physics domains does not meaningfully limit the term or inform a person of ordinary skill in the art what models fall within the claim, nor does it define how the model operates or interacts with other claimed elements, such as the perception fields or the neural network. Definiteness requires that the claims inform those skilled in the art with reasonable certainty as to the scope of the invention; in the instant application, because “virtual physical model” could encompass an essentially limitless range of physical models, the limitation remains indefinite. Accordingly, the claims remain indefinite and the 35 U.S.C. 112(b) rejections are maintained.
Applicant’s amendments and/or arguments with respect to the rejections of Claims 1 and 22 under 35 U.S.C. 103, as set forth in the Office Action mailed 28 August 2025, have been considered and are NOT persuasive. Specifically, Applicant argues (pages 10-12 of Applicant’s Remarks filed on 11/16/2025):
[Applicant’s argument reproduced as image media_image2.png in the Remarks]
[Applicant’s argument reproduced as image media_image3.png in the Remarks]
The Examiner’s Response:
The examiner has carefully considered Applicant’s arguments and respectfully disagrees for the following reasons:
Regarding Argument (I.):
Applicant’s argument is NOT persuasive because the cited reference Pflug’s disclosure of “monitoring surrounding objects to determine speed and direction” necessarily entails obtaining kinematic variables associated with those objects. Speed and direction are fundamental components of velocity, which is a kinematic variable, and determining them requires tracking object position over time, thereby inherently obtaining motion-related state information. Furthermore, to monitor and distinguish surrounding objects, the system must at least identify and associate motion data with specific objects, which reasonably constitutes obtaining contextual information related to those objects. Therefore, the reference teaches, either expressly or inherently, obtaining kinematic variables as broadly recited in the claims.
Regarding Argument (II.):
Applicant’s argument is NOT persuasive because the prior art Kozuka expressly teaches using a neural network to generate risk-related outputs based on object and environmental inputs, and Pflug teaches modeling object interaction using an influence or virtual force representation; it would have been obvious to one of ordinary skill in the art to implement the influence/virtual force determination of Pflug using the neural network framework of Kozuka in order to improve prediction accuracy and adaptability, since neural networks were well known for approximating complex nonlinear relationships between inputs and interaction metrics. The claim does not recite any specific architecture, training process, or structural limitation for determining the virtual forces using a neural network, but instead broadly requires that the forces be determined using such a model, which amounts to applying a known machine-learning technique to a known interaction-mapping algorithm. Moreover, given that Applicant’s own specification does not provide sufficient detail as to how the neural network determines or maps virtual forces, the alleged distinction lacks commensurate scope and does not persuasively demonstrate that the combined teachings of Pflug and Kozuka fail to render the claimed subject matter obvious.
Regarding Argument (III.):
Applicant’s argument is NOT persuasive because Applicant’s characterization of classification failure as an ego reaction to unknown objects is not meaningfully different from the prior art Pflug’s disclosure of assigning influence, hazard ranking, and motion-related parameters to detected objects even when the objects are distant and their specific characteristics cannot yet be fully determined. Pflug teaches that even when an object is distant, only partially detected, or insufficiently characterized, the system still assigns influence values and adjusts motion planning accordingly, which reasonably corresponds to reacting to an unclassified or unknown object (see page 8 of Applicant’s Remarks, lines 7-15). Accordingly, the cited art teaches or at least suggests determining interaction or virtual force related values despite incomplete or failed classification.
Regarding Argument (IV.):
Applicant’s argument is NOT persuasive because the cited portions of Pflug expressly disclose determining the position and speed of surrounding vehicles and objects and using that information to determine an optimal path for collision avoidance, which requires evaluating spatial and motion-related influence regions surrounding those objects. Evaluating such influence regions, whether described as hazard zones, collision envelopes, or avoidance constraints, reasonably corresponds to evaluating one or more perception fields as broadly claimed. The claim does not recite any particular mathematical formulation, field representation, or computational structure for the alleged perception fields, but instead recites evaluating in the context of determining virtual forces. Pflug’s disclosure of assessing detected objects, determining their relative motion parameters, and modifying vehicle trajectory and driver alerts based on those assessments is sufficient to teach or at least suggest evaluating perception-related influence areas in order to compute avoidance behavior.
Regarding Argument (V.):
Applicant’s argument is NOT persuasive because Pflug is relied upon for disclosing the object classification failure limitation, as discussed in the response to Argument (III.), and Kozuka is relied upon for the technique of using a neural network to generate risk-related outputs based on object and environmental inputs.
Accordingly, Applicant’s arguments (I.)-(V.) under 35 U.S.C. 103 are NOT persuasive and the 35 U.S.C. 103 rejections are maintained. See the 35 U.S.C. 103 rejections below for details.
Applicant’s amendments and/or arguments with respect to the rejection of Claim 26 under 35 U.S.C. 103, as set forth in the Office Action mailed 28 August 2025, have been considered and are NOT persuasive. Specifically, Applicant argues (page 13 of Applicant’s Remarks filed on 11/16/2025):
[Applicant’s argument reproduced as image media_image4.png in the Remarks]
The Examiner’s Response:
The Examiner has carefully considered Applicant’s arguments and respectfully disagrees because Wayne describes a reinforcement learning framework in which a policy generator selects actions to imitate a reference state-action trajectory and a discriminator evaluates the similarity between the generated trajectory and the reference trajectory, with reward values used to update the policy parameters based on that comparison. Such a structure reasonably corresponds to defining or deriving a reward signal from behavior cloning, as the discriminator’s output, reflecting how closely the agent’s behavior matches the reference behavior, constitutes a learned reward function based on expert or reference trajectories. Accordingly, the prior art’s use of a discriminator-conditioned reward to train a policy to imitate a reference behavior teaches or at least suggests a reinforcement learning system having a reward function defined through behavior cloning under the broadest reasonable interpretation of the claim language.
Accordingly, Applicant’s argument under 35 U.S.C. 103 is NOT persuasive and the 35 U.S.C. 103 rejection is maintained. See the 35 U.S.C. 103 rejections below for details.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-4, 6, 8, 12-14 and 22-32 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 (and similarly claim 22) recites the limitation “wherein the one or more virtual forces belong to a virtual physical model…”, which Applicant has apparently not described in the specification in sufficient detail, and which thus fails to comply with the written description requirement under 35 U.S.C. 112(a). Specifically, the specification does not provide sufficient detail describing what constitutes a “virtual physical model” or how such a model is constructed, operates, or generates the claimed virtual forces. The published specification describes:
[0067] The group of NNs may be trained to map the object information to the one or more virtual forces and to one or more virtual physical model functions that differ from the perception fields.
[0068] The group of NN may include a first NN and a second NN, wherein the first NN is trained to map the object information to the one or more perception fields and the second NN was trained to map the object information to the one or more virtual physical model functions.
[0073] The one or more virtual forces belong to a virtual physical model. The virtual physical model is a virtual model that may virtually apply rules of physics (for example mechanical rules, electromagnetic rules, optical rules) on the vehicle and/or the objects.
The originally filed specification is insufficient to demonstrate that the inventors were in possession of the claimed invention as of the effective filing date of the instant application because: (i) the specification merely states that the model may virtually apply rules of physics (for example mechanical rules, electromagnetic rules, optical rules) but does not provide specific examples or structures disclosing what any of these mechanical, electromagnetic, or optical rules may be; (ii) the definition given for “virtual physical model” encompasses a wide range of unrelated physical domains without specifying how such physical models are represented, learned, and/or applied based on the object information, or what information/characteristics are specifically applied using any of the physical models; (iii) a person having ordinary skill in the art would not understand what types of modeling approaches may or may not fall within the scope of the claim, or how the physical model relates to the claimed virtual forces; (iv) no working examples or embodiments were provided in the specification demonstrating the use of a virtual physical model to compute the virtual forces; and (v) although the specification mentions that the virtual physical model differs from the perception fields, it does not further explain how the model interacts with, or functionally differs from, the perception fields or the virtual forces. See MPEP 2161.01, I. and LizardTech Inc. v. Earth Resource Mapping Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005) cited therein ("Whether the flaw in the specification is regarded as a failure to demonstrate that the applicant possessed the full scope of the invention recited in [the claim] or a failure to enable the full breadth of that claim, the specification provides inadequate support for the claim under [§ 112(a)]").
Accordingly, the Examiner believes that Applicant has not demonstrated to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 (and similarly claim 22) recites the limitation “wherein the determining is made even when a classification of an object of the one or more objects fails…”, which Applicant has apparently not described in the specification in sufficient detail, and which thus fails to comply with the written description requirement under 35 U.S.C. 112(a). Specifically, the claim recites that the system is capable of determining one or more virtual forces for use in applying a driving-related operation of the vehicle even when classification of an object fails. However, the specification does not provide sufficient detail to demonstrate that the inventors were in possession of such a capability as of the effective filing date of the application. While the published specification describes:
[0042] Generalizability. Representing ego reactions to unknown road objects as repellant virtual force fields constitutes an inductive bias in unseen situations. There is a potential advantage to this representation in that it can handle edge cases in a safe way with less training. Furthermore, the perception field model is holistic in the sense that the same approach can be used for all aspects of the driving policy. It can also be divided into narrow driving functions to be used in ADAS such as ACC, AEB, LCA etc. Lastly, the composite nature of perception fields allows the model to be trained on atomic scenarios and still be able to properly handle more complicated scenarios.
The published specification generally suggests that the system may react to unknown objects using repellant force fields but fails to disclose any structure or algorithm(s) regarding how the neural networks perform the determining step in the absence of object classification. Specifically, the specification fails to describe: (i) what input features are used when object classification fails; (ii) how the neural network infers/estimates virtual forces in the absence of a known object class; (iii) how the system detects failure of classification of the object and how the system reacts to such failure; and (iv) any working example or embodiment demonstrating the NN determining a repellant force when an object cannot be classified. See MPEP 2161.01, I. and LizardTech Inc. v. Earth Resource Mapping Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005) cited therein ("Whether the flaw in the specification is regarded as a failure to demonstrate that the applicant possessed the full scope of the invention recited in [the claim] or a failure to enable the full breadth of that claim, the specification provides inadequate support for the claim under [§ 112(a)]"). Accordingly, the Examiner believes that Applicant has not demonstrated to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 25-29, which recite the limitation(s) “one or more neural networks were trained to map the object information to the one or more virtual forces..”, Applicant has apparently not described, in sufficient detail, by what algorithm(s), steps, or procedure the one or more virtual forces are determined using one or more neural networks (NNs), or what virtual physical model is used. The descriptions in the specification do not describe/show how the virtual forces might be determined in the first place or by what virtual physical model. The descriptions in the specification further do not describe/show how the NNs are trained to map the object information to the one or more virtual forces using reinforcement learning or behavior cloning. For example, published paragraphs [0061]-[0068] state:
[0061] Various example of training the group of NNs are provided below.
[0062] The group of NNs may be trained to map the object information to the one or more virtual forces using behavioral cloning.
[0063] The group of NNs may be trained to map the object information to the one or more virtual forces using reinforcement learning.
[0064] The group of NNs may be trained to map the object information to the one or more virtual forces using a combination of reinforcement learning and behavioral cloning.
[0065] The group of NNs may be trained to map the object information to the one or more virtual forces using a reinforcement learning that has a reward function that is defined using behavioral cloning.
[0066] The group of NNs may be trained to map the object information to the one or more virtual forces using reinforcement learning that has an initial policy that is defined using behavioral cloning.
[0067] The group of NNs may be trained to map the object information to the one or more virtual forces and to one or more virtual physical model functions that differ from the perception fields.
[0068] The group of NN may include a first NN and a second NN, wherein the first NN is trained to map the object information to the one or more perception fields and the second NN was trained to map the object information to the one or more virtual physical model functions.
See MPEP 2161.01, I. and LizardTech Inc. v. Earth Resource Mapping Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005) cited therein ("Whether the flaw in the specification is regarded as a failure to demonstrate that the applicant possessed the full scope of the invention recited in [the claim] or a failure to enable the full breadth of that claim, the specification provides inadequate support for the claim under [§ 112(a)]"). Accordingly, the Examiner believes that Applicant has not demonstrated to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The dependent claims are also rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, by virtue of their dependence upon a rejected independent claim.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-4, 6, 8, 12-14 and 22-32 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1 (and similarly claim 22), the recited limitation “virtual physical model” is indefinite and not reasonably certain, with indeterminate metes and bounds, from the teachings of the specification, because it is unclear what this “virtual physical model” is (e.g., the published specification [0073] merely describes that “The virtual physical model is a virtual model that may virtually apply rules of physics (for example mechanical rules, electromagnetic rules, optical rules) on the vehicle and/or the objects”, without limitation), and the claims allow for any model that is built on any rules of physics; hence this limitation renders the claim indefinite. See Nautilus, Inc. v. Biosig Instruments, Inc. (U.S. Supreme Court, 2014), which held, "A patent is invalid for indefiniteness if its claims, read in light of the patent’s specification and prosecution history, fail to inform, with reasonable certainty, those skilled in the art about the scope of the invention." See also In re Packard, 751 F.3d 1307 (Fed. Cir. 2014) (“[A] claim is indefinite when it contains words or phrases whose meaning is unclear,” i.e., “ambiguous, vague, incoherent, opaque, or otherwise unclear in describing and defining the claimed invention.”) and Ex Parte McAward, Appeal No. 2015-006416 (PTAB, Aug. 25, 2017, Precedential) (“Applying the broadest reasonable interpretation of a claim, then, the Office establishes a prima facie case of indefiniteness with a rejection explaining how the metes and bounds of a pending claim are not clear because the claim contains words or phrases whose meaning is unclear.”)
The dependent claims are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, by virtue of their dependence upon a rejected independent claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 2, 8, 12, 22-24 and 30-32 are rejected under 35 U.S.C. 103 as being unpatentable over Pflug (US2014/0067206A1) in view of Kozuka et al. (US 2017/0262750 A1 hereinafter Kozuka).
Regarding claim 1 (similarly claim 22), Pflug teaches A method for perception fields driving related operations (see at least Figs. 1-3; Abstract), the method comprising:
obtaining object information regarding one or more objects located within an environment of a vehicle; (see at least Fig. 1-3 [0014-0067]: a vehicle includes an imaging or vision system that captures images exterior of the vehicle to determine potential hazards in the path of the vehicle. The system may judge the object's properties and resulting potential as a hazard. By monitoring the surrounding objects, the system is able to determine the speed and direction of each object. That information becomes attached to each object as a speed vector ( 7 ). The hazardous potential of each object becomes rated by the determination algorithm. The faster an object is (relative to the subject or host or equipped vehicle), the stronger its influence value is (see Tables 1 and 2) and a greater weighting or influence irradiation is applied to the faster object or objects.)
obtaining additional information that comprises kinematic and contextual variables related to the one or more objects; (see at least Fig. 1-3 [0014-0067]: a vehicle includes an imaging or vision system that captures images exterior of the vehicle to determine potential hazards in the path of the vehicle. The system may judge the object's properties and resulting potential as a hazard. By monitoring the surrounding objects, the system is able to determine the speed and direction of each object. That information becomes attached to each object as a speed vector ( 7 ). The hazardous potential of each object becomes rated by the determination algorithm. The faster an object is (relative to the subject or host or equipped vehicle), the stronger its influence value is (see Tables 1 and 2) and a greater weighting or influence irradiation is applied to the faster object or objects.)
determining, based on the object information in accordance with the additional information, one or more virtual forces for use in applying a driving related operation of the vehicle, (see at least Fig. 1-3 [0014-0067]: The exemplary influence map in FIG. 3 shows the influence of an object or vehicle ‘II’ with a speed vector to the left with an influence level of the value 5 (5 rings) and an object or vehicle ‘III’ with a speed vector to the right with an influence level of the value 7 (7 rings), which influence areas mostly irradiate circumferential and into the direction of the speed vector. The influence level of the objects II and III to the edges of the triangle of the to be calculated object (under test) is resting on can be calculated by counting the number of rings (and by that the influence value) the specific point or area or region is enclosed in. By summing up the influence of both other objects, the triangle has two edges with the height of 3 and one with the level 2. By that, the triangle's normal is tilted to upper left from upright (and by that the slope of the triangle will be to the upper left). When simulating the next time increment, object I is accelerated into the upper left direction. In this example, the triangle is chosen quite wide for giving example. The triangle may preferably be chosen in an infinitesimal small manner and the influence calculated not in INTEGER counting rings but in FLOAT by equation (1) to match the normal vector n more precisely.) wherein the one or more virtual forces belong to a virtual physical model and represent one or more impacts of the one or more objects on a behavior of the vehicle; (see at least Fig. 
1-3 [0014-0067]: One solution for determining avoidance paths that may be optimal or semi optimal may be to handle the suspect vehicle and all foreign objects/vehicles as being like a marble having an influence value rolling or gliding over the influence map which influence values determining the heights (relate to according elevations and valleys). The marbles may have an assumed mass ‘m’ exposed to an assumed gravity ‘g’ and an inherent inertia. When in motion already (according to the speed vectors ( 7 ) in FIG. 2), there may be an assumed kinetic energy inherent to each marble. By that the marble may be turned away and slowed down when running into the direction of an elevation and may be turned to and accelerated when heading into a valley or when a faster marble closes up from behind, which may cause the map to rise in that region. Due to the influence of each object or vehicle, the influence map under the marble may change continuously while the marble glides or rolls.) wherein the determining is made even when a classification of an object of the one or more objects fails and comprises representing vehicle reactions to unknown road objects as repellant virtual force fields; and wherein the determining of the one or more virtual forces comprises evaluating one or more perception fields; (see at least Fig. 1-3 [0014-0067]: the system may be operable to classify and ‘label’ or identify one or multiple object(s) and to set the speed and trajectory parameters and ‘matha’ properties to rank their hazardous potential or influence, even when the detected object is far from the subject vehicle and still a “spot” on the horizon, and when detection systems such as radar, laser and cameras are still unable to determine such parameters of the distant object. This hazardous influence ranking may be done by taking the speed, the distance, the size, the mass and the deformability and vulnerability of the subject vehicles or objects into account. 
There may be a look up table of each object's property influence value in use. )
visualizing the one or more perception fields in a driving of the vehicle; and performing the driving related operations. (see at least Fig. 1-3 [0014-0067]: The collision avoidance system that determines the position and speed of other vehicles on the road on which the subject vehicle is traveling and, when it is determined that the subject vehicle is approaching the other vehicles, the system determines one or more possible paths that avoid the other vehicles or objects and the system may select a preferred or optimal path that avoids the other vehicles and objects and requires the least aggressive maneuvering (such as hard braking and/or hard steering of the subject vehicle). The system may generate an alert to the driver of the selected path, and may display the path or paths to the driver for the driver to select. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle.)
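For illustration of the influence-map mechanics the rejection relies on, the following is a minimal sketch, not Pflug's implementation: each object radiates a circular influence that decays with distance, and the host vehicle "marble" is pushed downhill, i.e., along the negative gradient of the summed influence surface. The 1/(1+d) decay law, the object positions, and all function names are illustrative assumptions.

```python
import math

# Illustrative sketch only (not Pflug's code): each object radiates a
# circular influence that decays with distance; the host vehicle "marble"
# is pushed along the negative gradient of the summed influence surface.
# The 1/(1+d) decay law and all numeric values are assumptions.

def influence(level, src, pt, falloff=1.0):
    """Influence of a source of given level at point pt, decaying with distance."""
    d = math.hypot(pt[0] - src[0], pt[1] - src[1])
    return level / (1.0 + falloff * d)

def total_virtual_force(ego, objects, eps=1e-3):
    """Total virtual force = numerical negative gradient of the summed map at ego."""
    def height(p):
        return sum(influence(lvl, pos, p) for pos, lvl in objects)
    fx = -(height((ego[0] + eps, ego[1])) - height((ego[0] - eps, ego[1]))) / (2 * eps)
    fy = -(height((ego[0], ego[1] + eps)) - height((ego[0], ego[1] - eps))) / (2 * eps)
    return fx, fy

# Two objects flanking the ego, with influence levels 5 and 7 as in FIG. 3:
objects = [((-2.0, 0.0), 5.0), ((2.0, 0.0), 7.0)]
fx, fy = total_virtual_force((0.0, 0.0), objects)
# The stronger (level 7) object on the right pushes the ego leftward (fx < 0).
```

In this toy setup the summed influences of the two objects tilt the local "slope" away from the stronger level-7 object, paralleling the tilted triangle normal in Pflug's FIG. 3 example.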
It may be alleged that Pflug does not explicitly teach determining, by using one or more neural networks, and based on the object information in accordance with the additional information, one or more virtual forces for use in applying a driving related operation of the vehicle.
Kozuka is directed to a risk prediction method capable of predicting a risk area having a possibility of causing a dangerous situation for a running vehicle. Kozuka teaches determining, by using one or more neural networks, and based on the object information in accordance with the additional information, one or more virtual forces for use in applying a driving related operation of the vehicle; (see at least Fig. 14A-17B [0064, 0108-0115]: The risk predictor 10 estimates the acquired input images, by using the convolutional neural network, to detect a risk area and a feature thereof having a possibility that a moving object may appear into a travelling path of the vehicle and if the vehicle simply continues the current travelling, the vehicle may collide with that moving object. The risk area likely to have a risk for a vehicle being running is an area including a part of an area of a hiding object existing in a learning image and an unseen moving object existing behind the hiding object will appear later into a travelling path. For example, the risk area may be an area between two or more moving objects including a person at least one of which has a possibility of moving toward the other one of the moving objects and crossing the travelling path of the vehicle.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Pflug's driver assistance system, which uses influence mapping for collision avoidance path determination, to incorporate the technique of determining, by using one or more neural networks and based on the object information in accordance with the additional information, one or more virtual forces for use in applying a driving related operation of the vehicle, as taught by Kozuka, with reasonable expectation of success, because CNNs are excellent at extracting spatial features from visual data, helping the vehicle better detect and localize road users and understand their positions relative to the host vehicle, such that the CNN output can be utilized for path planning and for driving assistance systems that warn the driver of risk zones and/or take action to avoid potential collisions, thus improving roadway safety.
Regarding claim 2, the combination of Pflug in view of Kozuka teaches The method according to claim 1, Pflug further teaches wherein the determining comprises calculating, based on the one or more virtual forces applied on the vehicle, a total virtual force that is applied on the vehicle. (see at least Fig. 1-3 [0014-0067]: The exemplary influence map in FIG. 3 shows the influence of an object or vehicle ‘II’ with a speed vector to the left with an influence level of the value 5 (5 rings) and an object or vehicle ‘III’ with a speed vector to the right with an influence level of the value 7 (7 rings), which influence areas mostly irradiate circumferential and into the direction of the speed vector. The influence level of the objects II and III to the edges of the triangle of the to be calculated object (under test) is resting on can be calculated by counting the number of rings (and by that the influence value) the specific point or area or region is enclosed in. By summing up the influence of both other objects, the triangle has two edges with the height of 3 and one with the level 2. By that, the triangle's normal is tilted to upper left from upright (and by that the slope of the triangle will be to the upper left). When simulating the next time increment, object I is accelerated into the upper left direction. In this example, the triangle is chosen quite wide for giving example. The triangle may preferably be chosen in an infinitesimal small manner and the influence calculated not in INTEGER counting rings but in FLOAT by equation (1) to match the normal vector n more precisely.)
Regarding claim 8, the combination of Pflug in view of Kozuka teaches The method according to claim 1,
Pflug further teaches wherein the driving related operation comprises changing a direction of propagation of the vehicle. (see at least Fig. 1-3 [0014-0067]: In further advanced systems, the system may be able to distinguish between a normal uncritical situation and a critical situation. The system may come to a decision by assessing the predetermined possible paths. There may be certain maximum limits in presumed deceleration (so braking) measures and/or lateral acceleration (hard curving) measures in all optional paths which when overrun may turn the system into a kind of ‘critical’ mode. Then the system may not brake as comfortable as possible, but as soon and as heavy/aggressively as possible. The system may be allowed to ignore general traffic rules. By that it may turn onto the emergency lane for evading a predicted critical situation or collision (however, the system would not make such a maneuver when in the usual ‘uncritical’ mode). The system may pass at the non fast lane (overpassing on the right on right hand traffic). The system may change lanes without blinking. The system may select to go off road in case it determines that this is the least hazardous way out of the detected situation or hazardous condition.)
Regarding claim 12, the combination of Pflug in view of Kozuka teaches The method according to claim 1 comprising:
Pflug further teaches detecting a class of at least one of the one or more objects. (see at least Fig. 1-3 [0014-0067]: The system may be operable to classify and ‘label’ or identify one or multiple object(s) and to set the speed and trajectory parameters and ‘matha’ properties to rank their hazardous potential or influence, even when the detected object is far from the subject vehicle and still a “spot” on the horizon, and when detection systems such as radar, laser and cameras are still unable to determine such parameters of the distant object. When engaged, the intervention system receives data about objects (such as other traffic participants or obstacles on the road) from the vehicle image sensors and/or other environmental sensors (such as a RADAR or LIDAR or LASER and/or the like) and maybe via remote (car2X, car2car and/or the like). These may be already classified (such as, for example, a car, truck, cyclist, pedestrian, motorcyclist, horse carriage, policeman riding a horse, deer, lost load (as obstacle), pole, traffic island, traffic bollard and/or the like). )
Regarding claim 23, the combination of Pflug in view of Kozuka teaches The non-transitory computer readable medium according to claim 22,
Pflug further teaches that store instructions for determining the driving related operations based on the one or more virtual forces. (Fig. 1-3 [0014-0067]: The vision system 12 includes a control or processor 18 that is operable to process image data captured by the cameras and may provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle). The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data.)
Regarding claim 24, the combination of Pflug in view of Kozuka teaches The non-transitory computer readable medium according to claim 22,
Pflug further teaches that store instructions for applying the driving related operation based on the one or more virtual forces (Fig. 1-3 [0014-0067]: The vision system 12 includes a control or processor 18 that is operable to process image data captured by the cameras and may provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle). The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data.)
Regarding claim 30, the combination of Pflug in view of Kozuka teaches The method according to claim 1 comprising
Pflug further teaches imposing physical constraints on a time of evaluating of the one or more virtual perception fields and filtering of the object information. (Fig. 1-3 [0014-0067]: The system is operable continuously as the vehicle is driven along the road. Thus, the system is always collecting environmental data which are fed into the influence mapping (i.e., filtering of the object information). Further, the system is recapitulating the current state in time slots (fractions of seconds long) and reevaluating the situation (by the influence map) (i.e., the system updates in discrete, very short time intervals and imposes a temporal resolution constraint). During the milliseconds that are progressing an earlier as optimal laid out collision avoidance path may become abandoned and a better one at that state of time may be selected as the preferred or optimal path since the other traffic participants may act at least in part different than assumed earlier or objects that weren't detected previously may come into view of the sensors of the subject vehicle.)
Regarding claim 31, the combination of Pflug in view of Kozuka teaches The method according to claim 1,
Pflug further teaches wherein the vehicle drives on a road, the one or more objects comprises the road and a virtual field related to the road is determined based on a state of the road within the environment. (see at least Fig. 2 [0017]: (A) represents distance markers within a time frame (speed) (the more the distance, the faster), (B) represents a collision avoidance and impact degrading (steering and braking or acceleration) path, (C) represents a collision avoidance and impact degrading (steering and braking or acceleration) path, (D) represents a collision avoidance and impact degrading (steering and braking or acceleration) path, (E) represents a collision avoidance and impact degrading (steering and braking or acceleration) path, and (F) represents a collision avoidance and impact degrading (steering and braking or acceleration) path. Also, ( 1 ) represents a relatively fast vehicle and ( 2 ) represents a relatively slow vehicle. ( 3 ) represents an opposing road side (very high influence level), ( 4 ) represents a side strip, ( 5 ) represents a hard shoulder/emergency lane (high influence level), and ( 6 ) represents a soft shoulder (very high influence level). As shown in FIG. 2, ( 7 ) represents a speed vector of another vehicle, and ( 8 ) represents the subject vehicle having a relatively high speed, faster than vehicle ( 1 ), and ( 9 ) represents the subject vehicle's own speed vector.)
Regarding claim 32, the combination of Pflug in view of Kozuka teaches The method according to claim 1,
Pflug further teaches wherein the vehicle drives on a road, the one or more objects comprises the road and a virtual field related to the road is determined based on a shape of the road within the environment. (see at least Fig. 2 [0017]: (A) represents distance markers within a time frame (speed) (the more the distance, the faster), (B) represents a collision avoidance and impact degrading (steering and braking or acceleration) path, (C) represents a collision avoidance and impact degrading (steering and braking or acceleration) path, (D) represents a collision avoidance and impact degrading (steering and braking or acceleration) path, (E) represents a collision avoidance and impact degrading (steering and braking or acceleration) path, and (F) represents a collision avoidance and impact degrading (steering and braking or acceleration) path. Also, ( 1 ) represents a relatively fast vehicle and ( 2 ) represents a relatively slow vehicle. ( 3 ) represents an opposing road side (very high influence level), ( 4 ) represents a side strip, ( 5 ) represents a hard shoulder/emergency lane (high influence level), and ( 6 ) represents a soft shoulder (very high influence level). As shown in FIG. 2, ( 7 ) represents a speed vector of another vehicle, and ( 8 ) represents the subject vehicle having a relatively high speed, faster than vehicle ( 1 ), and ( 9 ) represents the subject vehicle's own speed vector.)
Claim(s) 3, 4, and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Pflug in view of Kozuka and Zhong (US 2022/0214703 A1).
Regarding claim 3, the combination of Pflug in view of Kozuka teaches The method according to claim 2 comprising
the combination of Pflug in view of Kozuka does not explicitly teach determining a desired virtual acceleration of the vehicle based on a total virtual acceleration that is applied on the vehicle by the total virtual force.
Zhong is directed to a method and system for applying a virtual resistance to an unmanned aerial vehicle. Zhong teaches determining a desired virtual acceleration of the vehicle based on a total virtual acceleration that is applied on the vehicle by the total virtual force. (see at least Fig. 4 [0056-0065]: a desired acceleration is calculated through a resultant force in which the virtual resistance force is introduced, then a desired speed is calculated, and the flight speed of the UAV is adjusted according to the desired speed to implement a reduction of speed. Specifically, vector composition is first performed on the virtual resistance force and other forces applied to the UAV, to obtain a resultant force, and then a desired acceleration a is obtained according to the kinetic equation F=ma, and a desired speed, that is, the speed instruction, is obtained according to the desired acceleration a.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Pflug and Kozuka to incorporate the technique of determining a desired virtual acceleration of the vehicle based on a total virtual acceleration that is applied on the vehicle by the total virtual force as taught by Zhong with reasonable expectation of success to improve autonomous vehicle operation safety and efficiency.
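The F=ma step Zhong describes can be sketched numerically as follows; the vehicle mass, force values, time step, and function name below are illustrative assumptions, not values taken from Zhong.

```python
# Illustrative sketch of Zhong's scheme ([0056-0065]), not his implementation:
# vector-compose the virtual resistance with the other applied forces, derive
# the desired acceleration from F = m*a, then integrate to a desired speed.
# All numeric values are assumptions.

def desired_speed(current_v, forces, mass, dt):
    """Resultant force -> desired acceleration (a = F/m) -> desired speed."""
    fx = sum(f[0] for f in forces)
    fy = sum(f[1] for f in forces)
    ax, ay = fx / mass, fy / mass          # desired (total virtual) acceleration
    return (current_v[0] + ax * dt, current_v[1] + ay * dt)

# A 2 kg vehicle at 5 m/s with 4 N thrust opposed by a 6 N virtual resistance:
v = desired_speed((5.0, 0.0), [(4.0, 0.0), (-6.0, 0.0)], mass=2.0, dt=1.0)
# Net force (-2, 0) N gives a = (-1, 0) m/s^2, so the desired speed is (4, 0) m/s.
```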
Regarding claim 4, the combination of Pflug in view of Kozuka and Zhong teaches The method according to claim 3,
the combination of Pflug in view of Kozuka does not explicitly teach wherein the desired virtual acceleration equals the total virtual acceleration.
Zhong is directed to a method and system for applying a virtual resistance to an unmanned aerial vehicle. Zhong teaches wherein the desired virtual acceleration equals the total virtual acceleration. (see at least Fig. 4 [0056-0065]: a desired acceleration is calculated through a resultant force in which the virtual resistance force is introduced, then a desired speed is calculated, and the flight speed of the UAV is adjusted according to the desired speed to implement a reduction of speed. Specifically, vector composition is first performed on the virtual resistance force and other forces applied to the UAV, to obtain a resultant force, and then a desired acceleration a is obtained according to the kinetic equation F=ma, and a desired speed, that is, the speed instruction, is obtained according to the desired acceleration a.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Pflug and Kozuka to incorporate the technique of determining a desired virtual acceleration wherein the desired virtual acceleration equals the total virtual acceleration as taught by Zhong with reasonable expectation of success to improve autonomous vehicle operation safety and efficiency.
Regarding claim 6, the combination of Pflug in view of Kozuka teaches The method according to claim 1
the combination of Pflug in view of Kozuka does not explicitly teach wherein the driving related operation comprises setting, without human driver involvement, an acceleration of the vehicle to a desired virtual acceleration; wherein the desired virtual acceleration is directional.
Zhong is directed to a method and system for applying a virtual resistance to an unmanned aerial vehicle. Zhong teaches wherein the driving related operation comprises setting, without human driver involvement, an acceleration of the vehicle to a desired virtual acceleration (see at least Fig. 4 [0056-0065]: a desired acceleration is calculated through a resultant force in which the virtual resistance force is introduced, then a desired speed is calculated, and the flight speed of the UAV is adjusted according to the desired speed to implement a reduction of speed. Specifically, vector composition is first performed on the virtual resistance force and other forces applied to the UAV, to obtain a resultant force, and then a desired acceleration a is obtained according to the kinetic equation F=ma, and a desired speed, that is, the speed instruction, is obtained according to the desired acceleration a.); wherein the desired virtual acceleration is directional. (see at least Fig. 4 [0056-0065]: a desired acceleration is calculated through a resultant force in which the virtual resistance force is introduced, then a desired speed is calculated, and the flight speed of the UAV is adjusted according to the desired speed to implement a reduction of speed. Specifically, vector composition is first performed on the virtual resistance force and other forces applied to the UAV, to obtain a resultant force, and then a desired acceleration a is obtained according to the kinetic equation F=ma, and a desired speed, that is, the speed instruction, is obtained according to the desired acceleration a. Since vector composition is performed on the virtual resistance forces and other forces applied to the UAV to obtain the resultant force, the desired acceleration is a vector which indicates a direction.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Pflug and Kozuka to incorporate the technique of determining a desired virtual acceleration wherein the driving related operation comprises setting, without human driver involvement, an acceleration of the vehicle to a desired virtual acceleration as taught by Zhong with reasonable expectation of success to improve autonomous vehicle operation safety and efficiency.
Claim(s) 13 is rejected under 35 U.S.C. 103 as being unpatentable over Pflug in view of Kozuka and Lukarski et al. (US 11,465,620 B1 hereinafter Lukarski).
Regarding claim 13, the combination of Pflug in view of Kozuka teaches The method according to claim 12 comprising
The combination of Pflug in view of Kozuka does not explicitly teach selecting the one or more NNs based on a class of at least one object of the one or more objects.
Lukarski is directed to lane generation for mapping and vehicle control. Lukarski teaches selecting the one or more NNs based on a class of at least one object of the one or more objects. (see at least Fig. 10, Col. 6, Lines 15-64, Col. 13, Line 60 - Col. 15, Line 34: identifying a classification of the nearby vehicle in accordance with the description of the object classifier and selecting a vehicle kinematics model for the nearby vehicle in accordance with the classification. The vehicle kinematics model may be selected, for example, from stored vehicle kinematics models that are predetermined and are associated with particular object classifications, such as by metadata that indicates that a specific vehicle kinematics model is suitable for use with vehicles of a particular classification.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Pflug and Kozuka to incorporate the technique of selecting the one or more NNs based on a class of at least one object of the one or more objects as taught by Lukarski with reasonable expectation of success to improve accuracy in determining and performing autonomous vehicle control.
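The class-conditioned selection Lukarski describes can be sketched as a simple lookup keyed by the classifier's label; the class names, parameter sets, and function name below are illustrative assumptions (a kinematics parameter set stands in for the claimed per-class NN selection).

```python
# Illustrative sketch of class-conditioned model selection along the lines of
# Lukarski (Col. 13 - Col. 15): a lookup keyed by the object classifier's label
# returns a per-class model. Class names and parameters are assumptions; a
# kinematics parameter set stands in for the claimed per-class NN.

KINEMATICS_MODELS = {
    "car":        {"max_accel": 3.0, "max_yaw_rate": 0.5},
    "truck":      {"max_accel": 1.5, "max_yaw_rate": 0.3},
    "pedestrian": {"max_accel": 1.0, "max_yaw_rate": 3.0},
}

def select_model(object_class, default="car"):
    """Return the stored model associated with the detected object's class."""
    return KINEMATICS_MODELS.get(object_class, KINEMATICS_MODELS[default])

truck_model = select_model("truck")        # per-class model for a classified truck
fallback = select_model("horse carriage")  # unlisted class falls back to default
```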
Claim(s) 14 is rejected under 35 U.S.C. 103 as being unpatentable over Pflug in view of Kozuka and Banerjee et al. (US 2020/0301013 A1 hereinafter Banerjee).
Regarding claim 14, the combination of Pflug in view of Kozuka teaches The method according to claim 12 comprising
the combination of Pflug in view of Kozuka does not explicitly teach feeding the one or more NNs with class metadata related to a class of at least one object of the one or more objects.
Banerjee is directed to object detection and/or classification in a scene. Banerjee teaches feeding the one or more NNs with class metadata related to a class of at least one object of the one or more objects. (see at least [0007-0008, 0018-0028]: the method comprises feeding the image data and the encoded projected depth data into respective separate convolutional neural networks to learn separate features, joining (for example, by concatenating, by summing, by averaging, etc.) the learned separate features, and feeding the joined features into a common convolutional neural network to detect or classify objects in the scene. Each data type of fused hybrid data is fed into an independent convolutional neural network to learn features and then the learned features are concatenated and fed through fully connected layers via a convolutional neural network to detect and classify objects (class score) and predict respective bounding boxes for the objects.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Pflug and Kozuka to incorporate the technique of feeding the one or more NNs with class metadata related to a class of at least one object of the one or more objects as taught by Banerjee to provide a method and system for improving existing object detection concepts for fused sensor data (Banerjee [0005]).
Claim(s) 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Pflug in view of Kozuka and Wayne et al. (US 2020/0090042 A1 hereinafter Wayne).
Regarding claim 25, the combination of Pflug in view of Kozuka teaches The method according to claim 1,
the combination of Pflug in view of Kozuka does not explicitly teach wherein the one or more neural networks were trained to map the object information to the one or more virtual forces using a combination of reinforcement learning and behavior cloning.
Wayne is directed to systems and methods for training a neural network used to select actions to be performed by an autonomous vehicle navigating through an environment. Wayne teaches wherein the one or more neural networks were trained to map the object information to the one or more virtual forces using a combination of reinforcement learning and behavior cloning. (see at least Fig. 1-4 Abstract [0009-0014, 0087-0179]: training a reinforcement learning system to select actions to be performed by an agent (e.g. autonomous vehicle) interacting with an environment with a two-stage approach. First, an encoder is trained based on a set of input trajectories, then the neural network is trained via reinforcement learning using encodings generated by the trained encoder. The trajectories are training/demonstration trajectories exhibiting behavior to be imitated. The system is configured to pass the trajectories through the encoder to determine a distribution over embeddings z of the demonstration trajectories, then decode the trajectories to obtain imitation trajectories, and then train the system to improve the encoder and decoder performance. That is, the NNs are trained to map the input states (i.e. object information) to actions (i.e. virtual forces) using a combination of behavior cloning and reinforcement learning).
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Pflug and Kozuka to incorporate the technique of training one or more NNs by mapping the object information to the one or more virtual forces using a combination of reinforcement learning and behavior cloning as taught by Wayne with reasonable expectation of success to enable the autonomous vehicle to learn from observation data and make safer and more accurate driving decisions.
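The two-stage recipe attributed to Wayne (clone demonstrated behavior first, then refine by reinforcement learning) can be sketched with a deliberately toy stand-in; the linear policy, hill-climbing update, demonstration data, and reward below are all illustrative assumptions, not Wayne's encoder/decoder architecture.

```python
import random

# Toy stdlib-only sketch of the two-stage recipe the rejection attributes to
# Wayne: (1) behavior cloning fits a policy to demonstration state-action
# pairs; (2) reinforcement learning then refines that policy against a reward.
# The linear policy, hill-climbing update, demo data, and reward are all
# assumptions standing in for mapping object information to virtual forces.

def bc_fit(demos):
    """Behavior cloning: least-squares slope w for action = w * state."""
    num = sum(s * a for s, a in demos)
    den = sum(s * s for s, _ in demos)
    return num / den

def rl_refine(w, reward, steps=200, sigma=0.1, seed=0):
    """Toy RL: keep random perturbations of w only when they raise the reward."""
    rng = random.Random(seed)
    best = reward(w)
    for _ in range(steps):
        cand = w + rng.gauss(0.0, sigma)
        r = reward(cand)
        if r > best:
            w, best = cand, r
    return w

demos = [(1.0, 1.8), (2.0, 4.1), (3.0, 6.2)]          # roughly action = 2 * state
w0 = bc_fit(demos)                                     # BC initial policy
w = rl_refine(w0, reward=lambda w: -(w - 2.5) ** 2)    # RL pulls the slope to 2.5
```

Here behavior cloning supplies the initial policy and the reward-driven refinement can only improve on it, mirroring the claimed combination at claims 25-27.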
Regarding claim 26, the combination of Pflug in view of Kozuka teaches The method according to claim 1,
the combination of Pflug in view of Kozuka does not explicitly teach wherein the one or more neural networks were trained to map the object information to the one or more virtual forces using a reinforcement learning that has a reward function that is defined using behavior cloning.
Wayne is directed to systems and methods for training a neural network used to select actions to be performed by an autonomous vehicle navigating through an environment. Wayne teaches wherein the one or more neural networks were trained to map the object information to the one or more virtual forces using a reinforcement learning that has a reward function that is defined using behavior cloning. (see at least Fig. 1-4 Abstract [0009-0060]: the reinforcement learning neural network may comprise a policy generator and a discriminator wherein the policy generator may be used to select actions to be performed by an agent interacting with an environment to imitate a state-action trajectory, using the discriminator to discriminate between the imitated state-action trajectory and a reference trajectory, and updating parameters of the policy generator using the reward values conditioned on the target embedding vector.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Pflug and Kozuka to incorporate the technique of training one or more NNs to map the object information to the one or more virtual forces using a reinforcement learning that has a reward function that is defined using behavior cloning, as taught by Wayne, with reasonable expectation of success to enable the autonomous vehicle to learn from observation data and make safer and more accurate driving decisions.
Regarding claim 27, the combination of Pflug in view of Kozuka teaches the method according to claim 1.
The combination of Pflug in view of Kozuka does not explicitly teach wherein the one or more neural networks were trained to map the object information to the one or more virtual forces using a reinforcement learning that has an initial policy that is defined using behavior cloning.
Wayne is directed to systems and methods for training a neural network used to select actions to be performed by an autonomous vehicle navigating through an environment. Wayne teaches wherein the one or more neural networks were trained to map the object information to the one or more virtual forces using a reinforcement learning that has an initial policy that is defined using behavior cloning. (see at least Fig. 1-4, Abstract, [0009-0060]: the reinforcement learning neural network may comprise a policy generator and a discriminator, wherein the policy generator may be used to select actions to be performed by an agent interacting with an environment to imitate a state-action trajectory, using the discriminator to discriminate between the imitated state-action trajectory and a reference trajectory, and updating parameters of the policy generator using the reward values conditioned on the target embedding vector.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Pflug and Kozuka to incorporate the technique of training one or more NNs to map the object information to the one or more virtual forces using a reinforcement learning that has an initial policy that is defined using behavior cloning, as taught by Wayne, with reasonable expectation of success to enable the autonomous vehicle to learn from observation data and make safer and more accurate driving decisions.
Claim(s) 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Pflug in view of Kozuka and Furlan (US 2021/0122035 A1).
Regarding claim 28, the combination of Pflug in view of Kozuka teaches the method according to claim 1.
The combination of Pflug in view of Kozuka does not explicitly teach wherein the one or more neural networks were trained to map the object information to the one or more perception fields and one or more physical model functions that differ from the perception field.
Furlan is directed to a machine vision system that employs neural networks to parse or evaluate a three-dimensional environment for use in an automation system. Furlan teaches wherein the one or more neural networks were trained to map the object information to the one or more perception fields and one or more physical model functions that differ from the perception field. (see at least Fig. 4-11 [0005-0009, 0046-0113]: creating a first neural network that takes in the image data from one or more cameras as well as data describing visual motion (optical flow) and outputs a set of 3D depth information, such as a voxel map, a particle distribution, or polygonal information; this set of 3D depth information also includes information regarding the movement of the 3D elements in 3D space. The first neural network is then trained through a machine learning algorithm such as neuroevolution or gradient descent using a large and varied automatically generated set of scenes and objects in those scenes until the output depth data and 3D movement data closely match the depth of the given scene. As the rendered images provided are generated using highly realistic rendering technologies (such as raytracing), the training enables the neural network to handle real visual data as well as the virtual generated data. A second neural network topology is then created having the output of the spatial memory submodule as an input and with an output of object boundary data. The second neural network topology is trained through a machine learning algorithm such as neuroevolution or gradient descent using a large and varied automatically generated set of scenes and objects in those scenes until the output object boundary data closely matches the object boundaries of the given scene.
As the rendered images provided are generated using highly realistic rendering technologies (such as raytracing), the training enables the neural network to handle real visual data as well as the virtual generated data. The simulated 3D scenes also have virtual forces applied to the objects to cause motion, and the data describing the forces applied may also be provided as input to the neural network to aid in training.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Pflug and Kozuka to incorporate the technique of training the one or more neural networks to map the object information to the one or more perception fields and one or more physical model functions that differ from the perception field, as taught by Furlan, with reasonable expectation of success to provide a system that can correctly detect objects to ensure successful autonomous system operation (Furlan [0002-0004]).
Regarding claim 29, the combination of Pflug in view of Kozuka teaches the method according to claim 1.
The combination of Pflug in view of Kozuka does not explicitly teach wherein the one or more neural networks comprises a first NN and a second NN, wherein the first NN is trained to map the object information to the one or more perception fields and the second NN was trained to map the object information to the one or more virtual physical model functions.
Furlan is directed to a machine vision system that employs neural networks to parse or evaluate a three-dimensional environment for use in an automation system. Furlan teaches wherein the one or more neural networks comprises a first NN and a second NN, wherein the first NN is trained to map the object information to the one or more perception fields and the second NN was trained to map the object information to the one or more virtual physical model functions. (see at least Fig. 4-11 [0005-0009, 0046-0113]: creating a first neural network that takes in the image data from one or more cameras as well as data describing visual motion (optical flow) and outputs a set of 3D depth information, such as a voxel map, a particle distribution, or polygonal information; this set of 3D depth information also includes information regarding the movement of the 3D elements in 3D space. The first neural network is then trained through a machine learning algorithm such as neuroevolution or gradient descent using a large and varied automatically generated set of scenes and objects in those scenes until the output depth data and 3D movement data closely match the depth of the given scene. As the rendered images provided are generated using highly realistic rendering technologies (such as raytracing), the training enables the neural network to handle real visual data as well as the virtual generated data. A second neural network topology is then created having the output of the spatial memory submodule as an input and with an output of object boundary data. The second neural network topology is trained through a machine learning algorithm such as neuroevolution or gradient descent using a large and varied automatically generated set of scenes and objects in those scenes until the output object boundary data closely matches the object boundaries of the given scene.
As the rendered images provided are generated using highly realistic rendering technologies (such as raytracing), the training enables the neural network to handle real visual data as well as the virtual generated data. The simulated 3D scenes also have virtual forces applied to the objects to cause motion, and the data describing the forces applied may also be provided as input to the neural network to aid in training.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Pflug and Kozuka to provide a first NN and a second NN, wherein the first NN is trained to map the object information to the one or more perception fields and the second NN is trained to map the object information to the one or more virtual physical model functions, as taught by Furlan, with reasonable expectation of success to provide a system that can correctly detect objects to ensure successful autonomous system operation (Furlan [0002-0004]).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANA F ARTIMEZ whose telephone number is (571)272-3410. The examiner can normally be reached M-F: 9:00 am-3:30 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris S. Almatrahi can be reached at (313) 446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANA F ARTIMEZ/Examiner, Art Unit 3667
/FARIS S ALMATRAHI/Supervisory Patent Examiner, Art Unit 3667