DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is a Non-Final rejection on the merits of this application. Claims 1-20 are currently pending, as discussed below.
The Examiner notes that the rejections are based on the broadest reasonable interpretation of the claim language. Applicant is kindly invited to consider each reference as a whole. References are to be interpreted as they would be by one of ordinary skill in the art, rather than by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face, but what the reference would teach or suggest to one of ordinary skill in the art.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding Claim 1 (similarly claim 11), Applicant has apparently not described in the specification, in sufficient detail, by what algorithm(s) or by what steps/procedure he/she determined any or all of the “one or more virtual fields”, “virtual physical model”, and “virtual physical model is built based on one or more physical laws” that fall within the scope of the claims. The published specification of the instant application describes the following:
[0166] Step 3020 may be driven from the virtual physical model. For example—assuming that the virtual physical model represents objects as electromagnetic charges—the one or more virtual fields are virtual electromagnetic fields and the virtual force represents an electromagnetic force generated due to the virtual charges. For example—assuming that the virtual physical model is a mechanical model—then virtual force fields are driven from the acceleration of the objects.
The descriptions in the specification apparently do not specify/define:
(i) what the “virtual fields” are and, if the virtual fields are based on “virtual charges”, how the “virtual charges” are assigned to the “one or more objects located within the environment of a vehicle”, e.g. whether they are based on object position, size, mass, velocity, threat/risk level, importance, or something else; and how (in particular, by what process/algorithm) the electromagnetic fields and forces are computed and used, e.g. to predict object interactions or vehicle behavior;
(ii) how the “virtual force fields” are derived from “the acceleration of the objects”, in particular by what process/algorithm, and how they are assigned to the “one or more objects”; and
(iii) what the relationship is between the “virtual fields” and the “virtual physical model”, and how to construct a virtual physical model that is based on “one or more physical laws”; Applicant has apparently not described any criteria and/or requirements for selecting or combining one or more (and which) physical laws, and in particular how to combine them.
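Purely for illustration, and not as a characterization of Applicant's disclosure, the level of algorithmic detail found lacking in items (i)-(iii) above might resemble the following minimal sketch, in which each object is assigned a virtual charge and a Coulomb-like inverse-square law yields a repulsive virtual force on the ego vehicle. The charge assignment (proportional to object mass), the force law, and the coupling constant are all hypothetical assumptions supplied here for illustration; none of them appear in the specification.

```python
import math

# Hypothetical sketch only: assign each detected object a "virtual charge"
# and compute a Coulomb-like repulsive virtual force on the ego vehicle.
# The charge assignment (object mass) and the inverse-square force law are
# assumptions for illustration, not disclosed in the specification.

K = 1.0  # assumed coupling constant (not disclosed)

def virtual_charge(obj):
    """Assign a virtual charge to an object; here simply its mass."""
    return obj["mass"]

def virtual_force_on_ego(ego_pos, objects):
    """Superpose per-object inverse-square repulsive forces at ego_pos."""
    fx, fy = 0.0, 0.0
    for obj in objects:
        dx = ego_pos[0] - obj["pos"][0]
        dy = ego_pos[1] - obj["pos"][1]
        r2 = dx * dx + dy * dy
        if r2 == 0.0:
            continue  # coincident positions contribute no defined force
        r = math.sqrt(r2)
        magnitude = K * virtual_charge(obj) / r2
        fx += magnitude * dx / r  # unit vector pointing away from the object
        fy += magnitude * dy / r
    return fx, fy

objects = [
    {"mass": 2.0, "pos": (3.0, 0.0)},  # hypothetical vehicle ahead
    {"mass": 1.0, "pos": (0.0, 4.0)},  # hypothetical object to the side
]
force = virtual_force_on_ego((0.0, 0.0), objects)
```

A disclosure demonstrating possession would be expected to specify at least choices of this kind: how the charge is assigned, which force law applies, and how the resulting force is used to affect vehicle behavior.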
The specification fails to provide sufficient detail to demonstrate possession of the claimed invention, particularly the concept of a virtual physical model as broadly claimed. While illustrative examples involving electromagnetic charges/fields/forces and mechanical models are presented, there is no description of how virtual fields/forces are computed from any object information, how to select or apply a virtual model, or how to integrate the virtual fields to affect vehicle behavior or use them in a functional driving system. See MPEP 2161.01, I. and LizardTech, Inc. v. Earth Resource Mapping, Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005) cited therein ("Whether the flaw in the specification is regarded as a failure to demonstrate that the applicant possessed the full scope of the invention recited in [the claim] or a failure to enable the full breadth of that claim, the specification provides inadequate support for the claim under [§ 112(a)]"). Accordingly, the Examiner believes that Applicant has not demonstrated to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding Claim 5 (similarly claim 15), Applicant has apparently not described in the specification, in sufficient detail, by what algorithm(s) or by what steps/procedure he/she determined any or all of “virtual physical model is a mechanical model and the virtual fields are driven from accelerations of the objects” that fall within the scope of the claims. The specification fails to provide sufficient detail to demonstrate possession of the claimed invention, particularly the concept of a virtual physical model that is a mechanical model, as broadly claimed (e.g. Newtonian physics, a spring-mass system, Lagrangian mechanics, or something else). While illustrative examples involving electromagnetic charges/fields/forces and mechanical models are presented, there is no description of how virtual fields/forces are computed from any object information, how to select or apply a virtual model, or how to integrate the virtual fields to affect vehicle behavior or use them in a functional driving system. See MPEP 2161.01, I. and LizardTech, Inc. v. Earth Resource Mapping, Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005) cited therein ("Whether the flaw in the specification is regarded as a failure to demonstrate that the applicant possessed the full scope of the invention recited in [the claim] or a failure to enable the full breadth of that claim, the specification provides inadequate support for the claim under [§ 112(a)]"). Accordingly, the Examiner believes that Applicant has not demonstrated to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding Claims 7-9 (similarly claims 17-19), Applicant has apparently not described in the specification, in sufficient detail, by what algorithm(s) or by what steps/procedure any or all of the “spatial and temporal information” is extracted from the set of SIUs by any or all of a “convolutional neural network”, “transformer neural network” and/or “panoptic segmentation model”, because the specification fails to provide sufficient guidance or working examples of how any or all of the “convolutional neural network”, “transformer neural network” and/or “panoptic segmentation model” is/are configured, trained or applied to extract both spatial and temporal information from the sensed data (the filed specification [0132, 0410-0412, 0437-0439] merely restates the claim language). See MPEP 2161.01, I. and LizardTech, Inc. v. Earth Resource Mapping, Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005) cited therein ("Whether the flaw in the specification is regarded as a failure to demonstrate that the applicant possessed the full scope of the invention recited in [the claim] or a failure to enable the full breadth of that claim, the specification provides inadequate support for the claim under [§ 112(a)]"). Accordingly, the Examiner believes that Applicant has not demonstrated to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1 and 11 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. Because the claims are so broad as to cover any environment, any object type, and any form of virtual field modeling that includes any physical laws (e.g. mechanical laws), and given the limited guidance in the specification, a person having ordinary skill in the art at the time of the effective filing would not be able to make or use the claimed invention. For example, the specification and drawings do not appear to disclose by what algorithm(s), or by what steps or procedure, the “virtual field” is obtained, or based on what object information (e.g. the specification does not explain how virtual fields would differ based on object type, environmental condition, or differing vehicle behavior), and Applicant further provides no working examples of obtaining virtual fields based on (any, and which) physical laws. The claims are drafted so broadly as to encompass any environment, any object type/category, and any form of virtual field modeling, including any physical laws, while the disclosure does not explain how virtual fields would differ based on the object type/category/size or the like for affecting a vehicle’s behavior. In the absence of adequate guidance or working examples, a person having ordinary skill in the art would be required to engage in undue experimentation to practice the claimed invention. See MPEP 2164.01(a).
The dependent claims are also rejected under 35 U.S.C. 112, first paragraph, by virtue of their dependence upon the rejected independent claims.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding Claim 1 (similarly claim 11), the recited limitation “virtual physical model” is indefinite and not reasonably certain, with indeterminate metes and bounds, from the teachings of the specification, because it is unclear what the “virtual physical model” is (e.g. the published specification [0166] merely describes examples of electromagnetic models or mechanical models, not limitations), yet the claim allows for any model that is built on one or more physical laws; hence this limitation renders the claim indefinite.
Regarding Claim 1 (similarly claim 11), the recited limitation “potential impact on a behavior of the vehicle” is indefinite because it is unclear what constitutes a “potential impact on a behavior of the vehicle”, for example, a predicted change in velocity, a risk of collision, a path deviation likelihood, or something else; and what defines the behavior of the vehicle, e.g. does it include short-term braking, steering, lane selection, long-term route regeneration, or something else. Hence this limitation renders the claim indefinite.
Regarding Claim 2 (similarly claim 12), the recited limitation “receiving the set of SIUs, and (b) extracting the spatial and temporal information from the set of SIUs” is indefinite because it is unclear and confusing to the Examiner who or what is “receiving” the set of SIUs and what “receiving the set of SIUs” means (e.g. receiving information/signals from sensors such as a camera, lidar or radar, and whether this information is raw sensor data, pre-processed data or something else), or whether the system is receiving sensor units. Further, it is also unclear what specific spatial and temporal information is needed/extracted. Hence this limitation renders the claim indefinite. For purposes of examination, the Examiner is interpreting this limitation as: “receiving information from sensing units and extracting spatial and temporal information from the sensing units”.
Regarding Claim 4 (similarly claim 14), the recited limitation “triggering executing of further processing of the one or more virtual fields to impact a navigation of the vehicle” is indefinite because it is unclear and confusing to the Examiner (e.g. is it “triggering execution of further processing” or “triggering further processing”?), and it is not reasonably certain, with indeterminate metes and bounds, from the teachings of the specification what may fall within the scope of “impact a navigation of the vehicle” (e.g. what aspect(s) of navigation is/are impacted: path planning, control, or something else); hence, this claim limitation renders the claim indefinite.
Regarding Claim 5 (similarly claim 15), the recited limitation “virtual physical model is a mechanical model and the virtual fields are driven from accelerations of the objects” is indefinite and not reasonably certain, with indeterminate metes and bounds, from the teachings of the specification, because it is unclear what may or may not fall within the scope of “mechanical model”, and it is unclear and confusing to the Examiner how the “virtual fields are driven from accelerations of the objects” (e.g. does it mean calculated from, derived from, or influenced by the acceleration of the objects?); hence, this limitation renders the claim indefinite.
The dependent claims are also rejected under 35 U.S.C. 112, second paragraph, by virtue of their dependence upon the rejected independent claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1 – YES
Claim 1 is directed to a method; and claim 11 is directed to a non-transitory computer-readable medium. Therefore, claims 1 and 11 are within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. The other analogous claim(s) 11 is/are rejected for the same reasons as the representative claim 1 as discussed here. Claim 1 recites:
A method that is computer implemented and is for augmented driving related virtual fields, the method comprises:
obtaining object information regarding one or more objects located within an environment of a vehicle; wherein the object information comprises spatial and temporal information extracted from a set of sensed information units (SIUs) of the environment of the vehicle that were acquired at different points in time; and
determining, by a processing circuit, and based on the object information, one or more virtual fields of the one or more objects, wherein the determining of the one or more virtual fields is based on a virtual physical model, wherein the one or more virtual fields represent a potential impact of the one or more objects on a behavior of the vehicle, wherein the virtual physical model is built based on one or more physical laws.
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, “determining,…based on the object information…a potential impact of the one or more objects…” in the context of this claim encompasses a person (e.g. a driver) observing the vehicle’s surroundings (e.g. using surrounding information) and forming a simple judgment, either mentally or using pen and paper; e.g. a nearby vehicle swerving into the driver’s own lane would cause the driver to swerve away (or be pushed away) in order to avoid an accident if the two vehicles are relatively close to each other, or a driver observing an upcoming sharp turn will slow down and turn; i.e. a mental process of judgment based on observation, as recited in the originally filed specification: “[0050] Although successful driving is contingent upon circumnavigating surrounding road objects based on their location and movement, humans are notoriously bad at estimating kinematics. We suspect that humans employ an internal representation of surrounding objects in the form of virtual force fields that immediately imply action, thus circumventing the need for kinematics estimation. Consider a scenario in which the ego vehicle drives in one lane and a vehicle diagonally in front in an adjacent lane starts swerving into the ego lane. The human response to brake or veer off would be immediate and instinctive and can be experienced as a virtual force repelling the ego from the swerving vehicle. This virtual force representation is learned and associated with the specific road object.”
The Examiner would also note MPEP 2106.04(a)(2)(III): The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions. Here, the determination is a form of making an evaluation and judgment based on observation. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
A method that is computer implemented and is for augmented driving related virtual fields, the method comprises:
obtaining object information regarding one or more objects located within an environment of a vehicle; wherein the object information comprises spatial and temporal information extracted from a set of sensed information units (SIUs) of the environment of the vehicle that were acquired at different points in time; and
determining, by a processing circuit, and based on the object information, one or more virtual fields of the one or more objects, wherein the determining of the one or more virtual fields is based on a virtual physical model, wherein the one or more virtual fields represent a potential impact of the one or more objects on a behavior of the vehicle, wherein the virtual physical model is built based on one or more physical laws.
For the following reason(s), the examiner submits that the above identified limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitation of “obtaining object information…”, the examiner submits that this limitation is an insignificant extra-solution activity that merely uses a computer (processing circuit) to perform the process. In particular, the obtaining step is recited at a high level of generality (i.e. as a general means of acquiring data) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Lastly, the claim(s) further recite a “processing circuit”, which merely describes how to generally “apply” the otherwise abstract ideas and/or additional limitations in a generic or general-purpose computer environment, where the processor is recited as a generic processor performing a generic computer function of acquiring data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component and merely automates the steps.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, regarding the additional limitation of the “processing circuit”, the examiner submits that the processing circuit is recited at a high level of generality (i.e. as a generic computer component performing generic calculations) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept. And, as discussed above, the other additional limitations are insignificant extra-solution activities.
As explained, the additional elements are recited at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. See, e.g., MPEP § 2106.05; Alice Corp. v. CLS Bank Int’l, 573 U.S. 208, 223 (2014) (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention”); Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016) (selecting information for collection, analysis and display constitutes insignificant extra-solution activity); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016) (generating a second menu from a first menu and sending the second menu to another location as performed by generic computer components). Hence, the claims are not patent eligible.
Dependent Claims
Dependent claims 2-10 and 12-20 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-10 and 12-20 are not patent eligible under the same rationale as provided in the rejection of claim 1.
As such, claims 1-20 are rejected under 35 U.S.C. § 101 as being drawn to an abstract idea without significantly more, and thus are ineligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-6 and 11-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pflug (US 2014/0067206 A1).
Regarding claim 1 (similarly claim 11), Pflug teaches A method that is computer implemented and is for augmented driving related virtual fields (see at least Figs. 1-3, Abstract), the method comprises:
obtaining object information regarding one or more objects located within an environment of a vehicle; wherein the object information comprises spatial and temporal information extracted from a set of sensed information units (SIUs) of the environment of the vehicle that were acquired at different points in time; (see at least Figs. 1-3 [0015-0058]: the vehicle includes an imaging system or vision system that captures images exterior of the vehicle. The system is operable continuously as the vehicle is driven along the road. Thus, the system is always collecting environmental data which are fed into the influence mapping. Further, the system is recapitulating the current state in time slots (fractions of seconds long) and reevaluating the situation (by the influence map). During the milliseconds that are progressing an earlier as optimal laid out collision avoidance path may become abandoned and a better one at that state of time may be selected as the preferred or optimal path since the other traffic participants may act at least in part different than assumed earlier or objects that weren't detected previously may come into view of the sensors of the subject vehicle.) and
determining, by a processing circuit, and based on the object information, one or more virtual fields of the one or more objects (see at least Figs. 2-3), wherein the determining of the one or more virtual fields is based on a virtual physical model, wherein the one or more virtual fields represent a potential impact of the one or more objects on a behavior of the vehicle, wherein the virtual physical model is built based on one or more physical laws. (see at least Figs. 1-3 [0015-0058]: One solution for determining avoidance paths that may be optimal or semi optimal may be to handle the subject vehicle and all foreign objects/vehicles as being like a marble having an influence value rolling or gliding over the influence map which influence values determining the heights (relate to according elevations and valleys). The marbles may have an assumed mass ‘m’ exposed to an assumed gravity ‘g’ and an inherent inertia. When in motion already (according to the speed vectors ( 7 ) in FIG. 2), there may be an assumed kinetic energy inherent to each marble. By that the marble may be turned away and slowed down when running into the direction of an elevation and may be turned to and accelerated when heading into a valley or when a faster marble closes up from behind, which may cause the map to rise in that region. Due to the influence of each object or vehicle, the influence map under the marble may change continuously while the marble glides or rolls. More specifically, the marble's acceleration/deceleration force and direction due to the slope of its resting surface at a specific point in the map may be calculated by superpositioning each surrounding objects influence contribution. Each object's contribution may be added to one common at a certain point such as can be seen in FIG. 3. The normal and by that the accelerating component of g of each object (marble) may be calculated accordingly.
At a certain point in time, each marble may have a certain inherent inertia and acceleration. By that it is possible to presume each marble's new position and speed (or inertia) at a sequential time increment. This is already sufficient to run a basic conflict and/or collision avoidance. The nature of the marbles system will be to deflect from the most influencing objects in the local surrounding. The paths which will be gone in (near) future are mostly determined by the current influence map landscape. The higher an influence level in an area is the more it influences the future course.)
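Purely as an illustrative aid, and not asserted to be Pflug's actual implementation, the superposition of per-object influence contributions and the slope-driven acceleration of the ego "marble" described in the cited passages could be sketched as follows; the particular decaying influence function and the constants are assumptions chosen for illustration only.

```python
# Illustrative sketch of the influence-map mechanics described in the
# cited Pflug passages: each object's influence contribution is summed
# (superpositioned) at a map point, and the local slope of the resulting
# "landscape" accelerates the ego "marble" away from elevations.
# The influence function and constants are assumptions, not from Pflug.

G = 9.81  # assumed gravity acting on the "marble"

def influence_at(point, objects):
    """Sum each object's influence contribution at a map point."""
    total = 0.0
    for obj in objects:
        dx = point[0] - obj["pos"][0]
        dy = point[1] - obj["pos"][1]
        # assumed decaying influence, scaled by the object's influence level
        total += obj["level"] / (1.0 + dx * dx + dy * dy)
    return total

def slope_acceleration(point, objects, eps=1e-4):
    """Numerical gradient of the map; the marble accelerates downhill."""
    ix0 = influence_at((point[0] - eps, point[1]), objects)
    ix1 = influence_at((point[0] + eps, point[1]), objects)
    iy0 = influence_at((point[0], point[1] - eps), objects)
    iy1 = influence_at((point[0], point[1] + eps), objects)
    # negative gradient: turned away from elevations, toward valleys
    return (-G * (ix1 - ix0) / (2 * eps), -G * (iy1 - iy0) / (2 * eps))

objects = [
    {"level": 5.0, "pos": (2.0, 0.0)},  # cf. object II, influence level 5
    {"level": 7.0, "pos": (0.0, 3.0)},  # cf. object III, influence level 7
]
accel = slope_acceleration((0.0, 0.0), objects)
```

The signs of the resulting acceleration reflect the cited behavior: the marble at the origin is turned away from the elevations raised by both surrounding objects.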
Regarding claim 2 (similarly claim 12), Pflug teaches The method according to claim 1, wherein the obtaining of the object information comprises:
Pflug further teaches (a) receiving the set of SIUs, and (b) extracting the spatial and temporal information from the set of SIUs. (see at least Figs. 1-3 [0015-0058]: the vehicle includes an imaging system or vision system that captures images exterior of the vehicle. The system is operable continuously as the vehicle is driven along the road. Thus, the system is always collecting environmental data, which are fed into the influence mapping. Further, the system is recapitulating the current state in time slots (fractions of seconds long) and reevaluating the situation (by the influence map). As the milliseconds progress, a collision avoidance path earlier laid out as optimal may be abandoned and a better one at that state of time may be selected as the preferred or optimal path, since the other traffic participants may act at least in part differently than assumed earlier, or objects that weren't detected previously may come into view of the sensors of the subject vehicle.)
Regarding claim 3 (similarly claim 13), Pflug teaches The method according to claim 1,
Pflug further teaches comprising determining, by the processing circuit, a total virtual force applied on the vehicle, based on the one or more virtual fields. (see at least Figs. 1-3 [0015-0058]: Each object's contribution may be added to one common sum at a certain point, such as can be seen in FIG. 3. The exemplary influence map in FIG. 3 shows the influence of an object or vehicle ‘II’ with a speed vector to the left with an influence level of the value 5 (5 rings) and an object or vehicle ‘III’ with a speed vector to the right with an influence level of the value 7 (7 rings), which influence areas mostly irradiate circumferentially and into the direction of the speed vector. The influence level of the objects II and III at the edges of the triangle that the object to be calculated (under test) is resting on can be calculated by counting the number of rings (and by that the influence value) the specific point or area or region is enclosed in. By summing up the influence of both other objects, the triangle has two edges with the height of 3 and one with the level 2. By that, the triangle's normal is tilted to the upper left from upright (and by that the slope of the triangle will be to the upper left). When simulating the next time increment, object I is accelerated in the upper left direction. In this example, the triangle is chosen quite wide for purposes of illustration. The triangle may preferably be chosen in an infinitesimally small manner and the influence calculated not in INTEGER by counting rings but in FLOAT by equation (1) to match the normal vector n more precisely.)
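For illustration only, the summed-height example in the cited passage (two points of the support triangle at height 3, one at height 2) can be checked numerically; the vertex coordinates below are hypothetical, and the summed influence levels are assigned to vertices rather than edges for simplicity:

```python
# Hypothetical support triangle under the object being calculated; the
# z-coordinates are the summed influence contributions from the passage.
verts = [
    (0.0, 1.0, 3.0),     # upper vertex, summed influence 3
    (-0.87, -0.5, 3.0),  # lower-left vertex, summed influence 3
    (0.87, -0.5, 2.0),   # lower-right vertex, summed influence 2
]

def normal(a, b, c):
    """Upward normal of triangle a-b-c via the cross product of two edges."""
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    return n if n[2] > 0 else (-n[0], -n[1], -n[2])  # orient upward

n = normal(*verts)
# The horizontal component of n points toward the low (height-2) corner:
# the normal is tilted from upright toward the low side, which is the
# direction in which the resting marble is accelerated downhill.
```

This reproduces the qualitative result the passage states: unequal summed heights tilt the triangle's normal away from upright, and the tilt direction gives the total virtual force direction on the object.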
Regarding claim 4 (similarly claim 14), Pflug teaches The method according to claim 1,
Pflug further teaches wherein the determining of the one or more virtual fields triggering executing of further processing of the one or more virtual fields to impact a navigation of the vehicle. (see at least Figs. 1-3)
Regarding claim 5 (similarly claim 15), Pflug teaches The method according to claim 1,
Pflug further teaches wherein the virtual physical model is a mechanical model and the virtual fields are driven from accelerations of the objects. (see at least Fig. 1-3 [0015-0058]: One solution for determining avoidance paths that may be optimal or semi optimal may be to handle the subject vehicle and all foreign objects/vehicles as being like a marble having an influence value rolling or gliding over the influence map, which influence values determine the heights (relating to corresponding elevations and valleys). The marbles may have an assumed mass ‘m’ exposed to an assumed gravity ‘g’ and an inherent inertia. When in motion already (according to the speed vectors (7) in FIG. 2), there may be an assumed kinetic energy inherent to each marble. By that the marble may be turned away and slowed down when running into the direction of an elevation and may be turned to and accelerated when heading into a valley or when a faster marble closes up from behind, which may cause the map to rise in that region. Due to the influence of each object or vehicle, the influence map under the marble may change continuously while the marble glides or rolls. More specifically, the marble's acceleration/deceleration force and direction due to the slope of its resting surface at a specific point in the map may be calculated by superpositioning each surrounding object's influence contribution. Each object's contribution may be added to one common sum at a certain point, such as can be seen in FIG. 3. The normal, and by that the accelerating component of g of each object (marble), may be calculated accordingly. At a certain point in time, each marble may have a certain inherent inertia and acceleration. By that it is possible to presume each marble's new position and speed (or inertia) at a sequential time increment. This is already sufficient to run a basic conflict and/or collision avoidance.
The nature of the marble system will be to deflect from the most influencing objects in the local surrounding. The paths that will be taken in the (near) future are mostly determined by the current influence map landscape. The higher an influence level in an area is, the more it influences the future course.)
Regarding claim 6 (similarly claim 16), Pflug teaches The method according to claim 1,
Pflug further teaches wherein the set of SIUs are images, each image comprising multiple pixels. (see at least Fig. 1-3 [0015-0058]: The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, an array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (preferably a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array.)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Pflug in view of Yamada et al. (US 2020/0160043 A1 hereinafter Yamada).
Regarding claim 7 (similarly claim 17), Pflug teaches The method according to claim 1,
Pflug does not explicitly teach wherein the spatial and temporal information is extracted from the set of SIUs by a convolutional neural network (CNN).
Yamada is directed to an image data generation device using a CNN. Yamada teaches wherein the spatial and temporal information is extracted from the set of SIUs by a convolutional neural network (CNN). (see at least Fig. 1, Abstract, [0024-0083]: The image recognition device 1 illustrated in FIG. 1(a) is an in-vehicle device, and includes a spatio-temporal image data generation unit 2 configured to generate image data for image recognition, and a CNN unit 3 configured to execute an image recognition process by means of artificial intelligence using deep learning. The image recognition device 1 analyzes moving-image data output from an in-vehicle camera and image-recognizes the presence or absence of a pedestrian outside the vehicle and classification of an operating state (right upright, right walking, left upright, left walking, and the like).)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Pflug's driver assistance system to incorporate the technique of extracting spatial and temporal information from sensing information by a convolutional neural network, as taught by Yamada, with a reasonable expectation of success, because CNNs are highly efficient at processing data from cameras, lidar, and radar to extract spatial features in real time, and incorporating a CNN in Pflug's system would further improve the processing efficiency of the risk confidence map and allow the vehicle to more effectively avoid collisions.
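For context only: the spatial feature extraction that Yamada attributes to the CNN rests on the 2D convolution operation, which can be sketched in a few lines. The toy image and kernel below are hypothetical and do not reproduce Yamada's actual network:

```python
# A 3x3 vertical-edge kernel applied to a toy 5x5 "image" illustrates how
# a convolutional layer extracts a spatial feature (here, an edge).
image = [
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]
kernel = [  # Sobel-like vertical edge detector
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
]

def conv2d(img, k):
    """Valid-mode 2D convolution (cross-correlation, as in CNN layers)."""
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            acc = sum(img[i + di][j + dj] * k[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

feat = conv2d(image, kernel)
# The response is strong over the boundary between the 0 and 1 regions
# and zero over the uniform region: the layer has localized the edge.
```

A trained CNN such as Yamada's learns many such kernels (and, for spatio-temporal data, applies them across stacked frames) rather than using a fixed hand-chosen one.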
Claim(s) 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pflug in view of Roy (US 2021/0042557 A1).
Regarding claim 8 (similarly claim 18), Pflug teaches The method according to claim 1,
Pflug does not explicitly teach wherein the spatial and temporal information is extracted from the set of SIUs by a transformer neural network (TNN).
Roy is directed to a system and method for mapping and modeling a three-dimensional environment. Roy teaches wherein the spatial and temporal information is extracted from the set of SIUs by a transformer neural network (TNN). (see at least Fig. 1, Abstract, [0006-0013]: The point cloud data of some embodiments may be captured by a LIDAR sensor, where the program code instructions to classify objects in the environment as dynamic objects or static objects based on the modeled voxel-wise temporal changes may include program code instructions to employ a spatial transformer network to distinguish between LIDAR sensor movement and object movement. Embodiments may include program code instructions to generate a three-dimensional surface model of the environment including objects classified as static objects and excluding objects classified as dynamic objects. Embodiments may include program code instructions to employ the three-dimensional surface model of the environment to facilitate autonomous vehicle control.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Pflug’s driver assistance system to incorporate the technique of extracting spatial and temporal information from sensing information by a transformer neural network as taught by Roy with reasonable expectation of success such that the vehicle can accurately perceive its environment to distinguish static and dynamic objects for use in vehicle control.
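For illustration only, the voxel-wise static/dynamic split that Roy describes can be sketched minimally; the voxel coordinates and frame format below are hypothetical and stand in for the learned classification Roy actually performs:

```python
# Occupied voxels observed at three successive time steps; one object
# moves one voxel per step while two voxels stay occupied throughout.
frames = [
    {(0, 0, 0), (1, 0, 0), (5, 2, 0)},  # t0
    {(0, 0, 0), (1, 0, 0), (6, 2, 0)},  # t1: third object has moved
    {(0, 0, 0), (1, 0, 0), (7, 2, 0)},  # t2: and moved again
]

def classify_voxels(frames):
    """Voxels occupied in every frame are static; all other observed
    voxels show temporal change and are classified as dynamic."""
    ever = set().union(*frames)
    always = set.intersection(*frames)
    return always, ever - always

static, dynamic = classify_voxels(frames)
# A surface model per Roy would then be built from `static` only,
# excluding the `dynamic` voxels.
```

The sketch captures only the classification criterion (temporal change in voxel occupancy), not Roy's spatial transformer network, which additionally separates sensor ego-motion from object motion.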
Claim(s) 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pflug in view of Rahimpour et al. (US 2022/0306088 A1 hereinafter Rahimpour).
Regarding claim 9 (similarly claim 19), Pflug teaches The method according to claim 1,
Pflug does not explicitly teach wherein the spatial and temporal information is extracted from the set of SIUs by a panoptic segmentation model.
Rahimpour is directed to a system and method for identifying objects from image data collected by vehicle sensors. Rahimpour teaches wherein the spatial and temporal information is extracted from the set of SIUs by a panoptic segmentation model. (see at least Fig. 4-5 [0048-0061]: The computer 110 can determine whether the current environment is an urban environment or a rural environment based on a semantic segmentation model that classifies pixels in the thermal image 300 and classifies the environment based on the classified pixels, e.g., an Efficient Panoptical Segmentation model. The computer 110 can predict the trajectory 500 of the object 200 with an optical flow algorithm applied to a plurality of thermal images 300. The optical flow algorithm is programming of the computer 110 that determines movement of corresponding pixels in a plurality of images to predict motion of an object 200 captured by the pixels. The optical flow algorithm uses an intensity I(x, y, t) for a pixel x, y for a thermal image 300 captured at a time t to determine a direction of travel of the pixel, represented as partial derivatives with respect to the spatial coordinates I.sub.x, I.sub.y moving at speed V.sub.x, V.sub.y in the spatial directions, and a change of intensity of the pixel over time, represented as the partial derivative with respect to time.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Pflug’s driver assistance system to incorporate the technique of extracting spatial and temporal information from sensing information by a panoptic segmentation model as taught by Rahimpour with reasonable expectation of success such that the vehicle can identify and avoid objects on a roadway more quickly and with fewer computations than a conventional image processing technique (Rahimpour [0035]).
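For context only, the optical flow relation underlying the passage quoted above is the brightness constancy constraint I_x·V_x + I_y·V_y + I_t = 0. It can be verified numerically; the intensity ramp and velocity below are hypothetical values chosen so the finite-difference derivatives are exact:

```python
def I(x, y, t, vx=2.0, vy=1.0):
    """A linear intensity ramp translating at (vx, vy):
    I = 3*(x - vx*t) + 5*(y - vy*t)."""
    return 3.0 * (x - vx * t) + 5.0 * (y - vy * t)

h = 1e-3
x, y, t = 4.0, 2.0, 0.5

# Central finite differences for the partial derivatives in the passage.
Ix = (I(x + h, y, t) - I(x - h, y, t)) / (2 * h)  # I_x, spatial derivative
Iy = (I(x, y + h, t) - I(x, y - h, t)) / (2 * h)  # I_y, spatial derivative
It = (I(x, y, t + h) - I(x, y, t - h)) / (2 * h)  # I_t, temporal derivative

# Brightness constancy: the derivatives and the true pixel velocity
# (V_x, V_y) = (2, 1) satisfy I_x*V_x + I_y*V_y + I_t = 0.
residual = Ix * 2.0 + Iy * 1.0 + It
```

In practice an optical flow algorithm runs this relation in reverse: it measures I_x, I_y, I_t from successive frames and solves for the unknown pixel velocity, which is how Rahimpour's computer predicts the object trajectory.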
Claim(s) 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pflug in view of Fong et al. (US 2010/0328055 A1 hereinafter Fong).
Regarding claim 10 (similarly claim 20), Pflug teaches The method according to claim 1,
Pflug does not explicitly teach wherein the spatial and temporal information is extracted from the set of SIUs by a segmentation and tracking module.
Fong is directed to a detection system for assisting a driver of a vehicle. Fong teaches wherein the spatial and temporal information is extracted from the set of SIUs by a segmentation and tracking module. (see at least Fig. 8 [0043-0046]: Accuracy can be improved by performing spatial segmentation in each frame. However, to reduce computational complexity and improve data processing speed, the system 5 adaptively performs spatial 77, 78 and temporal segmentation 82 and fusion 83 when necessary by detecting the variations between successive frames. The temporal segmentation module 82 is used to detect and track moving objects such that any object moving towards the vehicle at a speed higher than that of the vehicle's current speed can be identified with its movement tracked and compared with the previous frame; object boundaries with pixel accuracy can be estimated in the process.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Pflug’s driver assistance system to incorporate the technique of extracting spatial and temporal information from sensing information by a segmentation and tracking module as taught by Fong with reasonable expectation of success because it is desirable to enhance or augment visual information provided to the driver to highlight the existence of potential hazards and direct the focus of the driver in order to maintain safe driving (Fong [0003]).
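For illustration only, frame differencing is one simple way to realize the temporal segmentation Fong describes (detecting variations between successive frames); the toy frames and threshold below are hypothetical and do not reproduce Fong's actual module:

```python
# Two successive toy intensity frames: a bright 2x2 object shifts one
# pixel to the right between them.
prev = [
    [10, 10, 10, 10],
    [10, 90, 90, 10],
    [10, 90, 90, 10],
    [10, 10, 10, 10],
]
curr = [
    [10, 10, 10, 10],
    [10, 10, 90, 90],
    [10, 10, 90, 90],
    [10, 10, 10, 10],
]

def motion_mask(a, b, thresh=30):
    """Binary mask of pixels whose intensity changed by more than
    `thresh` between the two frames (candidate moving-object pixels)."""
    return [[1 if abs(pa - pb) > thresh else 0
             for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

mask = motion_mask(prev, curr)
# The mask marks the trailing and leading columns of the object as it
# shifts right, locating its motion with pixel accuracy.
```

A tracking stage would then associate such masks across frames to estimate each object's velocity relative to the vehicle, as in Fong's comparison with the previous frame.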
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANA F ARTIMEZ whose telephone number is (571)272-3410. The examiner can normally be reached M-F: 9:00 am-3:30 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris S. Almatrahi can be reached at (313) 446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANA F ARTIMEZ/ Examiner, Art Unit 3667
/FARIS S ALMATRAHI/ Supervisory Patent Examiner, Art Unit 3667