DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Claim Rejections - 35 USC § 112:
Applicant's arguments filed 11/12/2025 have been fully considered but they are not persuasive. The claim 2 limitation "wherein the observed states of the scene at least two different scenes each selected from a different one of objects, road shapes, or environmental conditions" is not supported in Applicant's specification as filed. Applicant argues that the specification recitation at [0089], "The system receives observations of the scene and objects in the surrounding environment from one or more sensors," provides support. Examiner respectfully disagrees. The specification recites receiving observations of the scene and objects, but does not explicitly disclose at least two different scenes each selected from a different one of objects, road shapes, or environmental conditions. The rejection of claim 2 is maintained.
Claim Rejections - 35 USC § 101:
Applicant's arguments filed 11/12/2025 with respect to independent claim 1 have been fully considered and are persuasive. The amendment to independent claim 1 adds a practical application to the abstract idea, overcoming the rejection (i.e., performing safety actions with the electronic control unit based on the notification of step d, wherein the safety action comprises informing a control system, thereby allowing the control system to take action for controlling the automated vehicle). The rejection of claims 1-5 and 7-9 has been withdrawn.
Claim Rejections - 35 USC § 103:
Applicant's arguments filed 11/12/2025 have been fully considered but they are not persuasive. In response to applicant’s argument that there is nothing in either Xiang et al or Kamenev et al to suggest the combination of the teachings of the two documents, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case,
Kamenev et al. (US 20210295171 A1; hereinafter Kamenev) cures the deficiencies of Xiang et al. (US 20220032955 A1; hereinafter Xiang).
Xiang discloses a method for detecting inconsistencies in the observations from perception systems and perception sensors of an automated vehicle as described in the claimed invention ([0100]), which comprises the steps of:
a. receiving and storing the observed states of the scene and environment from perception systems and sensors [0083], wherein the observed states include at least one of the object positions, velocities, accelerations, headings, road shapes, or environmental conditions [0039].
b. calculating, at a given timestamp, the boundaries of one or more possible states of a previously observed object [0083] based on any of the previously observed bounding box, velocity, acceleration, and heading of the object [0085].
Kamenev is relied upon to cure the deficiencies of Xiang. Specifically, Kamenev teaches:
c. determining whether an estimated state of the object at the given timestamp stays within the calculated boundaries obtained in step b [0041].
d. sending a notification to the electronic control unit when the estimated state does not stay within a calculated boundary [0042].
e. performing safety actions with the electronic control unit based on the notification of step d, wherein the safety action comprises informing a control system, thereby allowing the control system to take action for controlling the automated vehicle [0049].
For these reasons, the rejection is maintained.
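For illustration only, the plausibility check recited in steps b through d above can be sketched in code. The following sketch is not drawn from either cited reference or from Applicant's disclosure; the one-dimensional model, the acceleration limit, and all names are assumptions introduced solely to illustrate the bounding-and-checking logic.

```python
from dataclasses import dataclass

# Illustrative sketch of steps b-d: bound the positions reachable by a
# previously observed object given an assumed acceleration limit, then
# flag an estimated state that falls outside those bounds.
# A_MAX is an assumed value for illustration only.
A_MAX = 8.0  # assumed maximum acceleration magnitude, m/s^2


@dataclass
class State:
    position: float  # m
    velocity: float  # m/s


def reachable_bounds(prev: State, dt: float):
    """Step b: min/max position reachable dt seconds after observation."""
    lo = prev.position + prev.velocity * dt - 0.5 * A_MAX * dt * dt
    hi = prev.position + prev.velocity * dt + 0.5 * A_MAX * dt * dt
    return lo, hi


def is_consistent(prev: State, estimated_position: float, dt: float) -> bool:
    """Step c: does the estimated state stay within the calculated bounds?"""
    lo, hi = reachable_bounds(prev, dt)
    return lo <= estimated_position <= hi


def check_and_notify(prev: State, estimated_position: float, dt: float):
    """Step d: return a notification string when the estimate is implausible."""
    if not is_consistent(prev, estimated_position, dt):
        return "inconsistency: estimated state outside calculated boundary"
    return None
```

In this toy model, an object last seen at 10 m moving at 5 m/s can only be between roughly 10.46 m and 10.54 m after 0.1 s, so an estimate of 12 m would trigger the notification of step d.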
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 2 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 2 recites the limitation "wherein the observed states of the scene at least two different scenes each selected from a different one of objects, road shapes, or environmental conditions," which is not supported in Applicant's specification as filed. The specification recites, at [0089], "The system receives observations of the scene and objects in the surrounding environment from one or more sensors," but does not explicitly disclose at least two different scenes each selected from a different one of objects, road shapes, or environmental conditions.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Xiang et al. (US 20220032955 A1; hereafter Xiang) in view of Kamenev et al. (US 20210295171 A1; hereafter Kamenev).
Regarding claim 1, Xiang teaches a computer-implemented method for detecting inconsistencies in the observations (see at least, [0100] The abnormality detection unit G4 performs the above-described processing on all the objects detected by the diagnosis target sensor, and, based on the number of abnormality determinations) from perception systems and sensors including at least one of a camera, radar, and LIDAR (see at least, [0040] the surrounding monitor sensor 11 includes a front
camera 11A, a millimeter-wave radar 11B, a LiDAR 11C) of an automated vehicle executed on an electronic control unit (see at least, Fig 2, [0044] the object recognition processing based on the observation data generated by the surrounding monitoring sensor 11 may be performed by an
ECU (Electronic Control Unit) outside the sensor such as the automatic drive device 20) having one or more processors and a memory storing executable instructions (see at least, [0168] The control unit
…may be also realized by a dedicated computer which constitutes a processor programmed to execute one or more functions realized by computer programs), which comprises the steps of:
a. receiving and storing the observed states of the scene and environment from perception systems and sensors (see at least, [0083] The diagnostic material acquisition unit…acquires information…for determining whether or not an abnormality has occurred in any of the field recognition systems, that is, the surrounding monitoring sensor…and stores it in the recognition result holding unit), wherein the observed states include at least one of the object positions, velocities, accelerations, headings, road shapes, or environmental conditions (see at least, [0039] surrounding monitoring sensor…generates, as observation data…data indicating…the relative speed for each detection direction and distance, or data indicating the relative position and reception intensity of a detected object),
b. calculating, at a given timestamp, the boundaries of one or more possible states of a previously observed object (see at least, [0083] data of the same type having different acquisition time points are sorted and saved in chronological order so that the latest data is at the top…Various data are configured so that the order of acquisition can be identified, such as adding a time stamp corresponding to the acquisition time) based on any of the previously observed bounding box, velocity, acceleration, and heading of the object (see at least, [0085] The correspondence identification unit G3 associates an object detected at a previous time with an object detected at the next time based on the position and moving speed of the detected object).
Xiang does not explicitly teach,
c. determining whether an estimated state of the object at the given timestamp stays within the calculated boundaries obtained in step b,
d. sending a notification to the electronic control unit when the estimated state does not stay within a calculated boundary,
e. performing safety actions with the electronic control unit based on the notification of step d wherein the safety action comprises informing a control system thereby allowing the control system to take action for controlling the automated vehicle.
However, Kamenev teaches these limitations:
c. determining whether an estimated state of the object at the given timestamp stays within the calculated boundaries obtained in step b (see at least, [0041] The confidence field…from the preceding time slice may then be used to determine the locations of the actors at that time slice (e.g., T_(n−1))),
d. sending a notification to the electronic control unit when the estimated state does not stay within a calculated boundary (see at least, [0042] The bounding shape may then be used as a mask for the vector field…the same time slice to determine which vectors to use for finding a location of a corresponding actor…another actor…is not located at the prior time slice using the vector field),
e. performing safety actions with the electronic control unit based on the notification of step d (see at least, [0049] the outputs of the trajectory generator…may be transmitted or applied to the drive stack…of the vehicle…may be used…in performing one or more operations…e.g., obstacle avoidance) wherein the safety action comprises informing a control system thereby allowing the control system to take action for controlling the automated vehicle (see at least, [0049] the trajectories may be used by the autonomous vehicle 600 in performing one or more operations…obstacle avoidance…used by the drive stack 128 of the autonomous vehicle 600, such as an autonomous machine software stack executing on one or more components of the vehicle 600 (e.g., the SoC(s) 604, the CPU(s) 618, the GPU(s) 620)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Xiang to include determining whether an estimated state of the object at the given timestamp stays within the calculated boundaries, sending a notification to the electronic control unit when the estimated state does not stay within a calculated boundary, and performing safety actions, wherein the safety action comprises informing a control system thereby allowing the control system to take action for controlling the automated vehicle, as taught by Kamenev, in order to avoid collisions based on the locations of the drivable paths defined by avoiding detected obstacles (Kamenev, [0053]).
Regarding Claim 2, the combination of Xiang and Kamenev teaches the inconsistency detection method of claim 1. Kamenev further teaches wherein the observed states of the scene at least two different scenes each selected from a different one of objects, road shapes, or environmental conditions, or combinations thereof (see at least, [0029] FIG. 2B, for a time slice at a time, T_2, actors 210A-210G may be detected at locations within the environment).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Xiang to include the observed states of the scene at least two different scenes each selected from a different one of objects, road shapes, or environmental conditions, or combinations thereof, as taught by Kamenev, in order to avoid collisions based on the locations of the drivable paths defined by avoiding detected obstacles (Kamenev, [0053]).
Regarding Claim 3, the combination of Xiang and Kamenev teaches the inconsistency detection method of claim 1. Xiang further teaches wherein the observed and estimated states of scenes, objects, road shapes, or environmental conditions or combinations thereof are stored, and then used to calculate the boundaries of states of objects in the future or to match current observed states with future observed states (see at least, [0085] The correspondence identification unit G3 associates an object detected at a previous time with an object detected at the next time based on the position and moving speed of the detected object).
Regarding Claim 4, the combination of Xiang and Kamenev teaches the inconsistency detection method of claim 1. Xiang further teaches wherein the observed and estimated states are stored for a fixed period of time, wherein the fixed period is comprised between 0.1 seconds and 10 seconds, preferably between 1 second and 5 seconds, even more preferably between 1.5 seconds and 3 seconds (see at least, [0098] The time-series data of the recognition result shows a transition of the correct recognition probability data of each type for the target object…A sampling period that defines a reference range of the past recognition result can be, for example, 4 seconds, 6 seconds).
Regarding Claim 5, the combination of Xiang and Kamenev teaches the inconsistency detection method of claim 1. Xiang further teaches wherein the observed and estimated states are stored until new information about the same object is received (see at least, [0083] data of the same type having different acquisition time points are sorted and saved in chronological order so that the latest data is at the top…Various data are configured so that the order of acquisition can be identified, such as adding a time stamp corresponding to the acquisition time).
Regarding Claim 7, the combination of Xiang and Kamenev teaches the inconsistency detection method of claim 1. Xiang further teaches wherein the boundaries are calculated based on one or more of: a. the previous bounding box of the object, b. the previous velocity of the object, c. the previous acceleration of the object, d. the previous heading of the object, e. the shapes of the road or the lane markings, f. the assumption on the maximum acceleration of the object, g. the assumption on the minimum acceleration of the object, which can be negative, h. the assumption on the maximum velocity of the object, i. the assumption on the minimum velocity of the object, j. the assumption on the space boundary that the object could reach, k. the assumption on the environment conditions fluctuation (see at least, [0085] The correspondence identification unit…associates an object detected at a previous time with an object detected at the next time based on the position and moving speed of the detected object…
corresponds to a configuration in which an object once detected is tracked).
Regarding Claim 8, the combination of Xiang and Kamenev teaches the inconsistency detection method of claim 1. Kamenev further teaches wherein the calculated boundaries are one or more of the following values: a. the maximum and minimum velocity of the object, b. the occupancy space of the object, represented by the maximum and minimum on each axis of a coordinate system (see at least, Fig 3C, [0046] the group of vectors from the vector field…corresponding to the same (x, y) coordinates as the group of points 320 in the confidence field…may point to a group of points 320A-2 at the time slice, T_(N−1)…a connection between the group of points 320A-1 and 320A-2 may be made, attributed to a same actor).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Xiang to include the calculated boundaries being one or more of the following values: a. the maximum and minimum velocity of the object, b. the occupancy space of the object, represented by the maximum and minimum on each axis of a coordinate system, as taught by Kamenev, in order to avoid collisions based on the locations of the drivable paths defined by avoiding detected obstacles (Kamenev, [0053]).
Regarding Claim 9, the combination of Xiang and Kamenev teaches the inconsistency detection method of claim 1. Xiang further teaches wherein the coordinate systems comprise: a. a 2D Cartesian coordinate system, b. a 3D Cartesian coordinate system, or c. a 2D or a 3D Frenet coordinate system (see at least, [0032] The three-dimensional map data corresponds to map data representing the positions of feature objects such as road edges, lane markings, and road signs in three-dimensional coordinates; [0099] The vertical axis of FIGS. 8 and 9 shows a probability value, and the horizontal axis shows time).
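For illustration only, the Frenet coordinate system recited in claim 9 can be sketched in code. The following is a minimal, hypothetical example, not drawn from the cited references; the polyline path, point values, and function name are assumptions. It projects a 2D Cartesian point onto a reference path to obtain Frenet coordinates (s, d), where s is arc length along the path and d is the signed lateral offset.

```python
import math

# Illustrative sketch only: convert a 2-D Cartesian point to Frenet
# coordinates (s, d) relative to a piecewise-linear reference path.
def cartesian_to_frenet(path, x, y):
    """Return (s, d): arc length along the path and signed lateral offset,
    with d positive when the point lies to the left of the path direction."""
    best = None
    s_at_seg_start = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg_len = math.hypot(dx, dy)
        if seg_len == 0.0:
            continue  # skip degenerate (zero-length) segments
        # parameter of the closest point on this segment, clamped to [0, 1]
        t = max(0.0, min(1.0, ((x - x0) * dx + (y - y0) * dy) / (seg_len ** 2)))
        px, py = x0 + t * dx, y0 + t * dy
        dist = math.hypot(x - px, y - py)
        # cross product sign: positive means the point is left of the path
        cross = dx * (y - y0) - dy * (x - x0)
        d = math.copysign(dist, cross) if cross != 0 else 0.0
        if best is None or dist < best[0]:
            best = (dist, s_at_seg_start + t * seg_len, d)
        s_at_seg_start += seg_len
    return best[1], best[2]
```

For a straight path from (0, 0) to (10, 0), the point (3, 2) maps to s = 3 (three meters along the path) and d = 2 (two meters to the left), which is the occupancy representation a Frenet frame provides for road-relative reasoning.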
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Miyamoto et al. (US 20190071077 A1) discloses optionally having the electronic control unit perform safety actions based on the notification of step d ([0055] “The support control unit 18 gives an instruction to….the vehicle control ECU 6 based on the detection result of the periphery monitoring sensor that detects the obstacle and/or the road surface marking around the host vehicle to perform”).
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOYA PETTIEGREW whose telephone number is (313) 446-6636. The examiner can normally be reached 8:30am - 5:00pm M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jelani Smith can be reached at 571-270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TOYA PETTIEGREW/Primary Examiner, Art Unit 3662