DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12-12-2025 have been fully considered.
With respect to applicant’s argument regarding claim 8, the examiner respectfully disagrees. Applicant argues that Liu uses cell grids, not point clouds, to filter out interference. Liu, ¶62-63, speaks to removing interference (phantom targets) in a radar spatial grid. Liu then states that the radar spatial grid is “raw radar sensor data,” which would be the point cloud returns from various objects. Therefore, the examiner submits that while Liu uses the terms “grid” and “cell,” it is still the radar data (i.e., point cloud data) being analyzed and cleaned of interference.
With respect to applicant’s arguments concerning claim 10, the examiner respectfully disagrees with the assertion that ¶57-58 of Liu “at best describes indicating whether a cell is occupied”. Claim 10 describes generating point clouds of objects and then combining those point clouds with associated point clouds from other radar sensors to create “cross potential point clouds”. This is what Liu describes, it again being noted that the radar cells contain point clouds representing targets.
With respect to claim 12 and applicant’s argument that Liu does not use a predetermined SNR threshold to remove noise, the examiner submits that Liu describes removing the phantom returns based on low confidence levels and a sparse number of returns. It is understood in the art that a sparse number of returns corresponds to a low SNR or false alarm rate threshold.
As per claims 13-15, the examiner agrees that the anchor point was misinterpreted. The anchor enclosures/virtual enclosures recited in the claims appear to be boxes surrounding each object, just as the bounding boxes described in Liu.
With respect to the amended claims, please see below.
Claim Objections
Claims 11-13 are objected to because of the following informalities: These claims currently still depend from cancelled claim 8. Appropriate correction is required.
Examiner’s Note: For applicant’s benefit, portions of the cited reference(s) have been cited to aid in the review of the rejection(s). While every attempt has been made to be thorough and consistent within the rejection, it is noted that the PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS. See MPEP 2141.02 VI.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-5, 9-12, 16-19 and 22 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Liu et al., U.S. Patent Application Publication No. 2020/0103523, published April 2, 2020.
As per claims 1 and 22, Liu discloses a computer-implemented method, comprising:
receiving one or more signals reflected by one or more second objects, the signals being received by one or more radar sensors positioned on one or more first objects (Liu, Fig. 2, 202);
generating, based on the one or more received signals, one or more representations, one or more portions of the generated representations corresponding to the one or more received signals (Liu, Fig. 2, 204);
wherein one or more generated representations include one or more point clouds, and the one or more portions of the generated representations include one or more points in the one or more point clouds (Liu, Fig. 3 where it is understood that radar returns consist of multiple returns for a single object);
filtering the one or more point clouds to remove one or more points corresponding to one or more noise signals in the one or more received signals (Liu, ¶63 where phantom returns are removed based on point cloud radar data in the grid);
generating, using the one or more representations, one or more virtual enclosures encompassing the one or more second objects (Liu, Fig. 3, 314);
and detecting, using the generated one or more virtual enclosures, a presence of the one or more second objects (Liu, Fig. 3, 318).
As per claim 2, Liu further discloses the method according to claim 1, wherein the one or more radar sensors are positioned on the one or more first objects at a predetermined distance apart (Liu, Fig. 6 and ¶19 where the relationship between radars and spatial grids are known).
As per claim 3, Liu further discloses the method according to claim 1, wherein the one or more radar sensors include two radar sensors (Liu, ¶19).
As per claim 4, Liu further discloses the method according to claim 1 wherein the one or more radar sensors include a plurality of radar sensors (Liu, ¶19).
As per claim 5, Liu further discloses the method according to claim 4, wherein at least one radar sensor in the plurality of radar sensors is configured to receive a signal transmitted by at least one of the following: the at least one radar sensor (Liu, ¶31).
As per claim 9, Liu further discloses the method according to claim 8, wherein the one or more point clouds include one or more cross potential point clouds generated by combining one or more point clouds generated using signals received by each radar sensor in the one or more radar sensors (Liu, ¶57).
As per claim 10, Liu further discloses the method according to claim 5, wherein generation of the one or more cross potential point clouds includes clustering at least a portion of the one or more point clouds using a number of points corresponding to at least a portion of the one or more received signals being received from one or more scattering regions of a second object in the one or more second objects, and generating one or more clustered point clouds; combining at least a portion of the one or more clustered point clouds based on a determination that at least a portion of the one or more clustered point clouds is associated with the second object in the one or more second objects and determined based on signals received from different radar sensors in the one or more radar sensors, and generating the one or more cross potential point clouds (Liu, ¶57-58 using fused spatial grids based on overlapping fields of view from multiple radars).
As per claim 11, Liu further discloses the method according to claim 8, wherein the filtering includes removing one or more noise signals in the one more received signals received by each radar sensor in the one or more radar sensors (Liu, ¶63).
As per claim 12, Liu further discloses the method according to claim 8, wherein the filtering includes removing one or more noise signals in the one or more received signals using one or more predetermined signal to noise ratio thresholds (Liu, ¶63).
As per claim 16, Liu further discloses the method according to claim 8, wherein the one or more virtual enclosures include at least one of the following: a three-dimensional virtual enclosure, a two-dimensional virtual enclosure, and any combination thereof (Liu, Fig. 3, 316).
As per claim 17, Liu further discloses the method according to claim 8, wherein the one or more virtual enclosures include at least one of the following parameters: a length, a breadth, a height, one or more center coordinates, an orientation angle, and any combination thereof (Liu, Fig. 3, 316).
As per claim 18, Liu further discloses the method according to claim 1, wherein at least one of the first and second objects include at least one of the following: a vehicle, an animate object, an inanimate object, a human, a building, a moving object, a motionless object, and any combination thereof (Liu, ¶14).
As per claim 19, Liu further discloses the method according to claim 1, wherein the presence includes at least one of the following: a location, an orientation, a direction, a position, a type, a size, an existence, and any combination thereof of the one or more second objects, wherein the one or more second objects being located in an environment of the one or more first objects, wherein the presence of the one or more second objects is being determined in the environment of the one or more first objects (Liu, Fig. 3).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Roger et al., U.S. Patent Application Publication No. 2018/0284258, published October 4, 2018.
As per claim 6, Liu discloses the method of claim 4 but fails to expressly disclose synchronization between radar units.
Roger teaches synchronization between radar units (¶78).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to synchronize the radars in order to gain the benefit of simplifying the processing and target analysis in overlapping areas.
Claim(s) 13-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liu.
As per claims 13-15, Liu discloses the method of claim 8 including virtual enclosures including confidence values (Liu, ¶30).
Liu fails to expressly disclose defining anchor enclosures based on virtual enclosures to extract features.
Liu creates the bounding box based on the dynamic object, including orientation, size, shape, etc. (¶46). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use virtual enclosures, since it has been held that omission of an element and its function in a combination where the remaining elements perform the same functions as before involves only routine skill in the art. In re Karlson, 136 USPQ 184 (CCPA 1963). In this case, the bounding box of Liu is still created accurately around a particular object, as desired, and provides features such as size based on the bounding box.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is provided on form PTO-892.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARCUS E WINDRICH whose telephone number is (571) 272-6417. The examiner can normally be reached Monday-Friday, 7:00-3:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jack Keith, can be reached at 571-272-6878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARCUS E WINDRICH/ Primary Examiner, Art Unit 3646