DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-6 and 13-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential elements, such omission amounting to a gap between the elements. See MPEP § 2172.01. The omitted elements are: what the “reference line” is and how it is determined (claims 3 and 13). It is unclear whether the reference line already exists in the sensor data or is generated through an unspecified process; for example, a human could place the reference line, or some method could be used to place it. Further, the “line touch information” lacks any indication of what it is: it could be a temporal value (a time) or a spatial value (pixel coordinates). Because of the unclear nature of these terms, claims 3-6 and 13-16 (claims 4-6 depend upon claim 3, and claims 14-16 depend upon claim 13) will not be considered in the 35 USC § 102 or 35 USC § 103 rejections below.
Claims 7, 8, 17, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential elements, such omission amounting to a gap between the elements. See MPEP § 2172.01. The omitted element is what the “target group distribution graph” is made of (cited in claims 7 and 17). How the distribution graph is determined is unclear, and the formation of the “target group distribution graph” is missing. Because of the unclear nature of these terms, claims 7 and 17 (claim 8 depends upon claim 7, and claim 18 depends upon claim 17) will not be considered in the 35 USC § 102 or 35 USC § 103 rejections below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 9-12, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Buchmeier (WO 2019/198076 A1, 2019) in view of Li (“Preceding Vehicle Detection Method Based on Information Fusion of Millimetre Wave Radar and Deep Learning Vision”, 2019).
Regarding claims 1 and 11, Buchmeier teaches A method, applied to a processing device in a detection system (Buchmeier, see image below, “detecting objects in a scene” is a method that is being interpreted as part of a “detection system”), the detection system further comprising at least two sensors (Buchmeier, pg 6, lines 2-4, reproduced below:
[image media_image1.png: Buchmeier, pg 6, lines 2-4]
“Plurality of sensors” is being interpreted as “at least two sensors”), and the method comprising:
obtaining (Buchmeier, pg 6, lines 31-34, reproduced below:
[image media_image2.png: Buchmeier, pg 6, lines 31-34]
“Received” is being interpreted as “obtaining”), by the processing device, at least two pieces of detection information (Buchmeier, see pg 6, lines 31-34 image above: “multiple predefined primitive elements” is being interpreted as “at least two pieces of detection information”) from the at least two sensors (Buchmeier, see pg 6, lines 31-34 image above: “from the different technology sensors” is being interpreted as “at least two sensors”), wherein the at least two sensors are in a one-to-one correspondence (Buchmeier, see pg 6, lines 31-34 image above, “correlate between multiple predefined primitive elements detected in the sensory datasets received from the different technology sensors” is being interpreted as “one-to-one correspondence”) with the at least two pieces of detection information (Buchmeier, see pg 6, lines 31-34 image above: “multiple predefined primitive elements” is being interpreted as “at least two pieces of detection information”);
determining, by the processing device, at least two pieces of formation information (Buchmeier, pg 6, line 34 to pg 7, lines 1-2: “The fusion engine correlate together the multitude of predefined primitive elements based on a spatial and/or temporal correlation detected in the sensory datasets, for example, proximity, distance, timing of capture and/or the like”. “Spatial and/or temporal correlation detected” is being interpreted as part of the “formation information” that was determined) based on the at least two pieces of detection information (Buchmeier, pg 6, line 34 to pg 7, lines 1-2: “The fusion engine correlate together the multitude of predefined primitive elements based on a spatial and/or temporal correlation detected in the sensory datasets, for example, proximity, distance, timing of capture and/or the like”. “multitude of predefined primitive elements” is being interpreted as “at least two pieces of detection information”), wherein each piece of formation information describes a position relationship between objects detected by the corresponding sensor (Buchmeier, pg 6, line 34 to pg 7, lines 1-2: “The fusion engine correlate together the multitude of predefined primitive elements based on a spatial and/or temporal correlation detected in the sensory datasets, for example, proximity, distance, timing of capture and/or the like”. “spatial and/or temporal correlation” is being interpreted as “position relationship between objects detected by the corresponding sensor”), and the objects comprise target objects (Buchmeier, pg 6, lines 19-24, reproduced below:
[image media_image3.png: Buchmeier, pg 6, lines 19-24]
“Higher level objects” is being interpreted as part of “target objects”);
Buchmeier does not appear to specifically teach “greater than a preset threshold”, but does teach a “match threshold”.
Pertaining to a similar field of endeavor, Li teaches
determining (Li, pg 7, Section 3.4, ¶1-3, reproduced below:
[image media_image4.png: Li, pg 7, Section 3.4, ¶1-3]
“Fusion results” are determined), by the processing device, target formation information (Li, see image above, “the vehicle type and position detected by YOLOv3-tiny algorithm and the vehicle status output detected by millimetre wave radar are used”) based on the at least two pieces of formation information (Li, see image above, “vehicle information” from “YOLOv3-tiny” and “millimetre wave radar”, which are being interpreted as “two pieces of formation information”), wherein a matching degree (Li, see image above, “Intersection-over-Union (IoU)” is being interpreted to involve a matching degree) between the target formation information (Li, see image above, “the vehicle type and position detected by YOLOv3-tiny algorithm and the vehicle status output detected by millimetre wave radar are used”) and each of the at least two pieces of formation information is greater than a preset threshold (Li, see image above: “When IoU” is in the inclusive range [0.5, 1]. “0.5” is being interpreted as “a preset threshold”, and a result greater than 0.5 is being considered a match), the target formation information describes a position relationship between at least two target objects (Li, see image above, “the vehicle type and position detected by YOLOv3-tiny algorithm and the vehicle status output detected by millimetre wave radar are used”. The data from both sensors is being interpreted as “at least two target objects” that have a position relationship), and the target formation information comprises formation position information of target objects (Li, see image above, “the vehicle type and position detected by YOLOv3-tiny algorithm and the vehicle status output detected by millimetre wave radar are used”. “Vehicle type and position” and “vehicle status” are being interpreted as “formation position information of target objects”); and
fusing (Li, see image above, “final vehicle detection fusion results” which shows fusing occurs), by the processing device based on formation position information (Li, see image above, “the vehicle type and position detected by YOLOv3-tiny algorithm and the vehicle status output detected by millimetre wave radar are used”. “Vehicle type and position” and “vehicle status” are being interpreted as “formation position information of target objects”) of any of the target objects (Li, see image above, “the vehicle type and position detected by YOLOv3-tiny algorithm and the vehicle status output detected by millimetre wave radar are used”. This is being interpreted to include the target objects), detection information (Li, see image above, “the vehicle information detected by YOLOv3-tiny is fused with the vehicle information detected by millimetre wave radar”. The vehicle information is being interpreted as part of detection information) that is in the at least two pieces of formation information (Li, see image above, “the vehicle information detected by YOLOv3-tiny is fused with the vehicle information detected by millimetre wave radar”. YOLOv3-tiny and millimetre wave radar are being interpreted as at least two sources) and that corresponds to a same target object (Li, see image above, “the vehicle information detected by YOLOv3-tiny is fused with the vehicle information detected by millimetre wave radar”. This shows the vehicle information corresponds to a same target object).
Buchmeier and Li are considered to be analogous art because they are directed to sensor fusion. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system for sensor fusion (as taught by Buchmeier) to include a matching degree greater than a preset threshold (as taught by Li) because the combination provides an improvement to detection speed and accuracy (Li, pg 2, Section 2, item 4; Abstract).
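For illustration of the record only, the IoU-based matching relied on above can be sketched as follows (a minimal sketch of Li's Intersection-over-Union association; the function names, the corner-format boxes, and the output structure are the examiner's illustrative assumptions, not details disclosed by Li):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

IOU_THRESHOLD = 0.5  # Li's range [0.5, 1]: 0.5 is read as the preset threshold

def fuse_if_match(camera_box, radar_box):
    """Fuse the two detections only when the matching degree meets the threshold."""
    if iou(camera_box, radar_box) >= IOU_THRESHOLD:
        return {"camera": camera_box, "radar": radar_box}  # treated as the same target object
    return None  # below threshold: not treated as a match
```

On this sketch, an IoU of at least 0.5 between the camera detection and the radar-derived box corresponds to the claimed matching degree exceeding the preset threshold.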
Regarding claims 2 and 12, Buchmeier teaches The method according to claim 1, wherein each piece of detection information (Buchmeier, pg 6, line 34: “multitude of predefined primitive elements”) comprises a position feature set (Buchmeier, pg 6, line 34 to pg 7, lines 1-2: “The fusion engine correlate together the multitude of predefined primitive elements based on a spatial and/or temporal correlation detected in the sensory datasets, for example, proximity, distance, timing of capture and/or the like.” “spatial and/or temporal correlation” is being interpreted to include a “position feature set”) comprising at least two position features (Buchmeier, pg 6, line 34 to pg 7, lines 1-2: “The fusion engine correlate together the multitude of predefined primitive elements based on a spatial and/or temporal correlation detected in the sensory datasets, for example, proximity, distance, timing of capture and/or the like.” “Proximity” and “distance” are being interpreted as “at least two position features”), and each of the at least two position features indicate position relationships (Buchmeier, pg 6, line 34 to pg 7, lines 1-2: “The fusion engine correlate together the multitude of predefined primitive elements based on a spatial and/or temporal correlation detected in the sensory datasets, for example, proximity, distance, timing of capture and/or the like.” “Proximity” and “distance”) between the objects detected (Buchmeier, pg 6, line 34 to pg 7, lines 1-2: “The fusion engine correlate together the multitude of predefined primitive elements based on a spatial and/or temporal correlation detected in the sensory datasets, for example, proximity, distance, timing of capture and/or the like.” “Proximity” and “distance” show relationships between the objects detected) by the corresponding sensors and objects around the objects (Buchmeier, pg 6, line 34 to pg 7, lines 1-2: “The fusion engine correlate together the multitude of predefined primitive elements based on a spatial and/or temporal correlation detected in the sensory datasets, for example, proximity, distance, timing of capture and/or the like.” “Proximity” and “distance” show objects around the objects, with the corresponding sensors collecting the data).
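As an illustration only of the position feature reading above, a “position feature set” built from proximity and distance between detected objects might look like the following (hypothetical structure and 5.0 m proximity cutoff; Buchmeier does not disclose an implementation):

```python
import math

def position_feature_set(detections):
    """Sketch of a position feature set: for each pair of detected objects,
    record their distance and a proximity flag (two position features)."""
    features = []
    for i, (xi, yi) in enumerate(detections):
        for (xj, yj) in detections[i + 1:]:
            dist = math.hypot(xj - xi, yj - yi)  # distance between the two objects
            features.append({"distance": dist, "proximate": dist < 5.0})  # assumed cutoff
    return features
```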
Regarding claims 9 and 19, Buchmeier teaches The method according to claim 1, wherein the at least two sensors comprise a first sensor and a second sensor (Buchmeier, pg 6, lines 2-4, reproduced below:
[image media_image1.png: Buchmeier, pg 6, lines 2-4]
“Plurality of sensors” is being interpreted to include “a first sensor” and “a second sensor”), a spatial coordinate system (Buchmeier, pg 7, lines 1-5, reproduced below:
[image media_image5.png: Buchmeier, pg 7, lines 1-5]
“a common coordinate system” is being interpreted to be “a spatial coordinate system”) corresponding to the first sensor is a standard coordinate system (Buchmeier, see image above, “common coordinate system” is being interpreted to involve a “standard coordinate system”), a spatial coordinate system corresponding to the second sensor is a target coordinate system (Buchmeier, see image above, “common coordinate system” is being interpreted to involve a “target coordinate system”. A common coordinate system would be needed to align the different coordinate systems from the different sensors), and the method further comprises:
determining, by the processing device based on fused detection information obtained (Buchmeier, see image above, “predefined primitive elements” are being interpreted to be obtained from “detection information”) by fusing the detection information (Buchmeier, see image above: “the fusion engine may first align” which is being interpreted as part of the fusion process that uses the detection information) that is in the at least two pieces of formation information (Buchmeier, see image above: “the fusion engine may first align the predefined primitive elements with respect to a common reference”, which shows aligning requires at least two pieces of formation information) and that corresponds to the same target object (Buchmeier, see image above, “a common object detected in the scene by at least some of the sensors” is being interpreted as “the same target object”), mapping (Buchmeier, see image above, “align”) relationships between at least two pieces of standard point information (Buchmeier, see image above, “predefined primitive elements” are being interpreted to involve “at least two pieces of standard point information” that will be used in the alignment procedure) and at least two pieces of target point information (Buchmeier, see image above, “the fusion engine may first align the predefined primitive elements with respect to a common reference”. “At least two pieces of target point information” is being interpreted as part of the alignment procedure; the target result of the alignment procedure), wherein the standard point information indicates position information of objects that are in the target object set (Buchmeier, see image above, “the fusion engine may first align the predefined primitive elements with respect to a common reference”. This shows the predefined primitive elements have a position that is used in the alignment procedure) and that are in the standard coordinate system (Buchmeier, see image above, “predefined primitive elements” are being interpreted to involve the “standard coordinate system” that will be used in the alignment procedure), the target point information indicates position information of the objects that are in the target object set (Buchmeier, see image above, “the fusion engine may first align the predefined primitive elements with respect to a common reference, for example, a common coordinate system, a common object detected in the scene by at least some of the sensors and/or the like.” This shows the target objects have position information, which is being interpreted as the result of the alignment procedure) and that are in the target coordinate system (Buchmeier, see image above, the alignment result is being interpreted as the “target coordinate system” that the primitive elements are aligned to), and the at least two pieces of standard point information (Buchmeier, see image above, “predefined primitive elements” is being interpreted as “at least two pieces of standard point information”) are in a one-to-one correspondence (Buchmeier, see image above, “spatial and/or temporal correlation” of the “multitude of predefined primitive elements”) with the at least two pieces of target point information (Buchmeier, see image above, which shows the alignment procedure aligns common objects that have “at least two pieces of target point information”); and
determining, by the processing device, a mapping relationship (Buchmeier, see image above, “spatial and/or temporal correlation” of the “multitude of predefined primitive elements” that are later aligned) between the standard coordinate system (Buchmeier, see image above, “predefined primitive elements” are being interpreted to involve “standard coordinate system” that will be used in the alignment procedure) and the target coordinate system (Buchmeier, see image above, the alignment result is being interpreted as the “target coordinate system” that the primitive elements are aligned to) based on the mapping relationship (Buchmeier, see image above, “correlation” and “align” are being interpreted as part of “the mapping relationship”) between the standard point information (Buchmeier, see image above, “predefined primitive elements” are being interpreted to have “standard point information” that are used for the alignment procedure) and the target point information (Buchmeier, see image above, the alignment result is being interpreted as the “target coordinate system” that the primitive elements are aligned to, or the target point information).
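For illustration of the claimed mapping between the standard and target coordinate systems, a least-squares rigid alignment computed from the one-to-one point correspondences could be sketched as follows (a standard Kabsch-style estimate; the NumPy usage, array shapes, and function names are the examiner's assumptions, not a procedure disclosed by Buchmeier):

```python
import numpy as np

def estimate_mapping(standard_pts, target_pts):
    """Least-squares rotation R and translation t with R @ p_standard + t ≈ p_target,
    computed from one-to-one point correspondences (Kabsch algorithm)."""
    P = np.asarray(standard_pts, dtype=float)  # points in the standard coordinate system
    Q = np.asarray(target_pts, dtype=float)    # the same objects in the target coordinate system
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of the centered point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against an improper (reflected) rotation
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```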
Regarding claims 10 and 20, Buchmeier teaches The method according to claim 1, further comprising: calculating (Buchmeier, pg 23, lines 8-22, reproduced below:
[image media_image6.png: Buchmeier, pg 23, lines 8-22]
“Adjusted” is being interpreted as part of “calculating”), by the processing device, a time difference (Buchmeier, see image above: “The time shift may lead to a difference”) between time axes of the at least two sensors (Buchmeier, see image above: “The time shift may lead to a difference”. This is being interpreted to include at least two sensors) based on a fusion result of the detection information (Buchmeier, see image above, “The time shift may lead to a difference in the detected predefined primitive elements”. This shows the “predefined primitive elements” are from the “detection information”) that is in the at least two pieces of formation information (Buchmeier, see image above, “The time shift may lead to a difference in the detected predefined primitive elements”. This shows the “predefined primitive elements” are from the “detection information” that has at least two pieces of formation information) and that corresponds to the same target object (Buchmeier, see image above, “The time shift may lead to a difference in the detected predefined primitive elements”. “predefined primitive elements” is being interpreted as “corresponds to the same target object”; otherwise, the alignment procedure would not have a reference point to verify whether the alignment was successful).
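Similarly, for illustration only, the claimed time difference between the sensors' time axes could be estimated by cross-correlating signals derived from the fused detections of the same target, such as the target's range over time as reported by each sensor (assumes uniformly sampled series with a known sample interval dt; not a method disclosed by Buchmeier):

```python
import numpy as np

def estimate_time_shift(series_a, series_b, dt):
    """Time offset between two sensors' time axes, from two samplings of the
    same target quantity, via the lag that maximizes their cross-correlation."""
    a = np.asarray(series_a, dtype=float) - np.mean(series_a)
    b = np.asarray(series_b, dtype=float) - np.mean(series_b)
    corr = np.correlate(a, b, mode="full")
    lag = int(np.argmax(corr)) - (len(b) - 1)  # lag (in samples) of best alignment
    return lag * dt                            # convert samples to seconds
```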
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY B DUONG whose telephone number is (571)272-1358. The examiner can normally be reached Monday - Thursday 10a-9p (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at (571)272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.B.D./Examiner, Art Unit 2667
/MATTHEW C BELLA/Supervisory Patent Examiner, Art Unit 2667