DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 04/18/2024 is being considered by the examiner.
Specification Objections
The specification is objected to because of the following informalities:
On Page 21, line 22, “parameter setting step (310)…” should read “parameter setting step (S310)…” in order to correct a typographical error and maintain consistency.
On Page 21, line 23, “parameter setting step (210)…” should read “parameter setting step (S210)…” in order to correct a typographical error and maintain consistency.
On Page 21, line 24, “parameter setting step (310)…” should read “parameter setting step (S310)…” in order to correct a typographical error and maintain consistency.
On Page 21, line 25, “parameter setting step (310)…” should read “parameter setting step (S310)…” in order to correct a typographical error and maintain consistency.
On Page 22, line 6, “filtering step (320)…” should read “filtering step (S320)…” in order to correct a typographical error and maintain consistency.
On Page 22, line 7, “filtering step (220)…” should read “filtering step (S220)…” in order to correct a typographical error and maintain consistency.
Appropriate correction is required.
Claim Objections
Claims 3-6, 8, and 10-13 are objected to because of the following informalities:
In claim 8, line 10, the term “the processor so as to transmit” should be changed to “the processor to transmit” in order to provide clarity and grammatical simplicity.
In claim 8, line 14, the term “the processor so as to transmit” should be changed to “the processor to transmit” in order to provide clarity and grammatical simplicity.
In claim 3, line 7, the term “creating a voxel grid” should be changed to “creating the voxel grid” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 3, line 9, the term “for a voxel corresponding” should be changed to “for the voxel corresponding” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 4, line 3, the term “for a voxel corresponding” should be changed to “for the voxel corresponding” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 4, line 5, the term “whether a voxel ID” should be changed to “whether the voxel ID” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 5, line 8, the term “creating a voxel grid” should be changed to “creating the voxel grid” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 5, line 11, the term “for a voxel corresponding” should be changed to “for the voxel corresponding” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 5, line 12, the term “the point when a voxel ID” should be changed to “the point when the voxel ID” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 6, line 3, the term “for a voxel corresponding” should be changed to “for the voxel corresponding” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 6, line 5, the term “whether a voxel ID” should be changed to “whether the voxel ID” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 10, line 8, the term “creating a voxel grid” should be changed to “creating the voxel grid” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 10, line 8, the term “for a voxel corresponding” should be changed to “for the voxel corresponding” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 11, line 3, the term “for a voxel corresponding” should be changed to “for the voxel corresponding” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 11, line 5, the term “whether a voxel ID” should be changed to “whether the voxel ID” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 12, line 8, the term “creating a voxel grid” should be changed to “creating the voxel grid” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 12, line 11, the term “for a voxel corresponding” should be changed to “for the voxel corresponding” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 12, line 12, the term “the point when a voxel ID” should be changed to “the point when the voxel ID” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 13, line 3, the term “for a voxel corresponding” should be changed to “for the voxel corresponding” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
In claim 13, line 5, the term “whether a voxel ID” should be changed to “whether the voxel ID” in order to avoid an insufficient antecedent basis issue and prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim 8 recites limitations that use words like “means” (or “step”) or similar generic placeholders with functional language, and these limitations do invoke 35 U.S.C. 112(f):
Claim 8 recites the limitation “a determination module configured to...” [Line 5].
Claim 8 recites the limitation “storage unit being configured to …” [Line 11].
Claim 8 recites the limitation “input/output unit being configured to…” [Line 15].
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
After a careful analysis, as discussed above, and a careful review of the specification, the examiner finds corresponding structure for the following limitations in claim 8:
(i) “determination module” (Fig. 2, #30. Page 10, Lines [11-23] and Page 11, Lines [11-13 and 19-21]-The determination module 30 may include a computer device. For example, the determination module 30 may include a desktop computer, a laptop computer, a server, a tablet computer, and other devices capable of performing information processing functions. The determination module 30 may be connected to the radar 20 to receive the position coordinates and the reflected signal intensity of the point output by the radar 20. The determination module 30 may include a processor 310 configured to execute program code, a storage unit 320 connected to the processor 310 so as to transmit and receive data to and from the processor, the storage unit being configured to store the program code, the radar sensing file F storing the reflected signal received by the radar 20, and a dictionary, and an input/output unit 340 connected to the processor 310 so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization. The determination module 30 may further include a communication unit 330 connected to the processor 310 so as to transmit and receive data to and from the processor. The storage unit 320 may store data required to perform the method of determining the position of the object sensed by the radar using spatial voxelization. The storage unit 320 may include RAM, ROM, memory, a hard disk, or cloud storage. The input/output unit 340 may include a keyboard, a mouse, a touchpad, a touchscreen, or a pen configured to receive input by a user. The input/output unit 340 may include a display or a speaker configured to provide information to the user. The determination module is illustrated in Fig. 2 as #30, thus having sufficient corresponding structure or material, namely a desktop computer, a laptop computer, a server, or a tablet computer with a processor and memory.).
(ii) “storage unit” (Fig. 2, #320. Page 11, Lines [11-18]-The storage unit 320 may store data required to perform the method of determining the position of the object sensed by the radar using spatial voxelization. The storage unit 320 may include RAM, ROM, memory, a hard disk, or cloud storage. The storage unit 320 may store program code written to perform each step of the method of determining the position of the object sensed by the radar using spatial voxelization. The program code may be executed by the processor 310. The storage unit 320 may store a radar sensing file F, a voxel dictionary, position coordinates of separately stored points, parameters, and other data. The storage unit is illustrated in Fig. 2 as black box #320, thus having sufficient corresponding structure or material, namely a memory.).
(iii) “input/output unit” (Fig. 2, #340. Page 11, Lines [11-18]-The input/output unit 340 may include a keyboard, a mouse, a touchpad, a touchscreen, or a pen configured to receive input by a user. The input/output unit 340 may include a display or a speaker configured to provide information to the user. The input/output unit 340 may provide a screen configured to allow the user to input a parameter, and may visually provide analysis results. The input/output unit is illustrated in Fig. 2 as black box #340, thus having sufficient corresponding structure or material, namely a display, a keyboard, a mouse, a touchpad, a touchscreen, or a pen.).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 and 8 and associated dependent claims are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claim 1 recites the limitation “searching for a voxel corresponding to the position” in line 8. The Office finds that this term renders the claim indefinite. It is not clear whether “a voxel corresponding to the position” refers to a new voxel or to the previously recited voxel, since claim 1 recites “a voxel corresponding to a position” in line 5. For purposes of examination, the examiner is interpreting the limitation as “searching for the voxel corresponding to the position”. The Office respectfully requests that the Applicant amend claim 1 in order to clarify the claimed invention.
Claim 1 recites the limitation “separately storing position coordinates of the point when a voxel ID of the searched voxel” in line 10. The Office finds that this term renders the claim indefinite. It is not clear whether “a voxel ID” refers to a new voxel ID or to the previously recited voxel ID, since claim 1 recites “of the point in a voxel ID” in line 7. For purposes of examination, the examiner is interpreting the limitation as “separately storing position coordinates of the point when the voxel ID of the searched voxel”. The Office respectfully requests that the Applicant amend claim 1 in order to clarify the claimed invention.
Claim 8 recites the limitation “searching for a voxel corresponding to the position” in lines 23-24. The Office finds that this term renders the claim indefinite. It is not clear whether “a voxel corresponding to the position” refers to a new voxel or to the previously recited voxel, since claim 8 recites “a voxel corresponding to a position” in line 20. For purposes of examination, the examiner is interpreting the limitation as “searching for the voxel corresponding to the position”. The Office respectfully requests that the Applicant amend claim 8 in order to clarify the claimed invention.
Claim 8 recites the limitation “separately storing position coordinates of the point when a voxel ID of the searched voxel” in line 25. The Office finds that this term renders the claim indefinite. It is not clear whether “a voxel ID” refers to a new voxel ID or to the previously recited voxel ID, since claim 8 recites “of the point in a voxel ID” in line 20. For purposes of examination, the examiner is interpreting the limitation as “separately storing position coordinates of the point when the voxel ID of the searched voxel”. The Office respectfully requests that the Applicant amend claim 8 in order to clarify the claimed invention.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over HAYAKAWA et al. (US 6573855 B1), hereinafter referenced as HAYAKAWA, in view of LI et al. (US 20220214448 A1), hereinafter referenced as LI, and further in view of ISHIKAWA et al. (US 20200160526 A1), hereinafter referenced as ISHIKAWA.
Regarding claim 1, HAYAKAWA explicitly teaches a method of determining a position of an object sensed by a radar using spatial voxelization (Fig. 3. Col. 15, Lines [58-61]-HAYAKAWA discloses a three-dimensional voxel data generating means 31 for editing and processing the received signals input from the receiving circuit 14 in terms of their relationship relative to the position (x, y) on the medium surface and the time (t).), the method comprising:
a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).), searching for a voxel corresponding to a position of a point stored in the radar sensing file (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.), and
a point extraction step of loading the radar sensing file (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).), searching for a voxel corresponding to the position of the point stored in the radar sensing file (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.), and
HAYAKAWA fails to explicitly teach cumulatively storing a reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary; and a reference point position determination step of determining a position of a reference point for calibration based on the separately stored position coordinates of the point.
However, LI explicitly teaches cumulatively storing a reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary (Fig. 1A. Paragraph [0060]-LI discloses then at least one 3D voxel grid corresponding to each reflectivity of each scanning line is determined based on position information of the multiple target scanning points, i.e. at least one 3D voxel grid corresponding to each box in the reflectivity calibration table is determined. Then target reflectivity information in each box may be determined based on an average reflectivity value corresponding to at least one 3D voxel grid corresponding to each box, and the reflectivity calibration table is generated.);
a reference point position determination step of determining a position of a reference point for calibration based on the separately stored position coordinates of the point (Fig. 1A. Paragraph [0032]-LI discloses in the point cloud data collected by the primary radar, the data corresponding to each scanning point includes position information and reflectivity of the scanning point in a rectangular coordinate system corresponding to the primary radar (wherein the position of the primary radar is the reference point).)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of HAYAKAWA of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, searching for a voxel corresponding to a position of a point stored in the radar sensing file with the teachings of LI of cumulatively storing a reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary; and a reference point position determination step of determining a position of a reference point for calibration based on the separately stored position coordinates of the point.
The combination would result in HAYAKAWA’s point cloud and voxel data processing method cumulatively storing a reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary, and performing a reference point position determination step of determining a position of a reference point for calibration based on the separately stored position coordinates of the point.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and LI relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means which affords easy interpolation of a deficient voxel when such a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while LI can more easily calibrate the reflectivity of the radar. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and LI et al. (US 20220214448 A1), Paragraph [0044].
Although LI teaches the voxel ID and the dictionary, HAYAKAWA in view of LI fails to explicitly teach separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel having a largest accumulated value of the reflected signal intensity in the dictionary.
However, ISHIKAWA explicitly teaches separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel having a largest accumulated value of the reflected signal intensity in the dictionary (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of HAYAKAWA in view of LI of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, searching for a voxel corresponding to a position of a point stored in the radar sensing file with the teachings of ISHIKAWA of separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel having a largest accumulated value of the reflected signal intensity in the dictionary.
The combination would result in HAYAKAWA’s point cloud and voxel data processing method separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel having a largest accumulated value of the reflected signal intensity in the dictionary.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and ISHIKAWA relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means which affords easy interpolation of a deficient voxel when such a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while in ISHIKAWA noise in the subtraction image can be reduced. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and ISHIKAWA et al. (US 20200160526 A1), Paragraph [0004].
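For illustration only, the voxelization-and-dictionary procedure recited in claim 1, as interpreted above, can be sketched in pseudocode-style Python. All function and variable names, and the voxel grid parameter, are hypothetical and are not drawn from the claims or the cited references; this sketch merely restates the examiner's understanding of the claimed steps (cumulatively storing per-voxel reflected signal intensities in a dictionary, then separately storing the coordinates of points whose voxel ID matches the voxel with the largest accumulated intensity):

```python
from collections import defaultdict

def voxel_id(point, voxel_size):
    # Map a 3-D point to the ID of the voxel containing it (hypothetical uniform grid).
    x, y, z = point
    return (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))

def voxelization_step(points, intensities, voxel_size=1.0):
    # Cumulatively store each point's reflected signal intensity under the
    # ID of the voxel containing the point, creating the "dictionary".
    dictionary = defaultdict(float)
    for p, i in zip(points, intensities):
        dictionary[voxel_id(p, voxel_size)] += i
    return dictionary

def point_extraction_step(points, dictionary, voxel_size=1.0):
    # Separately store position coordinates of the points whose voxel ID is the
    # same as the voxel having the largest accumulated intensity in the dictionary.
    best = max(dictionary, key=dictionary.get)
    return [p for p in points if voxel_id(p, voxel_size) == best]
```

Under this reading, the points extracted by the second step would then serve as the basis for the reference point position determination step recited in the claim.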
Regarding claim 2, HAYAKAWA in view of LI and further in view of ISHIKAWA explicitly teach the method according to claim 1,
HAYAKAWA fails to explicitly teach wherein the radar sensing file is created by the radar receiving the reflected signal and storing position coordinates and reflected signal intensity of a point in a form of point cloud data.
However, LI explicitly teaches wherein the radar sensing file is created by the radar receiving the reflected signal (Figs. 1A-B. Paragraph [0050]-LI discloses the radar acquires the point cloud data by scanning the environment periodically) and storing position coordinates and reflected signal intensity of a point in a form of point cloud data (Fig. 1A. Paragraph [0032]-LI discloses in the point cloud data collected by the primary radar, the data corresponding to each scanning point includes position information and reflectivity of the scanning point in a rectangular coordinate system corresponding to the primary radar.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, searching for a voxel corresponding to a position of a point stored in the radar sensing file with the teachings of LI of wherein the radar sensing file is created by the radar receiving the reflected signal and storing position coordinates and reflected signal intensity of a point in a form of point cloud data.
The combination would yield HAYAKAWA’s point cloud and voxel data processing method wherein the radar sensing file is created by the radar receiving the reflected signal and storing position coordinates and reflected signal intensity of a point in a form of point cloud data.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy of detecting objects based on received signals. Both HAYAKAWA and LI relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of an underground buried object, while LI allows the reflectivity of the radar to be calibrated more easily. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and LI et al. (US 20220214448 A1), Paragraph [0044].
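For illustration only, and not as part of the claim mapping, the claimed creation of a radar sensing file storing position coordinates and reflected signal intensity of points in a form of point cloud data can be sketched as follows; the CSV layout and all function names are hypothetical and do not appear in the cited references:

```python
import csv
import io

def write_radar_sensing_file(points):
    """Store each point's position coordinates and reflected-signal
    intensity in a form of point cloud data (hypothetical CSV layout)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["x", "y", "z", "intensity"])
    for p in points:
        writer.writerow([*p["xyz"], p["intensity"]])
    return buf.getvalue()

def load_radar_sensing_file(text):
    """Load the radar sensing file back into point records."""
    rows = csv.DictReader(io.StringIO(text))
    return [{"xyz": (float(r["x"]), float(r["y"]), float(r["z"])),
             "intensity": float(r["intensity"])} for r in rows]
```

The sketch only illustrates the one-record-per-point structure (coordinates plus intensity); the references do not specify a file format.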
Regarding claim 8, HAYAKAWA explicitly teaches an apparatus for determining a position of an object sensed by a radar using spatial voxelization (Fig. 3. Col. 15, Lines [58-61]-HAYAKAWA discloses a three-dimensional voxel data generating means 31 for editing and processing the received signals input from the receiving circuit 14 in terms of their relationship relative to the position (x, y) on the medium surface and the time (t).), the apparatus comprising:
an input/output unit (Fig. 1, #20 called a data analyzer, which includes #23 called a display unit and #22 called an input unit. Col. 15, Lines [45-47]-HAYAKAWA discloses a display unit 23 comprising a CRT monitor, a liquid crystal display or the like for displaying image data or output results at each stage of the processing.) connected to the processor so as to transmit and receive data to and from the processor (Fig. 1. Col. 15, Lines [41-50]-HAYAKAWA discloses this data analyzer 20 includes a data processing unit 21 comprising a microcomputer, a semiconductor memory or the like, an input unit 22 comprising a mouse, a keyboard or the like for receiving an instruction from the outside and a display unit 23 comprising a CRT monitor, a liquid crystal display or the like for displaying image data or output results at each stage of the processing. The data analyzer 20 further includes an external auxiliary storage 24 comprising a magnetic disc or the like for storing the data or output results at each stage of the processing.), the input/output unit being configured to receive a parameter required for voxelization (Col. 9, Lines [9-15]-HAYAKAWA discloses a section displaying means 33a for selecting a desired section of the three-dimensional voxel data S (x, y, t) generated by the three-dimensional voxel data generating means 31 in response to a manual operation from the input unit 22 such as a mouse and then displaying this section on the display unit 23.), and
a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).), searching for a voxel corresponding to a position of a point stored in the radar sensing file (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.),
a point extraction step of loading the radar sensing file (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).), searching for a voxel corresponding to the position of the point stored in the radar sensing file (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.), and
HAYAKAWA fails to explicitly teach a radar configured to transmit a radio wave signal and to receive a reflected signal reflected by the object; and a determination module configured to analyze a radar sensing file storing the reflected signal received by the radar using spatial voxelization and to determine a position of a reflector based on which calibration is performed, wherein the determination module comprises: a processor configured to execute program code; a storage unit connected to the processor so as to transmit and receive data to and from the processor, the storage unit being configured to store the program code, the radar sensing file storing the reflected signal received by the radar, and a dictionary; and the program code is written to perform: cumulatively storing a reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary; and a reference point position determination step of determining a position of a reference point for calibration based on the separately stored position coordinates of the point.
However, LI explicitly teaches a radar (Fig. 1B, #11 called a first radar. Paragraph [0030]) configured to transmit a radio wave signal and to receive a reflected signal reflected by the object (Figs. 1A-B. Paragraph [0049]-LI discloses multiple pieces of pose data may be calculated according to time when the primary radar or the secondary radar transmits and receives a radio beam); and
a determination module (Fig. 4 illustrates electronic device #400 with a processor #401 and memory #402. Paragraph [0130]-LI discloses when the electronic device 400 operates, the processor 401 communicates with the memory 402 through the bus 403, so that the processor 401 performs any point cloud data fusion method as described above.) configured to analyze a radar sensing file storing the reflected signal received by the radar using spatial voxelization and to determine a position of a reflector based on which calibration is performed (Figs. 1A-B. Paragraph [0032]-LI discloses in the point cloud data collected by the primary radar, the data corresponding to each scanning point includes position information and reflectivity of the scanning point in a rectangular coordinate system corresponding to the primary radar. The point cloud data collected by the secondary radar may include data respectively corresponding to multiple scanning points. In the point cloud data collected by the secondary radar, the data corresponding to each scanning point includes position information and reflectivity of the scanning point in a rectangular coordinate system corresponding to the secondary radar.),
wherein the determination module comprises (Fig. 4 illustrates electronic device #400 with a processor #401 and memory #402. Paragraph [0130]):
a processor (Fig. 4, #401 called a processor. Paragraph [0130].) configured to execute program code (Fig. 4. Paragraph [0131]-LI discloses further provide a computer-readable storage medium, which has a computer program stored thereon which, when executed by a processor, performs the point cloud data fusion method described in any of the above method embodiments.);
a storage unit (Fig. 4, #402 called a memory. Paragraph [0130]) connected to the processor so as to transmit and receive data to and from the processor (Fig. 4. Paragraph [0130]-LI discloses the processor 401 exchanges data with the external memory 4022 through the memory 4021. When the electronic device 400 operates, the processor 401 communicates with the memory 402 through the bus 403, so that the processor 401 performs any point cloud data fusion method as described above.), the storage unit being configured to store the program code, the radar sensing file storing the reflected signal received by the radar, and a dictionary (Fig. 4. Paragraph [0130]-LI discloses the memory 4021 here is also referred to as an internal memory, and is configured to temporarily store operation data in the processor 401 and data exchanged with the external memory 4022 such as a hard disk (wherein the operational data is signal received by the radar and point cloud data forms a dictionary).); and
the program code is written to perform (Fig. 4. Paragraph [0131]-LI discloses further provide a computer-readable storage medium, which has a computer program stored thereon which, when executed by a processor, performs the point cloud data fusion method described in any of the above method embodiments.):
cumulatively storing a reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary (Fig. 1A. Paragraph [0060]-LI discloses then at least one 3D voxel grid corresponding to each reflectivity of each scanning line is determined based on position information of the multiple target scanning points, i.e. at least one 3D voxel grid corresponding to each box in the reflectivity calibration table is determined. Then target reflectivity information in each box may be determined based on an average reflectivity value corresponding to at least one 3D voxel grid corresponding to each box, and the reflectivity calibration table is generated.); and
a reference point position determination step of determining a position of a reference point for calibration based on the separately stored position coordinates of the point (Fig. 1A. Paragraph [0032]-LI discloses in the point cloud data collected by the primary radar, the data corresponding to each scanning point includes position information and reflectivity of the scanning point in a rectangular coordinate system corresponding to the primary radar.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of LI of a radar configured to transmit a radio wave signal and to receive a reflected signal reflected by the object; and a determination module configured to analyze a radar sensing file storing the reflected signal received by the radar using spatial voxelization and to determine a position of a reflector based on which calibration is performed, wherein the determination module comprises: a processor configured to execute program code; a storage unit connected to the processor so as to transmit and receive data to and from the processor, the storage unit being configured to store the program code, the radar sensing file storing the reflected signal received by the radar, and a dictionary; and the program code is written to perform: cumulatively storing a reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary; and a reference point position determination step of determining a position of a reference point for calibration based on the separately stored position coordinates of the point.
The combination would yield HAYAKAWA’s point cloud and voxel data processing apparatus with a radar configured to transmit a radio wave signal and to receive a reflected signal reflected by the object; and a determination module configured to analyze a radar sensing file storing the reflected signal received by the radar using spatial voxelization and to determine a position of a reflector based on which calibration is performed, wherein the determination module comprises: a processor configured to execute program code; a storage unit connected to the processor so as to transmit and receive data to and from the processor, the storage unit being configured to store the program code, the radar sensing file storing the reflected signal received by the radar, and a dictionary; and the program code is written to perform: cumulatively storing a reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary; and a reference point position determination step of determining a position of a reference point for calibration based on the separately stored position coordinates of the point.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing apparatus that enhances the accuracy of detecting objects based on received signals. Both HAYAKAWA and LI relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of an underground buried object, while LI allows the reflectivity of the radar to be calibrated more easily. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and LI et al. (US 20220214448 A1), Paragraph [0044].
Although LI teaches the voxel ID and the dictionary, HAYAKAWA in view of LI fails to explicitly teach separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel having a largest accumulated value of the reflected signal intensity in the dictionary.
However, ISHIKAWA explicitly teaches separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel having a largest accumulated value of the reflected signal intensity in the dictionary (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of ISHIKAWA of separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel having a largest accumulated value of the reflected signal intensity in the dictionary.
The combination would yield HAYAKAWA’s point cloud and voxel data processing apparatus separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel having a largest accumulated value of the reflected signal intensity in the dictionary.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing apparatus that enhances the accuracy of detecting objects based on received signals. Both HAYAKAWA and ISHIKAWA relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of an underground buried object, while ISHIKAWA reduces noise in the subtraction image. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and ISHIKAWA et al. (US 20200160526 A1), Paragraph [0004].
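For illustration only, and not as part of the claim mapping, the claimed dictionary creation (cumulatively storing reflected signal intensity per voxel ID) and reference point position determination can be sketched as follows; all names are hypothetical, and the averaging of stored coordinates is an assumption, not language from the claim or the references:

```python
from collections import defaultdict

def voxel_id(point, voxel_size):
    """Map a point's (x, y, z) position coordinates to the ID of its voxel."""
    x, y, z = point["xyz"]
    return (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))

def build_dictionary(points, voxel_size):
    """Cumulatively store each point's reflected-signal intensity under
    the voxel ID of the searched voxel, creating the dictionary."""
    dictionary = defaultdict(float)
    for p in points:
        dictionary[voxel_id(p, voxel_size)] += p["intensity"]
    return dictionary

def reference_point(points, dictionary, voxel_size):
    """Determine a reference point from the separately stored coordinates of
    points in the voxel with the largest accumulated intensity."""
    best = max(dictionary, key=dictionary.get)
    coords = [p["xyz"] for p in points if voxel_id(p, voxel_size) == best]
    n = len(coords)
    return tuple(sum(c[i] for c in coords) / n for i in range(3))
```

The sketch only shows that accumulation is keyed by voxel ID and that the voxel with the largest accumulated value drives the reference point determination.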
Regarding claim 9, HAYAKAWA in view of LI and further in view of ISHIKAWA explicitly teach the apparatus according to claim 8.
HAYAKAWA fails to explicitly teach wherein the radar sensing file is created by the radar receiving the reflected signal and storing position coordinates and reflected signal intensity of a point in a form of point cloud data.
However, LI explicitly teaches wherein the radar sensing file is created by the radar receiving the reflected signal (Figs. 1A-B. Paragraph [0050]-LI discloses the radar acquires the point cloud data by scanning the environment periodically) and storing position coordinates and reflected signal intensity of a point in a form of point cloud data (Fig. 1A. Paragraph [0032]-LI discloses in the point cloud data collected by the primary radar, the data corresponding to each scanning point includes position information and reflectivity of the scanning point in a rectangular coordinate system corresponding to the primary radar.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of ISHIKAWA of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of LI wherein the radar sensing file is created by the radar receiving the reflected signal and storing position coordinates and reflected signal intensity of a point in a form of point cloud data.
The combination would yield HAYAKAWA’s point cloud and voxel data processing apparatus wherein the radar sensing file is created by the radar receiving the reflected signal and storing position coordinates and reflected signal intensity of a point in a form of point cloud data.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing apparatus that enhances the accuracy of detecting objects based on received signals. Both HAYAKAWA and LI relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of an underground buried object, while LI allows the reflectivity of the radar to be calibrated more easily. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and LI et al. (US 20220214448 A1), Paragraph [0044].
Claims 3-7 and 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over HAYAKAWA et al. (US 6573855 B1), hereinafter referenced as HAYAKAWA, in view of LI et al. (US 20220214448 A1), hereinafter referenced as LI, and further in view of ISHIKAWA et al. (US 20200160526 A1), hereinafter referenced as ISHIKAWA, and further in view of DOUILLARD et al. (US 20180364717 A1), hereinafter referenced as DOUILLARD.
Regarding claim 3, HAYAKAWA in view of LI and further in view of ISHIKAWA explicitly teach the method according to claim 2.
HAYAKAWA further explicitly teaches wherein the voxelization step comprises (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).):
a dictionary creation step of searching, in the voxel grid, for a voxel corresponding to the position of the point stored in the radar sensing file (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.),
HAYAKAWA fails to explicitly teach a space range in which a voxel grid is to be created; a voxel grid creation step of creating a voxel grid based on the voxel size and the space range; and cumulatively storing the reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary.
However, LI explicitly teaches a space range in which a voxel grid is to be created (Fig. 2. Paragraph [0042]-LI discloses if the first sample point cloud data is sample point cloud data within a first distance range, a second distance range corresponding to the voxel map data may be determined from the first distance range. The second distance range corresponding to the voxel map data is located within the first distance range.);
a voxel grid creation step of creating a voxel grid based on the voxel size and the space range (Fig. 2. Paragraph [0042]-LI discloses if the first sample point cloud data is sample point cloud data within a first distance range, a second distance range corresponding to the voxel map data may be determined from the first distance range. The second distance range corresponding to the voxel map data is located within the first distance range. And then the voxel map data in the second distance range is divided to obtain multiple 3D voxel grids within the second distance range, and initial data of each 3D voxel grid is determined, i.e. the initial data of each 3D voxel grid is set as a preset initial value); and
cumulatively storing the reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary (Figs. 1A and 2. Paragraph [0060]-LI discloses then at least one 3D voxel grid corresponding to each reflectivity of each scanning line is determined based on position information of the multiple target scanning points, i.e. at least one 3D voxel grid corresponding to each box in the reflectivity calibration table is determined. Then target reflectivity information in each box may be determined based on an average reflectivity value corresponding to at least one 3D voxel grid corresponding to each box, and the reflectivity calibration table is generated.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of ISHIKAWA of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of LI of a space range in which a voxel grid is to be created; a voxel grid creation step of creating a voxel grid based on the voxel size and the space range; and cumulatively storing the reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary.
The combination would yield HAYAKAWA’s point cloud and voxel data processing method with a space range in which a voxel grid is to be created; a voxel grid creation step of creating a voxel grid based on the voxel size and the space range; and cumulatively storing the reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy of detecting objects based on received signals. Both HAYAKAWA and LI relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of an underground buried object, while LI allows the reflectivity of the radar to be calibrated more easily. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and LI et al. (US 20220214448 A1), Paragraph [0044].
HAYAKAWA in view of LI and further in view of ISHIKAWA fail to explicitly teach a parameter setting step of setting parameters comprising a voxel size, lower and upper thresholds of the reflected signal intensity, and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
However, DOUILLARD explicitly teaches a parameter setting step of setting parameters comprising a voxel size (Fig. 2. Paragraph [0045]- DOUILLARD discloses the voxel space module 212 can define dimensions of a voxel space, including a length, width, and height of the voxel space. Further, the voxel space module 212 may determine a size of individual voxels.), lower and upper thresholds of the reflected signal intensity (Fig. 2. Paragraph [0045]-DOUILLARD discloses filtering may include removing data below a threshold amount of data per voxel (e.g., a number of LIDAR data points associated with a voxel) or over a predetermined number of voxels (e.g., a number of LIDAR data points associated with a number of proximate voxels). Further in paragraph [0015]- DOUILLARD discloses LIDAR data may be accumulated in the voxel space, with an individual voxel including processed data, such as: a number of data points, an average intensity, average x-value of LIDAR data associated with the individual voxel; average-y value of the LIDAR data associated with the individual voxel; average z-value of the LIDAR data associated with the individual voxel; and a covariance matrix based on the LIDAR data associated with the voxel.), and
a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds (Fig. 2. Paragraph [0045]-DOUILLARD discloses filtering may include removing data below a threshold amount of data per voxel (e.g., a number of LIDAR data points associated with a voxel) or over a predetermined number of voxels (e.g., a number of LIDAR data points associated with a number of proximate voxels) Further in paragraph [0015]- DOUILLARD discloses LIDAR data may be accumulated in the voxel space, with an individual voxel including processed data, such as: a number of data points, an average intensity, average x-value of LIDAR data associated with the individual voxel; average-y value of the LIDAR data associated with the individual voxel; average z-value of the LIDAR data associated with the individual voxel; and a covariance matrix based on the LIDAR data associated with the voxel.);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of DOUILLARD of a parameter setting step of setting parameters comprising a voxel size and lower and upper thresholds of the reflected signal intensity, and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
The combination would yield HAYAKAWA’s point cloud and voxel data processing method with a parameter setting step of setting parameters comprising a voxel size and lower and upper thresholds of the reflected signal intensity, and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy of detecting objects based on received signals. Both HAYAKAWA and DOUILLARD relate to processing and analyzing data in voxels: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of an underground buried object, while DOUILLARD allows complex multi-dimensional data, such as LIDAR data, to be represented in a voxel space, which can partition the data and allow for efficient evaluation and processing of the data. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and DOUILLARD et al. (US 20180364717 A1), Paragraph [0020].
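For illustration only, and not as part of the claim mapping, the claimed parameter setting, filtering, and voxel grid creation steps can be sketched as follows; all names are hypothetical, and treating the thresholds as an inclusive range is an assumption:

```python
import math

def filter_points(points, lower, upper):
    """Filtering step: exclude points whose reflected-signal intensities
    deviate from the range between the lower and upper thresholds."""
    return [p for p in points if lower <= p["intensity"] <= upper]

def create_voxel_grid(space_min, space_max, voxel_size):
    """Voxel grid creation step: derive the grid dimensions (voxel counts
    per axis) from the space range and the voxel size set in the
    parameter setting step."""
    return tuple(math.ceil((hi - lo) / voxel_size)
                 for lo, hi in zip(space_min, space_max))
```

The sketch only shows the dependency the claims recite: the grid follows from the voxel size plus the space range, and filtering is a threshold test on intensity.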
Regarding claim 4, HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD explicitly teach the method according to claim 3.
HAYAKAWA further explicitly teaches wherein the dictionary creation step comprises (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.):
a voxel search step of searching for a voxel corresponding to the position of the point stored in the radar sensing file in the voxel grid (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.);
a determination step of determining whether a voxel ID of the searched voxel is present in the dictionary (Fig. 14. Col. 22, Lines [33-44]-HAYAKAWA discloses the three-dimensional voxel data corresponding to the reception position has a received signal intensity as a data value, but the other three-dimensional voxels have no substantive data values and are deficient in the data. Here, for the sake of convenience, the former three-dimensional voxels is defined as a source voxel and the latter three-dimensional voxels are defined as deficient voxels, respectively.);
an addition step of, when the voxel ID of the searched voxel is not present in the dictionary, adding the voxel ID of the searched voxel to the dictionary (Fig. 14. Col. 22, Lines [33-44]-HAYAKAWA discloses the three-dimensional voxel data corresponding to the reception position has a received signal intensity as a data value, but the other three-dimensional voxels have no substantive data values and are deficient in the data. Here, for the sake of convenience, the former three-dimensional voxels is defined as a source voxel and the latter three-dimensional voxels are defined as deficient voxels, respectively. Because deficient voxels may be generated depending on the moving pathway, the data analyzer 21 includes a linear interpolating means 26 for interpolating such deficient voxels by a one-dimensional linear interpolation (wherein deficient voxels are not present voxels).),
an accumulation step of, when the voxel ID of the searched voxel is present in the dictionary (Fig. 14. Col. 22, Lines [33-44]-HAYAKAWA discloses the three-dimensional voxel data corresponding to the reception position has a received signal intensity as a data value, but the other three-dimensional voxels have no substantive data values and are deficient in the data. Here, for the sake of convenience, the former three-dimensional voxels is defined as a source voxel and the latter three-dimensional voxels are defined as deficient voxels, respectively. Because deficient voxels may be generated depending on the moving pathway, the data analyzer 21 includes a linear interpolating means 26 for interpolating such deficient voxels by a one-dimensional linear interpolation (wherein source voxels are present voxels).).
HAYAKAWA fails to explicitly teach storing the reflected signal intensity in the voxel ID, storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity; cumulatively storing the reflected signal intensity of the voxel ID of the searched voxel in the dictionary, cumulatively storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity.
However, LI explicitly teaches storing the reflected signal intensity in the voxel ID (Fig. 1A. Paragraph [0060]-LI discloses then at least one 3D voxel grid corresponding to each reflectivity of each scanning line is determined based on position information of the multiple target scanning points, i.e. at least one 3D voxel grid corresponding to each box in the reflectivity calibration table is determined. Then target reflectivity information in each box may be determined based on an average reflectivity value corresponding to at least one 3D voxel grid corresponding to each box, and the reflectivity calibration table is generated.), storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity (Fig. 1A. Paragraph [0042]-LI discloses when the data of the 3D voxel grid includes an average reflectivity value, a reflectivity variance and a number of scanning points, the initial data of each 3D voxel grid may be that the average reflectivity value is 0, the reflectivity variance is 0 and the number of scanning points is 0. And then the initial data of each 3D voxel grid is updated using the point cloud data of the multiple scanning points in the first sample point cloud data to obtain updated data of each 3D voxel grid (wherein the count is the number of scanning points).); and
cumulatively storing the reflected signal intensity of the voxel ID of the searched voxel in the dictionary (Fig. 1A. Paragraph [0060]-LI discloses then at least one 3D voxel grid corresponding to each reflectivity of each scanning line is determined based on position information of the multiple target scanning points, i.e. at least one 3D voxel grid corresponding to each box in the reflectivity calibration table is determined. Then target reflectivity information in each box may be determined based on an average reflectivity value corresponding to at least one 3D voxel grid corresponding to each box, and the reflectivity calibration table is generated.), cumulatively storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity (Fig. 1A. Paragraph [0042]-LI discloses when the data of the 3D voxel grid includes an average reflectivity value, a reflectivity variance and a number of scanning points, the initial data of each 3D voxel grid may be that the average reflectivity value is 0, the reflectivity variance is 0 and the number of scanning points is 0. And then the initial data of each 3D voxel grid is updated using the point cloud data of the multiple scanning points in the first sample point cloud data to obtain updated data of each 3D voxel grid (wherein the count is the number of scanning points).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, searching for a voxel corresponding to a position of a point stored in the radar sensing file with the teachings of LI of storing the reflected signal intensity in the voxel ID, storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity; cumulatively storing the reflected signal intensity of the voxel ID of the searched voxel in the dictionary, cumulatively storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity.
The modification provides HAYAKAWA’s point cloud and voxel data processing method with the steps of storing the reflected signal intensity in the voxel ID, storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity; and cumulatively storing the reflected signal intensity of the voxel ID of the searched voxel in the dictionary, cumulatively storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and LI relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in the three-dimensional voxel data, so as to enable high-efficiency, high-precision detection of the location of the underground buried object, while LI teaches that the reflectivity of the radar can be more easily calibrated. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and LI et al. (US 20220214448 A1), Paragraph [0044].
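For clarity of the record, the dictionary creation step mapped above (addition, accumulation, count, and average-intensity storage) can be sketched as follows. This is an illustrative sketch only; all function and variable names (voxel_id, accumulate_points, etc.) and the data layout are hypothetical and are not drawn from the claims or the cited references.

```python
# Hypothetical sketch of the claimed dictionary creation step: each point
# position is mapped to a voxel ID, and the reflected signal intensity and
# point count are accumulated per voxel ID so an average intensity can be
# stored (cf. LI, Paragraphs [0042] and [0060]).

def voxel_id(point, voxel_size):
    """Map a 3-D point (x, y, z) to an integer voxel-grid index tuple."""
    x, y, z = point
    return (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))

def accumulate_points(points, intensities, voxel_size):
    """Build the dictionary: voxel ID -> accumulated sum, count, and average."""
    dictionary = {}
    for point, intensity in zip(points, intensities):
        vid = voxel_id(point, voxel_size)
        if vid not in dictionary:                        # addition step
            dictionary[vid] = {"sum": 0.0, "count": 0}
        entry = dictionary[vid]                          # accumulation step
        entry["sum"] += intensity                        # cumulative intensity
        entry["count"] += 1                              # cumulative point count
        entry["avg"] = entry["sum"] / entry["count"]     # average intensity
    return dictionary
```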
Regarding claim 5, HAYAKAWA in view of LI and further in view of ISHIKAWA explicitly teach the method according to claim 2,
HAYAKAWA further explicitly teaches wherein the point extraction step comprises (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).):
a top voxel point extraction step of loading the radar sensing file (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).), searching for a voxel corresponding to the position of the point stored in the radar sensing file (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.),
HAYAKAWA fails to explicitly teach a space range in which a voxel grid is to be created; a voxel grid creation step of creating a voxel grid based on the voxel size and the space range.
However, LI explicitly teaches a space range in which a voxel grid is to be created (Fig. 2. Paragraph [0042]-LI discloses if the first sample point cloud data is sample point cloud data within a first distance range, a second distance range corresponding to the voxel map data may be determined from the first distance range. The second distance range corresponding to the voxel map data is located within the first distance range.);
a voxel grid creation step of creating a voxel grid based on the voxel size and the space range (Fig. 2. Paragraph [0042]-LI discloses if the first sample point cloud data is sample point cloud data within a first distance range, a second distance range corresponding to the voxel map data may be determined from the first distance range. The second distance range corresponding to the voxel map data is located within the first distance range. And then the voxel map data in the second distance range is divided to obtain multiple 3D voxel grids within the second distance range, and initial data of each 3D voxel grid is determined, i.e. the initial data of each 3D voxel grid is set as a preset initial value.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, searching for a voxel corresponding to a position of a point stored in the radar sensing file with the teachings of LI of a space range in which a voxel grid is to be created; a voxel grid creation step of creating a voxel grid based on the voxel size and the space range.
The modification provides HAYAKAWA’s point cloud and voxel data processing method with a space range in which a voxel grid is to be created and a voxel grid creation step of creating a voxel grid based on the voxel size and the space range.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and LI relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in the three-dimensional voxel data, so as to enable high-efficiency, high-precision detection of the location of the underground buried object, while LI teaches that the reflectivity of the radar can be more easily calibrated. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and LI et al. (US 20220214448 A1), Paragraph [0044].
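The voxel grid creation step addressed above (deriving the grid from the voxel size and the space range) can be illustrated by the following sketch. The function name and the ceiling-based dimensioning are assumptions made for illustration only and do not appear in the claims or references.

```python
# Illustrative sketch: derive the voxel grid dimensions from a space range
# ((xmin, xmax), (ymin, ymax), (zmin, zmax)) and a voxel size, rounding up so
# the grid fully covers the range. Names and rounding choice are hypothetical.
import math

def create_voxel_grid(space_range, voxel_size):
    """Return the number of voxels along each axis of the grid."""
    dims = []
    for lo, hi in space_range:
        dims.append(math.ceil((hi - lo) / voxel_size))
    return tuple(dims)
```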
HAYAKAWA in view of LI fail to explicitly teach separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel ID of a top voxel having a largest accumulated value of the reflected signal intensity in the dictionary, and extracting a point of the top voxel.
However, ISHIKAWA explicitly teaches separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel ID of a top voxel having a largest accumulated value of the reflected signal intensity in the dictionary (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID).), and extracting a point of the top voxel (Figs. 1, 4, and 10. Paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, searching for a voxel corresponding to a position of a point stored in the radar sensing file with the teachings of ISHIKAWA of separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel ID of a top voxel having a largest accumulated value of the reflected signal intensity in the dictionary, and extracting a point of the top voxel.
The modification provides HAYAKAWA’s point cloud and voxel data processing method with the steps of separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel ID of a top voxel having a largest accumulated value of the reflected signal intensity in the dictionary, and extracting a point of the top voxel.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and ISHIKAWA relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in the three-dimensional voxel data, so as to enable high-efficiency, high-precision detection of the location of the underground buried object, while ISHIKAWA teaches that noise in the subtraction image can be reduced. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and ISHIKAWA et al. (US 20200160526 A1), Paragraph [0004].
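The top voxel point extraction addressed above (selecting the voxel ID with the largest accumulated reflected signal intensity and separately storing the coordinates of its points) can be sketched as follows. The helper names and the dictionary layout are hypothetical illustrations, not limitations drawn from the claims or references.

```python
# Hypothetical sketch: the "top voxel" is the dictionary entry with the
# largest accumulated intensity; points whose voxel ID matches it are
# separately stored as the extracted top-voxel points.

def extract_top_voxel_points(dictionary, points, voxel_size):
    # top voxel: largest accumulated reflected signal intensity in the dictionary
    top_id = max(dictionary, key=lambda vid: dictionary[vid]["sum"])
    top_points = []
    for point in points:
        x, y, z = point
        vid = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        if vid == top_id:                 # determination: ID matches top voxel
            top_points.append(point)      # point position storage
    return top_id, top_points
```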
HAYAKAWA in view of LI and further in view of ISHIKAWA fail to explicitly teach a parameter setting step of setting parameters comprising a voxel size and lower and upper thresholds of the reflected signal intensity, and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
However, DOUILLARD explicitly teaches a parameter setting step of setting parameters comprising a voxel size (Fig. 2. Paragraph [0045]-DOUILLARD discloses the voxel space module 212 can define dimensions of a voxel space, including a length, width, and height of the voxel space. Further, the voxel space module 212 may determine a size of individual voxels.), lower and upper thresholds of the reflected signal intensity (Fig. 2. Paragraph [0045]-DOUILLARD discloses filtering may include removing data below a threshold amount of data per voxel (e.g., a number of LIDAR data points associated with a voxel) or over a predetermined number of voxels (e.g., a number of LIDAR data points associated with a number of proximate voxels). Further in paragraph [0015]-DOUILLARD discloses LIDAR data may be accumulated in the voxel space, with an individual voxel including processed data, such as: a number of data points, an average intensity, average x-value of LIDAR data associated with the individual voxel; average-y value of the LIDAR data associated with the individual voxel; average z-value of the LIDAR data associated with the individual voxel; and a covariance matrix based on the LIDAR data associated with the voxel.), and
a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds (Fig. 2. Paragraph [0045]-DOUILLARD discloses filtering may include removing data below a threshold amount of data per voxel (e.g., a number of LIDAR data points associated with a voxel) or over a predetermined number of voxels (e.g., a number of LIDAR data points associated with a number of proximate voxels). Further in paragraph [0015]-DOUILLARD discloses LIDAR data may be accumulated in the voxel space, with an individual voxel including processed data, such as: a number of data points, an average intensity, average x-value of LIDAR data associated with the individual voxel; average-y value of the LIDAR data associated with the individual voxel; average z-value of the LIDAR data associated with the individual voxel; and a covariance matrix based on the LIDAR data associated with the voxel.);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, searching for a voxel corresponding to a position of a point stored in the radar sensing file with the teachings of DOUILLARD of a parameter setting step of setting parameters comprising a voxel size, lower and upper thresholds of the reflected signal intensity, a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
The modification provides HAYAKAWA’s point cloud and voxel data processing method with a parameter setting step of setting parameters comprising a voxel size and lower and upper thresholds of the reflected signal intensity, and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and DOUILLARD relate to processing and analyzing data in voxels: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in the three-dimensional voxel data, so as to enable high-efficiency, high-precision detection of the location of the underground buried object, while DOUILLARD teaches that complex multi-dimensional data, such as LIDAR data, can be represented in a voxel space, which can partition the data, allowing for efficient evaluation and processing of the data. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and DOUILLARD et al. (US 20180364717 A1), Paragraph [0020].
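The filtering step addressed above (excluding points whose reflected signal intensity falls outside the range between the lower and upper thresholds) can be sketched minimally as follows. The function name and data layout are assumed for illustration and are not part of the claims or references.

```python
# Minimal sketch of the claimed filtering step: keep only those points whose
# reflected signal intensity lies within [lower, upper]; exclude the rest.

def filter_by_intensity(points, intensities, lower, upper):
    kept = []
    for point, intensity in zip(points, intensities):
        if lower <= intensity <= upper:   # in-range intensities are retained
            kept.append((point, intensity))
    return kept
```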
Regarding claim 6, HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD explicitly teach the method according to claim 5,
HAYAKAWA further explicitly teaches wherein the top voxel point extraction step comprises (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).):
a voxel search step of searching for a voxel corresponding to the position of the point stored in the radar sensing file in the voxel grid (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.);
HAYAKAWA in view of LI fail to explicitly teach a determination step of determining whether a voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary; and a point position storage step of, when the voxel ID of the searched voxel matches the voxel ID of the top voxel, storing the position of the point in the voxel ID of the top voxel.
However, ISHIKAWA explicitly teaches a determination step of determining whether a voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).); and
a point position storage step of, when the voxel ID of the searched voxel matches the voxel ID of the top voxel, storing the position of the point in the voxel ID of the top voxel (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, searching for a voxel corresponding to a position of a point stored in the radar sensing file with the teachings of ISHIKAWA of a determination step of determining whether a voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary; and a point position storage step of, when the voxel ID of the searched voxel matches the voxel ID of the top voxel, storing the position of the point in the voxel ID of the top voxel.
The modification provides HAYAKAWA’s point cloud and voxel data processing method with a determination step of determining whether a voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary, and a point position storage step of, when the voxel ID of the searched voxel matches the voxel ID of the top voxel, storing the position of the point in the voxel ID of the top voxel.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and ISHIKAWA relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in the three-dimensional voxel data, so as to enable high-efficiency, high-precision detection of the location of the underground buried object, while ISHIKAWA teaches that noise in the subtraction image can be reduced. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and ISHIKAWA et al. (US 20200160526 A1), Paragraph [0004].
Regarding claim 7, HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD explicitly teach the method according to claim 6,
HAYAKAWA in view of LI fail to explicitly teach wherein the top voxel point extraction step further comprises determining whether a current file comprising the point matches a target file when the voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary, when the current file matches the target file, the point position storage step is performed, and when the current file does not match the target file, the point position storage step is not performed.
However, ISHIKAWA explicitly teaches wherein the top voxel point extraction step further comprises determining whether a current file comprising the point matches a target file when the voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).),
when the current file matches the target file, the point position storage step is performed (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).), and
when the current file does not match the target file, the point position storage step is not performed (Fig. 8. Paragraph [0069]-ISHIKAWA discloses that if a position V_ij is identical with the position of any of the voxels 4010, the voxel value is obtained as I2_s_ij; if not, the intensity in the second image at the position V_ij is obtained through interpolation processing (wherein interpolation processing is not the point position storage step).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD of a method of determining a position of an object sensed by a radar using spatial voxelization, the method comprising: a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, searching for a voxel corresponding to a position of a point stored in the radar sensing file with the teachings of ISHIKAWA of wherein the top voxel point extraction step further comprises determining whether a current file comprising the point matches a target file when the voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary, when the current file matches the target file, the point position storage step is performed, and when the current file does not match the target file, the point position storage step is not performed.
The combination thus provides HAYAKAWA’s point cloud and voxel data processing method wherein the top voxel point extraction step further comprises determining whether a current file comprising the point matches a target file when the voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary; when the current file matches the target file, the point position storage step is performed, and when the current file does not match the target file, the point position storage step is not performed.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and ISHIKAWA relate to processing and analyzing data received through a radar system: HAYAKAWA seeks to provide a method or means which affords easy interpolation of a deficient voxel when such a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while in ISHIKAWA, noise in the subtraction image can be reduced. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and ISHIKAWA et al. (US 20200160526 A1), Paragraph [0004].
Regarding claim 10, HAYAKAWA in view of LI and further in view of ISHIKAWA explicitly teach the apparatus according to claim 9,
HAYAKAWA further explicitly teaches wherein the voxelization step comprises (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).):
a dictionary creation step of searching, in the voxel grid, for a voxel corresponding to the position of the point stored in the radar sensing file (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.).
HAYAKAWA fails to explicitly teach a space range in which a voxel grid is to be created; a voxel grid creation step of creating a voxel grid based on the voxel size and the space range; and cumulatively storing the reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary.
However, LI explicitly teaches a space range in which a voxel grid is to be created (Fig. 2. Paragraph [0042]-LI discloses if the first sample point cloud data is sample point cloud data within a first distance range, a second distance range corresponding to the voxel map data may be determined from the first distance range. The second distance range corresponding to the voxel map data is located within the first distance range.);
a voxel grid creation step of creating a voxel grid based on the voxel size and the space range (Fig. 2. Paragraph [0042]-LI discloses if the first sample point cloud data is sample point cloud data within a first distance range, a second distance range corresponding to the voxel map data may be determined from the first distance range. The second distance range corresponding to the voxel map data is located within the first distance range. And then the voxel map data in the second distance range is divided to obtain multiple 3D voxel grids within the second distance range, and initial data of each 3D voxel grid is determined, i.e. the initial data of each 3D voxel grid is set as a preset initial value); and
cumulatively storing the reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary (Figs. 1A and 2. Paragraph [0060]-LI discloses then at least one 3D voxel grid corresponding to each reflectivity of each scanning line is determined based on position information of the multiple target scanning points, i.e. at least one 3D voxel grid corresponding to each box in the reflectivity calibration table is determined. Then target reflectivity information in each box may be determined based on an average reflectivity value corresponding to at least one 3D voxel grid corresponding to each box, and the reflectivity calibration table is generated.).
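Purely as an illustrative sketch of the voxel grid creation and dictionary creation described above (not drawn from the application or the cited references; the function names, grid bounds, and values are hypothetical), the cumulative storage of reflected signal intensity per voxel ID could be expressed as:

```python
from collections import defaultdict
import math

def voxel_id(point, voxel_size, space_min):
    """Map a 3D point to an integer (i, j, k) voxel ID within the grid."""
    return tuple(math.floor((p - m) / voxel_size) for p, m in zip(point, space_min))

def build_dictionary(points, intensities, voxel_size, space_min, space_max):
    """Create a voxel grid over the space range and cumulatively store each
    point's reflected signal intensity under its voxel ID (hypothetical sketch)."""
    dictionary = defaultdict(float)
    for pt, inten in zip(points, intensities):
        # only points inside the space range fall within the voxel grid
        if all(lo <= p <= hi for p, lo, hi in zip(pt, space_min, space_max)):
            dictionary[voxel_id(pt, voxel_size, space_min)] += inten
    return dictionary

d = build_dictionary([(0.2, 0.2, 0.2), (0.3, 0.1, 0.4), (1.5, 0.0, 0.0)],
                     [2.0, 3.0, 1.0], 0.5, (0, 0, 0), (2, 2, 2))
```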
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of LI of a space range in which a voxel grid is to be created; a voxel grid creation step of creating a voxel grid based on the voxel size and the space range; and cumulatively storing the reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary.
The combination thus provides HAYAKAWA’s point cloud and voxel data processing apparatus with a space range in which a voxel grid is to be created; a voxel grid creation step of creating a voxel grid based on the voxel size and the space range; and cumulatively storing the reflected signal intensity of the point in a voxel ID of the searched voxel to create a dictionary.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing apparatus that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and LI relate to processing and analyzing data received through a radar system: HAYAKAWA seeks to provide a method or means which affords easy interpolation of a deficient voxel when such a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while LI allows the reflectivity of the radar to be calibrated more easily. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and LI et al. (US 20220214448 A1), Paragraph [0044].
HAYAKAWA in view of LI and further in view of ISHIKAWA fail to explicitly teach a parameter setting step of setting parameters comprising a voxel size and lower and upper thresholds of the reflected signal intensity; and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
However, DOUILLARD explicitly teaches a parameter setting step of setting parameters comprising a voxel size (Fig. 2. Paragraph [0045]- DOUILLARD discloses the voxel space module 212 can define dimensions of a voxel space, including a length, width, and height of the voxel space. Further, the voxel space module 212 may determine a size of individual voxels.), lower and upper thresholds of the reflected signal intensity (Fig. 2. Paragraph [0045]-DOUILLARD discloses filtering may include removing data below a threshold amount of data per voxel (e.g., a number of LIDAR data points associated with a voxel) or over a predetermined number of voxels (e.g., a number of LIDAR data points associated with a number of proximate voxels). Further in paragraph [0015]- DOUILLARD discloses LIDAR data may be accumulated in the voxel space, with an individual voxel including processed data, such as: a number of data points, an average intensity, average x-value of LIDAR data associated with the individual voxel; average-y value of the LIDAR data associated with the individual voxel; average z-value of the LIDAR data associated with the individual voxel; and a covariance matrix based on the LIDAR data associated with the voxel.), and
a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds (Fig. 2. Paragraph [0045]-DOUILLARD discloses filtering may include removing data below a threshold amount of data per voxel (e.g., a number of LIDAR data points associated with a voxel) or over a predetermined number of voxels (e.g., a number of LIDAR data points associated with a number of proximate voxels). Further in paragraph [0015]- DOUILLARD discloses LIDAR data may be accumulated in the voxel space, with an individual voxel including processed data, such as: a number of data points, an average intensity, average x-value of LIDAR data associated with the individual voxel; average-y value of the LIDAR data associated with the individual voxel; average z-value of the LIDAR data associated with the individual voxel; and a covariance matrix based on the LIDAR data associated with the voxel.);
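Purely as an illustrative sketch of the filtering step described above (not drawn from the application or the cited references; the function name, threshold values, and points are hypothetical), excluding points whose reflected signal intensities fall outside the range between the lower and upper thresholds could be expressed as:

```python
def filter_points(points, intensities, lower, upper):
    """Keep only points whose reflected signal intensity lies within
    [lower, upper]; all other points are excluded (hypothetical sketch)."""
    return [(p, i) for p, i in zip(points, intensities) if lower <= i <= upper]

# intensities 0.1 and 0.9 deviate from the range [0.2, 0.8] and are excluded
kept = filter_points([(0, 0, 0), (1, 1, 1), (2, 2, 2)],
                     [0.1, 0.5, 0.9], 0.2, 0.8)
```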
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of DOUILLARD of a parameter setting step of setting parameters comprising a voxel size and lower and upper thresholds of the reflected signal intensity; and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
The combination thus provides HAYAKAWA’s point cloud and voxel data processing apparatus with a parameter setting step of setting parameters comprising a voxel size and lower and upper thresholds of the reflected signal intensity, and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing apparatus that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and DOUILLARD relate to processing and analyzing data in voxels: HAYAKAWA seeks to provide a method or means which affords easy interpolation of a deficient voxel when such a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while in DOUILLARD, complex multi-dimensional data, such as LIDAR data, can be represented in a voxel space, which can partition the data, allowing for efficient evaluation and processing of the data. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and DOUILLARD et al. (US 20180364717 A1), Paragraph [0020].
Regarding claim 11, HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD explicitly teach the apparatus according to claim 10,
HAYAKAWA further explicitly teaches wherein the dictionary creation step comprises (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.):
a voxel search step of searching for a voxel corresponding to the position of the point stored in the radar sensing file in the voxel grid (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.);
a determination step of determining whether a voxel ID of the searched voxel is present in the dictionary (Fig. 14. Col. 22, Lines [33-44]-HAYAKAWA discloses the three-dimensional voxel data corresponding to the reception position has a received signal intensity as a data value, but the other three-dimensional voxels have no substantive data values and are deficient in the data. Here, for the sake of convenience, the former three-dimensional voxels is defined as a source voxel and the latter three-dimensional voxels are defined as deficient voxels, respectively.);
an addition step of, when the voxel ID of the searched voxel is not present in the dictionary, adding the voxel ID of the searched voxel to the dictionary (Fig. 14. Col. 22, Lines [33-44]-HAYAKAWA discloses the three-dimensional voxel data corresponding to the reception position has a received signal intensity as a data value, but the other three-dimensional voxels have no substantive data values and are deficient in the data. Here, for the sake of convenience, the former three-dimensional voxels is defined as a source voxel and the latter three-dimensional voxels are defined as deficient voxels, respectively. Because deficient voxels may be generated depending on the moving pathway, the data analyzer 21 includes a linear interpolating means 26 for interpolating such deficient voxels by a one-dimensional linear interpolation (wherein deficient voxels are not present voxels).),
an accumulation step of, when the voxel ID of the searched voxel is present in the dictionary (Fig. 14. Col. 22, Lines [33-44]-HAYAKAWA discloses the three-dimensional voxel data corresponding to the reception position has a received signal intensity as a data value, but the other three-dimensional voxels have no substantive data values and are deficient in the data. Here, for the sake of convenience, the former three-dimensional voxels is defined as a source voxel and the latter three-dimensional voxels are defined as deficient voxels, respectively. Because deficient voxels may be generated depending on the moving pathway, the data analyzer 21 includes a linear interpolating means 26 for interpolating such deficient voxels by a one-dimensional linear interpolation (wherein source voxels are present voxels).),
HAYAKAWA fails to explicitly teach storing the reflected signal intensity in the voxel ID, storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity; and cumulatively storing the reflected signal intensity of the voxel ID of the searched voxel in the dictionary, cumulatively storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity.
However, LI explicitly teaches storing the reflected signal intensity in the voxel ID (Fig. 1A. Paragraph [0060]-LI discloses then at least one 3D voxel grid corresponding to each reflectivity of each scanning line is determined based on position information of the multiple target scanning points, i.e. at least one 3D voxel grid corresponding to each box in the reflectivity calibration table is determined. Then target reflectivity information in each box may be determined based on an average reflectivity value corresponding to at least one 3D voxel grid corresponding to each box, and the reflectivity calibration table is generated.), storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity (Fig. 1A. Paragraph [0042]-LI discloses when the data of the 3D voxel grid includes an average reflectivity value, a reflectivity variance and a number of scanning points, the initial data of each 3D voxel grid may be that the average reflectivity value is 0, the reflectivity variance is 0 and the number of scanning points is 0. And then the initial data of each 3D voxel grid is updated using the point cloud data of the multiple scanning points in the first sample point cloud data to obtain updated data of each 3D voxel grid (wherein the count is the number of scanning points).); and
cumulatively storing the reflected signal intensity of the voxel ID of the searched voxel in the dictionary (Fig. 1A. Paragraph [0060]-LI discloses then at least one 3D voxel grid corresponding to each reflectivity of each scanning line is determined based on position information of the multiple target scanning points, i.e. at least one 3D voxel grid corresponding to each box in the reflectivity calibration table is determined. Then target reflectivity information in each box may be determined based on an average reflectivity value corresponding to at least one 3D voxel grid corresponding to each box, and the reflectivity calibration table is generated.), cumulatively storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity (Fig. 1A. Paragraph [0042]-LI discloses when the data of the 3D voxel grid includes an average reflectivity value, a reflectivity variance and a number of scanning points, the initial data of each 3D voxel grid may be that the average reflectivity value is 0, the reflectivity variance is 0 and the number of scanning points is 0. And then the initial data of each 3D voxel grid is updated using the point cloud data of the multiple scanning points in the first sample point cloud data to obtain updated data of each 3D voxel grid (wherein the count is the number of scanning points).).
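Purely as an illustrative sketch of the accumulation described above (not drawn from the application or the cited references; the class, function names, and values are hypothetical), cumulatively storing the reflected signal intensity and the point count under a voxel ID, and dividing the intensity by the count to store an average, could be expressed as:

```python
class VoxelStats:
    """Per-voxel-ID record: cumulative intensity, point count, running average."""
    def __init__(self):
        self.total = 0.0    # cumulative reflected signal intensity
        self.count = 0      # count of points recorded in this voxel ID
        self.average = 0.0  # total divided by count

def accumulate(dictionary, vid, intensity):
    """Add a point's intensity to its voxel entry, creating the entry when the
    voxel ID is not yet present in the dictionary (hypothetical sketch)."""
    entry = dictionary.setdefault(vid, VoxelStats())  # addition step if absent
    entry.total += intensity                          # accumulation step if present
    entry.count += 1
    entry.average = entry.total / entry.count         # average reflected intensity
    return entry

d = {}
accumulate(d, (0, 0, 0), 2.0)
accumulate(d, (0, 0, 0), 4.0)
```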
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of LI of storing the reflected signal intensity in the voxel ID, storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity; and cumulatively storing the reflected signal intensity of the voxel ID of the searched voxel in the dictionary, cumulatively storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity.
The combination thus provides HAYAKAWA’s point cloud and voxel data processing apparatus with storing the reflected signal intensity in the voxel ID, storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity; and cumulatively storing the reflected signal intensity of the voxel ID of the searched voxel in the dictionary, cumulatively storing a count of the point recorded in the voxel ID, dividing the reflected signal intensity by the count, and storing an average reflected signal intensity.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing apparatus that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and LI relate to processing and analyzing data received through a radar system: HAYAKAWA seeks to provide a method or means which affords easy interpolation of a deficient voxel when such a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while LI allows the reflectivity of the radar to be calibrated more easily. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and LI et al. (US 20220214448 A1), Paragraph [0044].
Regarding claim 12, HAYAKAWA in view of LI and further in view of ISHIKAWA explicitly teach the apparatus according to claim 9,
HAYAKAWA further explicitly teaches wherein the point extraction step comprises (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).):
a top voxel point extraction step of loading the radar sensing file (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).), searching for a voxel corresponding to the position of the point stored in the radar sensing file (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.),
HAYAKAWA fails to explicitly teach a space range in which a voxel grid is to be created; a voxel grid creation step of creating a voxel grid based on the voxel size and the space range.
However, LI explicitly teaches a space range in which a voxel grid is to be created (Fig. 2. Paragraph [0042]-LI discloses if the first sample point cloud data is sample point cloud data within a first distance range, a second distance range corresponding to the voxel map data may be determined from the first distance range. The second distance range corresponding to the voxel map data is located within the first distance range.);
a voxel grid creation step of creating a voxel grid based on the voxel size and the space range (Fig. 2. Paragraph [0042]-LI discloses if the first sample point cloud data is sample point cloud data within a first distance range, a second distance range corresponding to the voxel map data may be determined from the first distance range. The second distance range corresponding to the voxel map data is located within the first distance range. And then the voxel map data in the second distance range is divided to obtain multiple 3D voxel grids within the second distance range, and initial data of each 3D voxel grid is determined, i.e. the initial data of each 3D voxel grid is set as a preset initial value); and
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of LI of a space range in which a voxel grid is to be created; and a voxel grid creation step of creating a voxel grid based on the voxel size and the space range.
The combination thus provides HAYAKAWA’s point cloud and voxel data processing apparatus with a space range in which a voxel grid is to be created and a voxel grid creation step of creating a voxel grid based on the voxel size and the space range.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing apparatus that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and LI relate to processing and analyzing data received through a radar system: HAYAKAWA seeks to provide a method or means which affords easy interpolation of a deficient voxel when such a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while LI allows the reflectivity of the radar to be calibrated more easily. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and LI et al. (US 20220214448 A1), Paragraph [0044].
HAYAKAWA in view of LI fail to explicitly teach separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel ID of a top voxel having a largest accumulated value of the reflected signal intensity in the dictionary, and extracting a point of the top voxel.
However, ISHIKAWA explicitly teaches separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel ID of a top voxel having a largest accumulated value of the reflected signal intensity in the dictionary (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).), and extracting a point of the top voxel (Figs. 1, 4, and 10. Paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image.).
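Purely as an illustrative sketch of the claimed top voxel extraction (not drawn from the application or the cited references; the function names, IDs, and coordinates are hypothetical), identifying the voxel ID with the largest accumulated reflected intensity and separately storing the matching point coordinates could be expressed as:

```python
def top_voxel(dictionary):
    """Return the voxel ID with the largest accumulated reflected intensity."""
    return max(dictionary, key=dictionary.get)

def extract_top_voxel_points(points, voxel_ids, dictionary):
    """Separately store position coordinates of points whose voxel ID is the
    same as the top voxel's ID in the dictionary (hypothetical sketch)."""
    top = top_voxel(dictionary)
    return [p for p, vid in zip(points, voxel_ids) if vid == top]

d = {(0, 0, 0): 5.0, (1, 0, 0): 9.0}  # accumulated intensities per voxel ID
pts = extract_top_voxel_points([(0.1, 0.1, 0.1), (0.6, 0.1, 0.1)],
                               [(0, 0, 0), (1, 0, 0)], d)
```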
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of ISHIKAWA of separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel ID of a top voxel having a largest accumulated value of the reflected signal intensity in the dictionary, and extracting a point of the top voxel.
The combination thus provides HAYAKAWA’s point cloud and voxel data processing apparatus with separately storing position coordinates of the point when a voxel ID of the searched voxel is the same as a voxel ID of a top voxel having a largest accumulated value of the reflected signal intensity in the dictionary, and extracting a point of the top voxel.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing apparatus that enhances the accuracy in detecting objects based on received signals. Both HAYAKAWA and ISHIKAWA relate to processing and analyzing data received through a radar system: HAYAKAWA seeks to provide a method or means which affords easy interpolation of a deficient voxel when such a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while in ISHIKAWA, noise in the subtraction image can be reduced. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and ISHIKAWA et al. (US 20200160526 A1), Paragraph [0004].
HAYAKAWA in view of LI and further in view of ISHIKAWA fail to explicitly teach a parameter setting step of setting parameters comprising a voxel size and lower and upper thresholds of the reflected signal intensity, and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
However, DOUILLARD explicitly teaches a parameter setting step of setting parameters comprising a voxel size (Fig. 2. Paragraph [0045]- DOUILLARD discloses the voxel space module 212 can define dimensions of a voxel space, including a length, width, and height of the voxel space. Further, the voxel space module 212 may determine a size of individual voxels.), lower and upper thresholds of the reflected signal intensity (Fig. 2. Paragraph [0045]-DOUILLARD discloses filtering may include removing data below a threshold amount of data per voxel (e.g., a number of LIDAR data points associated with a voxel) or over a predetermined number of voxels (e.g., a number of LIDAR data points associated with a number of proximate voxels). Further in paragraph [0015]- DOUILLARD discloses LIDAR data may be accumulated in the voxel space, with an individual voxel including processed data, such as: a number of data points, an average intensity, average x-value of LIDAR data associated with the individual voxel; average-y value of the LIDAR data associated with the individual voxel; average z-value of the LIDAR data associated with the individual voxel; and a covariance matrix based on the LIDAR data associated with the voxel.), and
a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds (Fig. 2. Paragraph [0045]-DOUILLARD discloses filtering may include removing data below a threshold amount of data per voxel (e.g., a number of LIDAR data points associated with a voxel) or over a predetermined number of voxels (e.g., a number of LIDAR data points associated with a number of proximate voxels). Further, in paragraph [0015], DOUILLARD discloses LIDAR data may be accumulated in the voxel space, with an individual voxel including processed data, such as: a number of data points; an average intensity; an average x-value of LIDAR data associated with the individual voxel; an average y-value of the LIDAR data associated with the individual voxel; an average z-value of the LIDAR data associated with the individual voxel; and a covariance matrix based on the LIDAR data associated with the voxel.);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of DOUILLARD of a parameter setting step of setting parameters comprising a voxel size and lower and upper thresholds of the reflected signal intensity, and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
The combination results in HAYAKAWA’s point cloud and voxel data processing apparatus having a parameter setting step of setting parameters comprising a voxel size and lower and upper thresholds of the reflected signal intensity, and a filtering step of excluding points having reflected signal intensities deviating from a range between the lower and upper thresholds.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing method that enhances the accuracy of detecting objects based on received signals. Both HAYAKAWA and DOUILLARD relate to processing and analyzing data in voxels: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while in DOUILLARD complex multi-dimensional data, such as LIDAR data, can be represented in a voxel space, which can partition the data, allowing for efficient evaluation and processing of the data. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and DOUILLARD et al. (US 20180364717 A1), Paragraph [0020].
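For illustration only, and not as part of the prosecution record, the claimed parameter setting and filtering steps could be sketched as follows. The function names, default values, and the (x, y, z, intensity) point format are hypothetical assumptions, not taken from the claims or the cited references:

```python
# Illustrative sketch of the claimed parameter setting and filtering steps.
# Points are assumed to be (x, y, z, intensity) tuples; all names and
# default values are hypothetical.

def set_parameters(voxel_size=0.5, lower=10.0, upper=200.0):
    """Parameter setting step: voxel size plus lower and upper
    thresholds of the reflected signal intensity."""
    return {"voxel_size": voxel_size, "lower": lower, "upper": upper}

def filter_points(points, params):
    """Filtering step: exclude points whose reflected signal intensity
    deviates from the range between the lower and upper thresholds."""
    return [p for p in points if params["lower"] <= p[3] <= params["upper"]]

# Example: only the point with intensity 50.0 falls inside [10.0, 200.0].
points = [(0.0, 0.0, 1.0, 5.0), (1.0, 2.0, 0.5, 50.0), (3.0, 1.0, 2.0, 250.0)]
params = set_parameters()
kept = filter_points(points, params)
```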
Regarding claim 13, HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD explicitly teach the apparatus according to claim 12.
HAYAKAWA further explicitly teaches wherein the top voxel point extraction step comprises (Figs. 1 and 3. Col. 17, Lines [55-61]-HAYAKAWA discloses the digitized received signal is stored in a predetermined area of a memory 21a inside the data processing unit as the multiple scales source three-dimensional voxel data s (x, y, t) such that the coordinates (x, y, t) determined by the position (x, y) on the medium surface and the reflection time (t) of the reflected wave 5 from the object 2 are encoded (wherein the received signal is a reflected signal).):
a voxel search step of searching for a voxel corresponding to the position of the point stored in the radar sensing file in the voxel grid (Figs. 1 and 3. Col. 17, Lines [15-19]-HAYAKAWA discloses a section coordinate designating means 33b for designating a coordinate point on the displayed section in response to a manual operation on the input unit 22 such as a mouse, thereby to select the voxel at this coordinate point as an object voxel.);
HAYAKAWA in view of LI fail to explicitly teach a determination step of determining whether a voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary; and a point position storage step of, when the voxel ID of the searched voxel matches the voxel ID of the top voxel, storing the position of the point in the voxel ID of the top voxel.
However, ISHIKAWA explicitly teaches a determination step of determining whether a voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).); and
a point position storage step of, when the voxel ID of the searched voxel matches the voxel ID of the top voxel, storing the position of the point in the voxel ID of the top voxel (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of ISHIKAWA of a determination step of determining whether a voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary, and a point position storage step of, when the voxel ID of the searched voxel matches the voxel ID of the top voxel, storing the position of the point in the voxel ID of the top voxel.
The combination results in HAYAKAWA’s point cloud and voxel data processing apparatus having a determination step of determining whether a voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary, and a point position storage step of, when the voxel ID of the searched voxel matches the voxel ID of the top voxel, storing the position of the point in the voxel ID of the top voxel.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing apparatus that enhances the accuracy of detecting objects based on received signals. Both HAYAKAWA and ISHIKAWA relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while in ISHIKAWA noise in the subtraction image can be reduced. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and ISHIKAWA et al. (US 20200160526 A1), Paragraph [0004].
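For illustration only, and not as part of the prosecution record, the claimed voxel search, determination, and point position storage steps could be sketched as follows. The integer-grid voxel ID scheme, function names, and dictionary layout are hypothetical assumptions, not taken from the claims or the cited references:

```python
# Illustrative sketch of the claimed voxel search, determination, and
# point position storage steps. The voxel ID is assumed to be an integer
# (i, j, k) grid index; all names are hypothetical.

def voxel_id(point, voxel_size):
    """Voxel search step: map a point's (x, y, z) position to the ID of
    the voxel containing it in the voxel grid."""
    x, y, z = point[:3]
    return (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))

def store_if_top_voxel(point, top_voxel_dict, voxel_size):
    """Determination step: check whether the searched voxel's ID matches
    a top-voxel ID in the dictionary. Point position storage step: on a
    match, store the point's position under that voxel ID."""
    vid = voxel_id(point, voxel_size)
    if vid in top_voxel_dict:                  # determination step
        top_voxel_dict[vid].append(point[:3])  # point position storage step
    return top_voxel_dict

# Example: a dictionary holding one top voxel; the first point lands in
# it and is stored, the second falls in a different voxel and is skipped.
top = {(0, 0, 0): []}
store_if_top_voxel((0.2, 0.3, 0.1, 42.0), top, 1.0)
store_if_top_voxel((5.0, 5.0, 5.0, 10.0), top, 1.0)
```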
Regarding claim 14, HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD explicitly teach the apparatus according to claim 13.
HAYAKAWA in view of LI fail to explicitly teach wherein the top voxel point extraction step further comprises determining whether a current file comprising the point matches a target file when the voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary, when the current file matches the target file, the point position storage step is performed, and when the current file does not match the target file, the point position storage step is not performed.
However, ISHIKAWA explicitly teaches wherein the top voxel point extraction step further comprises determining whether a current file comprising the point matches a target file when the voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).),
when the current file matches the target file, the point position storage step is performed (Figs. 1, 4, and 10. Paragraph [0042]-ISHIKAWA discloses a position obtaining unit 1050 obtains the position of interest in the first image, and by using the deformation information obtained by the deformation information obtaining unit 1030, obtains the corresponding position in the second image corresponding to the position of interest in the first image. Further in paragraph [0071]-ISHIKAWA discloses I2_max obtained as above denotes the maximum intensity within the search area ω_pos2, and I2_min denotes the minimum intensity within the search area ω_pos2. In this light, the difference obtaining unit 1070 is an example of second obtaining means that obtains a maximum or a minimum of a voxel value or an interpolated value within a search area on the basis of at least one of a voxel value included in a partial area included in the search area and an interpolated value obtained by interpolation of a voxel of the second image, which is the target image (wherein search area ω_pos2 is the ID.).), and
when the current file does not match the target file, the point position storage step is not performed (Fig. 8. Paragraph [0069]-ISHIKAWA discloses that if a position V_ij is identical with the position of any of the voxels 4010, the voxel value is obtained as I2_s_ij; if not, the intensity in the second image at the position V_ij is obtained through interpolation processing (wherein interpolation processing is not the point position storage step).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HAYAKAWA in view of LI and further in view of ISHIKAWA and further in view of DOUILLARD of an apparatus for determining a position of an object sensed by a radar using spatial voxelization, the apparatus comprising: an input/output unit connected to the processor so as to transmit and receive data to and from the processor, the input/output unit being configured to receive a parameter required for voxelization, and a voxelization step of loading a radar sensing file storing a reflected signal returned as a result of a radio wave signal transmitted by the radar being reflected by the object, and searching for a voxel corresponding to a position of a point stored in the radar sensing file, with the teachings of ISHIKAWA wherein the top voxel point extraction step further comprises determining whether a current file comprising the point matches a target file when the voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary; when the current file matches the target file, the point position storage step is performed, and when the current file does not match the target file, the point position storage step is not performed.
The combination results in HAYAKAWA’s point cloud and voxel data processing apparatus wherein the top voxel point extraction step further comprises determining whether a current file comprising the point matches a target file when the voxel ID of the searched voxel matches the voxel ID of the top voxel in the dictionary; when the current file matches the target file, the point position storage step is performed, and when the current file does not match the target file, the point position storage step is not performed.
The motivation behind the modification would have been to obtain a point cloud and voxel data processing apparatus that enhances the accuracy of detecting objects based on received signals. Both HAYAKAWA and ISHIKAWA relate to processing and analyzing data received through a radar system: HAYAKAWA provides a method or means that affords easy interpolation of a deficient voxel when a voxel deficient in data is present in three-dimensional voxel data, so as to enable high-efficiency and high-precision detection of the location of the underground buried object, while in ISHIKAWA noise in the subtraction image can be reduced. Please see HAYAKAWA et al. (US 6573855 B1), Col. 4, Lines [19-34], and ISHIKAWA et al. (US 20200160526 A1), Paragraph [0004].
Conclusion
The prior art made of record and not relied upon, listed below, is considered pertinent to applicant’s disclosure.
TORIYA et al. (US 20200166626 A1) - This information processing device is provided with: a candidate point extraction unit for extracting, on the basis of the position in three-dimensional space of a target point specified in an intensity map of a signal from an observed object acquired through radar and the shape of the observed object, a candidate point that contributes to the signal at the target point; an evaluation unit for evaluating the reliability of the candidate point in terms of signal analysis on the basis of geographic information indicating the state of a surface including the candidate point; and an output unit for outputting information indicating the result of the evaluation.
KIM et al. (US 12184849 B2) - A device and method for performing fast grid-based refining segmentation (FGRS) for video-based point cloud compression (V-PCC) is proposed. The method may include dividing a space of a three-dimensional (3D) point cloud into multiple grids to derive multiple voxels, searching for filled voxels including one or more points, searching for surrounding voxels which are filled voxels within a certain radius from each of the filled voxels, and searching for edge voxels which are present at a segment edge among all the filled voxels. The method may also include calculating smooth scores for surrounding voxels of each edge voxel and calculating a smooth score sum which is a smooth score for the edge voxel on the basis of the smooth scores of the surrounding voxels of the edge voxel, and updating a projection plane index (PPI) for each individual point in the edge voxel using the calculated smooth score sum.
TAKIGUCHI et al. (US 20100034426 A1) - It is an object to measure a position of a feature around a road. An image memory unit stores images in which the neighborhood of the road is captured. Further, a three-dimensional point cloud model memory unit 709 stores a point cloud showing three-dimensional coordinates obtained by laser measurement, which is carried out simultaneously with the image-capturing of the images, as a road surface shape model. A model projecting unit 172 projects a point cloud on the image, and an image displaying unit 341 displays the point cloud superimposed with the image on the displaying device. Using an image point inputting unit 342, a pixel on a feature of a measurement target is specified by a user as a measurement image point. A neighborhood extracting unit 171 extracts a point which is located adjacent to the measurement image point and superimposed on the feature for the measurement target from the point cloud. A feature position calculating unit 174 outputs three-dimensional coordinates shown by the extracted point as three-dimensional coordinates of the feature for the measurement target.
WINZELL et al. (US 10878542 B2) - A method and a system for filtering thermal image data. The method comprises: capturing thermal image data by a thermal image detector; forming a signal distribution of intensity values; identifying a first part in the signal distribution of intensity values, the first part being a peak having an intensity width equal to or smaller than a predetermined intensity span being based on a resolution parameter of the thermal image detector; identifying a second part having an intensity width larger than the predetermined intensity span; determining an intensity range between the first part and the second part; and filtering, if the intensity range is larger than a predetermined minimum intensity range, the thermal image data by excluding thermal image data forming part of the first part.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ETHAN N WOLFSON whose telephone number is (571)272-1898. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ETHAN N WOLFSON/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673