Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to the patent application filed on June 27, 2024. Claims 1-20 are currently pending.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in Application No. KR10-2023-0156605, filed on November 13, 2023.
Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. See 37 CFR 41.154(b) and 41.202(e). Failure to provide a certified translation may result in no benefit being accorded for the non-English application.
No action on the part of the applicant is required at this time.
Claim Rejections – 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 6, 10-13, 17, & 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2018/0232947 A1, to Nehmadi et al., hereafter Nehmadi.
Regarding Claim 1, Nehmadi discloses A vehicle control apparatus (Nehmadi [0049], Examiner Note: Nehmadi discloses an apparatus which controls a vehicle)
comprising: a sensor (Nehmadi [0048], Examiner Note: Nehmadi discloses a high-density video-rate capable passive sensor and a low-density active sensor);
memory storing a plurality of models, each model of the plurality of models corresponding to a respective object type (Nehmadi [0048] & [0151], Examiner Note: Nehmadi discloses a memory, 215, for 3D modeling. Nehmadi further discloses being able to distinguish object type); and
a processor configured to (Nehmadi [0048], Examiner Note: Nehmadi discloses a processing system, 210):
obtain, via the sensor, a point cloud corresponding to a target object (Nehmadi [0052], Examiner Note: Nehmadi discloses collecting a 3D map (i.e. point data) of the surrounding area);
match, based on identifying a target model, of the plurality of models, that corresponds to an object type of the target object, a first reference point, which is included in the point cloud and which corresponds to a designated location of the target object, with a second reference point, which is included in the target model and which corresponds to the designated location (Nehmadi [0058], Examiner Note: Nehmadi discloses determining changes in distances between multiple points (i.e. first reference point & second reference point) of an object or objects which are then measured to each other to determine whether the vehicle is moving or not);
determine, based on matching a first heading direction of the point cloud with a second heading direction of the target model, a proportion, of the target model, that overlaps with the point cloud, wherein each of the first heading direction and the second heading direction indicates a moving direction of the target object (Nehmadi [0066]-[0067] & Fig. 4, Examiner Note: Nehmadi discloses comparing a first frame and second frame which represents two different vehicle positions (i.e. headings) and overlaying the 3D points on the passive image in order to determine a change in the vehicle’s movement, and therefore, direction);
determine, based on the proportion, an occlusion level of the point cloud (Nehmadi [0121] & [0132], Examiner Note: Nehmadi discloses determining the occlusions when applying motion data to objects); and
for controlling a vehicle, output a signal indicating the occlusion level of the point cloud (Nehmadi [0155], Examiner Note: Nehmadi discloses the vehicle tracking specific features (i.e. signal is output controlling the vehicle) as a result of occlusions).
Regarding Claim 2, Nehmadi discloses The vehicle control apparatus of claim 1,
Nehmadi further discloses wherein the processor is further configured to train a neural network model based on the point cloud and the occlusion level (Nehmadi [0079]-[0080], Examiner Note: Nehmadi discloses training a neural network to perform object detection (i.e. point cloud) and tracking objects frame-to-frame (i.e. occlusion)).
Regarding Claim 6, Nehmadi discloses The vehicle control apparatus of claim 1, wherein the processor is configured to determine the proportion by: determining, based on a horizontal angle range of the sensor, a predetermined horizontal resolution; determining, based on a vertical angle range of the sensor, a predetermined vertical resolution (Nehmadi [0061], Examiner Note: Nehmadi discloses the LiDAR sensor, 282, is able to scan around the vehicle at 360 degrees (i.e. horizontal and vertical angle ranges and resolutions));
splitting, based on the predetermined horizontal resolution and the predetermined vertical resolution, the point cloud into grids (Nehmadi [0020] & Fig. 1B, Examiner Note: Nehmadi discloses a LiDAR 3D render map in the form of a grid); and
determining, based on the grids, the proportion (Nehmadi [0066]-[0067] & Fig. 4, Examiner Note: Nehmadi discloses comparing a first frame and second frame which represents two different vehicle positions and overlaying the 3D points on the passive image in order to determine an occlusion between the images).
Regarding Claim 10, Nehmadi discloses The vehicle control apparatus of claim 1, wherein the processor is further configured to:
Nehmadi further discloses perform, based on the occlusion level, labeling on the point cloud (Nehmadi [0132], Examiner Note: Nehmadi discloses using an occlusion grid to assign (i.e. label) a confidence level for each voxel based on sensor motion readings).
Regarding Claim 11, Nehmadi discloses The vehicle control apparatus of claim 1, wherein the processor is further configured to:
Nehmadi further discloses determine whether to determine the occlusion level, based on at least one of a color of the target object or a distance between the vehicle and the target object (Nehmadi [0132]-[0136] & Fig. 13, Examiner Note: Nehmadi discloses using colors on voxels to determine occlusion).
With respect to Claim 12, all the limitations have been analyzed in view of claim 1, and it has been determined that claim 12 does not recite any new limitations beyond those previously recited in claim 1. Therefore, claim 12 is also rejected under the same rationale as claim 1.
With respect to Claim 13, all the limitations have been analyzed in view of claim 2, and it has been determined that claim 13 does not recite any new limitations beyond those previously recited in claim 2. Therefore, claim 13 is also rejected under the same rationale as claim 2.
With respect to Claim 17, all the limitations have been analyzed in view of claim 6, and it has been determined that claim 17 does not recite any new limitations beyond those previously recited in claim 6. Therefore, claim 17 is also rejected under the same rationale as claim 6.
With respect to Claim 20, all the limitations have been analyzed in view of claim 11, and it has been determined that claim 20 does not recite any new limitations beyond those previously recited in claim 11. Therefore, claim 20 is also rejected under the same rationale as claim 11.
Claim Rejections – 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3 & 14 are rejected under 35 U.S.C. 103 as being unpatentable over US 2018/0232947 A1, to Nehmadi et al., hereafter Nehmadi, as applied to claims 1 & 12 above, and further in view of US 2019/0311636 A1, to Fanelli et al., hereafter Fanelli.
Regarding Claim 3, Nehmadi discloses The vehicle control apparatus of claim 1, wherein the processor is configured to match the first reference point with the second reference point by:
However, Nehmadi does not specifically disclose determining a first space in a form of a first hexahedron comprising the point cloud; determining a second space in a form of a second hexahedron comprising the target model; determining a first center point, corresponding to an intersection of lines connecting vertices forming the first space, and a second center point, corresponding to an intersection of lines connecting vertices forming the second space; and matching the first reference point with the second reference point based on matching the first center point with the second center point.
Fanelli, directed to the same problem, teaches determining a first space in a form of a first hexahedron comprising the point cloud; determining a second space in a form of a second hexahedron comprising the target model; determining a first center point, corresponding to an intersection of lines connecting vertices forming the first space, and a second center point, corresponding to an intersection of lines connecting vertices forming the second space; and matching the first reference point with the second reference point based on matching the first center point with the second center point (Fanelli [0015], Examiner Note: Fanelli teaches voxels (i.e. first hexahedron with a center and intersecting lines connecting vertices) associated with airspace obstacles and coordinate information (i.e. second hexahedron with a center and intersecting lines connecting vertices), and comparing them in order to identify matches, which would require center-point-to-center-point comparison).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the 3D data image comparison system of Nehmadi with the voxel comparison of Fanelli in order to account for the possibility of obstacles shifting over time (Fanelli [0015]).
With respect to Claim 14, all the limitations have been analyzed in view of claim 3, and it has been determined that claim 14 does not recite any new limitations beyond those previously recited in claim 3. Therefore, claim 14 is also rejected under the same rationale as claim 3.
Claims 4-5 & 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over US 2018/0232947 A1, to Nehmadi et al., hereafter Nehmadi, as applied to claims 1 & 12 above, and further in view of US 2020/0200912 A1, to Chen et al., hereafter Chen.
Regarding Claim 4, as shown above, Nehmadi discloses The vehicle control apparatus of claim 1, wherein the processor is further configured to:
However, Nehmadi does not specifically disclose match the first heading direction with the second heading direction based on scaling a first size of the target model to match a second size of the point cloud.
Chen, in the same field of endeavor, teaches match the first heading direction with the second heading direction based on scaling a first size of the target model to match a second size of the point cloud (Chen [0020], Examiner Note: Chen teaches adjusting the voxel size as needed to compare when detecting objects).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the 3D data image comparison system of Nehmadi with the voxel size change of Chen in order to better identify objects (Chen [0002]).
Regarding Claim 5, Nehmadi in view of Chen, as shown above, teaches The vehicle control apparatus of claim 4, wherein the processor is configured to: scale the first size of the target model to match the second size of the point cloud based on changing at least one of a width, a length, or a height of the target model (Chen [0020], Examiner Note: Chen teaches being able to change the width, length, and height of the model).
With respect to Claim 15, all the limitations have been analyzed in view of claim 4, and it has been determined that claim 15 does not recite any new limitations beyond those previously recited in claim 4. Therefore, claim 15 is also rejected under the same rationale as claim 4.
With respect to Claim 16, all the limitations have been analyzed in view of claim 5, and it has been determined that claim 16 does not recite any new limitations beyond those previously recited in claim 5. Therefore, claim 16 is also rejected under the same rationale as claim 5.
Allowable Subject Matter
Claims 7-9 & 18-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: …adding, based on identifying a first point corresponding to at least part of the target object in at least part of the first shaded area, a first voxel, comprising the first point, to a first occupation voxel; adding a second voxel, comprising a second point corresponding to at least part of the target object in at least part of the second shaded area, to a second occupation voxel; and…wherein the first shaded area and the second shaded area are out of a detection range of the sensor, wherein the first occupation voxel comprises voxels having at least one point that is identified in the point cloud, and wherein the second occupation voxel comprises voxels having at least one point that is identified in the target model. These limitations, when considered in combination with the other claim limitations, render the claims novel and non-obvious over the prior art of record. Specifically, the prior art of record neither discloses nor teaches adding points on voxels representing shaded areas to occupation voxels, in which the shaded areas are outside the detection range of a sensor and in which the two different occupation voxels are derived from a point cloud and a target model, respectively.
The closest prior art is US 2018/0232947 A1, to Nehmadi et al. (hereafter Nehmadi), which determines the movement of an environment with sensors and uses 3D map data, including voxels, to update the collected information with the latest information. However, Nehmadi makes no reference to the level of detail regarding shaded areas, nor to the adding together of voxels, that is required by the instant application.
Another close prior art reference is US 2019/0311636 A1, to Fanelli et al. (hereafter Fanelli), which discloses updating airspace and determining actions to perform, with respect to obstacles, based on airspace voxels. However, Fanelli makes no reference to the level of detail regarding shaded areas, nor to the adding together of voxels, that is required by the instant application.
The combination of Nehmadi and Fanelli fails to disclose adding points on voxels representing shaded areas to occupation voxels in which the shaded areas are outside the detection range of a sensor and in which the two different occupation voxels are derived from a point cloud and a target model, respectively. Therefore, claim 7 and its dependent claims 8 & 9 are allowable over the prior art of record, as are the corresponding claims 18 & 19 depending from the other independent claim.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Doria, David (US 2019/0138823 A1) discloses detecting occluded regions with a grid representation of a scene in order to show free and not free space in an environment.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL T DOWLING whose telephone number is (703) 756-1459. The examiner can normally be reached M-Th: 8:00-5:30, First F: Off, Second F: 8:00-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Helal Algahaim can be reached at (571) 270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL T DOWLING/ Examiner, Art Unit 3666
/HELAL A ALGAHAIM/ SPE, Art Unit 3666