Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 02/13/2026 have been fully considered but they are not persuasive.
Applicant argues that Kake does not teach LiDAR, as recited in amended Claims 1 and 11.
While it is true that Kake fails to teach a LiDAR system, Skrobanski does, as shown in the previous rejections of Claims 9 and 19. Further, Applicant has not made any argument against Skrobanski.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-8, 10, 11, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kake (WO 2018062251 A1) in view of Skrobanski (US 20160012638 A1).
Claim 1: Kake teaches a data processing apparatus, comprising:
at least one memory that is configured to store instructions, at least one processor that is configured to execute the instructions to: display, when a viewpoint for display of point group data obtained by scanning a display object is input, the point group data viewed from the viewpoint (pg. 5);
and determine image data corresponding to the point group data to be displayed, from a plurality of pieces of image data obtained by imaging the display object by a camera from different positions, and to display the determined image data alongside the point group data (pg. 5).
Kake does not teach, but Skrobanski does teach, wherein the point group data is data on the display object acquired by LiDAR ([0032] - LiDAR scanning module; [0049] - can create virtual models).
It would have been obvious before the effective filing date to use the LiDAR, as taught by Skrobanski, in the apparatus as taught by Kake (specifically, instead of Kake's cameras), because LiDAR is a well-known method that would yield predictable results. Additionally, compared to cameras, LiDAR allows for faster distance measurement, as there is less processing required to calculate distance (i.e., a camera system would require a stereo pair of two images to calculate depth, while LiDAR uses only time of flight (TOF)).
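For illustration of this rationale only (general sensing principles, not a disclosure of either Kake or Skrobanski): stereo depth recovery computes depth as Z = f*B/d, where f is the focal length, B is the baseline between the two cameras, and d is the per-pixel disparity, which must first be found by correspondence matching across the two images; a LiDAR return yields range directly as r = c*t/2, where c is the speed of light and t is the measured round-trip (time-of-flight) of the pulse, requiring only a single multiplication per return.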
Claim 3: Kake, as modified, teaches the data processing apparatus according to Claim 1,
wherein the at least one processor is further configured to execute the instructions to calculate imaged structure portions common to an imaging area of the display object captured in the image data and structure data indicating positions of structures disposed in the display object (pg. 9 - virtual space construction),
and to calculate visible structure portions common to visible areas facing the viewpoint side in the point group data and the structure data (pg. 9 - determining image from viewing position),
and to determine the image data to be displayed, from the plurality of pieces of image data based on the imaged structure portions corresponding to the calculated visible structure portions (pg. 9 - displaying).
Claim 4: Kake, as modified, teaches the data processing apparatus according to Claim 3, wherein the at least one processor is further configured to execute the instructions to determine the image data including the most imaged structure portions corresponding to the visible structure portions, as the image data to be displayed (pg. 9 - removing human voxels and identifying position in real space).
Claim 5: Kake, as modified, teaches the data processing apparatus according to Claim 3, wherein the at least one processor is further configured to execute the instructions to hold the plurality of pieces of image data, to hold imaging information including imaging conditions of the plurality of pieces of image data (pg. 5 - creating voxels from images implies the images are retained),
to hold site information including the structure data, and an image data display unit configured to display the determined image data (pg. 6 - generating and displaying a spatial image).
Claim 6: Kake, as modified, teaches the data processing apparatus according to Claim 5, wherein the at least one processor is further configured to execute the instructions to calculate the imaging area on a structure map indicating layout of the structures in the display object by using the imaging information, and calculate the imaged structure portions by using the calculated imaging area (pg. 5 - mapping with voxels).
Claim 7: Kake, as modified, teaches the data processing apparatus according to Claim 3, wherein the at least one processor is further configured to execute the instructions to hold the point group data obtained by scanning the display object (pg. 5 - creating voxels implies the voxels are retained),
to receive input of the viewpoint for display of the point group data, to calculate sight line information including relationship between the input viewpoint and the point group data, and to display the point group data viewed from the viewpoint based on the calculated sight line information (pg. 6).
Claim 8: Kake, as modified, teaches the data processing apparatus according to Claim 7, wherein the at least one processor is further configured to execute the instructions to calculate the visible areas on a structure map indicating layout of the structures in the display object by using the sight line information, and calculate the visible structure portions by using the calculated visible areas (pg. 5 - using voxels to map structures).
Claim 10: Kake, as modified, teaches a data processing system, comprising: the data processing apparatus according to Claim 1; wherein the at least one processor is further configured to execute the instructions to acquire the point group data (pg. 3-4 - acquiring DM image).
Claim 11: Claim 11 is a method claim corresponding to Claim 1. Thus, see the rejection of Claim 1 above.
Claims 13-18: Claims 13-18 are method claims corresponding to Claims 3-8. Thus, see the rejections of Claims 3-8 above.
Claim 20: Claim 20 is a method claim corresponding to Claim 10. Thus, see the rejection of Claim 10 above.
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kake (WO 2018062251 A1) in view of Skrobanski (US 20160012638 A1), and further in view of Taya (US 20180295289 A1).
Claim 2: Kake, as modified, teaches the data processing apparatus according to Claim 1. Kake, as modified, does not teach, but Taya does teach wherein the at least one processor is further configured to execute the instructions to: display the image data in synchronization with a timing when the point group data is displayed ([0084] - displaying image data and point group together for user selection).
It would have been obvious before the effective filing date to use the synchronized display, as taught by Taya, in the apparatus as taught by Kake, as modified, because, as Taya teaches, this allows for easier selection of a virtual viewpoint ([0084]).
Claim 12: Claim 12 is a method claim corresponding to Claim 2. Thus, see the rejection of Claim 2 above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLARA CHILTON whose telephone number is (703)756-1080. The examiner can normally be reached Monday-Friday 6-2 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Helal Algahaim can be reached at 571-270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CLARA G CHILTON/Examiner, Art Unit 3645
/HELAL A ALGAHAIM/SPE, Art Unit 3645