Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a storage control unit” in claims 1-6.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For the sake of further prosecution, the Examiner will treat: "a storage control unit" as hardware configured to perform the recited functions.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. § 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 5 is rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claim 5 recites “wherein the storage control unit is configured to: acquire all of the minimum necessary 3D point cloud data from the storage area of the layer one layer lower at a time for the storage areas other than a storage area of an upper layer one layer higher than the lowest layer among the storage areas of the plurality of layers, and acquire the minimum necessary 3D point cloud data for each of the unit spaces from the storage area of the lowest layer arranged on the server for the storage area of the upper layer one layer higher than the lowest layer.” It is unclear what “at a time” is applied to and which layers are referred to by “for the storage areas other than a storage area of an upper layer one layer higher than the lowest layer among the storage areas of the plurality of layers”. For the purpose of prior art analysis, Examiner assumes “for the storage areas other than a storage area of an upper layer one layer higher than the lowest layer among the storage areas of the plurality of layers” refers to all layers two or more layers above the lowest layer and “at a time” is acquiring, in one operation, the point cloud data for that storage area from the immediately lower layer.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Honma (JP6301678B2) and Kato (WO2021065535A1).
Regarding claim 1, Honma teaches a point cloud data display device (Honma; ¶0035, describes a terminal including a display) comprising: storage areas of a plurality of layers configured to hierarchically store three-dimensional (3D) point cloud data measured for each of a plurality of continuous unit spaces (Honma; ¶0094, describes terminal-side storage unit, storing point cloud data in multiple hierarchical mesh layers, M1 (1 meter mesh width), M2 (10 meter), and M3 (100 meter), with, ¶0143, point cloud data stored in corresponding mesh tables. ¶0104-106, 100 meshes from the finer mesh layer M1 are encompassed within a single mesh of the coarser mesh layer M2 and representative mesh positions from the finer layer are selected to populate the coarser layer. Each mesh layer defines continuous unit spaces (mesh cells) over the mapped area with upper-layer coarser mesh unit spaces encompassing multiple lower-layer finer mesh unit spaces and point cloud data for upper layers generated by extracting representative point group data from the corresponding lower-layer mesh regions. This teaches hierarchical storage across multiple storage areas over continuous unit spaces where data associated with lower-layer finer unit spaces is used to form the data stored for the corresponding upper-layer coarser unit spaces.) a storage control unit configured to acquire, for each of the storage areas of the plurality of layers, minimum necessary 3D point cloud data to be stored in a storage area of each layer from a storage area of a layer one layer lower, and cause the minimum necessary 3D point cloud data to be stored (Honma; ¶0138-141, S11, S13; describes a terminal-side point cloud data storage unit that populates each mesh layer from the immediately preceding finer layer. The storage unit calculates representative mesh position coordinates based on the mesh width of the finer layer and the XY coordinates of the mesh closest to reference coordinates. 
¶0143, S15, S17; it then calculates coordinates of representative mesh positions, extracts point group data from the representative mesh position region in the finer density level storage table and stores the extracted data in the next coarser density level storage table. ¶0144, the process is repeated for all meshes and all density levels until extraction is completed. This teaches acquiring, from the storage area of the layer one layer lower, minimum necessary 3D point cloud data (representative points, with non-representative points thinned out and not stored) and causing it to be stored in the current layer’s storage area.) a display unit configured to read the 3D point cloud data stored in a storage area of a highest layer among the storage areas of the plurality of layers according to a viewpoint position and a line-of-sight direction of a user, and display the 3D point cloud data in a 3D virtual reality space (Honma; ¶0085-86, 88; discloses a display control unit that receives camera parameters including camera position (viewpoint position) and angle of view (line-of-sight direction), defines a gazing point in a 3D coordinate system, determines density level regions as a function of distance from the gazing point, generates extraction parameters identifying mesh positions and density levels, and displays the returned point cloud data in 3D. ¶0089, extraction parameters are sent in the order of mesh layer M3, then mesh layer M2, then M1, with the extracted point cloud data displayed on the display. This teaches a display unit that reads point cloud data stored in the highest (coarsest) layer M3 according to viewpoint position and line-of-sight direction and displays the data in a 3D rendered view space.)
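For clarity of record, the representative-point thinning described above (each coarser mesh layer populated with one representative point per mesh cell, drawn from the immediately finer layer) can be sketched as follows. This is an illustrative sketch only; the function names, mesh widths, and sample coordinates are invented for illustration and are not code from either reference.

```python
from collections import defaultdict

def build_layer(points, mesh_width):
    """Thin a finer layer: keep one representative point per mesh cell
    (here, the point nearest the cell center), discarding the rest."""
    cells = defaultdict(list)
    for (x, y, z) in points:
        cells[(int(x // mesh_width), int(y // mesh_width))].append((x, y, z))
    reps = []
    for (cx, cy), pts in cells.items():
        center = ((cx + 0.5) * mesh_width, (cy + 0.5) * mesh_width)
        reps.append(min(pts, key=lambda p: (p[0] - center[0]) ** 2
                                           + (p[1] - center[1]) ** 2))
    return reps

# Layer M1 (finest) holds all measured points; each coarser layer is
# derived from the layer one level finer, as in Honma's hierarchy.
m1 = [(0.2, 0.3, 1.0), (0.8, 0.7, 1.1), (5.5, 5.5, 2.0), (42.0, 17.0, 3.0)]
m2 = build_layer(m1, 10.0)   # 10 m mesh: one representative per 10 m cell
m3 = build_layer(m2, 100.0)  # 100 m mesh: derived from m2, not from m1
```

The sketch shows only the layer-from-layer population pattern; the actual references select representative mesh positions by their own criteria.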
However, Honma does not explicitly disclose 3D point cloud data included in a lower layer is included in an upper layer.
Kato teaches this limitation (Kato; pg. 4, Octree, describes a hierarchical Octree-based voxel representation in which each upper node is generated from voxel information of the lower nodes, so higher hierarchy levels contain encoded data derived from lower hierarchy levels. This teaches that lower-layer data is included in an upper layer.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Honma’s viewpoint-driven hierarchical point cloud retrieval and display with Kato’s hierarchical encoding structure in order to reduce latency and improve efficiency.
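The Octree derivation Kato describes (each upper node generated from the occupancy of its lower-node children) can be sketched as follows; the names and sample voxel coordinates are invented for illustration and are not code from the reference.

```python
def parent_level(voxels):
    """Derive the next-coarser octree level: a parent voxel is occupied
    iff at least one of its eight children is occupied (integer
    coordinates are halved when moving up one level)."""
    return {(x // 2, y // 2, z // 2) for (x, y, z) in voxels}

# Occupied leaf voxels at the finest level; two coarser levels derived
# from them, so each upper level contains data derived from the lower.
leaves = {(0, 0, 0), (1, 0, 0), (5, 3, 2)}
level1 = parent_level(leaves)   # {(0, 0, 0), (2, 1, 1)}
level2 = parent_level(level1)   # {(0, 0, 0), (1, 0, 0)}
```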
Regarding claim 6, Honma in view of Kato teaches all of the limitations of claim 6. Claim 6 has limitations similar to those of claim 1, but is broader: it does not recite the display unit configured to read and display the 3D point cloud data according to a viewpoint position and a line-of-sight direction of a user, but it does recite the storage areas of a plurality of layers and a storage control unit as previously discussed for claim 1. Claim 6 is therefore rejected under the same rationale as claim 1.
Claim 7 has limitations similar to those of claim 1 and is therefore rejected under the same rationale as claim 1.
Claim 8 has limitations similar to those of claim 1, except that it is a computer-readable medium (CRM) claim; it is therefore rejected under the same rationale as claim 1. Kato (pg. 11 ¶3) describes implementation using a CPU and memory executing a program.
Regarding claim 2, Honma in view of Kato teaches the point cloud data display device according to claim 1, wherein the storage control unit is configured to perform acquisition of the 3D point cloud data to the storage areas by using a data structure in which a size of an amount of data of the 3D point cloud data included in each unit space varies (Honma; ¶0143, describes the storage control unit uses mesh point group tables as the data structure for storing point cloud data, where different mesh positions contain varying amounts of point cloud data. ¶0105-0107, representative mesh positions are selected to store point cloud data while other measurement points are thinned out and not stored. ¶0101-0102, meshes in which multiple measurement points P1, P2, and P3 are associated with a mesh position and stored as point group data RPj, and, ¶0106-0107, meshes in which only representative measurement points P11 and P12 are stored for the mesh, with other measurement points in that mesh region omitted. This teaches using a data structure in which the size of the amount of data included in each unit space varies).
Regarding claim 3, Honma in view of Kato teaches the point cloud data display device according to claim 1, wherein the storage control unit is configured to determine the 3D point cloud data to be acquired on a basis of the viewpoint position and the line-of-sight direction of the user (Honma; ¶0085-86, describes the display control unit receives camera parameters including camera position and angle of view (viewpoint position and line-of-sight direction), defines a gaze point in a 3D coordinate system, and determines a field of view based on those parameters. Based on the gaze point and field of view, density level regions are defined as a function of distance from the gaze point and, ¶0088, extraction parameters are generated specifying mesh positions and density levels overlapping those regions. ¶0089, the extraction parameters are transferred to the terminal-side point group data management section, which acquires the corresponding point cloud data from storage and returns it for display. Honma’s terminal-side point group data management section, operating with viewpoint-based extraction parameters from the display control unit, reads on a storage control unit configured to determine the 3D point cloud data to be acquired on the basis of viewpoint position and line-of-sight direction).
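The distance-dependent density-level selection described above can be sketched as follows; the thresholds, names, and sample coordinates are invented for illustration and are not taken from Honma.

```python
def density_level(point, gaze, thresholds=(20.0, 100.0)):
    """Select a density level as a function of distance from the gaze
    point: the nearest region uses the finest level (1); farther
    regions use progressively coarser levels."""
    d = sum((a - b) ** 2 for a, b in zip(point, gaze)) ** 0.5
    for level, limit in enumerate(thresholds, start=1):
        if d <= limit:
            return level
    return len(thresholds) + 1

# Regions near the gaze point are drawn from fine (dense) data; distant
# regions are drawn from coarse (sparse) data.
near = density_level((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))    # 1 (finest)
far = density_level((500.0, 0.0, 0.0), (0.0, 0.0, 0.0))   # 3 (coarsest)
```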
Regarding claim 4, Honma in view of Kato teaches the point cloud data display device according to claim 1, wherein a storage area of a lowest layer among the storage areas of the plurality of layers includes all of the 3D point cloud data for all the unit spaces, and is arranged on a server with which the storage control unit is enabled to communicate via a network (Honma; ¶0102, describes that all of the read point group data RPj (measurement point data) are allocated to meshes of the mesh layer M1 at coarse/fine (density) level L = 1 and stored by the terminal-side first point group data storage unit. ¶0094, the mesh layer M1 is defined as a mesh layer having a mesh width of 1 meter square (unit spaces) over the mapped area and, ¶0101-0102, all measurement points Pj are assigned to some mesh position M1(n) in this layer. The storage area corresponding to the lowest layer (mesh layer M1 with density level L = 1) includes all of the 3D point cloud data for all the unit spaces in the mapped region).
However, Honma does not explicitly describe placing the lowest layer storage areas on a remote server or accessing it over a network.
Kato (pg. 9 ¶3 and 5-6) describes a content server and a terminal apparatus, which has hierarchically layered and encoded point cloud data is stored and managed on a content server and acquired by a terminal device using a communication path. In Kato’s system (pg. 9 ¶9-10) the layered point cloud data, including hierarchies to the finest resolution, is maintained on the server side, and the terminal’s processing units acquire and decode the necessary layers over a network connection. This teaches arranging hierarchical point cloud storage, including the lowest layer data, on a server and enabling a terminal-side control unit to communicate with that server over a network to acquire point cloud data.
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Honma’s hierarchical point cloud storage system with Kato’s server-based storage architecture with the benefit of reducing local storage requirements.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Honma (JP6301678B2), Kato (WO2021065535A1), and Haibo (CN112184539A).
Regarding claim 5, Honma in view of Kato teaches the point cloud data display device according to claim 4, wherein the storage control unit is configured to: acquire the minimum necessary 3D point cloud data for each of the unit spaces from the storage area of the lowest layer arranged on the server for the storage area of the upper layer one layer higher than the lowest layer (Honma; pg. 15 ¶0138-139 and ¶0143-0144, describes hierarchical storage across mesh layers M1, M2, and M3 where, for each layer above the lowest, the terminal-side first point cloud data storage unit populates that layer by acquiring minimum necessary representative point cloud data from the immediately lower layer. Honma explains calculating representative mesh positions based on lower-layer mesh width and reference coordinates, reading representative mesh positions Dmn from the lower layer mesh point group table and storing the extracted data into the next coarser layer’s mesh point group table, repeating until storage for all meshes/density levels is completed. This teaches acquiring minimum necessary data from the storage area of the layer one layer lower when populating higher layers.)
(Kato; pg. 9 ¶2-6, describes a content providing system in which a content server stores and manages multi-layered bitstreams of encoded point cloud data supplied from a content generation device and communicates with a terminal device via a network to supply the managed content. ¶9, the terminal device communicates with the content server to acquire bitstreams including coded data of a desired layer (LoD) from multi-layered bitstreams managed by the content server. Together, Honma’s mesh-by-mesh minimum necessary extraction for populating a first upper layer from the lowest layer, combined with Kato’s server-based architecture placing the lowest layer on the server, reads on acquiring the minimum necessary 3D point cloud data for each of the unit spaces from the storage area of the lowest layer arranged on the server for the storage area of the upper layer one layer higher than the lowest layer.)
However, Honma in view of Kato does not explicitly disclose to acquire all of the minimum necessary 3D point cloud data from the storage area of the layer one layer lower at a time for the storage areas other than a storage area of an upper layer one layer higher than the lowest layer among the storage areas of the plurality of layers.
Haibo (pg. 4-5, S2-S5) describes dividing the cubic area where the point cloud data is located into a plurality of first layers along a preset coordinate direction, determining a storage address interval for each first layer according to the data amount in that layer, and writing the point cloud data into a point cloud storage array according to the determined storage address intervals so that each first layer has its own storage region. The data of each first layer is then sequentially read from the point cloud storage array according to the corresponding storage address interval; a feature map for that first layer is generated and stored, and the data of that storage address interval and the cache data are cleared before the next first layer is processed. This teaches that, for a first layer, all data in its storage interval is read together as a complete unit before moving to the next layer, which corresponds to acquiring all of the minimum necessary 3D point cloud data from the storage area of the layer one layer lower “at a time” when applied to the upper local layers of Honma’s hierarchy.
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Honma’s hierarchical point cloud storage with Kato’s server-based architecture and Haibo’s batch processing for each layer because they all address efficient management of large point cloud datasets with the benefit of improved system performance.
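Haibo’s per-layer storage address intervals and whole-interval batch reads, as cited above, can be sketched as follows; the names and sample data are invented for illustration and are not code from the reference.

```python
def layer_intervals(layers):
    """Assign each layer a contiguous storage address interval sized
    according to the amount of data in that layer (cumulative offsets)."""
    intervals, offset = [], 0
    for layer in layers:
        intervals.append((offset, offset + len(layer)))
        offset += len(layer)
    return intervals

# Three layers of unequal size written into one flat storage array, then
# each layer's interval read back as one complete unit ("at a time").
layers = [[(0, 0, 0), (1, 1, 1)], [(2, 2, 2)], [(3, 3, 3), (4, 4, 4), (5, 5, 5)]]
store = [p for layer in layers for p in layer]  # point cloud storage array
batches = []
for start, end in layer_intervals(layers):
    batches.append(store[start:end])  # whole-layer read before the next layer
```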
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAN F KALHORI whose telephone number is (571)272-5475. The examiner can normally be reached Mon-Fri 8:30-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached at (571) 272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAN F KALHORI/Examiner, Art Unit 2618
/DEVONA E FAULK/Supervisory Patent Examiner, Art Unit 2618