Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-16 are pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/10/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“generator” in claim 8
“connectivity information corrector” in claim 9
“connectivity patch configurator” in claim 9
“mapping information generator” in claim 9.
Because this/these claim limitation(s) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 9-12 each recite the limitation "The device of claim 1" in line 1. There is insufficient antecedent basis for this limitation in each claim, because claim 1 recites a method, not a device. For the sake of examination, claims 9-12 will be interpreted as intending to depend from claim 8.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 6-8, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (U.S. Patent Publication No. 2021/0217203, hereinafter “Kim”) in view of Graziosi et al. (U.S. Patent Publication No. 2022/0108483, hereinafter “Graziosi”).
Regarding claim 1, Kim discloses a method of transmitting 3D data, the method comprising:
generating a geometry image (Kim [0158]: “In some embodiments, an encoder, such as any of the encoders described herein may follow a depth/geometry image generation process to generate a depth/geometry image for a patch of a point cloud, wherein the relative placement of a point in the depth/geometry image indicates its location in a projection plane upon which a segment of a point cloud is being projected”), an attribute image (Kim [0133]: “In some embodiments, an occupancy map may be encoded and decoded by a video compression module, such as video compression module 218”), an occupancy map (Kim [0129]: “In some embodiments, an occupancy map may be encoded and decoded by a video compression module, such as video compression module 218”), and auxiliary information based on geometry information and attribute information contained in mesh data (Kim [0527]: “At 818, the encoder encodes the auxiliary information along with the packed image frames. In some embodiments, an arithmetic encoder or other type of encoder may be used to encode auxiliary information, while a video-based encoder may be used to encode the packed image frames. In some embodiments, an auxiliary patch-info compression module of an encoder, such as auxiliary patch info compression module 222 of encoder 200 illustrated in FIG. 2A may generate, format, and/or encode the auxiliary information. In some embodiments, any of the other encoders described herein may generate, format, and/or encode auxiliary information as described in regard to FIG. 8A”; Kim [0020]: “Various examples are described herein in terms of a point cloud. However, the encoder/encoding techniques and the decoder/decoding techniques described herein may be applied to various other types of 3D visual volumetric content representations, including meshes”; Kim Fig. 8A);
encoding the geometry image, the attribute image, the occupancy map, and the auxiliary information, respectively (the mapping above shows each piece of information being encoded);
transmitting a bitstream containing the encoded geometry image, the encoded attribute image, the encoded occupancy map, the encoded auxiliary information (Kim [0762]: “At 1406, an encoder such as encoder 104 may compress the point cloud and at 1408 the encoder or a post processor may packetize and transmit the compressed point cloud, via a network 1410”; Kim [0020]: “Various examples are described herein in terms of a point cloud. However, the encoder/encoding techniques and the decoder/decoding techniques described herein may be applied to various other types of 3D visual volumetric content representations, including meshes”; Kim [0420]: “The additional downscaling results in a downscaled image frame 528 that is then encoded into a bit stream 530”) and signaling information (Kim [0515]: “In some embodiments, the simple design of the auxiliary metadata structure shown in Table 6, may be improved upon by extending the syntax of the auxiliary data unit to make the signaling of auxiliary patch information more flexible, robust, and efficient to encode”).
Kim does not explicitly disclose the following limitations of the method:
splitting connectivity information contained in the mesh data into a plurality of connectivity patches and encoding the connectivity information contained in each of the split connectivity patches on a basis of the connectivity patches; and
transmitting a bitstream containing the encoded connectivity information.
However, Graziosi teaches a method comprising:
splitting connectivity information contained in the mesh data into a plurality of connectivity patches (Graziosi [0034]: “A method to perform coding of meshes using the V3C standard for coding of volumetric data is described herein. A method to segment the mesh surfaces and propose a joint surface sampling and 2D patch generation is described. For each patch, the local connectivity and the position of the vertices projected to the 2D patches is encoded”) and encoding the connectivity information contained in each of the split connectivity patches on a basis of the connectivity patches (Graziosi [0034]: “A method to perform coding of meshes using the V3C standard for coding of volumetric data is described herein. A method to segment the mesh surfaces and propose a joint surface sampling and 2D patch generation is described. For each patch, the local connectivity and the position of the vertices projected to the 2D patches is encoded”); and
transmitting a bitstream containing the encoded connectivity information (Graziosi [0008]: “In one aspect, a method comprises performing mesh voxelization on an input mesh, implementing patch generation which segments the mesh into patches including a rasterized mesh surface and vertices location and connectivity information, generating a visual volumetric video-based compression (V3C) image from the rasterized mesh surface, implementing video-based mesh compression with the vertices location and connectivity information and generating a V3C bitstream based on the V3C image and the video-based mesh compression”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the splitting of connectivity information as taught by Graziosi with the method of Kim because encoding more information would allow for more accurate decoding of data. This motivation for the combination of Kim and Graziosi is supported by KSR exemplary rationale (D): applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.
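For illustration, the encode-side flow mapped above (per-patch splitting of mesh connectivity followed by independent encoding of each patch, per Graziosi, within Kim's V3C-style pipeline) can be sketched as follows. All names, the fixed-size patching rule, and the length-prefixed byte layout are hypothetical conveniences for this sketch; they are not the syntax of either reference.

```python
from dataclasses import dataclass

@dataclass
class ConnectivityPatch:
    # Faces expressed as vertex-index triples (hypothetical layout).
    faces: list

def split_connectivity(faces, patch_size):
    """Split a flat face list into fixed-size connectivity patches."""
    return [ConnectivityPatch(faces[i:i + patch_size])
            for i in range(0, len(faces), patch_size)]

def encode_patch(patch):
    """Stand-in for an entropy coder: serialize each patch independently."""
    payload = ",".join("-".join(map(str, f)) for f in patch.faces)
    return payload.encode("utf-8")

def build_bitstream(faces, patch_size=2):
    """Encode connectivity patch by patch and concatenate into one stream.
    Each patch payload is length-prefixed so a decoder can recover
    patch boundaries without external signaling."""
    stream = b""
    for patch in split_connectivity(faces, patch_size):
        payload = encode_patch(patch)
        stream += len(payload).to_bytes(4, "big") + payload
    return stream

faces = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
bitstream = build_bitstream(faces)
```

In an actual V3C-based codec this connectivity sub-stream would be multiplexed with the separately encoded geometry image, attribute image, occupancy map, and auxiliary information; the sketch isolates only the per-patch connectivity step at issue in the combination.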
Regarding claim 8, it is rejected under the same analysis as claim 1 above along with Kim’s disclosure of a generator (Kim [0158]: “In some embodiments, an encoder, such as any of the encoders described herein may follow a depth/geometry image generation process to generate a depth/geometry image for a patch of a point cloud”), an encoder (Kim Abstract: “A system comprises an encoder”), and a transmitter (Kim [0119]: “The captured point cloud 110 may be provided to encoder 104, wherein encoder 104 generates a compressed version of the point cloud (compressed attribute information 112) that is transmitted via network 114 to decoder 116”) and Graziosi’s teaching a connectivity information processor (Graziosi [0009]: “a processor coupled to the memory, the processor configured for processing the application”).
It would have been obvious to combine Kim and Graziosi for the same reasons as stated for claim 1 above.
Regarding claim 6, Kim discloses the method, wherein boundary connectivity information positioned between the connectivity patches is not transmitted (Kim [0232]: “More specifically, in some embodiments alternative projections may be used. For example, instead of using a cubic projection, a cylindrical or spherical type of a projection method may be used. Such methods may reduce, if not eliminate, redundancies that may exist in the cubic projection and reduce the number or the effect of “seams” that may exist in cubic projections. Such seams may create artifacts at object boundaries, for example. Eliminating or reducing the number or effect of such seams may result in improved compression/subjective quality as compared to cubic projection methods. For a spherical projection case, a variety of sub-projections may be used, such as the equirectangular, equiangular, and authagraph projection among others. These projections may permit the projection of a sphere onto a 2D plane. In some embodiments, the effects of seams may be de-emphasized by overlapping projections, wherein multiple projections are made of a point cloud, and the projections overlap with one another at the edges, such that there is overlapping information at the seams. A blending effect could be employed at the overlapping seams to reduce the effects of the seams, thus making them less visible”).
Regarding claim 7, Kim discloses the method, wherein boundary connectivity information positioned between the connectivity patches is included in one of the connectivity patches to be encoded and transmitted (Kim [0246]: “Also, as can be seen there are hard boundaries between the patch images and padding, wherein adjacent points at the boundaries have considerably different values. Also, as can be seen in FIG. 3H, the padding values are selected such that boundaries are smooth. A smother image may require fewer bits to encode than an image with hard boundaries. Also, because the location of active and non-active points is known based on the information in the occupancy map, there is not a need for a hard boundary in the packed and padded image frame to be able to distinguish pad pixels from patch pixels. As used herein, a pixel that corresponds to a patch image may be referred to as a “full” pixel and a pixel that corresponds to a pad portion may be referred to as an “empty” pixel”).
Regarding claim 13, Kim discloses a method of receiving 3D data, the method comprising:
receiving a bitstream containing an encoded geometry image (Kim [0430]: “In a closed loop compression procedure, the geometry bit-stream 596 may further be video-decompressed/decoded at the encoder to generate reconstructed down-scaled geometry images 588, which may have a similar frame rate and size as down-scaled geometry images 586”; Kim Fig. 5B), an encoded attribute image (Kim Fig. 5B: attribute images), an encoded occupancy map (Kim Fig. 5B: occupancy map decompression (236)), encoded auxiliary information (Kim Fig. 5B: auxiliary patch-info decompression (238)), and signaling information (Kim [0515]: “In some embodiments, the simple design of the auxiliary metadata structure shown in Table 6, may be improved upon by extending the syntax of the auxiliary data unit to make the signaling of auxiliary patch information more flexible, robust, and efficient to encode”);
reconstructing geometry information and attribute information by decoding the encoded geometry image, the encoded attribute image, the encoded occupancy map, and the encoded auxiliary information, respectively, based on the signaling information (Kim [0419]: “FIG. 5B illustrates components of a decoder 520 that includes geometry, texture, and/or other attribute upscaling, according to some embodiments. For example, decoder 520 includes texture up-scaler 512, attribute up-scaler 514, and spatial up-scaler 516. Any of the decoders described herein may further include a texture up-scaler component 512, an attribute up-scaler component 514, and/or a spatial image up-scaler component 516 as shown for decoder 520 in FIG. 5B”; Kim Fig. 5B); and
reconstructing mesh data based on the reconstructed geometry information and attribute information (Kim [0020]: “Various examples are described herein in terms of a point cloud. However, the encoder/encoding techniques and the decoder/decoding techniques described herein may be applied to various other types of 3D visual volumetric content representations, including meshes”; Kim Fig. 5B).
Kim does not explicitly disclose the following limitations of the method:
decoding the encoded connectivity information on a connectivity patch-by-patch basis based on the signaling information and the reconstructed geometry information; and
reconstructing mesh data based on the decoded connectivity information.
However, Graziosi teaches a method comprising:
decoding the encoded connectivity information on a connectivity patch-by-patch basis based on the signaling information and the reconstructed geometry information (Graziosi [0083]: “The algorithms are able to be used to reconstruct the mesh connectivity on the decoder side at the patch level (the vertex list is signaled, which can be available, for example, via the occupancy map)”); and
reconstructing mesh data based on the decoded connectivity information (Graziosi [0083]: “The algorithms are able to be used to reconstruct the mesh connectivity on the decoder side at the patch level (the vertex list is signaled, which can be available, for example, via the occupancy map)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate using connectivity information in decoding as taught by Graziosi with the method of Kim because having more information would allow for more accurate decoding of data. This motivation for the combination of Kim and Graziosi is supported by KSR exemplary rationale (D): applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.
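For illustration, the receive-side flow mapped above for claim 13 (recover each connectivity patch payload from the bitstream, then decode connectivity on a patch-by-patch basis) can be sketched as follows. The length-prefixed byte layout and the text serialization are hypothetical stand-ins for this sketch, not the V3C bitstream syntax.

```python
def parse_patches(stream):
    """Walk a length-prefixed bitstream and yield each patch payload.
    Layout (hypothetical): 4-byte big-endian length, then payload."""
    offset = 0
    while offset < len(stream):
        size = int.from_bytes(stream[offset:offset + 4], "big")
        offset += 4
        yield stream[offset:offset + size]
        offset += size

def decode_patch(payload):
    """Inverse of a stand-in serializer: b'0-1-2,1-2-3' -> face tuples."""
    text = payload.decode("utf-8")
    return [tuple(int(v) for v in face.split("-"))
            for face in text.split(",") if face]

# Two connectivity patches serialized with the hypothetical layout above.
p0 = b"0-1-2,1-2-3"
p1 = b"2-3-4"
stream = (len(p0).to_bytes(4, "big") + p0
          + len(p1).to_bytes(4, "big") + p1)
faces = [f for part in parse_patches(stream) for f in decode_patch(part)]
```

In the combination as applied, the reconstructed geometry information and signaling information would additionally tell the decoder where each patch's vertices sit in the frame; the sketch isolates only the patch-by-patch decoding step.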
Claim(s) 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over the Kim and Graziosi combination in view of Faramarzi et al. (WO 2020180161 A1, cited in the IDS received 05/10/2024, hereinafter “Faramarzi”).
Regarding claim 14, the Kim and Graziosi combination does not explicitly disclose the method, wherein the decoding of the connectivity information comprises: converting a vertex index in a corresponding connectivity patch to a vertex index in a frame based on mapping information included in the signaling information.
However, Faramarzi teaches the method, wherein the decoding of the connectivity information comprises: converting a vertex index in a corresponding connectivity patch to a vertex index in a frame based on mapping information included in the signaling information (Faramarzi [0130]: “The decoder 550b uses the traversal map to update the order of the vertices of the reconstructed geometry and attribute to match the order of vertices presumed in the vertex indices of the connectivity information”; Faramarzi [0149]: “After the reordering information decoder 565 decodes the compressed reordering information, the vertex index updater 562, updates the indices associated with the reconstructed connectivity information. The vertex index updater 562 updates the index associated with the reconstructed connectivity information such that the index matches the index of the reconstructed vertex coordinates and attributes. That is, the reverse vertex traversal map of FIG. 5B is applied to the connectivity information”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the vertex index conversion of Faramarzi with the method of Kim and Graziosi because it allows for the compression and reconstruction of a mesh using 2D frames, which require less bandwidth during transmission (Faramarzi [0042]). This motivation for the combination of Kim, Graziosi, and Faramarzi is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention; and rationale (D): applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.
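For illustration, the index conversion at issue in claim 14 (patch-local vertex index to frame-global vertex index via signaled mapping information) reduces to a table lookup. The mapping-table representation below is a hypothetical illustration, not Faramarzi's traversal-map syntax.

```python
def to_frame_indices(patch_faces, patch_to_frame):
    """Convert faces expressed in patch-local vertex indices into
    frame-global indices using signaled mapping information."""
    return [tuple(patch_to_frame[v] for v in face) for face in patch_faces]

# Hypothetical signaled mapping: patch-local index -> frame-global index.
mapping = {0: 17, 1: 42, 2: 43, 3: 96}
local_faces = [(0, 1, 2), (1, 2, 3)]
frame_faces = to_frame_indices(local_faces, mapping)
```

The same lookup run in reverse (frame-global back to patch-local) corresponds to the reverse vertex traversal map described in Faramarzi [0149].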
Regarding claim 15, the Kim and Graziosi combination does not explicitly disclose the method, wherein the vertex index in the frame mapped to the vertex index in the connectivity patch included in the mapping information is a frame-based index or a connectivity patch-based index.
However, Faramarzi teaches the method, wherein the vertex index in the frame mapped to the vertex index in the connectivity patch included in the mapping information is a frame-based index or a connectivity patch-based index (Faramarzi [0144]: “By packing the vertex coordinates and attribute information 513a as a raw patch instead of individual patches, the point cloud encoder 520a simplifies encoding as compared to the point cloud encoder 522 of FIGS. 5B and 5C. For example, the point cloud encoder 520a does not need to generate patches since the vertices are not partitioned into different patches. Additionally, the point cloud encoder 520a does not need to pack the individual patches into a frame, fill the inter-patch space of a frames with image padding, perform geometry smoothing, perform color smoothing, generate and compress an occupancy map, generate and compress auxiliary patch information, and the like”, shows that it is a patch-based index).
It would have been obvious to combine Kim, Graziosi, and Faramarzi for the same reason as used for claim 14 above.
Allowable Subject Matter
Claims 2-5 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claims 9-12 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AIDAN KEUP whose telephone number is (703)756-4578. The examiner can normally be reached Monday - Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AIDAN KEUP/ Examiner, Art Unit 2666 /Molly Wilburn/Primary Examiner, Art Unit 2666