Prosecution Insights
Last updated: April 18, 2026
Application No. 17/919,836

POINT CLOUD DATA PROCESSING DEVICE AND PROCESSING METHOD

Final Rejection — §103, §112
Filed: Oct 19, 2022
Examiner: GADOMSKI, STEFAN J
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: LG Electronics Inc.
OA Round: 4 (Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 76%, above average (313 granted / 412 resolved; +18.0% vs TC avg)
Interview Lift: +7.4%, a moderate lift, comparing resolved cases with vs. without an interview
Typical Timeline: 2y 7m average prosecution; 26 applications currently pending
Career History: 438 total applications across all art units

Statute-Specific Performance

§101: 5.9% (-34.1% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§103: 46.4% (+6.4% vs TC avg)
§112: 22.6% (-17.4% vs TC avg)

Deltas are relative to a Tech Center average estimate. Based on career data from 412 resolved cases.
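The figures above are internally consistent and can be reproduced from the raw counts shown on this page. A minimal sketch follows; the variable names and the flat 40.0% Tech Center baseline (back-solved from the displayed deltas) are assumptions for illustration, not the analytics provider's actual data model:

```python
# Sketch only: reproduces the examiner stats shown above from this page's
# raw counts. The flat 40.0% TC baseline is inferred from the displayed
# deltas and is an assumption, not a published USPTO figure.
granted, resolved = 313, 412
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.0%}")

tc_avg = 40.0  # each displayed delta implies the same TC average estimate
examiner = {"101": 5.9, "102": 16.7, "103": 46.4, "112": 22.6}
for statute, rate in sorted(examiner.items()):
    print(f"§{statute}: {rate}% ({rate - tc_avg:+.1f}% vs TC avg)")
```

Running this reproduces every delta in the table (e.g. §103 at 46.4% prints +6.4% vs the TC average), which suggests the page derives all four deltas from a single baseline.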

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed 08/18/2025 has been entered. No claims have been added or cancelled. Claims 1-20 have been amended. Claims 1-20 remain pending in the application.

Response to Arguments

Applicant's arguments, see pages 9-10, filed 08/18/2025, with respect to the §103 rejection have been fully considered. Applicant has provided citations (paragraphs 347-349, 410, 419, FIG. 26, FIG. 41) indicating support for the amendments without an explanation as to how the cited paragraphs support the amended claim language. Provided below is an analysis of the cited paragraphs with respect to the amended claim language.

Paragraphs 347-349, along with FIG. 26, describe an SPS syntax structure that includes information related to prediction unit (PU) splitting, including "pu_coding_flag," which indicates whether inter prediction is performed in a PU. These paragraphs appear to provide support for the amended limitation, "wherein the bitstream includes first information related to an inter prediction being used for the point of the point cloud data."

Paragraph 410 clarifies the example of PUs to be merged and an example of an MMV_list described in paragraphs 408 and 409. The paragraph explains: 1) motion vectors (MVs) of PUs are merged; 2) indexes of PUs with MVs to be merged do not overlap in an MMV_list; 3) there may be multiple MMV_lists; and 4) the merge list may be shared among one or more frames. This paragraph appears to provide support for the amended limitation, "wherein first point cloud data related to the inter prediction is merged with second point cloud data related to the inter prediction" (the MVs of the PUs with indices in the list are merged, the MV including "pu_coding_flag" from paragraph 349).
The amended claim limitation, "wherein the merged second point cloud data is derived based on the first point cloud data," appears to simply recite the resulting relationship of the second point cloud data with respect to the first point cloud data after the merge is completed, which adds nothing to the claim because merging any piece of data with another piece of data results in data "derived" from both pieces of original data.

Paragraph 419 describes a flag named "allow_use_next_frame" included in the SPS syntax structure signaling information related to the PU merging described by paragraphs 412-425. This flag indicates whether to allow another slice/tile/frame to reference an MMV_list. Paragraph 419 links to paragraph 410 through the MMV_list. However, even with paragraph 410 stating MMV_lists may be shared amongst frames, it is unclear how any of the cited paragraphs would support the amended claim language, "[wherein the bitstream includes]… second information related to a merge for frames for the inter prediction." As explained above, MVs of PUs are merged. An MMV_list simply includes the indices of PUs with MVs to be merged. Frames appear to share MMV_list data, but frames themselves are never merged, nor is there any information related to a merge for frames. Therefore, the amended claim limitation constitutes new matter.

With respect to the previously presented prior art, Applicant asserts, "Vosoughi is totally silent regarding wherein first point cloud data related to the inter prediction is merged with second point cloud data related to the inter prediction, and the merged second point cloud data is derived based on the first point cloud data." Examiner concurs. However, during an updated search, the Huang reference presented in the rejection below was found. Huang discloses the merging of motion vectors from point clouds.
Interpreting the amended first and second point cloud data related to the inter prediction as inter prediction motion vectors, consistent with paragraph 410 of the specification, Huang discloses merging two point cloud motion vector data. Previously presented Lasserre continues to disclose that motion vectors from point cloud data can be inter predicted. Combining Lasserre and Huang with Mammou discloses or suggests the amended claim language. The claims are now rejected under 112(a) for new matter and 103 over the combination of Mammou, Lasserre, and Huang.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C.
112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding the independent claims, the amended claim language, "[wherein the bitstream includes]… second information related to a merge for frames for the inter prediction," appears to be new matter, as the amendment is not supported by the specification. As explained in the response to arguments above, MVs of PUs are merged. An MMV_list simply includes the indices of PUs with MVs to be merged. Frames appear to share MMV_list data, but frames themselves are never merged, nor is there any information related to a merge for frames. Therefore, the amended claim limitation constitutes new matter. Dependent claims fall together accordingly.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 6, 11, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mammou et al., "G-PCC codec description v2," hereafter Mammou, in view of Lasserre et al., US 2020/0258247 A1, hereafter Lasserre, further in view of Huang et al., US 2021/0279522 A1, hereafter Huang.

Regarding claim 1, Mammou discloses a method (point cloud coding) [section 1] comprising: encoding geometry data of a point of point cloud data in a bitstream (point cloud positions are coded first) [section 2]; and encoding attribute data of the point of the point cloud data (section 3.5 details the attributes transfer (recolouring) module that is used to transfer attributes to point cloud geometry that has been compressed and then reconstructed (decompressed) at the encoder, prior to attribute encoding) [section 3].

However, Mammou fails to explicitly disclose wherein the bitstream includes first information related to an inter prediction being used for the point of the point cloud data, and second information related to a merge for frames for the inter prediction, wherein first point cloud data related to the inter prediction is merged with second point cloud data related to the inter prediction, and wherein the merged second point cloud data is derived based on the first point cloud data.
Lasserre, in an analogous environment, discloses wherein the bitstream includes first information related to an inter prediction being used for the point of the point cloud data (the point cloud 10 may be represented in a picture; these data associated to the generation of the predictor may comprise a prediction type, for instance a flag indicating if the prediction mode is intra or inter) [0057; 0171; 0172]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the inter prediction flag for points of point cloud data, with the invention disclosed by Mammou, the motivation being to avoid errors [0007].

Further, Huang, in an analogous environment, discloses second information related to a merge for frames for the inter prediction (feature vectors) [0132], wherein first point cloud data related to the inter prediction is merged with second point cloud data related to the inter prediction, and wherein the merged second point cloud data is derived based on the first point cloud data (merge the first feature vectors of the different point clouds to obtain a first merged feature vector) [0038]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to merge vectors of point cloud data, as disclosed by Huang, with the invention disclosed by Mammou and Lasserre, the motivation being to improve appearance [abstract].

Claim 6 is drawn to a device adapted to implement the method of claim 1, and is therefore rejected in the same manner as above. However, the claims also recite a memory and at least one processor connected to the memory, which Mammou also teaches (G-PCC encoder) [Figure 1].
Regarding claim 11, Mammou discloses a method (point cloud coding) [section 1] comprising: decoding geometry data of a point of point cloud data in a bitstream (section 3.2 describes the details of the Octree method for geometry encoding/decoding) [section 3]; and decoding attribute data of the point of the point cloud data (section 3.5 details the attributes transfer (recolouring) module that is used to transfer attributes to point cloud geometry that has been compressed and then reconstructed (decompressed) at the encoder, prior to attribute encoding) [section 3].

However, Mammou fails to explicitly disclose wherein the bitstream includes first information related to an inter prediction being used for the point of the point cloud data, and second information related to a merge for frames for the inter prediction, wherein first point cloud data related to the inter prediction is merged with second point cloud data related to the inter prediction, and wherein the merged second point cloud data is derived based on the first point cloud data.

Lasserre, in an analogous environment, discloses wherein the bitstream includes first information related to an inter prediction being used for the point of the point cloud data (the point cloud 10 may be represented in a picture; these data associated to the generation of the predictor may comprise a prediction type, for instance a flag indicating if the prediction mode is intra or inter) [0057; 0171; 0172]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the inter prediction flag for points of point cloud data, with the invention disclosed by Mammou, the motivation being to avoid errors [0007].
Further, Huang, in an analogous environment, discloses second information related to a merge for frames for the inter prediction (feature vectors) [0132], wherein first point cloud data related to the inter prediction is merged with second point cloud data related to the inter prediction, and wherein the merged second point cloud data is derived based on the first point cloud data (merge the first feature vectors of the different point clouds to obtain a first merged feature vector) [0038]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to merge vectors of point cloud data, as disclosed by Huang, with the invention disclosed by Mammou and Lasserre, the motivation being to improve appearance [abstract].

Claim 16 is drawn to a device adapted to implement the method of claim 11, and is therefore rejected in the same manner as above. However, the claims also recite a memory and at least one processor connected to the memory, which Mammou also teaches (G-PCC decoder) [Figure 1].

Claims 2 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Mammou, Lasserre, and Huang, further in view of Li et al., "Advanced 3D Motion Prediction for Video-Based Dynamic Point Cloud Compression," hereafter Li.

Regarding claim 2, Mammou, Lasserre, and Huang address all of the features with respect to claim 1 as outlined above. Mammou further discloses voxelizing the geometry data (the process of position quantization, duplicate removal, and assignment of attributes to the remaining points is called voxelization. In other words, voxelization is the process of grouping points together into voxels) [section 3.1.3].
However, while the combination discloses voxelization as the process of grouping points together into voxels, the combination fails to explicitly disclose the encoding of the geometry further comprises: splitting the voxelized geometry data into one or more prediction units (PUs) for inter prediction, wherein each of the PUs is a unit of the inter prediction.

Li, in an analogous environment, discloses the encoding of the geometry further comprises: splitting the voxelized geometry into one or more prediction units (PUs) for inter prediction, wherein each of the PUs is a unit of the inter prediction (a geometry-based method using the accurate 3D reconstructed geometry provided by the 2D geometry video to estimate the 2D MV of the attribute video…searching in the reference frame to find the block with the smallest 3D geometry compared with the current PU; geometry-based motion prediction) [section 1; section IV.B.]. Mammou, Lasserre, Huang, and Li are analogous because they are all related to point cloud compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the prediction units, as disclosed by Li, with the invention disclosed by Mammou, Lasserre, and Huang, the motivation being coding gains [abstract].

Claim 7 is drawn to a device adapted to implement the method of claim 2, and is therefore rejected in the same manner as above. However, the claims also recite an encoder, a geometry encoder, an attribute encoder, and a transmitter, which Mammou also teaches (G-PCC encoder) [Figure 1].

Claims 3, 4, 8, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Mammou, Lasserre, Huang, and Li, further in view of Jeong et al., US 2021/0127134 A1, hereafter Jeong.

Regarding claim 3, Mammou, Lasserre, Huang, and Li address all of the features with respect to claim 2 as outlined above.
However, the combination fails to explicitly disclose the splitting of the voxelized geometry data into one or more PUs comprises: determining a split mode for performing additional splitting on at least one of the split PUs; and splitting the at least one of the split PUs into one or more sub PUs according to the determined split mode, wherein each of the sub PUs is a unit of the inter prediction.

Jeong, in an analogous environment, discloses the splitting of the voxelized geometry data into one or more PUs comprises: determining a split mode for performing additional splitting on at least one of the split PUs; and splitting the at least one of the split PUs into one or more sub PUs according to the determined split mode, wherein each of the sub PUs is a unit of the inter prediction (partition modes for a prediction unit having a size of 2Nx2N, 2NxN, Nx2N, or NxN may be applied to inter prediction; the encoder may generate split information indicating whether to split a coding unit…may generate, for a split coding unit, partition mode information for determining a prediction unit and transform unit split information for determining a transform unit) [0081; 0085]. Mammou, Lasserre, Huang, Li, and Jeong are analogous because they are related to compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the sub-prediction units and split mode, as disclosed by Jeong, with the invention disclosed by Mammou, Lasserre, Huang, and Li, the motivation being accuracy improvement [0018].

Regarding claim 4, Mammou, Lasserre, Huang, Li, and Jeong address all of the features with respect to claim 3 as outlined above. Jeong further discloses the bitstream contains: information indicating whether the inter prediction is performed in the PUs (partition modes for a prediction unit having a size of 2Nx2N, 2NxN, Nx2N, or NxN may be applied to inter prediction) [0081]; information indicating whether the splitting into the one or more PUs is applied at a specific level among one or more levels constituting an octree structure of the geometry data or a frame unit of the point cloud data (coding units having a tree structure; the encoder may generate split information indicating whether to split a coding unit…may generate, for a split coding unit, partition mode information for determining a prediction unit and transform unit split information for determining a transform unit) [0064; 0085]; and information indicating the split mode (a split mode of the current block…may obtain information indicating one of the rectangular split mode and the triangular split mode from a bitstream) [0281]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the sub-prediction units and split mode, as disclosed by Jeong, with the invention disclosed by Mammou, Lasserre, Huang, and Li, the motivation being accuracy improvement [0018].

Claims 8 and 9 are drawn to devices adapted to implement the methods of claims 3 and 4, and are therefore rejected in the same manner as above. However, the claims also recite an encoder, a geometry encoder, an attribute encoder, and a transmitter, which Mammou also teaches (G-PCC encoder) [Figure 1].

Claims 5 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Mammou, Lasserre, Huang, and Li, further in view of Li et al., US 2020/0077114 A1, hereafter Li.

Regarding claim 5, Mammou, Lasserre, Huang, and Li address all of the features with respect to claim 2 as outlined above.
However, the combination fails to explicitly disclose the encoding of the geometry data comprises: searching for neighbor PUs for at least one of the split PUs; securing a motion vector of the PUs and motion vectors of the neighbor PUs and comparing a difference between the motion vector of the PUs and the motion vectors of the neighbor PUs; and based on the difference being less than or equal to a preset value, generating a list including indexes of the neighbor PUs and determining a motion vector representing the generated list, wherein the bitstream contains signaling information related to the list and the determined motion vector.

Li, in an analogous environment, discloses the encoding (any decoder technology except the parsing/entropy decoding that is present in a decoder also necessarily needs to be present, in substantially identical function form, in a corresponding encoder) [0061] of the geometry data comprises: searching for neighbor PUs for at least one of the split PUs (checking motion information from either spatial or temporal neighboring blocks of the current block) [0099]; securing a motion vector of the PUs and motion vectors of the neighbor PUs and comparing a difference between the motion vector of the PUs and the motion vectors of the neighbor PUs (merge candidates in a candidate list are primarily formed by checking motion information from either spatial or temporal neighboring blocks of the current block) [0099]; and based on the difference being less than or equal to a preset value (smaller than or equal to a given threshold, the motion vectors of the candidate A and the candidate B are similar) [0133], generating a list including indexes of the neighbor PUs and determining a motion vector representing the generated list (when any of the candidate blocks are valid candidates, for example, are coded with motion vectors, then the motion information of the valid candidate blocks can be added into the merge candidate list) [0099], wherein the bitstream contains signaling information related to the list and the determined motion vector (bitstream, the prediction information being indicative of a prediction mode that is based on a candidate list; the entropy encoder (625) is configured to include the general control data, the selected prediction information…, the residue information, and other suitable information in the bitstream) [abstract; 0086].

Mammou, Lasserre, Huang, Li, and Li are analogous because they are related to compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the neighboring blocks, as disclosed by Li, with the invention disclosed by Mammou, Lasserre, Huang, and Li, the motivation being to improve efficiency [0077].

Claim 10 is drawn to a device adapted to implement the method of claim 5, and is therefore rejected in the same manner as above. However, the claims also recite an encoder, a geometry encoder, an attribute encoder, and a transmitter, which Mammou also teaches (G-PCC encoder) [Figure 1].

Claims 12-14 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Mammou, Lasserre, and Huang, further in view of Jeong et al., US 2021/0127134 A1, hereafter Jeong.

Regarding claim 12, Mammou, Lasserre, and Huang address all of the features with respect to claim 11 as outlined above. However, the combination fails to explicitly disclose the bitstream contains information related to prediction unit (PU) splitting for splitting the geometry data into one or more PUs for inter prediction, wherein the information related to the PU splitting comprises: information indicating whether the inter prediction is performed in the PUs; information indicating whether the PU splitting is applied at a specific level among one or more levels constituting an octree structure of the geometry data or a frame unit of the point cloud data; and information indicating a split mode of the one or more PUs.
Jeong, in an analogous environment, discloses the bitstream contains information related to prediction unit (PU) splitting for splitting the geometry data into one or more PUs for inter prediction (partition modes for a prediction unit having a size of 2Nx2N, 2NxN, Nx2N, or NxN may be applied to inter prediction) [0081], wherein the information related to the PU splitting comprises: information indicating whether the inter prediction is performed in the PUs (partition modes for a prediction unit having a size of 2Nx2N, 2NxN, Nx2N, or NxN may be applied to inter prediction) [0081]; information indicating whether the PU splitting is applied at a specific level among one or more levels constituting an octree structure of the geometry data or a frame unit of the point cloud data (coding units having a tree structure; the encoder may generate split information indicating whether to split a coding unit…may generate, for a split coding unit, partition mode information for determining a prediction unit and transform unit split information for determining a transform unit) [0064; 0085]; and information indicating a split mode of the one or more PUs (a split mode of the current block…may obtain information indicating one of the rectangular split mode and the triangular split mode from a bitstream) [0281].

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the sub-prediction units and split mode, as disclosed by Jeong, with the invention disclosed by Mammou, Lasserre, and Huang, the motivation being accuracy improvement [0018].

Regarding claim 13, Mammou, Lasserre, Huang, and Jeong address all of the features with respect to claim 12 as outlined above.
Jeong further discloses the decoding of the geometry data comprises: splitting the geometry data into the one or more prediction units (PUs) (coding units having a tree structure; the encoder may generate split information indicating whether to split a coding unit…may generate, for a split coding unit, partition mode information for determining a prediction unit and transform unit split information for determining a transform unit) [0064; 0085], wherein each of the PUs is a unit of the inter prediction (partition modes for a prediction unit having a size of 2Nx2N, 2NxN, Nx2N, or NxN may be applied to inter prediction) [0081]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the sub-prediction units and split mode, as disclosed by Jeong, with the invention disclosed by Mammou, Lasserre, and Huang, the motivation being accuracy improvement [0018].

Regarding claim 14, Mammou, Lasserre, Huang, and Jeong address all of the features with respect to claim 13 as outlined above. Jeong further discloses the decoding of the geometry data further comprises: splitting at least one of the split PUs into one or more sub PUs (coding units having a tree structure; the encoder may generate split information indicating whether to split a coding unit…may generate, for a split coding unit, partition mode information for determining a prediction unit and transform unit split information for determining a transform unit) [0064; 0085], wherein each of the sub PUs is a unit of the inter prediction (partition modes for a prediction unit having a size of 2Nx2N, 2NxN, Nx2N, or NxN may be applied to inter prediction) [0081].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the sub-prediction units and split mode, as disclosed by Jeong, with the invention disclosed by Mammou, Lasserre, and Huang, the motivation being accuracy improvement [0018].

Claims 17-19 are drawn to devices adapted to implement the methods of claims 12-14, and are therefore rejected in the same manner as above. However, the claims also recite a decoder, a geometry decoder, an attribute decoder, and a receiver, which Mammou also teaches (G-PCC decoder) [Figure 1].

Claims 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mammou, Lasserre, and Huang, further in view of Li et al., US 2020/0077114 A1, hereafter Li.

Regarding claim 15, Mammou, Lasserre, and Huang address all of the features with respect to claim 11 as outlined above. However, the combination fails to explicitly disclose the decoding of the geometry data comprises: splitting the geometry data into one or more prediction units (PUs); and based on a merged motion vector being present for the PUs, allocating the merged motion vector to the PUs, wherein the merged motion vector is a motion vector allocated to at least one PU merged according to PU merging, and wherein the bitstream contains information related to the PU merging.
Li, in an analogous environment, discloses the decoding of the geometry data comprises: splitting the geometry data into one or more prediction units (PUs) (prediction units (PUs)…a prediction operation in coding (encoding/decoding) is performed in the unit of a prediction block) [0078]; and based on a merged motion vector being present for the PUs, allocating the merged motion vector to the PUs (a candidate list is constructed in response to the prediction mode) [0148], wherein the merged motion vector is a motion vector allocated to at least one PU merged according to PU merging (the candidate list includes at least a side candidate that is located at a neighboring position to the block) [0148], and wherein the bitstream contains information related to the PU merging (bitstream, the prediction information being indicative of a prediction mode that is based on a candidate list; the entropy encoder (625) is configured to include the general control data, the selected prediction information…, the residue information, and other suitable information in the bitstream) [abstract; 0086].

Mammou, Lasserre, Huang, and Li are analogous because they are related to compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the merged blocks, as disclosed by Li, with the invention disclosed by Mammou, Lasserre, and Huang, the motivation being to improve efficiency [0077].

Claim 20 is drawn to a device adapted to implement the method of claim 15, and is therefore rejected in the same manner as above. However, the claims also recite a decoder, a geometry decoder, an attribute decoder, and a receiver, which Mammou also teaches (G-PCC decoder) [Figure 1].

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEFAN GADOMSKI, whose telephone number is (571) 270-5701. The examiner can normally be reached Monday-Friday, 12-8 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

STEFAN GADOMSKI
Primary Examiner
Art Unit 2485

/STEFAN GADOMSKI/
Primary Examiner, Art Unit 2485

Prosecution Timeline

Oct 19, 2022: Application Filed
Jun 14, 2024: Non-Final Rejection — §103, §112
Sep 13, 2024: Response Filed
Dec 13, 2024: Final Rejection — §103, §112
Mar 06, 2025: Request for Continued Examination
Mar 17, 2025: Response after Non-Final Action
May 15, 2025: Non-Final Rejection — §103, §112
Aug 18, 2025: Response Filed
Nov 26, 2025: Final Rejection — §103, §112
Apr 01, 2026: Request for Continued Examination
Apr 08, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602821: IMAGING DEVICE FOR CALCULATING THREE-DIMENSIONAL POSITION ON THE BASIS OF IMAGE CAPTURED BY VISUAL SENSOR (2y 5m to grant; granted Apr 14, 2026)
Patent 12602771: IN SITU WAFER SEAL CHUCK DEFECTS IDENTIFICATION (2y 5m to grant; granted Apr 14, 2026)
Patent 12596035: THERMAL IMAGING CAMERA DEVICE (2y 5m to grant; granted Apr 07, 2026)
Patent 12581104: MULTIVIEW ACQUISITION INFORMATION SUPPLEMENTAL ENHANCEMENT INFORMATION (2y 5m to grant; granted Mar 17, 2026)
Patent 12573019: ELECTROPLATING CHAMBER LEAKAGE PLATING WARNING METHOD AND SYSTEM (2y 5m to grant; granted Mar 10, 2026)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 76%
With Interview: 83% (+7.4%)
Median Time to Grant: 2y 7m
PTA Risk: High

Based on 412 resolved cases by this examiner. Grant probability is derived from the career allow rate.
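The projection figures above appear to follow simple arithmetic from the examiner's career stats. A minimal sketch of that apparent relationship; treating the +7.4% interview lift as additive percentage points is an assumed reading of this page, not a documented formula:

```python
# Sketch only: how the headline projections above appear to relate.
# The additive treatment of the interview lift is an assumption about
# this page's methodology, not a published formula.
base = 313 / 412   # grant probability, taken from the career allow rate
lift = 0.074       # interview lift, read as probability points
print(f"grant probability: {base:.0%}")
print(f"with interview:    {base + lift:.0%}")
```

Under this reading, 76% plus 7.4 points rounds to the 83% shown for the with-interview case, which matches the displayed figures.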
