Prosecution Insights
Last updated: April 19, 2026
Application No. 17/689,394

THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE

Final Rejection §103
Filed: Mar 08, 2022
Examiner: ITSKOVICH, MIKHAIL
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: Panasonic Intellectual Property Corporation of America
OA Round: 6 (Final)

Grant Probability: 35% (At Risk)
Expected OA Rounds: 7-8
Time to Grant: 4y 0m
With Interview: 59%

Examiner Intelligence

Career Allow Rate: 35% — grants only 35% of cases (206 granted / 585 resolved; -22.8% vs TC avg)
Interview Lift: +23.8% — resolved cases with an interview vs without
Typical Timeline: 4y 0m average prosecution; 62 applications currently pending
Career History: 647 total applications across all art units
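The headline figures in this panel follow directly from the stated career data. A minimal sketch of the arithmetic as we read it (the rounding convention is an assumption, not documented tool behavior):

```python
# Reproduces the panel's headline numbers from its stated inputs:
# 206 granted of 585 resolved, and a +23.8-point interview lift.
# The rounding convention is an assumption.

granted, resolved = 206, 585
interview_lift_pts = 23.8  # percentage points, as reported

allow_rate_pct = granted / resolved * 100            # ~35.2% career allow rate
grant_probability = round(allow_rate_pct)            # displayed as 35%
with_interview = round(allow_rate_pct + interview_lift_pts)  # displayed as 59%

print(grant_probability, with_interview)
```

This also explains why "35% + 23.8%" is shown as 59%: the lift is added in percentage points to the unrounded allow rate before display rounding.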

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 53.5% (+13.5% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)

Rates are shown against the Tech Center average estimate, based on career data from 585 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on 12/12/2025 have been fully considered but they are not persuasive.

Applicant argues: “On pages 9 and 10 of the Office Action, the Examiner states the following: "an item of attribute information of the parent node is calculated using items of attribute information of child nodes of the parent node." (This feature appears to belong to the second mode. As noted above, Mammou teaches predicting attribute information of a child based on one of the parent nodes. See Mammou, Paragraphs 533, 107, 625-626. Chou teaches "a given point cloud frame references a reconstructed version of a previously coded/decoded frame (reference frame) in the same temporal layer or a lower temporal layer." Chou, Paragraphs 290-292, 299, Fig. 26, and statement of motivation below. So, this data prediction appears to be known as well.)" Based on the Examiner's above-noted comments (as well as the Examiner's comments in the rejection of claims under 35 U.S.C. 112(a)), Applicant notes that the Examiner appears to consider the limitation of "an item of attribute information of the parent node is calculated using items of attribute information of child nodes of the parent node" to be a drafting error, and that the reverse relationship is what is intended to have been claimed by the Applicant.”

Examiner notes that Applicant fails to address the embodiments cited in Chou, Paragraphs 290-292, 299, Fig. 26, which remain relevant to the present claims. See updated reasons for rejection below.
Applicant argues regarding the newly amended language: “In order to expedite prosecution of the instant application, Applicant notes that claim 1 has been amended to require only "in the generating of the predicted value, the predicted value of the current node is generated using one or more items of attribute information of one or more second nodes among first nodes … It is respectfully submitted that neither Mammou nor Chou teaches this feature of the presently claimed invention. It is respectfully submitted that the above-noted features of claim 1 would not be obvious to one of ordinary skill in the art, and further, neither Mammou nor Chou discloses these features of the claimed invention.”

Examiner notes that the claim amendments appear to select one of the claimed modes that was previously rejected, which remains rejected for substantively the same reasons. Applicant does not address the reasons for rejection or explain why this claim element would not be obvious in view of the rejection. See updated reasons for rejection below. To expedite prosecution, Examiner suggests directing the claims to particular items of attribute information and predicted values that define a problem that Applicant proposes to solve, and claiming particular methods of prediction and calculation to define a solution that solves that problem.

35 USC § 112

Examiner withdraws the rejection of Claims 1-14 under 35 U.S.C. 112(a). The claims recite “an item of attribute information of the parent node is calculated using items of attribute information of child nodes of the parent node.” Applicant has cited support for this claim language in the Arguments filed 12/12/2025: “at least the disclosure at page 137, line 1 to page 138, line 2 of the specification as originally filed, which recites the following: "An item of attribute information of a node belonging to a high layer such as a parent node is calculated from, for example, items of attribute information of a low layer of the node.
For example, an item of attribute information of a parent node is an average value of a plurality of items of attribute information of a plurality of child nodes of the parent node. Note that the method for calculating an item of attribute information of a node belonging to a high layer is not limited to calculating an arithmetic average and may be another method such as calculating a weighted average."”

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over US 20190311500 to Mammou (“Mammou”) in view of US 20170347122 to Chou (“Chou”).

Generally, Examiner notes that Applicant uses terminology that may be applied to different aspects of video encoding known in the art. Examiner suggests providing claim language that qualifies terms like “attribute information, node, predicted value, transform, first/second/third component” to refer to particular video coding terms and standards in the art.

Regarding Claim 1: “A three-dimensional data encoding method comprising: generating a predicted value of an item of attribute information of a current node in an N-ary tree structure of three-dimensional points included in point cloud data, N being an integer greater than or equal to 2; and” (“a point cloud is compressed via a patching technique … such as the set of 2D images describing the geometry … 1. Quadtree coding may proceed as follows: … b. Recursively split the square into 4 sub-squares … ” Thus the information of the point cloud point can be split into 4 quadtree leaves, which can then be further split. Mammou, Paragraphs 510, 521-536. See additional tree representations in Chou, Paragraph 298 and Figs. 27-28 (including an octree partition exemplified in Specification, Page 30, lines 2-4). See statement of motivation below.)

“encoding the item of attribute information of the current node using the predicted value and a transform process that performs, for each of layers in the N-ary tree structure, an operation for separating each of input signals into a first component and a second component,” (For example, the data of the point can be separated (and thus transformed) into predicted data and encoded difference data, which can be embodied in a higher layer of the encoding process: “the missed points can be recorded by the difference from Q(d(Q)).
For example, instead of signaling d(Q) [second component layer] for the multiple missed points, a d(Q) value can be signaled for a first one of the missed points and a further difference relative to a previous difference [creating a further component layer] can be signaled for the other ones of the multiple missed points that share the same reference point.” Mammou, Paragraph 533. Thus, the data of the tree structures can be coded in two or more coding layers, which can be separated into further coding layers. Note similar embodiments of coding of prediction residual using HAHT in Chou, Paragraph 135, and of coding elements of one layer by referencing information or a quality in the same or a lower parent layer in Chou, Paragraphs 290-292 and 299, and statement of motivation below.)

“wherein in the generating of the predicted value: the predicted value of the current node is generated using one or more items of attribute information of one or more second nodes among first nodes, the first nodes including a parent node of the current node and belonging to a same layer as the parent node in the N-ary tree structure, the parent node and the first nodes being included in the N-ary tree structure,”

Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, in a prediction of the values under the modes above: (a) previously coded nodes of a tree that are in a higher layer (first nodes including a parent node) are used to predict the values of the current node, and (b) different nodes in this group (second and third) can be used as references. Prior art provides an example of this. For the first and second modes, “the missed points can be recorded by the difference from Q(d(Q)).
For example, instead of signaling d(Q) for the multiple missed points, a d(Q) value can be signaled for a first one of the missed points and a further difference relative to a previous difference can be signaled for the other ones of the multiple missed points that share the same reference point.”

Thus, an attribute of each point in a point cloud (node) can be predicted by referencing (as a delta of) an attribute Q of a parent for the first missing point (exemplifying a second node) or a previously coded parent of another missing point (exemplifying another second or a third node), all belonging to a group of parent points (exemplifying first nodes). See Mammou, Paragraphs 533, 107, 625-626. Here, in the first mode the referenced Q can be from the parent nodes that are the first nodes but not third nodes, and in the second mode it can be from parent nodes that are first nodes and third nodes. See additional embodiments of referencing information in the same or another layer in Mammou, Paragraphs 142-145, and similarly in Chou, Paragraphs 290-292 and 299, and statement of motivation below.

Finally, note that there is a high degree of substitutability within the standards: “a missed point P may be located at a same location in a patch projection (same tangential and bitangential axis), but may be located at a different depth,” where in some instances a predicted node can be changed to a lower level to allow for a larger pool of the “first” nodes to be referenced. See Mammou, Paragraphs 510, 625-626. Also note that reference parameters can be signaled at any level in the hierarchy. Mammou, Paragraph 492. This indicates a wide substitutability of nodes and levels to reference and thus a wide degree of obviousness for using a particular data dependence.)
“an item of attribute information of the parent node is calculated using items of attribute information of child nodes of the parent node.”

(First, note that if the attribute information is embodied in the predicted value of the above element, since the predicted value of a child is based on the parent, the predicted value of the parent is easily calculated by reversing the prediction operation above. But there are many other examples reading on this element. For example, the resolution of the parent node (in a layer with first nodes) can be calculated based on the resolution of the child nodes in another layer: “In some embodiments, scaling/up-sampling/down-sampling could be applied to the produced patch images/layers in order to control the number of points in the reconstructed point cloud. … Guided up-sampling strategies may be performed on the layers that were down-sampled given the full resolution image from another "primary" layer that was not down-sampled.” Mammou, Paragraphs 142-144. In another attribute embodiment, “In some embodiments, the generated layers may be encoded with different precisions.” Mammou, Paragraph 145. See similar embodiments in Chou, Paragraphs 290-292, 299, Fig. 26, and statement of motivation below.)

Mammou does not teach: “the transform process is a Region Adaptive Hierarchical Transform (RAHT) or a HAAR transform.” However, it appears that this technique was known in the art. Chou teaches the above technique in the context of processing three-dimensional point cloud data: “For example, an encoder uses a region-adaptive hierarchical transform ("RAHT"), which can provide a very compact way to represent the attributes of occupied points in point cloud data, … In some example implementations, the RAHT is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The RAHT traverses a hierarchical representation of point cloud data (specifically, an octree), …” Chou, Paragraphs 99-100, 338.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Mammou to perform “a Region Adaptive Hierarchical Transform (RAHT) or a HAAR transform” on point cloud data as taught in Chou, because this approach is “computationally simpler than many previous approaches to compression of point cloud data.” Chou, Paragraph 99.

Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.

Where Mammou does not explicitly teach an embodiment of the claim where the recursive quadtree partition results in more than one partition level, Chou teaches the above claim feature in the context of encoding 3D information using existing video coding standards: “a hierarchical representation of point cloud data (specifically, an octree), starting from a top level and continuing through successively lower levels.” Chou, Paragraphs 100, 305, and Figs. 27-28. This embodiment corresponds to the example tree in Specification, Page 30, lines 2-4.

Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Mammou to partition the point data into more than one level of quadtree partitioning as taught in Chou, in order to provide for efficient compression and encoding of 3D video data. Chou, Paragraph 99.
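The per-layer operation the claim recites — separating each input signal into a "first component" and a "second component" — can be illustrated with an unweighted Haar-style split. This is a simplified sketch of the general transform family under discussion, not the claimed method and not Chou's RAHT (RAHT additionally weights each coefficient pair by the number of occupied points the regions contain, which is omitted here):

```python
# Minimal Haar-style sketch (not RAHT): at each layer, pairs of input
# values are separated into a "first component" (pairwise averages) and
# a "second component" (pairwise differences). Applied recursively, this
# yields one DC value plus per-layer detail coefficients.

def haar_layer(signal):
    """Split one layer's signal into (averages, differences)."""
    avgs = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    diffs = [a - b for a, b in zip(signal[0::2], signal[1::2])]
    return avgs, diffs

def haar_forward(signal):
    """Apply the per-layer split recursively up the hierarchy."""
    coeffs = []
    while len(signal) > 1:
        signal, diffs = haar_layer(signal)
        coeffs.append(diffs)
    return signal[0], coeffs  # DC term plus per-layer detail coefficients

dc, details = haar_forward([8, 4, 6, 2])
# layer 1: averages [6.0, 4.0], differences [4, 4]
# layer 2: average [5.0], difference [2.0]
```

The averages at each layer play the role of the coarser (parent-layer) signal, and the differences are the residual component that gets encoded; an integer-to-integer variant would round rather than keep fractional averages.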
Regarding Claim 2: “The three-dimensional data encoding method according to claim 1, wherein in the first mode, an item of attribute information of a fourth node among the one or more second nodes is directly used as the predicted value, and in the second mode, the predicted value is calculated from items of attribute information of fifth nodes including the one or more third nodes.” (Note that the lower first node information can be coded directly or itself predicted from a higher second node information to be combined with a coded difference: “the missed points can be recorded by the difference from Q(d(Q)) [calculated from a fifth node]. For example, instead of signaling d(Q) [directly from the fourth / parent node] for the multiple missed points, a d(Q) value can be signaled for a first one of the missed points and a further difference relative to a previous difference [i.e., from attribute information of a previous node] can be signaled for the other ones of the multiple missed points that share the same reference point.” Mammou, Paragraph 533.)

Regarding Claim 3: “The three-dimensional data encoding method according to claim 2, wherein the fourth node is the parent node.” (As noted for Claims 1 and 2, the fourth node can be a reference node or in a higher-level partition of the quadtree and thus a parent node, for “signaling d(Q) [directly from the fourth / parent node] for the multiple missed points.” Mammou, Paragraph 533.)

Regarding Claim 4: “The three-dimensional data encoding method according to claim 1, comprising: generating predicted values of fourth nodes that include the current node and belong to a same layer as the current node,” (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, the third node can be the current node or a higher tree partition of the current node.
Claim 1 indicates creation of the current tree partition level as well as higher intermediate tree partitions: “a point cloud is compressed via a patching technique … such as the set of 2D images describing the geometry … 1. Quadtree coding may proceed as follows: … b. Recursively split the square into 4 sub-squares … ” Thus the information of the point cloud point can be split into 4 quadtree leaves, which can then be further split. Mammou, Paragraphs 510, 521-536.)

“wherein in the encoding: … the transform process is performed on items of attribute information of the third nodes to generate first transform coefficients; … the transform process is performed on the predicted values of the fourth nodes to generate second transform coefficients; … difference values between corresponding ones of the first transform coefficients and the second transform coefficients are calculated;” (Note that the coded points can be encoded as a difference value between the predicted values and the target values: “the missed points can be recorded by the difference from Q(d(Q)). For example, instead of signaling d(Q) for the multiple missed points, a d(Q) value can be signaled for a first one of the missed points and a further difference relative to a previous difference can be signaled for the other ones of the multiple missed points that share the same reference point.” Mammou, Paragraph 533.)

“and the difference values calculated are encoded, and” (Note that the lower first node information can be coded directly or itself predicted from a higher second node information to be combined with a coded difference: “the missed points can be recorded by the difference from Q(d(Q)).
For example, instead of signaling d(Q) for the multiple missed points, a d(Q) value can be signaled for a first one of the missed points and a further difference relative to a previous difference can be signaled for the other ones of the multiple missed points that share the same reference point.” Mammou, Paragraph 533.)

Where Mammou does not explicitly teach an embodiment of the claim where the recursive quadtree partition results in more than one partition level, Chou teaches the above claim feature in the context of encoding 3D information using existing video coding standards: “a hierarchical representation of point cloud data (specifically, an octree), starting from a top level and continuing through successively lower levels.” Chou, Paragraphs 100, 305. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Mammou to partition the point data into more than one level of quadtree partitioning as taught in Chou, in order to provide for efficient compression and encoding of 3D video data. Chou, Paragraph 99.

Regarding Claim 5: “The three-dimensional data encoding method according to claim 4, … wherein the transform process is an integer-to-integer transform, and … in the generating of the second transform coefficients, fractional portions of the predicted values of the fourth nodes are discarded, and … the transform process is performed on the predicted values after the discarding to generate the second transform coefficients.” (Mammou describes performing integer value quantization (discarding less significant bits) before splitting the node into sub-tree units and before splitting the coded information into predicted and difference information. Mammou, Paragraphs 517-531.)
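The specification passage quoted earlier in this action (a parent node's attribute as the arithmetic average of its children's attributes) reduces to a simple bottom-up calculation over the tree. A hypothetical sketch — the class and attribute names are illustrative, not drawn from the application or the cited references:

```python
# Illustrative sketch (names hypothetical): a quadtree node whose attribute
# is the arithmetic mean of its children's attributes, per the quoted
# specification passage ("an average value of a plurality of items of
# attribute information of a plurality of child nodes of the parent node").

class QuadNode:
    def __init__(self, attribute=None, children=None):
        self.children = children or []
        # A leaf carries its own attribute; an internal (parent) node's
        # attribute is calculated bottom-up from its children.
        self.attribute = (
            attribute if not self.children
            else sum(c.attribute for c in self.children) / len(self.children)
        )

# Four leaves -> one parent whose attribute is the children's average.
leaves = [QuadNode(a) for a in (10, 20, 30, 40)]
parent = QuadNode(children=leaves)
print(parent.attribute)  # 25.0
```

As the specification notes, the average could equally be a weighted average or another aggregation; only the direction of the calculation (children to parent) is fixed by the claim language at issue.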
Regarding Claim 6: “The three-dimensional data encoding method according to claim 4, wherein the transform process is an integer-to-integer transform, and in the calculating of the difference values, fractional portions of the second transform coefficients are discarded, and the difference values are calculated using the second transform coefficients after the discarding.” (Mammou describes performing quantization on integer values (discarding less significant or fractional bits) before splitting the node into sub-tree units and before splitting the coded information into predicted and difference information. Mammou, Paragraphs 517-531.)

Claim 7, “A three-dimensional data decoding method,” is rejected for the reasons stated for Claim 1, because the decoding method steps of Claim 7 precisely reverse the encoding method steps of Claim 1.

Claims 8-12 are rejected for the reasons stated for Claims 2-6 in view of the Claim 7 rejection.

Claim 13, “A three-dimensional data encoding device,” is rejected for the reasons stated for Claim 1, and because the prior art teaches the following: “a processor; and memory, wherein, using the memory, the processor …” (“FIG. 16 illustrates an example computer system 1600 that may implement an encoder or decoder or any other ones of the components described herein.” Mammou, Paragraphs 635-638.)

Claim 14, “A three-dimensional data decoding device,” is rejected for the reasons stated for Claims 7 and 13.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIKHAIL ITSKOVICH, whose telephone number is (571) 270-7940. The examiner can normally be reached Mon.-Thu., 9am-8pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MIKHAIL ITSKOVICH/
Primary Examiner, Art Unit 2483

Prosecution Timeline

Mar 08, 2022 — Application Filed
Sep 23, 2023 — Non-Final Rejection — §103
Jan 29, 2024 — Response Filed
May 06, 2024 — Final Rejection — §103
Aug 07, 2024 — Response after Non-Final Action
Aug 12, 2024 — Response after Non-Final Action
Sep 04, 2024 — Request for Continued Examination
Sep 26, 2024 — Response after Non-Final Action
Sep 30, 2024 — Non-Final Rejection — §103
Dec 30, 2024 — Response Filed
Apr 09, 2025 — Final Rejection — §103
Aug 08, 2025 — Request for Continued Examination
Aug 12, 2025 — Response after Non-Final Action
Aug 23, 2025 — Non-Final Rejection — §103
Dec 12, 2025 — Response Filed
Jan 10, 2026 — Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548733 — Automating cryo-electron microscopy data collection. Granted Feb 10, 2026; 2y 5m to grant.
Patent 12489911 — IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, RECEIVING APPARATUS, AND TRANSMITTING APPARATUS. Granted Dec 02, 2025; 2y 5m to grant.
Patent 12477146 — ENCODING AND DECODING METHOD, DEVICE AND APPARATUS. Granted Nov 18, 2025; 2y 5m to grant.
Patent 12452404 — METHOD FOR DETERMINING SPECIFIC LINEAR MODEL AND VIDEO PROCESSING DEVICE. Granted Oct 21, 2025; 2y 5m to grant.
Patent 12432328 — SYSTEM AND METHOD FOR RENDERING THREE-DIMENSIONAL IMAGE CONTENT. Granted Sep 30, 2025; 2y 5m to grant.
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 35%
With Interview: 59% (+23.8%)
Median Time to Grant: 4y 0m
PTA Risk: High

Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
