Prosecution Insights
Last updated: April 19, 2026
Application No. 18/626,825

ENCODING DEVICE, DECODING DEVICE, ENCODING METHOD, AND DECODING METHOD

Non-Final OA: §102, §103, §112

Filed: Apr 04, 2024
Examiner: PARK, HYORIM NMN
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Panasonic Intellectual Property Corporation of America
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (grants 100%, above average; 1 granted / 1 resolved; +38.0% vs TC avg)
Interview Lift: +100.0% (strong lift in resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 9 currently pending
Career History: 10 total applications across all art units

Statute-Specific Performance

§101: 4.0% (-36.0% vs TC avg)
§103: 60.0% (+20.0% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 1 resolved case

Office Action (§102, §103, §112)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/11/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 13 and 14 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

Regarding claim 13, the phrase "wherein the dihedral angle information indicates the dihedral angle based on a difference between the dihedral angle and a value predicted from previously encoded information" is vague, which renders the claim indefinite, because the limitation (the dihedral angle itself) is also based on the difference between the limitation itself and another value.

Regarding claim 14, the phrase "wherein the information indicating the two angles indicates each of the two angles based on a difference between the angle and a value predicted from previously encoded information" is vague, which renders the claim indefinite, because the limitation (each of the two angles itself) is also based on the difference between the limitation itself and another value.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-12 and 29 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Chen et al. (US 8884953 B2) (hereinafter referred to as Chen).

Regarding claim 1, Chen discloses an encoding device comprising: memory; and a circuit accessible to the memory, wherein in operation, the circuit: encodes first vertex information, second vertex information, and third vertex information, the first vertex information indicating a position of a first vertex of a first triangle that is a triangle in a three-dimensional mesh and on a first plane, the second vertex information indicating a position of a second vertex of the first triangle, the third vertex information indicating a position of a third vertex of the first triangle (Fig. 1; claim 1: "A method for encoding a 3D mesh model composed of triangles, the method being executed by one or more processors or processing elements and comprising determining pairs of triangles, each pair being a reference triangle and a triangle to be predicted, wherein both triangles have a common side along a first axis; for each of said pairs of triangles, determining a dihedral angle between the reference triangle and the triangle to be predicted;"); and encodes, as fourth vertex information indicating a position of a fourth vertex of a second triangle, (i) dihedral angle information indicating a dihedral angle formed by the first plane and a second plane and (ii) identifying information for identifying a position of a fifth vertex that is a virtual vertex of a third triangle, the second triangle being a triangle that is in the three-dimensional mesh and on the second plane and includes the second vertex and the third vertex, the third triangle being a virtual triangle that is on the first plane, includes the second vertex and the third vertex, and corresponds to the second triangle (Fig. 1; Col. 1, line 62 – Col. 2, line 1: "there is a spatial angle θ between the reference triangle Δuvw and the new, so-called spanning triangle Δurv. This spatial angle, referred to as dihedral angle, is encoded. It defines the rotation of the spanning triangle Δurv around the common side uv that the reference triangle Δuvw, the co-planar prediction triangle ΔurPredv and the spanning triangle Δurv have.").

Regarding claim 2, Chen discloses the encoding device according to claim 1, wherein the third triangle is similar to the second triangle. (Note that Fig. 1 indicates the third triangle (ΔurPredv) is similar to the second triangle (Δurv).)

Regarding claim 3, Chen discloses the encoding device according to claim 1, wherein the third triangle is an orthogonal projection of the second triangle for the first plane. (Note that Fig. 1 and Fig. 10(a) indicate the third triangle is generated by orthogonally projecting the second triangle onto the first plane.)

Regarding claim 4, Chen discloses the encoding device according to claim 1, wherein the identifying information includes information that indicates two angles relative to at least one edge of the first triangle and is for identifying, based on two positions of two vertexes of the at least one edge, the position of the fifth vertex using the two angles. (Fig. 5; Col. 2, lines 19-26: "The multi-way prediction exploits all possible reference triangles and uses the average of all single-way predicted positions as the multi-way prediction result. [GA03] and [LAM02] divide vertex positions into tangential coordinates and dihedral angles, as shown in FIG. 5. The dihedral angle between the reference triangle and spanning triangle is predicted and encoded using this kind of local coordinate system.")

Regarding claim 5, Chen discloses the encoding device according to claim 1, wherein the identifying information includes information indicating a two-dimensional vector between the position of the fifth vertex and a position of a sixth vertex that is a virtual vertex of a fourth triangle, the fourth triangle being a virtual triangle that is on the first plane, includes the second vertex and the third vertex, and is similar to the first triangle. (Fig. 8; Col. 5, line 43 – Col. 6, line 3: "The advanced prediction triangle, as defined by one side uv of the reference triangle Tref (or Δuvw) and an advanced predicted vertex rAdvPred, is used to predict the spanning triangle Tspan (or Δurv), which is defined by said side uv of the reference triangle and the actual vertex r. As can be seen, the residual (i.e. the prediction error) between r and rAdvPred is much smaller than that between r and the conventionally predicted vertex rPred. The additional cost for encoding the dihedral angle α is minimized by using the below-described angle clustering. Therefore the actual vertex is cheaper to encode using the advanced predicted vertex, since it uses less data. Still the shape of the advanced prediction triangle TAdvPred is the same as that of the auxiliary triangle (ΔurPredv), i.e. they have the same angles and side lengths (in below-described embodiments with two advanced prediction triangles, this is still true, since both are mirrored, i.e. two angles and two sides are just exchanged). The shape of the spanning triangle TSpan however may be different. It is to be noted here that the reference triangle Tref and the auxiliary triangle ΔurPredv are co-planar, whereas the advanced prediction triangle TAdvPred (ΔurAdvPredv) has a dihedral angle α against them. In one embodiment, the advanced prediction triangle TAdvPred and the spanning triangle Tspan have substantially the same dihedral angle α against the reference plane of Tref. In another embodiment, as described below, the dihedral angle of the spanning triangle may differ from the dihedral angle of the advanced prediction triangle within a defined, small range.")

Regarding claim 6, Chen discloses the encoding device according to claim 1, wherein the identifying information includes information that indicates two scalar factors and is for identifying the position of the fifth vertex by multiplying two unit vectors corresponding to two edges of the first triangle by the two scalar factors. (Note that Fig. 5 indicates two edges P0P and P1P for vertex P; a unit vector is considered to be 1.)
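The dihedral angle that runs through the Chen citations is the hinge angle between two triangles that share an edge. The sketch below is an illustrative reconstruction, not code from Chen or the application; the function name and the angle convention (π for a flat mesh, 0 when one triangle is folded back onto the other, measured about the shared edge u-v) are assumptions of this sketch.

```python
import numpy as np

def dihedral_angle(u, v, w, r):
    """Unsigned hinge angle, about the shared edge u-v, between triangle
    (u, v, w) and triangle (u, v, r): pi when the two triangles lie flat
    in one plane on opposite sides of the edge, 0 when folded together."""
    u, v, w, r = (np.asarray(p, dtype=float) for p in (u, v, w, r))
    e = v - u
    e /= np.linalg.norm(e)
    a = (w - u) - np.dot(w - u, e) * e   # component of w perpendicular to the edge
    b = (r - u) - np.dot(r - u, e) * e   # component of r perpendicular to the edge
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))

u, v = (0, 0, 0), (1, 0, 0)
print(dihedral_angle(u, v, (0.5, 1, 0), (0.5, -1, 0)))  # pi: flat mesh
print(dihedral_angle(u, v, (0.5, 1, 0), (0.5, 0, 1)))   # pi/2: right-angle fold
```

Chen's convention (rotation of the spanning triangle away from the co-planar prediction triangle) may differ from this one by a fixed offset or sign, but the quantity encoded is the same hinge relationship.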
Regarding claim 7, Chen discloses the encoding device according to claim 1, wherein the identifying information includes information indicating a relation between the second triangle and the third triangle. (Note that Fig. 1 indicates a relation between the second triangle and the third triangle.)

Regarding claim 8, Chen discloses the encoding device according to claim 5, wherein the identifying information includes information indicating a relation between the first triangle and the fourth triangle. (Note that Fig. 8 indicates a relation between the first triangle and the fourth triangle.)

Regarding claim 9, Chen discloses the encoding device according to claim 1, wherein the identifying information includes information indicating at least one of a relation between the first triangle and a fourth triangle or a relation between the second triangle and the third triangle in which a position of a sixth vertex that is a virtual vertex of the fourth triangle is regarded as coinciding with the position of the fifth vertex, the fourth triangle being a virtual triangle that is on the first plane, includes the second vertex and the third vertex, and is similar to the first triangle. (Fig. 9; Col. 6, lines 51-58: "The resulting mirrored triangle ΔurAdvPredv fully matches the spanning triangle Tspan (or Δurv). In the next step it is determined which of the three prediction triangles is the best: ΔurPredv, ΔurRotPredv or ΔurAdvPredv. This is determined by comparing their respective residuals, and choosing the prediction triangle that generates the smallest residual. Obviously it will be ΔurAdvPredv in this case, since the residual is zero.")

Regarding claim 10, Chen discloses the encoding device according to claim 1, wherein the circuit: selects one mode from among a plurality of modes including one or more identifying modes in which the dihedral angle information and the identifying information are encoded; encodes mode information indicating the one mode; and encodes the dihedral information and the identifying information when the one mode is included in the one or more identifying modes. (Claim 3: "wherein a first enhanced prediction triangle of a first mode and a second enhanced prediction triangle of a second mode are mirrored along a co-planar second axis that is orthogonal to said first axis"; claim 4: "Method according to claim 3, wherein in the first mode the enhanced prediction triangle corresponds to a co-planar parallelogram extension of the reference triangle that is rotated by said representative dihedral angle on said first axis, and wherein the enhanced prediction triangles of the first and second mode are co-planar and both have said side along the first axis common with the reference triangle.")

Regarding claim 11, Chen discloses the encoding device according to claim 10, wherein when the one mode is included in the one or more identifying modes, the identifying information includes information of a type defined according to the one mode. (Col. 9, line 62 – Col. 10, line 11: "First on geometry level, i.e. in the header of a group of vertex data the prediction mode information includes:…Second, on vertex level, i.e. in the header information of each vertex, the prediction mode information is added.")

Regarding claim 12, Chen discloses the encoding device according to claim 11, wherein the one or more identifying modes include at least one of a first identifying mode, a second identifying mode, a third identifying mode, or a fourth identifying mode. In the first identifying mode, the identifying information includes information that indicates two angles relative to at least one edge of the first triangle and is for identifying, based on two positions of two vertexes of the at least one edge, the position of the fifth vertex using the two angles (Fig. 5; Col. 2, lines 19-26, quoted above with respect to claim 4). In the second identifying mode, the identifying information includes information indicating a two-dimensional vector between the position of the fifth vertex and a position of a sixth vertex that is a virtual vertex of a fourth triangle, the fourth triangle being a virtual triangle that is on the first plane, includes the second vertex and the third vertex, and is similar to the first triangle (Fig. 8; Col. 5, line 43 – Col. 6, line 3, quoted above with respect to claim 5). In the third identifying mode, the identifying information includes information that indicates two scalar factors and is for identifying the position of the fifth vertex by multiplying two unit vectors corresponding to two edges of the first triangle by the two scalar factors (Fig. 5; note that Fig. 5 indicates two edges P0P and P1P for vertex P, with a unit vector considered to be 1). In the fourth identifying mode, the identifying information includes information indicating a relation between the first triangle and the fourth triangle in which the position of the sixth vertex is regarded as coinciding with the position of the fifth vertex (Fig. 9; Col. 6, lines 51-58, quoted above with respect to claim 9).

Regarding claim 29, claim 29 is similar in scope to claim 1 and is rejected under the same rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 15-26 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 8884953 B2) (hereinafter referred to as Chen).

Regarding claims 15-26, the bodies of claims 15-28 correspond to the bodies of claims 1-14, respectively. The only difference is that one set recites an encoding device while the other recites a decoding device. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the instant application to perform the reverse process in order to recover the encoded data. Furthermore, applying a known technique (e.g., the encoding process recited in Chen) to a known device (or methods, or products), such as a decoder ready for improvement, to yield predictable results would have been obvious to a person of ordinary skill in the art.
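The "co-planar parallelogram extension" underlying Chen's prediction modes (the auxiliary triangle ΔurPredv in the passages quoted in the rejection) is the classic parallelogram rule. A minimal sketch, with a hypothetical function name, assuming the standard reflect-through-the-midpoint-of-the-shared-edge construction:

```python
import numpy as np

def parallelogram_predict(u, v, w):
    """Predict the new vertex as the parallelogram completion of
    triangle (u, v, w) across the shared edge u-v: the prediction is
    co-planar with the reference triangle, so only a residual (or a
    dihedral-angle rotation about u-v) remains to be encoded."""
    u, v, w = (np.asarray(p, dtype=float) for p in (u, v, w))
    return u + v - w

# Reference triangle in the z = 0 plane; the predicted vertex mirrors
# w through the midpoint of edge u-v.
print(parallelogram_predict((0, 0, 0), (2, 0, 0), (1, 1, 0)))  # [ 1. -1.  0.]
```

Rotating this co-planar prediction about the edge u-v by an encoded dihedral angle yields the spanning-triangle prediction described in Chen.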
Regarding claim 30, the body of claim 30 corresponds to the body of claim 1. The only difference is that one recites an encoding device while the other recites a decoding device. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the instant application to perform the reverse process in order to recover the encoded data. Furthermore, applying a known technique (e.g., the encoding process recited in Chen) to a known device (or methods, or products), such as a decoder ready for improvement, to yield predictable results would have been obvious to a person of ordinary skill in the art.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 13-14 and 27-28 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 8884953 B2) (hereinafter referred to as Chen) in view of Ahn et al., "Predictive compression of geometry, color and normal data of 3-D mesh models," IEEE Transactions on Circuits and Systems for Video Technology 16.2 (2006): 291-299 (hereinafter referred to as Ahn).

Regarding claim 13, Chen does not explicitly disclose the encoding device according to claim 1, wherein the dihedral angle information indicates the dihedral angle based on a difference between the dihedral angle and a value predicted from previously encoded information. Ahn more explicitly teaches, in the context of an encoding device, wherein the dihedral angle information indicates the dihedral angle based on a difference between the dihedral angle and a value predicted from previously encoded information. (See Fig. 3 of Ahn; Section III (Geometry Compression) of Ahn: "To enhance the prediction performance further, the proposed algorithm estimates the signed dihedral angle between two triangles.") As both Chen and Ahn are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the encoding device disclosed by Chen and according to the teaching of Ahn, wherein the dihedral angle information indicates the dihedral angle based on a difference between the dihedral angle and a value predicted from previously encoded information, in order to develop an efficient mesh compression scheme. (See Section I (Introduction) of Ahn.)

Regarding claim 14, Chen discloses the encoding device according to claim 4, wherein the information indicating the two angles indicates each of the two angles (Fig. 4). However, Chen does not explicitly disclose based on a difference between the angle and a value predicted from previously encoded information. Ahn more explicitly teaches, in the context of a decoding device, based on a difference between the angle and a value predicted from previously encoded information. (See Fig. 3 of Ahn; Section III (Geometry Compression) of Ahn: "To enhance the prediction performance further, the proposed algorithm estimates the signed dihedral angle between two triangles.") As both Chen and Ahn are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the encoding device disclosed by Chen and according to the teaching of Ahn, wherein the dihedral angle information indicates the dihedral angle based on a difference between the dihedral angle and a value predicted from previously encoded information, in order to develop an efficient mesh compression scheme. (See Section I (Introduction) of Ahn.)

Regarding claims 27-28, the bodies of claims 27-28 correspond to the bodies of claims 13-14, respectively. The only difference is that one set recites an encoding device while the other recites a decoding device. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the instant application to perform the reverse process in order to recover the encoded data. Furthermore, applying a known technique (e.g., the encoding process recited in Chen) to a known device (or methods, or products), such as a decoder ready for improvement, to yield predictable results would have been obvious to a person of ordinary skill in the art.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hyorim Park, whose telephone number is (571) 272-3859. The examiner can normally be reached Monday - Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Hyorim Park/
Examiner, Art Unit 2619

/JASON CHAN/
Supervisory Patent Examiner, Art Unit 2619
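The delta coding at issue in the §103 rejection of claims 13-14 and 27-28, where an angle is signalled as its difference from a value predicted from previously encoded information, can be sketched as follows. This is an illustrative reconstruction: the function names and the uniform quantization step are assumptions of this sketch, not details from Chen or Ahn.

```python
import math

STEP = math.pi / 256  # assumed uniform quantizer step for angle residuals

def encode_angle(angle, predicted, step=STEP):
    """Signal the angle as a quantized difference from the value
    predicted from previously encoded triangles: good predictions
    yield small integer symbols, which entropy-code cheaply."""
    return int(round((angle - predicted) / step))

def decode_angle(symbol, predicted, step=STEP):
    """Invert the encoder: prediction plus de-quantized residual."""
    return predicted + symbol * step

predicted = 0.50                 # angle predicted from already-decoded data
symbol = encode_angle(0.52, predicted)
print(symbol)                                                   # 2
print(abs(decode_angle(symbol, predicted) - 0.52) < STEP / 2)   # True
```

The point of the scheme, as reflected in the claim 13 limitation, is that only the small residual symbol reaches the bitstream; the decoder regenerates the prediction from data it has already reconstructed.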

Prosecution Timeline

Apr 04, 2024: Application Filed
Feb 19, 2026: Non-Final Rejection — §102, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 1 resolved case by this examiner. Grant probability derived from the examiner's career allow rate.
