Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on September 25, 2023, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendments
Applicant’s Amendment to the claims filed on September 25, 2023, has been entered and made of record.
Independent Claim(s) 1, 2, and 15
Amended Claim(s) 1-8 and 10-12
Canceled Claim(s) 13-14 and 16-18
Newly Added Claim(s) 19-25
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-2, 10, 15 and 23 are rejected under 35 U.S.C. 112(b) as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention.
Regarding claims 1, 2, and 15, the limitation “selecting a context based on an apparent angle (AAd) representing an interval angle seen from a sensor that captured the point” is recited. However, there is no algorithm provided in the claims or specification on how (AAd) is used to select a context. Rather, (φpred), (φleft,d), and (φright,d) are used for selecting a context using methods from the prior art. (AAd) is used as a ratio with (∆φ) for dividing the contexts into subgroups. The Examiner suggests specifying that AAd is used along with ∆φ for indexing subsets of contexts, but that the context is actually determined by the prior art algorithm using (φpred), (φleft,d), and (φright,d).
Regarding claims 10 and 23, the limitation “wherein contexts are grouped into at least two context subsets based on a range of particular values of the ratio” is recited. Neither this claim nor the specification provides an algorithm or reasoning for how contexts are grouped into subsets based on the ratio (∆φ / AAd). Pages 30-31 teach that each subset corresponds to a level of prediction quality of (φpred) [pp.30, 27-28]. However, the specification simply states that each subset can contain contexts known in the prior art [pp.30, 29], but there is no discussion of why certain contexts are placed into each subset. It is unclear which contexts should be associated with higher or lower levels of the prediction quality of (φpred); thus, it is unclear how grouping contexts into subsets solves the problem proposed in the background section. The Examiner suggests explaining how contexts are selected for each subset and why specific contexts correspond to the level of the prediction quality of (φpred).
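By way of illustration only, an indexing scheme of the kind suggested above might resemble the following sketch; the thresholds, the number of subsets, and all names here are hypothetical and are not disclosed in the claims or specification:

```python
# Hypothetical sketch: indexing context subsets by the ratio (delta_phi / AA_d).
# The thresholds and subset count are invented for illustration; the
# specification does not disclose which contexts belong in which subset.

def select_context_subset(delta_phi: float, apparent_angle: float) -> int:
    """Map the ratio delta_phi / AA_d to a subset index (hypothetical thresholds)."""
    ratio = delta_phi / apparent_angle
    if ratio >= 2.0:    # very small interval: prediction assumed most reliable
        return 2
    elif ratio >= 1.0:  # borderline case
        return 1
    return 0            # interval large relative to the elementary azimuthal shift

# Hypothetical table: each subset would hold prior-art contexts, but the
# specification does not say which contexts are placed into each subset.
CONTEXT_SUBSETS = [list(range(0, 16)), list(range(16, 32)), list(range(32, 48))]

assert select_context_subset(0.5, 1.0) == 0
assert select_context_subset(1.5, 1.0) == 1
assert select_context_subset(4.0, 1.0) == 2
```

Such a sketch still leaves open the question raised above: why any particular prior-art context would be assigned to a given subset.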
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-9, 15, and 19-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by the International Organization for Standardization (G-PCC codec description. MPEG 3D Graphics Coding. URL: https://mpeg-pcc.org/index.php/public-contributions/g-pcc-codec-description-is-updated-w-r-t-decisions-from-the-october-2020-meeting/), hereafter ISO.
Regarding claim 1, ISO teaches a method of encoding a point cloud into a bitstream of encoded point cloud data representing a physical object ([Abstract] “This document provides a detailed description of the point cloud compression G-PCC (Geometry based Point Cloud Compression).”), the method comprising an azimuthal coding mode providing a series of bits for encoding a coordinate of a point of the point cloud ([3.2.6] “The azimuthal mode that works similarly to the angular mode was introduced to improve planar coding mode and IDCM nodes coding. It uses azimuthal angle of already coded nodes to improve compression of binary occupancy coding through the prediction of the x or y plane position of the planar mode and the prediction of x or y-coordinate bits in IDCM nodes.”), wherein the method comprises:
dividing an interval, to which the point coordinate belongs to, into a left half interval and a right half interval (Fig. 52 shows the steps of determining a planar direction for the point coordinate. Fig. 53 shows dividing the plane into left and right sections. [3.2.6.2] “The azimuthal angular information enhances the plane position set of contexts. At first, the planar direction (x-planar or y-planar) predicted by azimuthal angle is selected based on the following conditions…”);
selecting a context based on an apparent angle (AAd) representing an interval angle seen from a sensor that captured the point (The Inferred Direct Coding Mode (IDCM) of G-PCC includes using a predicted angle, which is the closest to the center of the plane, and a left and right angle, signifying the left and right halves of the plane. These angles form an interval, and a context is chosen from these three angles and the interval. [3.2.6.1.2] “The correction of the azimuthal angle to obtain the prediction angle φpred is conducted by a multiple n of the elementary azimuthal shift ∆φh such as to become the closest possible from the azimuthal angle of the center of the current node.” [3.2.6.2] “Then, an azimuthal angular context is selected based on φpred, φleft and φright from 16 angular context values. It depends on…”); and
context-adaptive binary entropy encoding a bit (bd) of the series of bits, into the bitstream, based on the selected context, the encoded bit (bd) indicating which of the two half intervals the point coordinate belongs to ([Section 3.2.6.3] “IDCM with azimuthal angular coding mode are introduced to enhance IDCM. In this mode, x (or y) coordinate bits are still bypassed and y (or x) coordinate bits are entropy coded by using azimuthal angle contexts. z-coordinate bits are entropy coded using angular contexts. The x (or y) coordinate bit is coded based on a x (or y)-interval as following steps (Figure 51, Figure 52),
Split in two after (de)coding each x (or y)-coordinate bit
Azimuthal angular prediction of the sub-interval based on azimuthal predictor
Azimuthal context to (de)code the bit
Context determine based on the sub-interval prediction”).
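For illustration, the four quoted steps may be sketched as follows. This is a simplified, hypothetical model of the quoted procedure, not the G-PCC reference software; the angle and context computations are invented for clarity:

```python
# Sketch of coding one coordinate by repeated interval halving, selecting a
# context from a predicted azimuthal angle versus the angles of the two
# half-interval centers. Illustrative only; not the G-PCC reference software.
import math

def code_coordinate_bits(x: int, lo: int, hi: int, sensor_y: float, phi_pred: float):
    """Emit one (bit, context) pair per halving of the interval [lo, hi)."""
    out = []
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Azimuthal angles of the two half-interval centers, seen from the sensor.
        phi_left = math.atan2(sensor_y, (lo + mid) / 2.0)
        phi_right = math.atan2(sensor_y, (mid + hi) / 2.0)
        # Context: which half-interval angle the azimuthal predictor is closer to.
        ctx = 0 if abs(phi_pred - phi_left) < abs(phi_pred - phi_right) else 1
        bit = 1 if x >= mid else 0          # which half the coordinate falls in
        out.append((bit, ctx))
        lo, hi = (mid, hi) if bit else (lo, mid)  # split in two after coding the bit
    return out

bits = code_coordinate_bits(x=5, lo=0, hi=8, sensor_y=10.0, phi_pred=math.atan2(10.0, 5.0))
assert [b for b, _ in bits] == [1, 0, 1]  # 5 = 0b101 within [0, 8)
```

In an actual codec the (bit, context) pairs would feed a context-adaptive binary entropy coder; here they are simply collected to show the interval-splitting structure.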
Regarding claim 2, ISO teaches a method of decoding a point cloud from a bitstream of encoded point cloud data representing a physical object ([Abstract] “This document provides a detailed description of the point cloud compression G-PCC (Geometry based Point Cloud Compression).”), the method comprising an azimuthal coding mode providing a series of bits for decoding a coordinate of a point of the point cloud ([3.2.6] “The azimuthal mode that works similarly to the angular mode was introduced to improve planar coding mode and IDCM nodes coding. It uses azimuthal angle of already coded nodes to improve compression of binary occupancy coding through the prediction of the x or y plane position of the planar mode and the prediction of x or y-coordinate bits in IDCM nodes.”), wherein the method comprises:
dividing an interval, to which the point coordinate belongs to, into a left half interval and a right half interval (Fig. 52 shows the steps of determining a planar direction for the point coordinate. Fig. 53 shows dividing the plane into left and right sections. [3.2.6.2] “The azimuthal angular information enhances the plane position set of contexts. At first, the planar direction (x-planar or y-planar) predicted by azimuthal angle is selected based on the following conditions…”);
selecting a context based on an apparent angle (AAd) representing an interval angle seen from a sensor that captured the point (The Inferred Direct Coding Mode (IDCM) of G-PCC includes using a predicted angle, which is the closest to the center of the plane, and a left and right angle, signifying the left and right halves of the plane. These angles form an interval, and a context is chosen from these three angles and the interval. [3.2.6.1.2] “The correction of the azimuthal angle to obtain the prediction angle φpred is conducted by a multiple n of the elementary azimuthal shift ∆φh such as to become the closest possible from the azimuthal angle of the center of the current node.” [3.2.6.2] “Then, an azimuthal angular context is selected based on φpred, φleft and φright from 16 angular context values. It depends on…”); and
context-adaptive binary entropy decoding a bit (bd), from the bitstream, based on the selected context, the decoded bit (bd) indicating which of the two half intervals the point coordinate belongs to ([Section 3.2.6.3] “IDCM with azimuthal angular coding mode are introduced to enhance IDCM. In this mode, x (or y) coordinate bits are still bypassed and y (or x) coordinate bits are entropy coded by using azimuthal angle contexts. z-coordinate bits are entropy coded using angular contexts. The x (or y) coordinate bit is coded based on a x (or y)-interval as following steps (Figure 51, Figure 52),
Split in two after (de)coding each x (or y)-coordinate bit
Azimuthal angular prediction of the sub-interval based on azimuthal predictor
Azimuthal context to (de)code the bit
Context determine based on the sub-interval prediction”).
Regarding claim 3, ISO teaches the method of claim 2, wherein the apparent angle (AAd) is estimated based on at least one of a first angle (φnode,d) associated with a lower bound of the interval, a second angle (φtop,d) associated with an upper bound of the interval, and a third angle (φmiddle,d) associated with a middle point of the interval (In claim 2, the apparent angle is defined as representing an interval angle seen from a sensor that captured the point. Regarding IDCM, ISO teaches in section 3.2.6 initializing an interval based on multiple angles: left, right, and predicted. See Figs. 52 and 53. Additionally, the previous section, 3.2.5.4, teaches defining a z-axis interval using top and bottom planes.).
Regarding claim 4, ISO teaches the method of claim 3, wherein the apparent angle (AAd) is estimated based on the first angle (φnode,d) and the second angle (φtop,d) (ISO teaches in section 3.2.6 initializing an interval based on multiple angles: left, right, and predicted. See Figs. 52 and 53. Additionally, the previous section, 3.2.5.4, teaches defining a z-axis interval using top and bottom planes.).
Regarding claim 5, ISO teaches the method of claim 3, wherein the apparent angle (AAd) is estimated based on the first angle (φnode,d) and the third angle (φmiddle,d) (ISO teaches in section 3.2.6 initializing an interval based on multiple angles: left, right, and predicted. See Figs. 52 and 53. Additionally, the previous section, 3.2.5.4, teaches defining a z-axis interval using top and bottom planes.).
Regarding claim 6, ISO teaches the method of claim 3, wherein the apparent angle (AAd) is estimated based on the second angle (φtop,d) and the third angle (φmiddle,d) (ISO teaches in section 3.2.6 initializing an interval based on multiple angles: left, right, and predicted. See Figs. 52 and 53. Additionally, the previous section, 3.2.5.4, teaches defining a z-axis interval using top and bottom planes.).
Regarding claim 7, ISO teaches the method of claim 3, wherein the apparent angle (AAd) is estimated based on an interval size (sd) and a distance (r) between the point and the sensor that captured the point (The apparent angle represents an interval angle seen from the sensor. In Figs. 43-44, ISO shows initializing an interval from the node before determining the angles top and bottom which pass through the interval.).
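For reference, an interval of size (sd) at distance (r) subtends an angle of approximately sd/r at the sensor. A minimal sketch of this small-angle estimate follows; it is an illustration of the geometric relationship only, not a formula taken from the specification or from ISO:

```python
# Apparent angle subtended at the sensor by an interval of size s at distance r.
# Exact value: 2*atan(s / (2*r)); for s << r this is approximately s / r, which
# is the kind of estimate claim 7 describes. Illustrative sketch only.
import math

def apparent_angle(interval_size: float, distance: float) -> float:
    return 2.0 * math.atan(interval_size / (2.0 * distance))

aa = apparent_angle(1.0, 100.0)
assert abs(aa - 1.0 / 100.0) < 1e-5  # small-angle approximation holds
```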
Regarding claim 8, ISO teaches the method of claim 2, wherein the context is selected based on relative magnitudes of an elementary azimuthal angle (∆φ) and the apparent angle (AAd) ([3.2.6.1.1] “The azimuthal coding mode is applied for nodes where the «azimuthal angular size» node_size / r is small enough for the eligibility with r relative to the lidar head position (xLidar, yLidar, zLidar), i.e. if the azimuthal angular is lower than the elementary azimuthal shift ∆φh.” Section 3.2.6.2 teaches choosing a context based on the left, right, and predicted azimuth angles when the ratio is less than 1.).
Regarding claim 9, ISO teaches the method of claim 8, wherein the context is selected based on a ratio between the elementary azimuthal angle (∆φ) over the apparent angle (AAd) (Using this ratio to determine the prediction quality of (φpred) would be obvious to one of ordinary skill in the art, because IDCM accounts for this ratio by ensuring that the node or interval size is smaller than the elementary azimuth angle. A context is selected using (φpred), (φleft,d), and (φright,d) when the ratio is less than 1, and no context can be selected when the ratio is greater than 1. [3.2.6.1.1] “The azimuthal coding mode is applied for nodes where the «azimuthal angular size» node_size / r is small enough for the eligibility with r relative to the lidar head position (xLidar, yLidar, zLidar), i.e. if the azimuthal angular is lower than the elementary azimuthal shift ∆φh.”).
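ISO's eligibility condition quoted above can be sketched as follows; the function and argument names are illustrative, and only the quoted inequality (azimuthal angular size node_size / r lower than the elementary azimuthal shift) is from ISO section 3.2.6.1.1:

```python
# Sketch of ISO's azimuthal-mode eligibility test: the node's azimuthal angular
# size (node_size / r) must be lower than the elementary azimuthal shift.
# Names are illustrative; the inequality itself is the quoted condition.

def azimuthal_mode_eligible(node_size: float, r: float, delta_phi_h: float) -> bool:
    return (node_size / r) < delta_phi_h

assert azimuthal_mode_eligible(node_size=0.1, r=50.0, delta_phi_h=0.01)      # 0.002 < 0.01
assert not azimuthal_mode_eligible(node_size=1.0, r=50.0, delta_phi_h=0.01)  # 0.02 >= 0.01
```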
Regarding claim 15, ISO teaches a non-transitory computer-readable storage medium having stored thereon instructions that, when executed by one or more processors, cause the one or more processors (ISO teaches methods for G-PCC, which would be executed on a computer requiring at least storage, memory, and a processor.) to carry out a method of encoding a point cloud into a bitstream of encoded point cloud data representing a physical object ([3.2.6] “The azimuthal mode that works similarly to the angular mode was introduced to improve planar coding mode and IDCM nodes coding. It uses azimuthal angle of already coded nodes to improve compression of binary occupancy coding through the prediction of the x or y plane position of the planar mode and the prediction of x or y-coordinate bits in IDCM nodes.”),
the method comprising an azimuthal coding mode providing a series of bits for encoding a coordinate of a point of the point cloud ([3.2.6] “The azimuthal mode that works similarly to the angular mode was introduced to improve planar coding mode and IDCM nodes coding. It uses azimuthal angle of already coded nodes to improve compression of binary occupancy coding through the prediction of the x or y plane position of the planar mode and the prediction of x or y-coordinate bits in IDCM nodes.”), wherein the method comprises:
dividing an interval, to which the point coordinate belongs to, into a left half interval and a right half interval (Fig. 52 shows the steps of determining a planar direction for the point coordinate. Fig. 53 shows dividing the plane into left and right sections. [3.2.6.2] “The azimuthal angular information enhances the plane position set of contexts. At first, the planar direction (x-planar or y-planar) predicted by azimuthal angle is selected based on the following conditions…”);
selecting a context based on an apparent angle representing an interval angle seen from a sensor that captured the point (The Inferred Direct Coding Mode (IDCM) of G-PCC includes using a predicted angle, which is the closest to the center of the plane, and a left and right angle, signifying the left and right halves of the plane. These angles form an interval, and a context is chosen from these three angles and the interval. [3.2.6.1.2] “The correction of the azimuthal angle to obtain the prediction angle φpred is conducted by a multiple n of the elementary azimuthal shift ∆φh such as to become the closest possible from the azimuthal angle of the center of the current node.” [3.2.6.2] “Then, an azimuthal angular context is selected based on φpred, φleft and φright from 16 angular context values. It depends on…”); and
context-adaptive binary entropy encoding a bit of the series of bits, into the bitstream, based on the selected context, said coded bit indicating which of the two half intervals the point coordinate belongs to ([Section 3.2.6.3] “IDCM with azimuthal angular coding mode are introduced to enhance IDCM. In this mode, x (or y) coordinate bits are still bypassed and y (or x) coordinate bits are entropy coded by using azimuthal angle contexts. z-coordinate bits are entropy coded using angular contexts. The x (or y) coordinate bit is coded based on a x (or y)-interval as following steps (Figure 51, Figure 52),
Split in two after (de)coding each x (or y)-coordinate bit
Azimuthal angular prediction of the sub-interval based on azimuthal predictor
Azimuthal context to (de)code the bit
Context determine based on the sub-interval prediction”).
Regarding claim 19, ISO teaches the method of claim 1, wherein the apparent angle (AAd) is estimated based on at least one of a first angle (φnode,d) associated with a lower bound of the interval, a second angle (φtop,d) associated with an upper bound of the interval, and a third angle (φmiddle,d) associated with a middle point of the interval (In claim 1, the apparent angle is defined as representing an interval angle seen from a sensor that captured the point. Regarding IDCM, ISO teaches in section 3.2.6 initializing an interval based on multiple angles: left, right, and predicted. See Figs. 52 and 53. Additionally, the previous section, 3.2.5.4, teaches defining a z-axis interval using top and bottom planes.).
Regarding claim 20, ISO teaches the method of claim 19, wherein the apparent angle (AAd) is estimated based on the first angle (φnode,d) and the second angle (φtop,d), based on the first angle (φnode,d) and the third angle, or the second angle (φtop,d) and the third angle (φmiddle,d) (ISO teaches in section 3.2.6 initializing an interval based on multiple angles: left, right, and predicted. See Figs. 52 and 53. Additionally, the previous section, 3.2.5.4, teaches defining a z-axis interval using top and bottom planes.); or
the apparent angle (AAd) is estimated based on an interval size (sd) and a distance (r) between the point and the sensor that captured the point (The apparent angle represents an interval angle seen from the sensor. In Figs. 43-44, ISO shows initializing an interval from the node before determining the angles top and bottom which pass through the interval.).
Regarding claim 21, ISO teaches the method of claim 1, wherein the context is selected based on relative magnitudes of an elementary azimuthal angle (∆φ) and the apparent angle (AAd) ([3.2.6.1.1] “The azimuthal coding mode is applied for nodes where the «azimuthal angular size» node_size / r is small enough for the eligibility with r relative to the lidar head position (xLidar, yLidar, zLidar), i.e. if the azimuthal angular is lower than the elementary azimuthal shift ∆φh.” Section 3.2.6.2 teaches choosing a context based on the left, right, and predicted azimuth angles when the ratio is less than 1.).
Regarding claim 22, ISO teaches the method of claim 21, wherein the context is selected based on a ratio between the elementary azimuthal angle (∆φ) over the apparent angle (AAd) (Using this ratio to determine the prediction quality of (φpred) would be obvious to one of ordinary skill in the art, because IDCM accounts for this ratio by ensuring that the node or interval size is smaller than the elementary azimuth angle. A context is selected using (φpred), (φleft,d), and (φright,d) when the ratio is less than 1, and no context can be selected when the ratio is greater than 1. [3.2.6.1.1] “The azimuthal coding mode is applied for nodes where the «azimuthal angular size» node_size / r is small enough for the eligibility with r relative to the lidar head position (xLidar, yLidar, zLidar), i.e. if the azimuthal angular is lower than the elementary azimuthal shift ∆φh.”).
Allowable Subject Matter
Claims 10-12 and 23-25 contain allowable subject matter. However, these claims are rejected due to the rejections of claims 10 and 23 under 35 USC 112(b). If the 35 USC 112(b) rejections are overcome, then these claims would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 10, grouping contexts into subsets in context-based binary coding is known in the art. For example, see paragraphs 0077-0079 of WO 2020/010127 A1 (from the IDS dated 09/25/2023), which discusses separating available contexts into subsets based on a significance flag in an application of context-based binary adaptive coding. However, discussion of grouping contexts based on the ratio (∆φ/AAd) before the effective filing date of the claimed invention was limited in the prior art.
ISO motivates selecting a context using (φpred), (φleft,d), and (φright,d) when the ratio is less than 1, and no context when the ratio is greater than 1; see section 3.2.6.1.1. However, ISO does not discuss further subdividing the groups of contexts based on the ratio.
Regarding claim 11, ISO teaches wherein selecting the context from the contexts of the selected context subset depends on a predicted azimuthal angle (φpred) associated with the point and a left angle (φleft,d) associated with the left half interval and a right angle (φright,d) associated with the right half interval ([3.2.6.2] “Then, an azimuthal angular context is selected based on φpred, φleft and φright from 16 angular context values. It depends on…”). However, claim 11 depends from claim 10 and would be allowable if claim 10 is amended to overcome the current rejection.
Regarding claim 12, organizing the subsets into a table does not provide an inventive step, as the use of look-up tables is well known to one of ordinary skill in the art. However, claim 12 depends from claim 10 and would be allowable if claim 10 is amended to overcome the current rejection.
Regarding claim 23, grouping contexts into subsets in context-based binary coding is known in the art. For example, see paragraphs 0077-0079 of WO 2020/010127 A1 (from the IDS dated 09/25/2023), which discusses separating available contexts into subsets based on a significance flag in an application of context-based binary adaptive coding. However, discussion of grouping contexts based on the ratio (∆φ/AAd) before the effective filing date of the claimed invention was limited in the prior art.
ISO motivates selecting a context using (φpred), (φleft,d), and (φright,d) when the ratio is less than 1, and no context when the ratio is greater than 1; see section 3.2.6.1.1. However, ISO does not discuss further subdividing the groups of contexts based on the ratio.
Regarding claim 24, ISO teaches the method of claim 23, wherein selecting the context from the contexts of the selected context subset depends on a predicted azimuthal angle (φpred) associated with the point and a left angle (φleft,d) associated with the left half interval and a right angle (φright,d) associated with the right half interval ([3.2.6.2] “Then, an azimuthal angular context is selected based on φpred, φleft and φright from 16 angular context values. It depends on…”). However, claim 24 depends from claim 23 and would be allowable if claim 23 is amended to overcome the current rejection.
Regarding claim 25, organizing the subsets into a table does not provide an inventive step, as the use of look-up tables are well known to one of ordinary skill in the art. However, claim 25 depends from claim 23 and would be allowable if claim 23 is amended to overcome the current rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Graziosi et al. (An overview of ongoing point cloud compression standardization activities: video-based (V-PCC) and geometry-based (G-PCC). APSIPA Transactions on Signal and Information Processing, 9, 13.) teaches an overview of G-PCC including a discussion of the algorithm and its use cases.
Auwera et al. (WO 2021/262540 A1) teaches methods for simplifying the planar and azimuthal modes in G-PCC.
Hur et al. (US 2021/0209813 A1) teaches a method for encoding geometry data of a point cloud by determining the level of detail throughout the geometry and attribute encoding points based on the level of detail.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC JAMES SHOEMAKER whose telephone number is (571)272-6605. The examiner can normally be reached Monday through Friday from 8am to 5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JENNIFER MEHMOOD, can be reached at (571)272-2976. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Eric Shoemaker/
Patent Examiner
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664