Prosecution Insights
Last updated: April 19, 2026
Application No. 18/830,801

ENCODING METHOD AND APPARATUS, AND DECODING METHOD AND APPARATUS

Status: Non-Final Office Action (§103, §112)

Filed: Sep 11, 2024
Examiner: CHIO, TAT CHI
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Projected Time to Grant: 3y 2m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 73%, above average (610 granted / 836 resolved; +15.0% vs Tech Center average)
Interview Lift: +16.6% higher allow rate on resolved cases with an interview (the "+17%" headline is this figure rounded)
Typical Timeline: 3y 2m average prosecution; 49 applications currently pending
Career History: 885 total applications across all art units

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 836 resolved cases.

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1-20 recite "probe data," but probe data is not defined in any of the claims. Thus, it is not clear what "probe data" is.

Claim 6 recites "…determining a first difference between each probe of the plurality of probes and at least one first target probe of the each probe based on a diffuse reflection coefficient of probe data of the each probe and a diffuse reflection coefficient of probe data of the at least one first target probe of the each probe, wherein the first target probe of the each probe is a probe whose distance from a position of the each probe is less than a first threshold…." The claim recites one first target probe of the each probe, so it appears that each probe is a first target probe. But if each probe is a target probe, it is not clear why there is a need to determine a difference between each probe and the first target probe of the each probe, which is also the each probe itself. Further, it is not clear how to determine whether the distance from a position of the each probe is less than a first threshold, because the claim recites only one endpoint of the distance (from a position of the each probe) and does not define the second endpoint (to where?). Therefore, the scope of claim 6 cannot be ascertained.

Claim 8 recites "…determining a second difference between each probe of the plurality of probes and at least one second target probe of the each probe based on distance data of probe data of the each probe and distance data of probe data of the at least one second target probe of the probe, wherein the second target probe of the each probe is a probe whose distance from a position of the each probe is less than a second threshold…." The claim recites one second target probe of the each probe, so it appears that each probe is a second target probe. But if each probe is a target probe, it is not clear why there is a need to determine a difference between each probe and the second target probe of the each probe, which is also the each probe itself. Further, the claim recites "the at least one second target probe of the probe," and it is not clear what "the probe" refers to; there is insufficient antecedent basis for this limitation in the claim. Finally, it is not clear how to determine whether the distance from a position of the each probe is less than a second threshold, because the claim recites only one endpoint of the distance (from a position of the each probe) and does not define the second endpoint (to where?).
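One plausible construction of the disputed neighbor limitation (an assumption for illustration only, not the examiner's or the applicant's position) is that a "first target probe" of a given probe is a different probe whose position lies within the first threshold of that probe's position, the unstated second endpoint being the candidate probe's own position. A minimal sketch of that reading, with all names hypothetical:

```python
# Hypothetical reading of claim 6's neighbor test, NOT the claim's actual
# definition: a "first target probe" of probe i is another probe j whose
# position is within `first_threshold` of probe i's position.
import math

def first_target_probes(positions, i, first_threshold):
    """Return indices of candidate first target probes for probe i.

    Assumes the unstated second endpoint of the distance measurement is the
    candidate probe's own position, and that a probe is not its own target.
    """
    targets = []
    for j, pos_j in enumerate(positions):
        if j == i:
            continue  # exclude the self-comparison the claim leaves open
        if math.dist(positions[i], pos_j) < first_threshold:
            targets.append(j)
    return targets
```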
Therefore, the scope of claim 8 cannot be ascertained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-5, 7, 11-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Stengel et al. (US 2022/0138988 A1) in view of Taquet et al. (US 2025/0056041 A1).

Consider claim 1. Stengel teaches an encoding method, comprising:

- obtaining probe data of a plurality of probes arranged in a scene (because probe data is not defined, it is interpreted as data/information associated with a light field; "As shown in operation 104, a light field for the scene is computed at the remote device, utilizing ray tracing. In one embodiment, the remote device may perform ray tracing within a scene to determine the light field for the scene." [0024]; the light field may include a plurality of light probes, and "each light probe within the light field may store information about light reflected off surfaces within the scene." [0025]);
- dividing the probe data into a plurality of probe data groups (each block within the array may include color texture information and visibility texture information; the color texture information may include lighting color information within the block, and the visibility texture information may include distance information [0026]; encoding color and encoding visibility [0186]–[0189]; the information in each block (probe data) is divided into color texture information and visibility texture information);
- performing first encoding on a first probe data group in the plurality of probe data groups to generate a first encoding result (encoding color [0186]–[0189]); and
- performing second encoding on a second probe data group in the plurality of probe data groups to generate a second encoding result (encoding visibility [0186]–[0189]), wherein an encoding scheme of the first encoding is different from an encoding scheme of the second encoding (the encoding of color is different from the encoding of visibility [0186]–[0189]).

However, Stengel does not explicitly teach generating a bitstream based on the first encoding result and the second encoding result.
Taquet teaches generating a bitstream based on the first encoding result and the second encoding result ("The sensor can then estimate the distance to the object from the time difference between the sending and the receiving of the signal (corresponding to r.sub.3D), and it can generate a point by providing sensed data comprising point geometry data (typically coordinates (r.sub.3D,s,λ) or (r.sub.2, ϕ, θ) or directly coordinates (x,y,z)) and other attributes like color, reflectance, acquisition timestamps, etc., for the given sensing direction (coordinates (s,λ)). The encoder 13 may provide a bitstream B organized in several chunks (for example slices) and each chunk can be sent, through a second communication channel, as soon as it is available/ready (for low latency streaming) to the decoder 14." [0075]. Attributes such as color are considered the first probe data group; geometry data is considered the second probe data group. "The encoding method comprises obtaining a sensing coverage data representative of at least one range of order indexes associated with sensed data and encoding said sensing coverage data into the bitstream, and the decoding method comprises decoding a sensing coverage data from the bitstream." [0084]. Encoding/decoding point cloud geometry data and/or attribute data into/from the bitstream based on the sensing coverage data is advantageous because it "may improve the compression of point clouds obtained from partially probed sensing path." [0089]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of generating a bitstream based on the encoding results, because such incorporation would improve the compression of point clouds obtained from a partially probed sensing path [0089].

Consider claim 3. Taquet teaches that the dividing the probe data into a plurality of probe data groups comprises dividing the probe data into the plurality of probe data groups based on target information of the probe data (per the [0075] passage quoted above: geometry data is based on the distance of the object, while attributes are based on the color, reflectance, etc. of the object). The same motivation to combine applies as for claim 1 [0089].
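To make the mapped claim-1 pipeline concrete, here is a minimal sketch under stated assumptions: probe data is split into a color-like group and a visibility-like group (the Stengel mapping), each group is encoded under a different scheme, and the two results are framed into one bitstream (the step supplied by Taquet). The compression choices and byte framing are invented stand-ins, not anything from the references.

```python
# Minimal sketch of the claim-1 pipeline as the rejection maps it.
# Group names, compression choices, and byte framing are illustrative.
import json
import struct
import zlib

def encode_probes(probe_data):
    """probe_data: list of dicts with 'color' and 'visibility' fields."""
    # Divide the probe data into two probe data groups (Stengel mapping:
    # color texture information vs. visibility/distance information).
    color_group = [p["color"] for p in probe_data]
    visibility_group = [p["visibility"] for p in probe_data]

    # First encoding scheme for the first group (stand-in: zlib).
    first_result = zlib.compress(json.dumps(color_group).encode())
    # A different second scheme for the second group (stand-in: raw JSON).
    second_result = json.dumps(visibility_group).encode()

    # Generate a bitstream from both results (the step attributed to
    # Taquet): length-prefixed concatenation.
    return (struct.pack(">I", len(first_result)) + first_result +
            struct.pack(">I", len(second_result)) + second_result)
```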
Consider claim 4. Taquet teaches that the target information comprises a three-dimensional spatial position of each probe in the plurality of probes, and that the dividing comprises dividing the probe data into the plurality of probe data groups based on the three-dimensional spatial position of each probe (per the [0075] passage quoted above, sensed points carry geometry data such as coordinates (x,y,z)). The same motivation to combine applies [0089].

Consider claim 5. Stengel teaches that the target information comprises a diffuse reflection coefficient ([0176]–[0183]), and that the dividing comprises dividing illumination data in the probe data into a plurality of probe data groups based on the diffuse reflection coefficient of the probe data ([0176]–[0183]).

Consider claim 7. Taquet teaches that the target information comprises distance data, and that the dividing comprises dividing visibility data in the probe data into a plurality of probe data groups based on the distance data of the probe data (per the [0075] passage quoted above: geometry data is based on the distance of the object). The same motivation to combine applies [0089].
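Claims 3-5 and 7 differ mainly in which "target information" drives the grouping: three-dimensional spatial position, diffuse reflection coefficient, or distance data. A hedged sketch of those three grouping flavors follows; the cell size and cutoffs are invented for illustration.

```python
# Illustrative grouping of probe data by different kinds of target
# information, mirroring claims 4, 5, and 7. Thresholds are invented.
from collections import defaultdict

def group_by_position(probes, cell=4.0):
    """Claim 4 flavor: group by quantized 3D spatial position."""
    groups = defaultdict(list)
    for p in probes:
        x, y, z = p["position"]
        groups[(int(x // cell), int(y // cell), int(z // cell))].append(p)
    return groups

def group_by_diffuse(probes, cutoff=0.5):
    """Claim 5 flavor: split illumination data by diffuse reflection coeff."""
    return {
        "low_diffuse": [p for p in probes if p["diffuse_coeff"] < cutoff],
        "high_diffuse": [p for p in probes if p["diffuse_coeff"] >= cutoff],
    }

def group_by_distance(probes, near=10.0):
    """Claim 7 flavor: split visibility data by distance data."""
    return {
        "near": [p for p in probes if p["distance"] < near],
        "far": [p for p in probes if p["distance"] >= near],
    }
```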
Consider claim 11. Stengel teaches that the bitstream comprises at least one of grouping information, arrangement information, or encoding information, wherein the grouping information represents a grouping manner of the probe data, the arrangement information represents arrangement information of the probe data, and the encoding information represents an encoding scheme of the plurality of probe data groups (compression/decompression format of color and visibility [0186]–[0189]).

Consider claim 12. Stengel teaches a decoding method ([0035], [0116], Fig. 6), comprising:

- obtaining a plurality of pieces of probe data, wherein the plurality of pieces of probe data belong to a plurality of probe data groups (each block within the array may include color texture information and visibility texture information; the color texture information may include lighting color information within the block, and the visibility texture information may include distance information [0026]; encoding color and encoding visibility [0186]–[0189]);
- performing first decoding on a first probe data group in the plurality of probe data groups to generate a first decoding result ("The client device 604 may receive the encoded display data via the communication interface 620 and the decoder 622 may decode the encoded display data to generate the display data. The client device 604 may then display the display data via the display 624." [0116]; [0153]; [0168]–[0172]; encoding color, with the decoding process performed in reverse [0186]–[0189]);
- performing second decoding on a second probe data group in the plurality of probe data groups to generate a second decoding result ([0116]; [0153]; [0168]–[0172]; encoding visibility, with the decoding process performed in reverse [0186]–[0189]), wherein a decoding scheme of the first decoding is different from a decoding scheme of the second decoding (the encoding of color is different from the encoding of visibility [0186]–[0189]; thus the decoding of color is different from the decoding of visibility);
- obtaining probe data of a plurality of probes based on the first decoding result and the second decoding result ([0116]; [0153]; [0168]–[0172]; [0186]–[0189]; "the compressed light field may be computed remotely, compressed using lossless or lossy compression, and provided to the client device, where the client device may decompress the compressed light field and may use the decompressed light field to perform global illumination for the scene at the client device." [0149]; "as shown in operation 904, the compressed light field data is decompressed, and a color conversion of the decompressed light field data is performed to obtain the light field for the scene." [0153]–[0154]); and
- performing rendering based on the probe data ([0116]; [0153]; [0168]–[0172]).

However, Stengel does not explicitly teach obtaining a bitstream. Taquet teaches obtaining a bitstream ("The encoding method comprises obtaining a sensing coverage data representative of at least one range of order indexes associated with sensed data and encoding said sensing coverage data into the bitstream, and the decoding method comprises decoding a sensing coverage data from the bitstream." [0084]; see also [0075] and [0089]). The same motivation to combine applies as for claim 1 [0089].

Consider claim 13. Taquet teaches obtaining grouping information, wherein the grouping information represents a grouping manner of the plurality of pieces of probe data, and grouping the plurality of pieces of probe data in the bitstream based on the grouping information to obtain the plurality of probe data groups (per the [0075] passage quoted above: geometry data is based on the distance of the object, while attributes are based on the color, reflectance, etc. of the object). The same motivation to combine applies [0089].

Consider claim 14. Stengel teaches obtaining decoding information, wherein the decoding information represents a decoding scheme of the plurality of probe data groups, and the decoding scheme comprises the decoding scheme corresponding to the first decoding and the decoding scheme corresponding to the second decoding (compression/decompression format of color and visibility [0186]–[0189]).

Consider claim 15. Taquet teaches obtaining arrangement information, wherein the arrangement information represents an arrangement manner of the plurality of pieces of probe data (per the [0075] passage quoted above). The same motivation to combine applies [0089].

Consider claim 18. Claim 18 recites a decoding apparatus ([0035], [0116], Fig. 6) comprising a memory configured to store instructions ([0021], [0120]–[0125], Fig. 7) and at least one processor coupled to the memory ([0021], [0120]–[0125], Fig. 7) and configured to execute the instructions to cause the decoding apparatus to perform the method recited in claim 12 (see the rejection of claim 12).

Consider claim 19. Claim 19 recites a decoding apparatus that performs the method recited in claim 13; it is rejected for the same reasons.

Consider claim 20. Claim 20 recites a decoding apparatus that performs the method recited in claim 14; it is rejected for the same reasons.
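Claims 12-20 mirror the encoder on the decode side: obtain the bitstream, recover the groups, decode each under its own scheme, reassemble the probe data, and render. The sketch below inverts the hypothetical encoder framing shown earlier (same invented length-prefixed layout; not the references' actual format):

```python
# Sketch of the claim-12 decode path, inverting the hypothetical encoder
# framing above (length-prefixed groups; the layout is invented).
import json
import struct
import zlib

def decode_probes(bitstream):
    # Obtain the first probe data group from the bitstream.
    (n1,) = struct.unpack_from(">I", bitstream, 0)
    first_group = bitstream[4:4 + n1]
    # Obtain the second probe data group.
    off = 4 + n1
    (n2,) = struct.unpack_from(">I", bitstream, off)
    second_group = bitstream[off + 4:off + 4 + n2]

    # The first decoding scheme differs from the second, matching the
    # encoder's two schemes.
    color_group = json.loads(zlib.decompress(first_group))
    visibility_group = json.loads(second_group)

    # Reassemble per-probe data, ready for rendering.
    return [{"color": c, "visibility": v}
            for c, v in zip(color_group, visibility_group)]
```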
Claims 2, 9-10, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Stengel et al. (US 2022/0138988 A1) in view of Taquet et al. (US 2025/0056041 A1) and Ricard et al. (US 2021/0056730 A1).

Consider claim 2. The combination of Stengel and Taquet teaches all the limitations of claim 1 but does not explicitly teach that the plurality of probe data groups comprise at least one probe data group comprising probe data in a current frame and probe data in a non-current frame, or that the obtaining comprises obtaining probe data of a plurality of probes in the current frame and obtaining probe data of a plurality of probes in the non-current frame.

Ricard teaches these limitations (geometry/texture images representing the geometry/attributes of 3D samples of the input point cloud frame IPCF [0046]; [0080]–[0082]; "a projection mode may be used to indicate if the first geometry image GI0 may store the depth values of the 2D samples of either the first or second layer and the second geometry image GI1 may store the depth values associated with the 2D samples of either the second or first layer. For example, when a projection mode equals 0, then the first geometry image GI0 may store the depth values of 2D samples of the first layer and the second geometry image GI1 may store the depth values associated with 2D samples of the second layer. Reciprocally, when a projection mode equals 1, then the first geometry image GI0 may store the depth values of 2D samples of the second and the second geometry image GI1 may store the depth values associated with 2D samples of the first layer." [0089]–[0090]; "the texture image generator TIG may code (store) the texture (attribute) values T0 associated with 2D samples of the first layer as pixel values of a first texture image TI0 and the texture values T1 associated with the 2D samples of the second layer as pixel values of a second texture image TI1. Alternatively, the texture image generating module TIG may code (store) the texture values T1 associated with 2D samples of the second layer as pixel values of the first texture image TI0 and the texture values D0 associated with the 2D samples of the first layer as pixel values of the second geometry image GI1." [0098]–[0104]; "two geometry images GI0 and GI1 (two layers) may be used to encode the geometry of the point cloud frame PCF. For example, the first geometry image may store the depth values D0 associated with the 2D samples with the lowest depth (first layer) and the second geometry image GI1 may store the depth values D1 associated with the 2D samples with the highest depth (second layer). Next, at least one stuffing 3D sample may be added according to the method of FIG. 7 or 7b. Finally, the color information associated with the 3D samples of the point cloud PCF may be encoded as two texture image TI0 and TI1. The first texture image TI1 encodes the color information relative to the first geometry image GI0 and the second texture image TI1 encodes the color information relative to the second geometry image GI1. A third texture image T12 may encode the color information associated with at least one stuffing sample when an explicit color-coding mode is assigned to said at least one stuffing 3D sample." [0233]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ricard into the combination of Stengel and Taquet, because such incorporation would help better handle the case of multiple 3D samples being projected to a same 2D sample of the projection plane [0083].
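Claim 2's added wrinkle is temporal: at least one probe data group mixes probe data from the current frame with probe data from a non-current frame. A minimal sketch of that grouping, with the frame layout invented for illustration:

```python
# Illustrative claim-2 grouping: one probe data group spans the current
# frame and a non-current (here, previous) frame. Field names invented.
def group_across_frames(frames, current_idx):
    """frames: list of per-frame probe data lists; assumes current_idx >= 1."""
    current = frames[current_idx]       # probe data, current frame
    previous = frames[current_idx - 1]  # probe data, non-current frame
    # At least one group contains probe data from both frames.
    return {"mixed": current + previous}
```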
Consider claim 9. Ricard teaches that the method further comprises arranging the probe data into a two-dimensional picture based on a grouping status of the probe data (the two-layer geometry/texture passages quoted above: [0046]; [0080]–[0082]; [0233]), wherein the two-dimensional picture comprises a plurality of picture blocks ([0038]) and the plurality of picture blocks one-to-one correspond to the plurality of probe data groups ([0233]; see also [0080]–[0082]; [0089]–[0090]; [0098]–[0104]); that the performing first encoding comprises performing the first encoding on a picture block that is in the two-dimensional picture and that corresponds to the first probe data group to generate the first encoding result (id.); and that the performing second encoding comprises performing the second encoding on a picture block that is in the two-dimensional picture and that corresponds to the second probe data group to generate the second encoding result (id.). The same motivation to combine Ricard applies [0083].

Consider claim 10. Ricard teaches arranging the plurality of probe data groups into a plurality of two-dimensional pictures, wherein the plurality of two-dimensional pictures one-to-one correspond to the plurality of probe data groups; that the performing first encoding comprises performing the first encoding on a two-dimensional picture that is in the plurality of two-dimensional pictures and that corresponds to the first probe data group to generate the first encoding result; and that the performing second encoding comprises performing the second encoding on a two-dimensional picture that is in the plurality of two-dimensional pictures and that corresponds to the second probe data group to generate the second encoding result (again citing the two-layer geometry/texture passages: [0046]; [0080]–[0082]; [0089]–[0090]; [0098]–[0104]; [0233]). The same motivation to combine Ricard applies [0083].
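Claims 9 and 10 (and their decode-side counterparts 16 and 17) concern layout: packing the probe data groups either as blocks of one two-dimensional picture or as one picture per group, with a one-to-one block/picture-to-group correspondence. A sketch of the claim-9 style block packing, with the block geometry invented:

```python
# Sketch of claim 9's arrangement: probe data groups packed as blocks of
# a single 2D picture, one block per group. The layout is an invented example.
import numpy as np

def pack_groups_into_picture(groups, block_h=8, block_w=8):
    """groups: list of 1-D float sequences, each <= block_h*block_w values."""
    pic = np.zeros((block_h, block_w * len(groups)), dtype=np.float32)
    for g, values in enumerate(groups):
        block = np.zeros(block_h * block_w, dtype=np.float32)
        block[:len(values)] = values  # pad short groups with zeros
        # Picture block g corresponds one-to-one to probe data group g.
        pic[:, g * block_w:(g + 1) * block_w] = block.reshape(block_h, block_w)
    return pic
```

Each block can then be encoded under its group's own scheme, which is how the rejection reads the per-block first/second encoding limitations onto Ricard's geometry and texture images.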
Consider claim 16. Ricard teaches that the bitstream comprises a two-dimensional picture, that the two-dimensional picture comprises a plurality of picture blocks, and that the plurality of picture blocks one-to-one correspond to the plurality of probe data groups (the two-layer geometry/texture passages quoted above: [0046]; [0080]–[0082]; [0089]–[0090]; [0098]–[0104]; [0233]); that the performing first decoding comprises performing the first decoding on a picture block that is in the two-dimensional picture and that corresponds to the first probe data group to generate the first decoding result (decoding geometry images and color information [0156]–[0164]; [0214]–[0220]; the methods of FIG. 7 (encoding) and 7b (decoding) may be used in different use cases [0231]); and that the performing second decoding comprises performing the second decoding on a picture block that is in the two-dimensional picture and that corresponds to the second probe data group to generate the second decoding result (id.). The same motivation to combine Ricard applies [0083].

Consider claim 17. Ricard teaches that the bitstream comprises a plurality of two-dimensional pictures, and that the plurality of two-dimensional pictures one-to-one correspond to the plurality of probe data groups (the two-layer geometry/texture passages quoted above); that the performing first decoding comprises performing the first decoding on a two-dimensional picture that is in the plurality of two-dimensional pictures and that corresponds to the first probe data group to generate the first decoding result ([0156]–[0164]; [0214]–[0220]; [0231]); and that the performing second decoding comprises performing the second decoding on a two-dimensional picture that is in the plurality of two-dimensional pictures and that corresponds to the second probe data group to generate the second decoding result (id.). The same motivation to combine Ricard applies [0083].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAT CHI CHIO, whose telephone number is (571) 272-9563. The examiner can normally be reached Monday-Thursday, 10am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JAMIE J ATALA, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAT C CHIO/
Primary Examiner, Art Unit 2486

Prosecution Timeline

Sep 11, 2024: Application Filed
Feb 27, 2026: Non-Final Rejection, §103 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587653: Spatial Layer Rate Allocation. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12549764: Three-Dimensional Data Encoding Method, Three-Dimensional Data Decoding Method, Three-Dimensional Data Encoding Device, and Three-Dimensional Data Decoding Device. Granted Feb 10, 2026 (2y 5m to grant).
Patent 12549845: Camera Setting Adjustment Based on Event Mapping. Granted Feb 10, 2026 (2y 5m to grant).
Patent 12546657: Methods and Systems for Remote Monitoring of Electrical Equipment. Granted Feb 10, 2026 (2y 5m to grant).
Patent 12549710: Multiple Hypothesis Prediction with Template Matching in Video Coding. Granted Feb 10, 2026 (2y 5m to grant).

Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 90% (+16.6%)
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 836 resolved cases by this examiner. Grant probability is derived from the career allow rate.
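The headline projections appear to be recoverable from the figures elsewhere on this page; the additive interview-lift model below is an inference from those figures, not a documented formula.

```python
# Reconstructing the headline projections from the page's own figures.
# The additive interview-lift model is an assumption, not a documented formula.
granted, resolved = 610, 836
allow_rate = granted / resolved               # 0.7297... -> "73%"
interview_lift = 0.166                        # "+16.6%" from the examiner card
with_interview = allow_rate + interview_lift  # 0.8957... -> "90%"
print(f"{allow_rate:.1%} base, {with_interview:.1%} with interview")
```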
