Prosecution Insights
Last updated: April 19, 2026
Application No. 18/665,041

METHOD AND APPARATUS FOR LIDAR POINT CLOUD CODING USING POINTWISE PREDICTION

Non-Final OA (§103)

Filed: May 15, 2024
Examiner: FUJITA, KATRINA R
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Digitalinsights Inc.
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 70% (above average; 472 granted / 674 resolved; +8.0% vs TC avg)
Interview Lift: +24.0% (strong; allowance among resolved cases with vs. without interview)
Typical Timeline: 3y 0m avg prosecution; 25 applications currently pending
Career History: 699 total applications across all art units
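The headline figures above are simple derived statistics. As a sanity check, a minimal sketch (assuming the displayed counts are the underlying data, and that the "without interview" baseline equals the career allowance rate) reproduces them:

```python
# Sketch: reproduce the examiner's headline metrics from the counts shown
# above. Assumes 472 grants out of 674 resolved cases, and a 94% allowance
# rate among resolved cases that had an interview (both from the card).

granted, resolved = 472, 674

career_allow_rate = granted / resolved        # ~0.7003 -> displayed as 70%
with_interview = 0.94                         # allowance rate with interview
without_interview = round(career_allow_rate, 2)  # baseline (assumption)

interview_lift = with_interview - without_interview

print(f"Career allow rate: {career_allow_rate:.0%}")  # 70%
print(f"Interview lift:    {interview_lift:+.1%}")    # +24.0%
```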

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)

Tech Center averages are estimates; based on career data from 674 resolved cases.
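Each "vs TC avg" delta is just the examiner's per-statute rate minus the Tech Center average. All four displayed deltas are consistent with a single 40.0% baseline; that baseline is an inference from the numbers above, not a figure stated anywhere in the source:

```python
# Sketch: recompute the "vs TC avg" deltas. The 40.0% Tech Center baseline
# is an assumption inferred from the displayed deltas, not source data.

examiner_rate = {"101": 11.3, "103": 55.7, "102": 15.3, "112": 11.8}
tc_average = 40.0  # implied baseline (assumption)

for statute, rate in examiner_rate.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```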

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 5 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Park et al. (US 2022/0343548) and Yea et al. (US 2021/0329270).

Regarding claim 1, Park et al.
discloses a method performed by a point cloud decoding device for decoding a current point, the method comprising: decoding, from a bitstream, a quantized residual point, a quantization parameter (“The attribute information decoder may include an attribute information entropy decoder 31000, a geometry information mapper 31010, a residual attribute information inverse quantizer 31020” at paragraph 0589, line 1), a prediction candidate list index (“A prediction candidate list index may be received from the encoder, and the attribute value of a corresponding candidate may be used as a predicted value” at paragraph 0318, line 2); reconstructing a residual point by dequantizing the quantized residual point by using the quantization parameter (“The residual attribute information inverse quantizer 31020 may inversely quantize the residual attribute information. The residual attribute information inverse quantizer 31020 may inversely quantize the received transformed and quantized attribute information based on a quantization value. The inversely quantized transformed residual attribute information may be input to the residual attribute information inverse transformer” at paragraph 0593); determining a prediction candidate list according to the prediction candidate list index (“For attribute decoding, a prediction candidate list may be configured.” At paragraph 0318, line 1; “The bitstreams of FIGS. 24 to 27 may carry the prediction candidate list index. When the candidate corresponding to the received index is an attribute-based candidate, the index difference may be restored by decoding sig_flag, gt1_flag, partity_flag, gt3_flag, and remain_data. 
Accordingly, the index of a neighbor node reference by the current point that is compressed and transmitted may be determined and restored” at paragraph 0318, line 5); determining a predicted point from the prediction candidate list (“The attribute information predictor 31040 may generate a predicted value of the attribute information in order to restore the attribute information. The attribute information predictor 31040 generates predicted attribute information based on the attribute information about the points in the memory 31050. The predicted information may be obtained by performing entropy decoding” at paragraph 0595); reconstructing the current point by adding the residual point and the predicted point (“The inversely transformed residual attribute information may be combined with the predicted attribute information generated by the attribute information predictor and stored in the memory” at paragraph 0594, line 7); and storing a reconstructed current point in a buffer (the data is stored as recited above). Park et al. does not explicitly disclose a predictor index and determining a predicted point from the prediction candidate list by using the predictor index. Yea et al. teaches a method in the same field of endeavor of point cloud encoding and decoding, the method comprising: decoding, from a bitstream, a predictor index (“The maximum number of predictor candidate (also referred to as Max NumCand) can be defined and further be encoded into attributes header. 
In current G-PCC attributes coding, Max NumCand can be set to equal to the number of nearest neighbors in prediction plus one (e.g., numberOfNearestNeighborsInPrediction+1), and can further be used in encoding and decoding predictor index with truncated unary binarization” at paragraph 0117); determining a predicted point from the prediction candidate list by using the predictor index (“After creating predictor candidates, best predictor can be selected by applying a rate-distortion optimization procedure and then, selected predictor index can be arithmetically encoded” at paragraph 0116, last sentence; the predictor index is decoded to determine which predictor was used during the encoding process). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a predictor index as taught by Yea et al. in the system of Park et al. to allow the most optimal predictor to be chosen for the encoding process.

Regarding claim 2, Park et al. discloses a method further comprising: decoding information for a residual coordinate system conversion from the bitstream (“The geometry information entropy decoder 30000 may decode geometry information included in the received geometry information bitstream based on the entropy scheme. The geometry information entropy decoder 30000 may perform entropy decoding on the input bitstream. For example, for entropy decoding, various methods such as Exponential Golomb, CAVLC, and CABAC may be applied. The geometry information entropy decoder may decode information related to the geometry information prediction performed by the encoding device.
The quantized residual geometry information generated through entropy decoding may be input to the residual geometry information inverse quantizer” at paragraph 0580); and inversely converting a coordinate system of geometric information for the residual point by using the information for the residual coordinate system conversion (“The coordinate inverse transformer 30040 may inversely transform the coordinates of the geometry information. The coordinate inverse transformer 30040 may perform coordinate inverse transformation based on the coordinate transform-related information provided from the geometry information entropy decoder and the reconstructed geometry information stored in the memory” at paragraph 0584).

Regarding claim 5, Park et al. discloses a method wherein determining the prediction candidate list comprises: determining a first prediction candidate list (“For attribute decoding, a prediction candidate list may be configured.” At paragraph 0318, line 1) or a second prediction candidate list as the prediction candidate list according to the prediction candidate list index (“The bitstreams of FIGS. 24 to 27 may carry the prediction candidate list index. When the candidate corresponding to the received index is an attribute-based candidate, the index difference may be restored by decoding sig_flag, gt1_flag, partity_flag, gt3_flag, and remain_data. Accordingly, the index of a neighbor node reference by the current point that is compressed and transmitted may be determined and restored” at paragraph 0318, line 5).

Regarding claim 11, Park et al.
discloses a method performed by a point cloud encoding device for encoding a current point, the method comprising: obtaining the current point; determining a prediction candidate list index (“The prediction candidate list index of the selected candidate is entropy-coded” at paragraph 0293, line 4); determining a prediction candidate list according to the prediction candidate list index (“For attribute encoding, a prediction candidate list is configured, and one candidate is selected through RDO between prediction candidates and used as a prediction value” at paragraph 0293, line 1); determining a predicted point from the prediction candidate list (“For example, one candidate is selected based on the difference between the attributes of the four candidates and the source attribute and the weighted sum of the bit numbers used for attribute coding, and the attribute value of the candidate is selected as a predicted value. The number of candidates selected based on the attributes may be 1 to 4. 
As a method of searching for a neighbor point based on attributes, RDO may be used or a point having the closest attribute to the current point may be searched” at paragraph 0300, line 1); generating a residual point by subtracting the predicted point from the current point (“A difference between the reconstructed attribute information and the predicted attribute information generated by the attribute information predictor may be estimated and input to the residual attribute information transformer” at paragraph 0558, last sentence); determining a quantization parameter (“The residual attribute information quantizer generates transformed and quantized residual attribute information based on the quantization value of the received transformed residual attribute information” at paragraph 0560, line 1); quantizing the residual point by using the quantization parameter (“The attribute information quantizer 29030 may quantize the residual attribute information or quantize the attribute information, like the residual attribute information quantizer” at paragraph 0563); and generating a bitstream by encoding a quantized residual point, the prediction candidate list index, and the quantization parameter (“The attribute information entropy encoder 29040 may encode the attribute information based on the entropy scheme. An attribute information bitstream may be generated” at paragraph 0569). Park et al. does not explicitly disclose determining a predicted point from the prediction candidate list by using the predictor index and encoding the predictor index. Yea et al. 
teaches a method in the same field of endeavor of point cloud encoding and decoding, the method comprising: determining a predicted point from the prediction candidate list by using the predictor index (see Table 1 along with paragraph 0116); generating a bitstream by encoding the predictor index (“After creating predictor candidates, best predictor can be selected by applying a rate-distortion optimization procedure and then, selected predictor index can be arithmetically encoded” at paragraph 0116, last sentence). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a predictor index as taught by Yea et al. in the system of Park et al. to allow the most optimal predictor to be chosen for the encoding process.

Regarding claim 12, Park et al. discloses a method further comprising: determining information for a residual coordinate system conversion (“Information about whether to perform coordinate transform and the coordinate information may be signaled in a unit such as sequence, frame, tile, slice, block, or the like” at paragraph 0543, line 1); converting a coordinate system of geometric information for the residual point by using the information for the residual coordinate system conversion (“The coordinate transformer 28000 may transform the coordinates of geometry information. The coordinate transformer 28000 may receive geometry information as input and transform the same into a coordinate system different from the existing coordinate system” at paragraph 0542, line 1); and encoding the information for the residual coordinate system conversion (“The transformed and quantized geometry information may be input to the geometry information entropy encoder and the residual geometry information quantizer” at paragraph 0545, last sentence).

Regarding claim 13, Park et al.
discloses a method further comprising: generating a reconstructed residual point by dequantizing the quantized residual point with the quantization parameter (“The residual geometry information inverse quantizer 28000 may perform inverse quantization to reconstruct the residual geometry information” at paragraph 0548, line 1); inversely converting the coordinate system of geometric information with respect to the reconstructed residual point by using the information for the residual coordinate system conversion (“The residual geometry information inverse quantizer 28000 receives the quantized residual geometry information and scales the same with a quantization value to reconstruct the residual geometry information” at paragraph 0548, line 3); reconstructing the current point by adding the reconstructed residual point and the predicted point (“The reconstruct residual geometry information may be added to the predicted geometry information to reconstruct geometry information and store the same in the memory” at paragraph 0548, last sentence); and storing a reconstructed current point in a buffer (the data is stored as mentioned above).

Regarding claim 14, Park et al.
discloses a computer-readable recording medium storing a bitstream generated by a point cloud encoding method, the point cloud encoding method comprising: obtaining the current point; determining a prediction candidate list index (“The prediction candidate list index of the selected candidate is entropy-coded” at paragraph 0293, line 4); determining a prediction candidate list according to the prediction candidate list index (“For attribute encoding, a prediction candidate list is configured, and one candidate is selected through RDO between prediction candidates and used as a prediction value” at paragraph 0293, line 1); determining a predicted point from the prediction candidate list (“For example, one candidate is selected based on the difference between the attributes of the four candidates and the source attribute and the weighted sum of the bit numbers used for attribute coding, and the attribute value of the candidate is selected as a predicted value. The number of candidates selected based on the attributes may be 1 to 4. 
As a method of searching for a neighbor point based on attributes, RDO may be used or a point having the closest attribute to the current point may be searched” at paragraph 0300, line 1); generating a residual point by subtracting the predicted point from the current point (“A difference between the reconstructed attribute information and the predicted attribute information generated by the attribute information predictor may be estimated and input to the residual attribute information transformer” at paragraph 0558, last sentence); determining a quantization parameter (“The residual attribute information quantizer generates transformed and quantized residual attribute information based on the quantization value of the received transformed residual attribute information” at paragraph 0560, line 1); quantizing the residual point by using the quantization parameter (“The attribute information quantizer 29030 may quantize the residual attribute information or quantize the attribute information, like the residual attribute information quantizer” at paragraph 0563); and generating a bitstream by encoding a quantized residual point, the prediction candidate list index, and the quantization parameter (“The attribute information entropy encoder 29040 may encode the attribute information based on the entropy scheme. An attribute information bitstream may be generated” at paragraph 0569). Park et al. does not explicitly disclose determining a predicted point from the prediction candidate list by using the predictor index and encoding the predictor index. Yea et al. 
teaches a method in the same field of endeavor of point cloud encoding and decoding, the method comprising: determining a predicted point from the prediction candidate list by using the predictor index (see Table 1 along with paragraph 0116); generating a bitstream by encoding the predictor index (“After creating predictor candidates, best predictor can be selected by applying a rate-distortion optimization procedure and then, selected predictor index can be arithmetically encoded” at paragraph 0116, last sentence). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a predictor index as taught by Yea et al. in the system of Park et al. to allow the most optimal predictor to be chosen for the encoding process.

Claim(s) 3 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Park et al. and Yea et al. as applied to claim 1 above, and further in view of Ray et al. (US 2021/0314616). The Park et al. and Yea et al. combination discloses a method further comprising: converting a coordinate system of stored reconstructed points in the buffer, wherein converting the coordinate system of the stored reconstructed points comprises: converting a coordinate system of geometric information for a point cloud including the stored reconstructed points (“The coordinate inverse transformer 30040 may inversely transform the coordinates of the geometry information. The coordinate inverse transformer 30040 may perform coordinate inverse transformation based on the coordinate transform-related information provided from the geometry information entropy decoder and the reconstructed geometry information stored in the memory” Park et al. at paragraph 0584). The Park et al. and Yea et al.
combination does not explicitly disclose converting a coordinate system of geometric information for a point cloud including the stored reconstructed points from an internal coordinate system of the point cloud decoding device to a world coordinate system. Ray et al. teaches a method in the same field of endeavor of point cloud encoding and decoding, comprising: converting a coordinate system of stored reconstructed points in the buffer, wherein converting the coordinate system of the stored reconstructed points comprises: converting a coordinate system of geometric information for a point cloud including the stored reconstructed points from an internal coordinate system of the point cloud decoding device to a world coordinate system (“Furthermore, geometry reconstruction unit 312 may perform a reconstruction to determine coordinates of points in a point cloud. Inverse transform coordinate unit 320 may apply an inverse transform to the reconstructed coordinates to convert the reconstructed coordinates (positions) of the points in the point cloud from a transform domain back into an initial domain” at paragraph 0073; while not explicit, the initial domain is feasibly a global domain). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a global coordinate transformation as suggested by Ray et al. for the transformation of the Park et al. and Yea et al. combination for purposes of aligning the reconstruction to a common coordinate system.

Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Park et al. and Yea et al. as applied to claim 1 above, and further in view of Lee et al. (US 2024/0020885). The Park et al. and Yea et al.
combination discloses a method wherein decoding the quantized residual point comprises: with respect to a point cloud generated by a LiDAR (“Point cloud content includes a point cloud video (images and/or videos) representing an object and/or environment located in various 3D spaces (e.g., a 3D space representing a real environment, a 3D space representing a virtual environment, etc.). Accordingly, the point cloud content providing system according to the embodiments may capture a point cloud video using one or more cameras (e.g., an infrared camera capable of securing depth information, an RGB camera capable of extracting color information corresponding to the depth information, etc.), a projector (e.g., an infrared pattern projector to secure depth information), a LiDAR” Park et al. at paragraph 0073, line 1), decoding the quantized residual point through a total rotation angle of the LiDAR turning about a rotational axis (“The geometry information transform/quantization unit 28000 may transform the geometry information and quantize the same based on an octree. The geometry information transform/quantization unit 28000 receives geometry information as an input, applies one or more transform techniques such as positional transformation and/or rotational transformation thereto, and quantizes the geometry information by dividing the geometry information by a quantization value to generate transformed and quantized geometry information” Park et al. at paragraph 0545, line 1). The Park et al. and Yea et al. combination does not explicitly disclose that decoding the quantized residual point comprises: with respect to a point cloud generated by a cylindrical LiDAR, decoding the quantized residual point through a total rotation angle of the cylindrical LiDAR turning about a rotational axis. Lee et al. 
teaches a method in the same field of endeavor of point cloud encoding and decoding, wherein decoding the quantized residual point comprises: with respect to a point cloud generated by a cylindrical LiDAR (“The geometry information according to the embodiments is information indicating the position (for example, location) of a point, and may be expressed by parameters of a coordinate system such as a Cartesian coordinate system, a cylindrical coordinate system” at paragraph 0230, line 4; “Point cloud content includes a point cloud video (images and/or videos) representing an object and/or environment located in various 3D spaces (e.g., a 3D space representing a real environment, a 3D space representing a virtual environment, etc.). Accordingly, the point cloud content providing system according to the embodiments may capture a point cloud video using one or more cameras (e.g., an infrared camera capable of securing depth information, an RGB camera capable of extracting color information corresponding to the depth information, etc.), a projector (e.g., an infrared pattern projector to secure depth information), a LiDAR” at paragraph 0079, line 1), decoding the quantized residual point through a total rotation angle of the cylindrical LiDAR turning about a rotational axis (“The geometry information transformation quantizer 36004 receives geometry information as an input, applies one or more transformations such as position transformation and/or rotation transformation, and then quantizes the geometry information by dividing the geometry information with quantization values to generate transformed quantized geometry information” at paragraph 0356, line 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to generate cylindrical lidar data as taught by Lee et al. for the geometrical information of the Park et al. and Yea et al. to accommodate for particular imaging setups. 
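The rejections of claims 1, 11, and 13 above all map the claims onto the same predictive-coding loop: dequantize the signaled residual, select a predictor from a candidate list via the signaled indices, and add the two. As a minimal sketch of that loop (all names are illustrative, and a uniform scalar quantizer is assumed, since neither reference's exact quantizer is reproduced in this Office action):

```python
# Sketch of the predictive point-cloud decode loop described in the claim 1
# rejection. Helper names are hypothetical; the quantizer is assumed to be
# uniform scalar (component * step), which the Office action does not specify.

def dequantize(q_residual, qp):
    """Scale a quantized residual back up (uniform scalar assumption)."""
    return tuple(c * qp for c in q_residual)

def decode_point(q_residual, qp, list_index, predictor_index,
                 candidate_lists, buffer):
    """Reconstruct one point as predicted + dequantized residual."""
    residual = dequantize(q_residual, qp)        # claim 1: dequantize using the QP
    candidates = candidate_lists[list_index]     # claims 1/5: list chosen by list index
    predicted = candidates[predictor_index]      # Yea: predictor chosen by predictor index
    reconstructed = tuple(p + r for p, r in zip(predicted, residual))
    buffer.append(reconstructed)                 # claim 1: store in the buffer
    return reconstructed

# Usage: one candidate list holding two previously reconstructed points.
buffer = []
lists = [[(10.0, 0.0, 2.0), (12.0, 1.0, 2.0)]]
pt = decode_point(q_residual=(1, -1, 0), qp=0.5,
                  list_index=0, predictor_index=1,
                  candidate_lists=lists, buffer=buffer)
print(pt)  # (12.5, 0.5, 2.0)
```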
Claim(s) 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Park et al. and Yea et al. as applied to claim 1 above, and further in view of Gao et al. (US 2021/0306664). Regarding claim 6, the Park et al. and Yea et al. combination discloses the elements of claim 5 as described above. The Park et al. and Yea et al. combination does not explicitly disclose updating the first prediction candidate list by using the reconstructed current point, wherein updating the first prediction candidate list comprises: in response to a determination that the reconstructed current point is obtained from an identical object to one of objects containing points as included in the first prediction candidate list, deleting a most similar point to the reconstructed current point from the first prediction candidate list and adding the reconstructed current point to a foremost of the first prediction candidate list. Gao et al. teaches a method in the same field of endeavor of point cloud encoding and decoding, comprising: updating the first prediction candidate list by using the reconstructed current point, wherein updating the first prediction candidate list comprises: in response to a determination that the reconstructed current point is obtained from an identical object to one of objects containing points as included in the first prediction candidate list, maintaining only one of the points in the candidate list (“According to an embodiment, a candidate list may be implemented as a sliding buffer. A new point may be inserted at the beginning of the list and the last point may get pushed out of the list. In one embodiment, before insertion of a new point, a geometry position and attribute value of the new point are compared with the geometry position and attribute value of the candidates in the candidate list. 
When there is a point of a candidate in the candidate list with the same geometry position and attribute, the new point is not inserted” at paragraph 0143, line 1). The choice to not add the new point in lieu of the existing candidate point is equivalent to deleting the existing candidate point and adding the new point. The net result is that only one of the candidate points is maintained. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to perform the candidate check taught by Gao et al. in the prediction list generation of the Park et al. and Yea et al. combination to eliminate redundancies in the list, thereby limiting the resources needed to store and process the candidate list.

Regarding claim 7, Gao et al. discloses a method wherein updating the first prediction candidate list comprises: checking whether the points are obtained from the identical object by using geometric information of the points included in the first prediction candidate list and geometric information of the reconstructed current point (“According to an embodiment, a candidate list may be implemented as a sliding buffer. A new point may be inserted at the beginning of the list and the last point may get pushed out of the list. In one embodiment, before insertion of a new point, a geometry position and attribute value of the new point are compared with the geometry position and attribute value of the candidates in the candidate list. When there is a point of a candidate in the candidate list with the same geometry position and attribute, the new point is not inserted” at paragraph 0143, line 1).

Claim(s) 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Park et al. and Yea et al. as applied to claim 5 above, and further in view of Pu et al. (US 2016/0014407) and Ramasubramonian et al. (US 2022/0207780).

Regarding claim 8, the Park et al. and Yea et al.
combination discloses the elements of claim 5 as described above. The Park et al. and Yea et al. combination does not explicitly disclose generating the second prediction candidate list by using the quantized current point. Pu et al. teaches a method in the same field of endeavor of data encoding and decoding, comprising: generating the second prediction candidate list (“In the example of FIG. 11, for each respective neighbor pixel of a predefined set of neighbor pixels in a line above or a column left of a current block, video decoder 30 generates, in the extra palette predictor list, a respective candidate specifying the value of the respective neighbor pixel (1100)“ at paragraph 0153, last sentence) by using the quantized current point (“In the example operation of FIG. 11, video decoder 30 may perform actions (1112) through (1116) for each respective candidate in the extra palette predictor list. Particularly, video decoder 30 determines whether the syntax element corresponding to the respective candidate indicates the respective candidate is reused in the current block (1112)” at paragraph 0157, line 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to generate an extra list as taught by Pu et al. in the decoding of the Park et al. and Yea et al. combination for purposes of reducing bitstream size (see Pu et al. at paragraph 0095). The Park et al., Yea et al. and Pu et al. combination does not explicitly disclose decoding LiDAR parameters and predicting a quantized current point and a location of the quantized current point by using the LiDAR parameters. Ramasubramonian et al. 
teaches a method in the same field of endeavor of point cloud encoding and decoding, comprising: decoding LiDAR parameters (“Angular mode may be used in predictive geometry coding, where the characteristics of LIDAR sensors may be utilized in coding the prediction tree more efficiently” at paragraph 0067, line 1; “A new predictor leveraging the characteristics of lidar could be introduced. For instance, the rotation speed of the lidar scanner around the z-axis is usually constant. Therefore, the G-PCC decoder may predict the current {tilde over (ϕ)}(j)” at paragraph 0094, line 1); predicting a quantized current point and a location of the quantized current point by using the LiDAR parameters (“Decode the model parameters {tilde over (t)}(i) and {tilde over (z)}(i) and the quantization parameters q.sub.r q.sub.ζ, q.sub.θ and q.sub.ϕ” at paragraph 0099; “Decode the ({tilde over (r)}, {tilde over (ϕ)}, i) parameters associated with the nodes according to the geometry predictive scheme” at paragraph 0100, line 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to consider the lidar parameters as taught by Ramasubramonian et al. in the coding of the Park et al., Yea et al. and Pu et al. combination for purposes of coding the data more efficiently (Ramasubramonian et al. at paragraph 0067).

Regarding claim 9, the Park et al., Yea et al., Pu et al. and Ramasubramonian et al. combination discloses a method wherein generating the second prediction candidate list comprises: searching closest points by distance to the quantized current point from stored reconstructed points in the buffer (“According to embodiments, neighbor point candidates may be selected based on similar attributes to generate a neighbor point set. Whether to generate the neighbor point set based on distance or similar attributes may be signaled to the decoder according to a method applied to the encoder” Park et al.
at paragraph 0255); and generating the second prediction candidate list by using a preset number of searched points (“In the example of FIG. 11, video decoder 30 may determine whether a number of candidates in the extra palette predictor list exceeds a threshold (1104). Responsive to determining the number of candidates in the extra palette predictor list exceeds the threshold (“YES” of 1104), video decoder 30 may truncate the extra palette predictor list such that a size of the extra palette predictor list is limited to a particular number of candidates (e.g., the threshold) (1106). Responsive to determining the number of candidates in the extra palette predictor list does not exceed the threshold (“NO” of 1104), video decoder 30 may refrain from truncating the extra palette predictor list (1108)” Pu et al. at paragraph 0155).

Regarding claim 10, the Park et al., Yea et al., Pu et al. and Ramasubramonian et al. combination discloses a method wherein generating the second prediction candidate list comprises: searching points included in a previous frame for sharing common locations with and being spatially adjacent to the quantized current point (“According to embodiments, neighbor point candidates may be selected based on similar attributes to generate a neighbor point set. Whether to generate the neighbor point set based on distance or similar attributes may be signaled to the decoder according to a method applied to the encoder” Park et al. at paragraph 0255; “The inter-predictor may use information required for inter-prediction of the current prediction unit provided by the encoding device to perform inter-prediction of the current prediction unit based on information included in at least one of a space before the current space including the current prediction unit or a space after the current space” Park et al. at paragraph 0582, line 10); and generating the second prediction candidate list by using a preset number of searched points (“In the example of FIG.
11, video decoder 30 may determine whether a number of candidates in the extra palette predictor list exceeds a threshold (1104). Responsive to determining the number of candidates in the extra palette predictor list exceeds the threshold (“YES” of 1104), video decoder 30 may truncate the extra palette predictor list such that a size of the extra palette predictor list is limited to a particular number of candidates (e.g., the threshold) (1106). Responsive to determining the number of candidates in the extra palette predictor list does not exceed the threshold (“NO” of 1104), video decoder 30 may refrain from truncating the extra palette predictor list (1108)” Pu et al. at paragraph 0155). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATRINA R FUJITA whose telephone number is (571)270-1574. The examiner can normally be reached Monday - Friday 9:30-5:30 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at 5712723638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KATRINA R FUJITA/Primary Examiner, Art Unit 2672
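Stripped of the citation apparatus, the two mechanisms the examiner relies on above, azimuth prediction from a constant LiDAR scan rate (Park et al., paragraph 0094) and a distance-ranked candidate list truncated to a preset size (Pu et al., paragraph 0155), can be sketched as follows. This is a minimal illustrative sketch: the function names, tuple-based points, and Euclidean distance metric are assumptions of this sketch, not the claimed method or any cited reference's actual implementation.

```python
import math

def predict_azimuth(prev_phi, delta_phi, n=1):
    # With a constant rotation speed around the z-axis, the azimuth
    # advances by a fixed elementary step per point, so the decoder
    # can predict the current azimuth from the previous one plus
    # n steps (cf. Park et al., paragraph 0094).
    return prev_phi + n * delta_phi

def build_candidate_list(current_point, reconstructed_buffer, max_candidates):
    # Rank stored reconstructed points by distance to the quantized
    # current point, then truncate the list to a preset number of
    # candidates, mirroring the threshold check in Pu et al.,
    # paragraph 0155: candidates beyond the limit are dropped.
    ranked = sorted(reconstructed_buffer,
                    key=lambda p: math.dist(p, current_point))
    return ranked[:max_candidates]
```

The truncation step is the part the examiner maps to the "preset number of searched points" limitation: when the ranked list exceeds the threshold it is cut to that size, and otherwise it is left as-is.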

Prosecution Timeline

May 15, 2024: Application Filed
Mar 03, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597250: DETECTION OF PLANT DETRIMENTS (Granted Apr 07, 2026; 2y 5m to grant)
Patent 12582476: SYSTEMS FOR PLANNING AND PERFORMING BIOPSY PROCEDURES AND ASSOCIATED METHODS (Granted Mar 24, 2026; 2y 5m to grant)
Patent 12585698: MULTIMEDIA FOCALIZATION (Granted Mar 24, 2026; 2y 5m to grant)
Patent 12586190: SYSTEM AND METHOD OF CLASSIFICATION OF BIOLOGICAL PARTICLES (Granted Mar 24, 2026; 2y 5m to grant)
Patent 12566341: PREDICTING SIZING AND/OR FITTING OF HEAD MOUNTED WEARABLE DEVICE (Granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 94% (+24.0%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 674 resolved cases by this examiner. Grant probability derived from career allow rate.
