Prosecution Insights
Last updated: April 19, 2026
Application No. 18/397,046

ENHANCED SIGNALING OF DEPTH REPRESENTATION INFORMATION SUPPLEMENTAL ENHANCEMENT INFORMATION

Final Rejection §102
Filed: Dec 27, 2023
Examiner: NOH, JAE NAM
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Bytedance Inc.
OA Round: 2 (Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 2m
Grant Probability With Interview: 76%

Examiner Intelligence

Career Allow Rate: 86%, above average (382 granted / 445 resolved; +27.8% vs TC avg)
Interview Lift: -10.0%, a minimal negative lift across resolved cases with interview
Avg Prosecution: 2y 2m (fast prosecutor); 26 applications currently pending
Career History: 471 total applications across all art units

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 37.5% (-2.5% vs TC avg)
§102: 31.5% (-8.5% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)
TC averages are estimates. Based on career data from 445 resolved cases.

Office Action

§102
DETAILED ACTION

This action is in response to the amendment filed on 7/23/2025. Claims 1 and 3-20 are pending.

Response to Arguments - Claim Rejections under 35 USC §§ 102 & 103

Applicant's amendment filed 7/23/2025 has been fully considered but is not persuasive. Applicant states (page 10):

"…Thus, Hannuksela does not disclose that a value of a first syntax element in a depth representation information (DRI) supplemental enhancement information (SEI) message is in a range of N to M, inclusive, ..., wherein N=0, and M=65535 as recited in Applicant's independent claims. As such, Hannuksela fails to disclose at least one limitation of independent claims 1, 12, 16, and 19, and consequently fails to anticipate claims 1 and 3-20." (any emphasis not shown)

Examiner's response: The reference discloses that when depth_representation_type 3 in the table of [545] is used, depth_nonlinear_representation_model[i] defines how the depth values are remapped [546]; furthermore, [547] shows that all of the calculated values are clipped between 0 and 255 using the Clip3(0, 255, …) function.

Information Disclosure Statement

The references listed on the Information Disclosure Statement submitted on 7/23/2025 have been considered by the examiner (see attached PTO-1449).

Claim Mapping Notation

In this office action, the following notations are used to refer to paragraph numbers or column and line numbers of portions of the cited reference:
"[0027]…" (Paragraph number [0027])
[4:3-15] "…" (Column 4, Lines 3-15)
Furthermore, unless necessary to distinguish from other references in this action, "et al." will be omitted when referring to the reference.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 3-20 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by HANNUKSELA et al. (US 20150304665 A1).

1. A method of processing video data, comprising: performing, for a conversion between a video and a bitstream of the video, according to a rule, wherein the rule specifies that a value of a first syntax element in a depth representation information (DRI) supplemental enhancement information (SEI) message is in a range of N to M, inclusive, where N and M are integers and N is less than M,

"[0006] Some embodiments provide a method for encoding and decoding video information.
In many embodiments, to indicate a composition of pictures of different time instants, some usability information may be embedded to the video bitstream indicating the intended display behavior when more than one layer is used and associated display behavior using this information."

"[546] Continuing the exemplary semantics of the depth representation SEI message…depth_nonlinear_representation_model[i] specifies the piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. When depth_representation_type is equal to 3, depth view component contains nonlinearly transformed depth samples…"

wherein the first syntax element in the DRI SEI message specifies piece-wise linear segments for mapping of decoded luma sample values of an auxiliary picture to a scale that is uniformly quantized in terms of disparity, and wherein N=0, and M=65535.

"[0545] Continuing the exemplary semantics of the depth representation SEI message, depth_representation_type specifies the representation definition of luma pixels in coded frame of depth views as specified in the table below. In the table below, disparity specifies the horizontal displacement between two texture views and Z value specifies the distance from a camera."

"[546] Continuing the exemplary semantics of the depth representation SEI message…depth_nonlinear_representation_model[i] specifies the piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. When depth_representation_type is equal to 3, depth view component contains nonlinearly transformed depth samples…"

"[0547] Variable DepthLUT[i] for i in the range of 0 to 255, inclusive, is specified as follows.

TABLE-US-00019
depth_nonlinear_representation_model[ 0 ] = 0
depth_nonlinear_representation_model[ depth_nonlinear_representation_num + 1 ] = 0
for( k = 0; k <= depth_nonlinear_representation_num; ++k ) {
    pos1 = ( 255 * k ) / ( depth_nonlinear_representation_num + 1 )
    dev1 = depth_nonlinear_representation_model[ k ]
    pos2 = ( 255 * ( k + 1 ) ) / ( depth_nonlinear_representation_num + 1 )
    dev2 = depth_nonlinear_representation_model[ k + 1 ]
    x1 = pos1 − dev1
    y1 = pos1 + dev1
    x2 = pos2 − dev2
    y2 = pos2 + dev2
    for( x = max( x1, 0 ); x <= min( x2, 255 ); ++x )
        DepthLUT[ x ] = Clip3( 0, 255, Round( ( ( x − x1 ) * ( y2 − y1 ) ) ÷ ( x2 − x1 ) + y1 ) )
}"

3. The method of claim 1, wherein the first syntax element is Exp-Golomb-coded using an unsigned integer.

"[0070] When describing H.264/AVC and HEVC as well as in example embodiments, the following descriptors may be used to specify the parsing process of each syntax element.
[0071] b(8): byte having any pattern of bit string (8 bits).
[0072] se(v): signed integer Exp-Golomb-coded syntax element with the left bit first.
[0073] u(n): unsigned integer using n bits. When n is "v" in the syntax table, the number of bits varies in a manner dependent on the value of other syntax elements. The parsing process for this descriptor is specified by n next bits from the bitstream interpreted as a binary representation of an unsigned integer with the most significant bit written first.
[0074] ue(v): unsigned integer Exp-Golomb-coded syntax element with the left bit first."

4. The method of claim 1, wherein whether the first syntax element being included in the DRI SEI message is based on a value of a second syntax element, and

"[546] Continuing the exemplary semantics of the depth representation SEI message…depth_nonlinear_representation_model[i] specifies the piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity.
When depth_representation_type is equal to 3, depth view component contains nonlinearly transformed depth samples…"

wherein the second syntax element in the DRI SEI message specifies a representation definition of decoded luma samples of auxiliary pictures.

"[0545] Continuing the exemplary semantics of the depth representation SEI message, depth_representation_type specifies the representation definition of luma pixels in coded frame of depth views as specified in the table below. In the table below, disparity specifies the horizontal displacement between two texture views and Z value specifies the distance from a camera."

5. The method of claim 4, wherein the first syntax element is included in the DRI SEI message when the value of the second syntax element is equal to 3.

"[546] Continuing the exemplary semantics of the depth representation SEI message…depth_nonlinear_representation_model[i] specifies the piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. When depth_representation_type is equal to 3, depth view component contains nonlinearly transformed depth samples…"

6. The method of claim 1, wherein the first syntax element having an index equal to 0 and the first syntax element having an index equal to a number of piece-wise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity have a same value.

"[0546] Continuing the exemplary semantics of the depth representation SEI message, all_views_equal_flag equal to 0 specifies that depth representation base view may not be identical to respective values for each view in target views. all_views_equal_flag equal to 1 specifies that the depth representation base views are identical to respective values for all target views. depth_representaion_base_view_id[i] specifies the view identifier for the NAL unit of either base view which the disparity for coded depth frame of i-th view_id is derived from (depth_representation_type equal to 1 or 3) or base view which the Z-axis for the coded depth frame of i-th view_id is defined as the optical axis of (depth_representation_type equal to 0 or 2). depth_nonlinear_representation_num_minus1 + 2 specifies the number of piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. depth_nonlinear_representation_model[i] specifies the piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. When depth_representation_type is equal to 3, depth view component contains nonlinearly transformed depth samples. Variable DepthLUT[i], as specified below, is used to transform coded depth sample values from nonlinear representation to the linear representation—disparity normalized in range from 0 to 255. The shape of this transform is defined by means of line-segment-approximation in two-dimensional linear-disparity-to-nonlinear-disparity space. The first (0, 0) and the last (255, 255) nodes of the curve are predefined. Positions of additional nodes are transmitted in form of deviations (depth_nonlinear_representation_model[i]) from the straight-line curve. These deviations are uniformly distributed along the whole range of 0 to 255, inclusive, with spacing depending on the value of nonlinear_depth_representation_num.

[0547] Variable DepthLUT[i] for i in the range of 0 to 255, inclusive, is specified as follows.

TABLE-US-00019
depth_nonlinear_representation_model[ 0 ] = 0
depth_nonlinear_representation_model[ depth_nonlinear_representation_num + 1 ] = 0
for( k = 0; k <= depth_nonlinear_representation_num; ++k ) {
    pos1 = ( 255 * k ) / ( depth_nonlinear_representation_num + 1 )
    dev1 = depth_nonlinear_representation_model[ k ]
    pos2 = ( 255 * ( k + 1 ) ) / ( depth_nonlinear_representation_num + 1 )
    dev2 = depth_nonlinear_representation_model[ k + 1 ]
    x1 = pos1 − dev1
    y1 = pos1 + dev1
    x2 = pos2 − dev2
    y2 = pos2 + dev2
    for( x = max( x1, 0 ); x <= min( x2, 255 ); ++x )
        DepthLUT[ x ] = Clip3( 0, 255, Round( ( ( x − x1 ) * ( y2 − y1 ) ) ÷ ( x2 − x1 ) + y1 ) )
}"

7. The method of claim 6, wherein the same value is zero.

"[0546] Continuing the exemplary semantics of the depth representation SEI message, all_views_equal_flag equal to 0 specifies that depth representation base view may not be identical to respective values for each view in target views. all_views_equal_flag equal to 1 specifies that the depth representation base views are identical to respective values for all target views. depth_representaion_base_view_id[i] specifies the view identifier for the NAL unit of either base view which the disparity for coded depth frame of i-th view_id is derived from (depth_representation_type equal to 1 or 3) or base view which the Z-axis for the coded depth frame of i-th view_id is defined as the optical axis of (depth_representation_type equal to 0 or 2). depth_nonlinear_representation_num_minus1 + 2 specifies the number of piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. depth_nonlinear_representation_model[i] specifies the piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. When depth_representation_type is equal to 3, depth view component contains nonlinearly transformed depth samples. Variable DepthLUT[i], as specified below, is used to transform coded depth sample values from nonlinear representation to the linear representation—disparity normalized in range from 0 to 255. The shape of this transform is defined by means of line-segment-approximation in two-dimensional linear-disparity-to-nonlinear-disparity space. The first (0, 0) and the last (255, 255) nodes of the curve are predefined. Positions of additional nodes are transmitted in form of deviations (depth_nonlinear_representation_model[i]) from the straight-line curve. These deviations are uniformly distributed along the whole range of 0 to 255, inclusive, with spacing depending on the value of nonlinear_depth_representation_num.

[0547] Variable DepthLUT[i] for i in the range of 0 to 255, inclusive, is specified as follows.

TABLE-US-00019
depth_nonlinear_representation_model[ 0 ] = 0
depth_nonlinear_representation_model[ depth_nonlinear_representation_num + 1 ] = 0
for( k = 0; k <= depth_nonlinear_representation_num; ++k ) {
    pos1 = ( 255 * k ) / ( depth_nonlinear_representation_num + 1 )
    dev1 = depth_nonlinear_representation_model[ k ]
    pos2 = ( 255 * ( k + 1 ) ) / ( depth_nonlinear_representation_num + 1 )
    dev2 = depth_nonlinear_representation_model[ k + 1 ]
    x1 = pos1 − dev1
    y1 = pos1 + dev1
    x2 = pos2 − dev2
    y2 = pos2 + dev2
    for( x = max( x1, 0 ); x <= min( x2, 255 ); ++x )
        DepthLUT[ x ] = Clip3( 0, 255, Round( ( ( x − x1 ) * ( y2 − y1 ) ) ÷ ( x2 − x1 ) + y1 ) )
}"

8. The method of claim 6, wherein the number of piece-wise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity is indicated by a third syntax element in the DRI SEI message.

"[546]…depth_representaion_base_view_id[i] specifies the view identifier for the NAL unit of either base view which the disparity for coded depth frame of i-th view_id is derived from (depth_representation_type equal to 1 or 3) or base view which the Z-axis for the coded depth frame of i-th view_id is defined as the optical axis of (depth_representation_type equal to 0 or 2). depth_nonlinear_representation_num_minus1 + 2 specifies the number of piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity..."

9. The method of claim 8, the third syntax element is Exp-Golomb-coded using an unsigned integer.

"[0070] When describing H.264/AVC and HEVC as well as in example embodiments, the following descriptors may be used to specify the parsing process of each syntax element.
[0071] b(8): byte having any pattern of bit string (8 bits).
[0072] se(v): signed integer Exp-Golomb-coded syntax element with the left bit first.
[0073] u(n): unsigned integer using n bits. When n is "v" in the syntax table, the number of bits varies in a manner dependent on the value of other syntax elements. The parsing process for this descriptor is specified by n next bits from the bitstream interpreted as a binary representation of an unsigned integer with the most significant bit written first.
[0074] ue(v): unsigned integer Exp-Golomb-coded syntax element with the left bit first."

10. The method of claim 1, wherein the conversion comprises encoding the video into the bitstream.

"[0054] In the following, several embodiments are described using the convention of referring to (de)coding, which indicates that the embodiments may apply to decoding and/or encoding."

11. The method of claim 1, wherein the conversion comprises decoding the video from the bitstream.
"[0054] In the following, several embodiments are described using the convention of referring to (de)coding, which indicates that the embodiments may apply to decoding and/or encoding."

Regarding claims 12 and 13, they recite elements that are at least included in claims 1 and 2 above but in a different claim form; therefore, the same rationale for the rejection applies. Regarding the processor, memory, and storage medium in the claims, see HANNUKSELA [655]-[659].

14. The apparatus of claim 12, wherein whether the first syntax element being included in the DRI SEI message is based on a value of a second syntax element, and the second syntax element in the DRI SEI message specifies a representation definition of decoded luma samples of auxiliary pictures,

"[546] Continuing the exemplary semantics of the depth representation SEI message…depth_nonlinear_representation_model[i] specifies the piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. When depth_representation_type is equal to 3, depth view component contains nonlinearly transformed depth samples…"

"[0545] Continuing the exemplary semantics of the depth representation SEI message, depth_representation_type specifies the representation definition of luma pixels in coded frame of depth views as specified in the table below. In the table below, disparity specifies the horizontal displacement between two texture views and Z value specifies the distance from a camera."

wherein the first syntax element is included in the DRI SEI message when the value of the second syntax element is equal to 3, wherein the first syntax element having an index equal to 0 and the first syntax element having an index equal to a number of piece-wise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity have a same value, and wherein the same value is zero.

"[0546] Continuing the exemplary semantics of the depth representation SEI message, all_views_equal_flag equal to 0 specifies that depth representation base view may not be identical to respective values for each view in target views. all_views_equal_flag equal to 1 specifies that the depth representation base views are identical to respective values for all target views. depth_representaion_base_view_id[i] specifies the view identifier for the NAL unit of either base view which the disparity for coded depth frame of i-th view_id is derived from (depth_representation_type equal to 1 or 3) or base view which the Z-axis for the coded depth frame of i-th view_id is defined as the optical axis of (depth_representation_type equal to 0 or 2). depth_nonlinear_representation_num_minus1 + 2 specifies the number of piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. depth_nonlinear_representation_model[i] specifies the piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. When depth_representation_type is equal to 3, depth view component contains nonlinearly transformed depth samples. Variable DepthLUT[i], as specified below, is used to transform coded depth sample values from nonlinear representation to the linear representation—disparity normalized in range from 0 to 255. The shape of this transform is defined by means of line-segment-approximation in two-dimensional linear-disparity-to-nonlinear-disparity space. The first (0, 0) and the last (255, 255) nodes of the curve are predefined. Positions of additional nodes are transmitted in form of deviations (depth_nonlinear_representation_model[i]) from the straight-line curve. These deviations are uniformly distributed along the whole range of 0 to 255, inclusive, with spacing depending on the value of nonlinear_depth_representation_num.

[0547] Variable DepthLUT[i] for i in the range of 0 to 255, inclusive, is specified as follows.

TABLE-US-00019
depth_nonlinear_representation_model[ 0 ] = 0
depth_nonlinear_representation_model[ depth_nonlinear_representation_num + 1 ] = 0
for( k = 0; k <= depth_nonlinear_representation_num; ++k ) {
    pos1 = ( 255 * k ) / ( depth_nonlinear_representation_num + 1 )
    dev1 = depth_nonlinear_representation_model[ k ]
    pos2 = ( 255 * ( k + 1 ) ) / ( depth_nonlinear_representation_num + 1 )
    dev2 = depth_nonlinear_representation_model[ k + 1 ]
    x1 = pos1 − dev1
    y1 = pos1 + dev1
    x2 = pos2 − dev2
    y2 = pos2 + dev2
    for( x = max( x1, 0 ); x <= min( x2, 255 ); ++x )
        DepthLUT[ x ] = Clip3( 0, 255, Round( ( ( x − x1 ) * ( y2 − y1 ) ) ÷ ( x2 − x1 ) + y1 ) )
}"

15. The apparatus of claim 14, wherein the number of piece-wise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity is indicated by a third syntax element in the DRI SEI message, and wherein the third syntax element is Exp-Golomb-coded using an unsigned integer.

"[0070] When describing H.264/AVC and HEVC as well as in example embodiments, the following descriptors may be used to specify the parsing process of each syntax element.
[0071] b(8): byte having any pattern of bit string (8 bits).
[0072] se(v): signed integer Exp-Golomb-coded syntax element with the left bit first.
[0073] u(n): unsigned integer using n bits. When n is "v" in the syntax table, the number of bits varies in a manner dependent on the value of other syntax elements. The parsing process for this descriptor is specified by n next bits from the bitstream interpreted as a binary representation of an unsigned integer with the most significant bit written first.
[0074] ue(v): unsigned integer Exp-Golomb-coded syntax element with the left bit first."

"[546]…depth_representaion_base_view_id[i] specifies the view identifier for the NAL unit of either base view which the disparity for coded depth frame of i-th view_id is derived from (depth_representation_type equal to 1 or 3) or base view which the Z-axis for the coded depth frame of i-th view_id is defined as the optical axis of (depth_representation_type equal to 0 or 2). depth_nonlinear_representation_num_minus1 + 2 specifies the number of piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity..."

Regarding claims 16-19, they recite elements that are at least included in claims 1, 2, 15, 1 above but in a different claim form; therefore, the same rationale for the rejection applies. Regarding the processor, memory, and storage medium in the claims, see HANNUKSELA [655]-[659].

20. The non-transitory computer-readable recording medium of claim 19, wherein,

"[0547] Variable DepthLUT[i] for i in the range of 0 to 255, inclusive, is specified as follows.

TABLE-US-00019
depth_nonlinear_representation_model[ 0 ] = 0
depth_nonlinear_representation_model[ depth_nonlinear_representation_num + 1 ] = 0
for( k = 0; k <= depth_nonlinear_representation_num; ++k ) {
    pos1 = ( 255 * k ) / ( depth_nonlinear_representation_num + 1 )
    dev1 = depth_nonlinear_representation_model[ k ]
    pos2 = ( 255 * ( k + 1 ) ) / ( depth_nonlinear_representation_num + 1 )
    dev2 = depth_nonlinear_representation_model[ k + 1 ]
    x1 = pos1 − dev1
    y1 = pos1 + dev1
    x2 = pos2 − dev2
    y2 = pos2 + dev2
    for( x = max( x1, 0 ); x <= min( x2, 255 ); ++x )
        DepthLUT[ x ] = Clip3( 0, 255, Round( ( ( x − x1 ) * ( y2 − y1 ) ) ÷ ( x2 − x1 ) + y1 ) )
}"

Given BRI, the range specified in the reference is "in" the range specified by the claim language.
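[Editor's note] The examiner's range-containment point, that every value produced by the [0547] derivation is clipped into 0 to 255 and therefore falls inside the claimed range of 0 to 65535, can be illustrated with a short Python sketch of the quoted pseudocode. This is an editorial illustration, not part of the record: `clip3` and `build_depth_lut` are hypothetical helper names, and the pseudocode's `/` is assumed to be integer division.

```python
def clip3(lo, hi, v):
    # Clip3(x, y, z) as used in H.264/HEVC: clamp v into [lo, hi].
    return max(lo, min(hi, v))

def build_depth_lut(deviations):
    # Build DepthLUT[0..255] from the transmitted node deviations
    # (depth_nonlinear_representation_model[1..num]); the first and
    # last model entries are the predefined end nodes, fixed at 0.
    num = len(deviations)
    model = [0] + list(deviations) + [0]
    lut = [0] * 256
    for k in range(num + 1):
        pos1 = (255 * k) // (num + 1)          # integer division assumed
        pos2 = (255 * (k + 1)) // (num + 1)
        dev1, dev2 = model[k], model[k + 1]
        x1, y1 = pos1 - dev1, pos1 + dev1
        x2, y2 = pos2 - dev2, pos2 + dev2
        for x in range(max(x1, 0), min(x2, 255) + 1):
            lut[x] = clip3(0, 255, round((x - x1) * (y2 - y1) / (x2 - x1) + y1))
    return lut

# With no intermediate deviations the mapping is the identity; every
# entry lies in 0..255, a range that sits inside the claimed 0..65535.
assert build_depth_lut([]) == list(range(256))
```

Whether a syntax element whose derived values are clipped to 0-255 "discloses" the claimed range of 0 to 65535 is, of course, the disputed legal question; the sketch only shows what the reference's arithmetic does.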
wherein the first syntax element is Exp-Golomb-coded using an unsigned integer,

"[0070] When describing H.264/AVC and HEVC as well as in example embodiments, the following descriptors may be used to specify the parsing process of each syntax element.
[0071] b(8): byte having any pattern of bit string (8 bits).
[0072] se(v): signed integer Exp-Golomb-coded syntax element with the left bit first.
[0073] u(n): unsigned integer using n bits. When n is "v" in the syntax table, the number of bits varies in a manner dependent on the value of other syntax elements. The parsing process for this descriptor is specified by n next bits from the bitstream interpreted as a binary representation of an unsigned integer with the most significant bit written first.
[0074] ue(v): unsigned integer Exp-Golomb-coded syntax element with the left bit first."

wherein whether the first syntax element being included in the DRI SEI message is based on a value of a second syntax element, and the second syntax element in the DRI SEI message specifies a representation definition of decoded luma samples of auxiliary pictures,

"[0545] Continuing the exemplary semantics of the depth representation SEI message, depth_representation_type specifies the representation definition of luma pixels in coded frame of depth views as specified in the table below. In the table below, disparity specifies the horizontal displacement between two texture views and Z value specifies the distance from a camera."

"[546] Continuing the exemplary semantics of the depth representation SEI message…depth_nonlinear_representation_model[i] specifies the piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. When depth_representation_type is equal to 3, depth view component contains nonlinearly transformed depth samples…"

wherein the first syntax element is included in the DRI SEI message when the value of the second syntax element is equal to 3, wherein the first syntax element having an index equal to 0 and the first syntax element having an index equal to a number of piece-wise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity have a same value, and wherein the same value is zero,

"[0546] Continuing the exemplary semantics of the depth representation SEI message, all_views_equal_flag equal to 0 specifies that depth representation base view may not be identical to respective values for each view in target views. all_views_equal_flag equal to 1 specifies that the depth representation base views are identical to respective values for all target views. depth_representaion_base_view_id[i] specifies the view identifier for the NAL unit of either base view which the disparity for coded depth frame of i-th view_id is derived from (depth_representation_type equal to 1 or 3) or base view which the Z-axis for the coded depth frame of i-th view_id is defined as the optical axis of (depth_representation_type equal to 0 or 2). depth_nonlinear_representation_num_minus1 + 2 specifies the number of piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. depth_nonlinear_representation_model[i] specifies the piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity. When depth_representation_type is equal to 3, depth view component contains nonlinearly transformed depth samples. Variable DepthLUT[i], as specified below, is used to transform coded depth sample values from nonlinear representation to the linear representation—disparity normalized in range from 0 to 255. The shape of this transform is defined by means of line-segment-approximation in two-dimensional linear-disparity-to-nonlinear-disparity space. The first (0, 0) and the last (255, 255) nodes of the curve are predefined. Positions of additional nodes are transmitted in form of deviations (depth_nonlinear_representation_model[i]) from the straight-line curve. These deviations are uniformly distributed along the whole range of 0 to 255, inclusive, with spacing depending on the value of nonlinear_depth_representation_num.

[0547] Variable DepthLUT[i] for i in the range of 0 to 255, inclusive, is specified as follows.

TABLE-US-00019
depth_nonlinear_representation_model[ 0 ] = 0
depth_nonlinear_representation_model[ depth_nonlinear_representation_num + 1 ] = 0
for( k = 0; k <= depth_nonlinear_representation_num; ++k ) {
    pos1 = ( 255 * k ) / ( depth_nonlinear_representation_num + 1 )
    dev1 = depth_nonlinear_representation_model[ k ]
    pos2 = ( 255 * ( k + 1 ) ) / ( depth_nonlinear_representation_num + 1 )
    dev2 = depth_nonlinear_representation_model[ k + 1 ]
    x1 = pos1 − dev1
    y1 = pos1 + dev1
    x2 = pos2 − dev2
    y2 = pos2 + dev2
    for( x = max( x1, 0 ); x <= min( x2, 255 ); ++x )
        DepthLUT[ x ] = Clip3( 0, 255, Round( ( ( x − x1 ) * ( y2 − y1 ) ) ÷ ( x2 − x1 ) + y1 ) )
}"

wherein the number of piece-wise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity is indicated by a third syntax element in the DRI SEI message, and wherein the third syntax element is Exp-Golomb-coded using an unsigned integer.

"[0070] When describing H.264/AVC and HEVC as well as in example embodiments, the following descriptors may be used to specify the parsing process of each syntax element.
[0071] b(8): byte having any pattern of bit string (8 bits).
[0072] se(v): signed integer Exp-Golomb-coded syntax element with the left bit first.
[0073] u(n): unsigned integer using n bits. When n is "v" in the syntax table, the number of bits varies in a manner dependent on the value of other syntax elements. The parsing process for this descriptor is specified by n next bits from the bitstream interpreted as a binary representation of an unsigned integer with the most significant bit written first.
[0074] ue(v): unsigned integer Exp-Golomb-coded syntax element with the left bit first."

"[546]…depth_representaion_base_view_id[i] specifies the view identifier for the NAL unit of either base view which the disparity for coded depth frame of i-th view_id is derived from (depth_representation_type equal to 1 or 3) or base view which the Z-axis for the coded depth frame of i-th view_id is defined as the optical axis of (depth_representation_type equal to 0 or 2). depth_nonlinear_representation_num_minus1 + 2 specifies the number of piecewise linear segments for mapping of depth values to a scale that is uniformly quantized in terms of disparity..."

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAE N. NOH, whose telephone number is (571) 270-0686. The examiner can normally be reached Mon-Fri, 8:30 AM-5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571) 272-3922. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAE N NOH/
Primary Examiner, Art Unit 2481
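[Editor's note] The ue(v) descriptor cited throughout the claim mapping ([0070]-[0074]) denotes unsigned-integer Exp-Golomb coding with the left bit first. A minimal decoder sketch, assuming the standard Exp-Golomb construction (leading zero bits, a 1 bit, then that many info bits; `read_ue` is a hypothetical helper name):

```python
def read_ue(bits):
    # Parse one ue(v) element: count leading zero bits up to the first
    # 1 bit, then read that many info bits.
    # Value = 2**leading_zeros - 1 + info.
    it = iter(bits)
    leading_zeros = 0
    while next(it) == 0:
        leading_zeros += 1
    info = 0
    for _ in range(leading_zeros):
        info = (info << 1) | next(it)
    return (1 << leading_zeros) - 1 + info

# Codewords 1, 010, 011, 00100 decode to 0, 1, 2, 3.
assert [read_ue(cw) for cw in ([1], [0, 1, 0], [0, 1, 1], [0, 0, 1, 0, 0])] == [0, 1, 2, 3]
```

Because ue(v) has no fixed upper bound on the coded value, the claimed 0-65535 range is a constraint imposed by the SEI semantics rather than by the entropy coding itself.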

Prosecution Timeline

Dec 27, 2023
Application Filed
Apr 04, 2025
Non-Final Rejection — §102
Jul 14, 2025
Response Filed
Oct 17, 2025
Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604025
METHOD FOR VERIFYING IMAGE DATA ENCODED IN AN ENCODER UNIT
2y 5m to grant · Granted Apr 14, 2026
Patent 12593071
ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
2y 5m to grant · Granted Mar 31, 2026
Patent 12587679
LOW-LATENCY MACHINE LEARNING-BASED STEREO STREAMING
2y 5m to grant · Granted Mar 24, 2026
Patent 12574571
FRAME SELECTION FOR STREAMING APPLICATIONS
2y 5m to grant · Granted Mar 10, 2026
Patent 12574529
IMAGE ENCODING AND DECODING METHOD AND APPARATUS
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
76%
With Interview (-10.0%)
2y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 445 resolved cases by this examiner. Grant probability derived from career allow rate.
