Prosecution Insights
Last updated: April 19, 2026
Application No. 17/437,905

METHODS FOR RECOVERY POINT PROCESS FOR VIDEO CODING AND RELATED APPARATUS

Status: Non-Final OA (§103)
Filed: Sep 10, 2021
Examiner: ITSKOVICH, MIKHAIL
Art Unit: 2483
Tech Center: 2400 (Computer Networks)
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 7 (Non-Final)
Grant Probability: 35% (At Risk)
Predicted OA Rounds: 7-8
Predicted Time to Grant: 4y 0m
Grant Probability with Interview: 59%

Examiner Intelligence

Career Allow Rate: 35% (206 granted / 585 resolved), -22.8% vs Tech Center average
Interview Lift: +23.8% allowance lift for resolved cases with an interview
Typical Timeline: 4y 0m average prosecution; 62 applications currently pending
Career History: 647 total applications across all art units
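The headline figures above are simple functions of the raw career counts; a minimal sketch to reproduce them (the interview-adjusted rate is assumed to be the career rate plus the stated percentage-point lift):

```python
# Reproduce the examiner dashboard figures from the raw career counts.
granted, resolved = 206, 585
allow_rate = 100 * granted / resolved          # career allow rate, in percent
interview_lift = 23.8                          # percentage-point lift shown above
with_interview = allow_rate + interview_lift   # estimated rate with an interview

print(round(allow_rate, 1))    # ~35.2, displayed as "35%"
print(round(with_interview))   # 59
```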

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 53.5% (+13.5% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)

Tech Center averages are estimates; based on career data from 585 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/28/2026 has been entered.

Response to Arguments

Applicant's arguments filed on 01/28/2026 have been fully considered but they are not persuasive. Generally, Examiner suggests claiming a particular syntax that modifies the HEVC standard to indicate the additional information provided by the Applicant in a particular coded format. Applicant argues: "The present application states that previous standards that used recovery points used recovery point supplemental enhancement information (SEI) messages signaled in a SEI NAL unit. (See originally filed specification, paras. [0031]-[0032]). Additionally, the present application states that 'At the 13th meeting in Marrakesh in January 2019, input document JVET-M0529 proposed to indicate a recovery point using a NAL unit type instead of using an SEI message as in HEVC and AVC.'" Examiner notes that the claims do not recite features of the prior art HEVC or AVC standards as subject matter that is being modified by the claims. When examined in a broader context, the claimed subject matter appears to be obvious in view of the art cited below. Examiner suggests limiting the claims to the specific context and the specific data structures that provide an inventive context to the claimed subject matter.
Regarding the newly amended language, Applicant argues: "The present claims recite 'decoding from a picture header of a current picture in the bitstream a recovery point indication' … This is distinctly different than the video coding standards at the time of filing because the video coding standards did not include information …" Examiner again notes that the claims are not limited to features of "video coding standards at the time of filing." "Picture header" is terminology of a much older MPEG standard rather than of HEVC or VVC, which use the term picture parameter set. Relevantly, the claims are rejected over prior art that discusses coding under the MPEG standards. See updated reasons for rejection below.

Regarding the newly amended language, Applicant argues: "Uz fails to teach or suggest these features of amended Claim 1. Uz describes that in 'MPEG-2 encoding' ... However, nothing in Uz teaches or suggests that the 'format and header data' includes a picture header of a current picture that includes a recovery point indication that is decoded to obtain a decoded set of syntax elements that comprises indications of information for generating a set of unavailable reference pictures. … Chen fails to correct these deficiencies of Uz." Examiner notes that Uz and Chen provide examples of such indications in picture headers, as cited in the updated reasons for rejection below.

Applicant argues: "Chen describes with reference to Fig. 4, that an 'incoming MPEG transport stream is read,' the 'presence of a picture header is determined,' and a 'picture coding type is determined (325) (i.e., whether I, P or B frames are present).' (See Chen, para. [0037]). 'If the picture coding type detects the presence of P-frames, the P-frames are decoded (355) to recover I-slices as discussed above' … However, Chen fails to teach or suggest anywhere that a picture header includes a recovery point indication." Examiner disagrees.
Chen teaches P-frame designations of recovery points, which is consistent with the claimed requirements.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This paragraph describes the treatment of admitted prior art. In describing an invention, Applicant must inevitably reference that which is known in the art as the basis for the invention; however, it is important that the claims particularly point out and distinctly claim that which Applicant regards to be his own invention. See 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph. A statement by an applicant in the specification or made during prosecution identifying prior art is an admission which can be relied upon for both anticipation and obviousness determinations, regardless of whether the admitted prior art would otherwise qualify as prior art under the statutory categories of 35 U.S.C. 102. Riverwood Int'l Corp. v. R.A. Jones & Co., 324 F.3d 1346, 1354, 66 USPQ2d 1331, 1337 (Fed. Cir. 2003); Constant v. Advanced Micro-Devices Inc., 848 F.2d 1560, 1570, 7 USPQ2d 1057, 1063 (Fed. Cir. 1988). The examiner must determine whether the subject matter identified as prior art is applicant's own work, or the work of another. In the absence of another credible explanation, examiners should treat such subject matter as the work of another. MPEP 2129.

Claims 1, 4-5, 9, 14, 16, 18-19, 30-31, 37-39, 43-44 are rejected under 35 U.S.C. 103 as being unpatentable over US 6351538 to Uz ("Uz") in view of US 20010026677 to Chen ("Chen").

Regarding Claim 1: "A method of decoding a set of pictures from a video bitstream, wherein the set of coded pictures include coded picture data, the method comprising: decoding from a picture header of a current picture in the bitstream a recovery point indication to obtain a decoded set of syntax elements that comprises indications of (See "MPEG-2 [bitstream] encoding includes … Other format and header data is inserted at appropriate locations of the encoded data to help identify individual pictures, blocks, etc., as well as other specifiable parameters." Uz, Column 3, lines 30-34, and similarly in Chen, Paragraph 37. For example, "designating pictures as I, P or B. … By starting each group of pictures on an I picture it is possible to randomly begin decoding at any desired group without loss of picture quality," thus an I picture indication indicates one type of a recovery point. See Uz, Column 4, lines 45-55.
Also note that Specification Paragraph 32 indicates that this feature is from the HEVC and AVC standards.) that the current picture is a gradual decoding refresh, GDR, picture that begins the set of coded pictures in the video bitstream, (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, this GDR picture can be a P or a B picture (not an I picture) that contains recovery information. See Specification, Paragraphs 88, 169. Prior art designates a P or a B picture as GDR pictures from which the decoder can begin decoding the set of pictures, “using a technique called intra slice refresh.” See, Uz, Column 4, line 48 - Column 5, line 5. Also note that Specification Paragraphs 40-41 indicate that this feature is from previously published considerations of the HEVC and AVC standards. Similarly see Chen, Paragraph 40, providing an intermediate recovery method that designates P-frames (but not I or B frames) as GDR frames from which recovery can begin and provides a number of frames required to recover all I-slices of an I-frame, and statement of motivation below.) the GDR picture comprising a picture that is not an intra-coded random access point, IRAP, picture from where to begin a refresh of the decoding of the video bitstream, (Uz teaches an embodiment where a picture designated as a P or a B picture can be a gradual recovery picture: “The successive pictures are designated as a particular type based on a predefined pattern, e.g., IBBPBBPBBPBB. Note that each B and P picture relies on the data of a preceding picture. … In addition, or in the alternative, selected macroblocks are forced to be intracoded regardless of whether they are in P or B pictures … Over a sequence of pictures, each slice is intracoded once. 
This enables starting decoding at any random picture [such as P or B] in the sequence regardless of whether or not it is an I picture" thus a picture coded as a P or a B picture can be designated as a GDR picture based on the random point in the video where recovery may become required. See Uz, Column 4, line 48 - Column 5, line 5. Similarly see Chen, Paragraph 40 providing an intermediate recovery method that designates P-frames as GDR frames and a number of frames required to recover all I-slices of an I-frame, and statement of motivation below.) a position of a recovery point picture, associated with the GDR picture that ends the set of coded pictures, the recovery point picture comprising a picture in the video bitstream at which or from where the decoding of the video bitstream is fully refreshed; (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, "A recovery point picture may be the last picture in the recovery point period." See Specification, Paragraph 89 and original Claim 14. Prior art teaches this: "The successive pictures are designated as a particular type based on a predefined pattern, e.g., IBBPBBPBBPBB. Note that each B and P picture relies on the data of a preceding picture." Uz, Column 4, lines 47-51. In this case the group of coded pictures is specified as ending at the last B picture of the GOP, at which point all the recoverable data of the GOP has been recovered. Also note that Specification Paragraph 32 indicates that this feature is from the HEVC and AVC standards. In another embodiment, Uz, Column 5, lines 1-6 and Chen, Paragraph 40 provide an intermediate recovery method that designates P-frames as GDR frames and ending the recovery sequence after a particular number of frames required to recover all I-slices of an I-frame. See statement of motivation below.)
and information for generating a set of unavailable reference pictures; (For example, the information can be "called intra slice refresh, selected slices are forced to be intracoded in respective pictures … This enables starting decoding at any random picture in the sequence regardless of whether or not it is an I picture although, several pictures may need to be decoded before an intelligible picture is produced." See Uz, Column 4, lines 47-55 and Column 5, lines 1-5. Thus, the information of the missing I picture or any other missing reference picture can be generated using the information in the intra refresh slices. See similarly in Chen, Fig. 1 and statement of motivation below. Also note that Specification Paragraph 32 indicates that this feature is from the HEVC and AVC standards.) deriving the information for generating the set of unavailable reference pictures from the decoded set of syntax elements before any of the coded picture data included in the set of coded pictures is parsed, (For example, "Other format and header data is inserted at appropriate locations of the encoded data to help identify individual pictures, blocks, etc., as well as other specifiable parameters. … In decoding the encoded video signal, the header and control information is removed from the encoded video signal and used to determine [derive] how to decode the encoded picture data." Thus, all necessary decoding information is derived based on decoded header information before the picture data is decoded according to that information. Uz, Column 3, lines 31-34 and Column 4, lines 27-30. Also note that Specification Paragraphs 32-41 indicate that this feature is from previously published considerations of the HEVC and AVC standards.)
wherein the set of unavailable reference pictures is not included in the set of coded pictures and at least one picture in the set of coded pictures references a picture in the set of unavailable reference pictures; ("The successive pictures are designated as a particular type based on a predefined pattern, e.g., IBBPBBPBBPBB … This enables starting decoding at any random picture in the sequence regardless of whether or not it is an I picture" thus not including the I (reference) picture and potentially some P reference pictures in the set of coded pictures that start decoding at a random point in the sequence that starts after these pictures. See Uz, Column 4, lines 48-55, Column 5, lines 1-5. Similarly see Chen, Paragraph 40. Also note that Specification Paragraphs 32-3 indicate that this feature is from previously published considerations of the HEVC and AVC standards.) generating the set of unavailable reference pictures based on the derived information ("Over a sequence of pictures, each slice is intracoded once. This enables starting decoding at any random picture in the sequence regardless of whether or not it is an I picture although, several pictures may need to be decoded" Uz, Column 4, line 59 – Column 5, line 5. Also note that Specification Paragraphs 32-41 indicate that this feature is from previously published considerations of the HEVC and AVC standards. Uz does not explicitly state that this process recovers the reference pictures; however, in the context of MPEG encoding, it is understood that reference pictures (I, P) are characterized by I-slices that are referenced by the other pictures in the group, and that recovering all the I-slices recovers all the reference pictures. Chen clarifies this point in the context of MPEG encoding and in the context of decoding video without I-pictures: "If the picture coding type detects the presence of P-frames, the P-frames are decoded (355) to recover I-slices as discussed above.
… the process is repeated until n is greater than the refresh rate N (until a complete refresh cycle has passed and all I-slices for a complete I-frame have been recovered)." Chen, Paragraph 40. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Uz to recover reference pictures such as I-frames, as taught in Chen, in order to enable playback of a video signal from a random picture in the sequence. Uz, Column 5, lines 2-3 and Chen, Paragraph 33.) decoding the set of coded pictures after generating the set of unavailable reference pictures." ("This enables starting decoding at any random picture in the sequence regardless of whether or not it is an I picture although, several pictures may need to be decoded before an intelligible picture is produced" Uz, Column 4, line 59 – Column 5, line 5. Similarly in Chen, Paragraph 375 and statement of motivation above. Also note that Specification Paragraphs 40-41 indicate that this feature is from previously published considerations of the HEVC and AVC standards.)

Regarding Claim 4: "The method of Claim 1, wherein the generating is done before any of the coded picture data included in the set of coded pictures is parsed." (See similar treatment of deriving in Claim 1. Further, prior art teaches this feature: "Using a technique called intra slice refresh, selected slices are forced to be intracoded in respective pictures, … This enables starting decoding at any random picture in the sequence regardless of whether or not it is an I picture although, several pictures may need to be decoded before an intelligible picture is produced [generated]" and thus before any of the other dependent frames can be fully parsed and generated and before the next I-frame is generated. Uz, Column 4, line 59 – Column 5, line 5.)
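The decode-side process mapped to Claim 1 above can be pictured as a short sketch. This is hypothetical illustration only: the dictionary keys (gdr_pic_flag, recovery_poc_cnt, unavailable_poc_deltas, luma dimensions) are assumptions loosely modeled on VVC-style picture-header signaling, not syntax quoted from the claims or the cited art.

```python
# Hypothetical sketch of a GDR-style recovery point process: decode the
# recovery point indication from the picture header, derive and generate
# the unavailable reference pictures BEFORE parsing coded picture data,
# then decode the set of pictures up to the recovery point.

def decode_from_gdr(picture_header, coded_pictures):
    # Step 1: recovery point indication from the GDR picture's header.
    assert picture_header["gdr_pic_flag"], "not a GDR picture"
    recovery_poc = picture_header["poc"] + picture_header["recovery_poc_cnt"]

    # Step 2: derive info for the unavailable reference pictures from the
    # decoded syntax elements alone (POC deltas, picture dimensions).
    unavailable = [
        {"poc": picture_header["poc"] + d, "generated": True,
         "width": picture_header["luma_width"],
         "height": picture_header["luma_height"]}
        for d in picture_header["unavailable_poc_deltas"]
    ]

    # Step 3: generate the unavailable references into the decoded picture
    # buffer, then Step 4: decode the coded pictures that may reference them.
    dpb = {p["poc"]: p for p in unavailable}
    refreshed = []
    for pic in coded_pictures:
        dpb[pic["poc"]] = pic
        if pic["poc"] >= recovery_poc:
            refreshed.append(pic["poc"])  # fully refreshed from here on
    return recovery_poc, sorted(dpb), refreshed
```

For a GDR picture at POC 10 with recovery_poc_cnt 3 and deltas [-1, -2], the sketch generates placeholder references at POCs 8 and 9 and reports POC 13 as the recovery point.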
Regarding Claim 5: "The method of Claim 1, wherein the GDR picture includes a block that is not an intra coded block." (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, P and B pictures include such blocks and I pictures do not. Prior art enables starting decoding at a picture that is not an I picture: Uz, Column 4, line 59 – Column 5, line 5. Also see Chen, Paragraph 40 and statement of motivation above.)

Regarding Claim 9: "The method of Claim 1, wherein the set of coded pictures includes references to the set of unavailable reference pictures." (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, in a group of pictures (GOP), P and B pictures reference other pictures and I pictures do not. Prior art enables starting decoding at any picture in a GOP even when I pictures and other pictures that have the reference I-slices are not available: Uz, Column 4, line 59 – Column 5, line 5. Also see Chen, Paragraph 40 and statement of motivation above.)

Regarding Claim 14: "The method of Claim 1 further comprising, generating the set of unavailable reference pictures, decoding the GDR picture in the set of coded pictures and all other pictures in the set of coded pictures following the first GDR, up to and including the recovery point picture, thereby to refresh fully a video carried in the video bitstream." ("The first picture of each group of pictures is designated an I picture. The successive pictures are designated as a particular type based on a predefined pattern, e.g., IBBPBBPBBPBB. Note that each B and P picture relies on the data of a preceding picture for use in motion compensated decoding of the picture.
By starting each group of pictures on an I picture, it is possible to randomly begin decoding at any desired group without loss of picture quality," thus decoding all of the GOP pictures before and including the recovery point picture enables decoding of all of the GOP pictures that follow after the recovery point picture. See Uz, Column 4, lines 47-55.)

Regarding Claim 16: "The method of Claim 1, wherein deriving information for generating the set of unavailable reference pictures from the decoded set of syntax elements comprises at least one of: ... deriving at least one parameter set identifier that identifies a parameter set that is active for the GDR picture in the set of coded pictures; ... deriving a number of unavailable reference pictures in the set of unavailable reference pictures to generate; ... deriving a picture order count value for each picture in the set of the unavailable reference pictures and assigning a derived picture order count value to each of the associated pictures in the set of unavailable reference pictures; ... deriving a picture order count value for the GDR picture in the set of pictures, deriving delta values for a delta picture order count for each of the pictures in the set of unavailable reference pictures relative to the picture order count value for the GDR picture in the set of pictures, and using the derived delta values to calculate a picture order count value for each of the pictures in the set of unavailable reference pictures and assigning the calculated picture order count values to each of the associated unavailable reference pictures; ... deriving a picture marking status for each picture in the set of unavailable reference pictures, wherein the picture marking status is at least one of: a long-term picture, a short-term picture, and a mark each picture in the set of unavailable reference pictures with a derived marking status; ...
deriving a luma width value and a luma height value and generating each picture in the set of unavailable reference pictures having the luma width value and the luma height value; ... deriving a luma width value and a luma height value for each picture in the set of unavailable pictures and generating each picture in the set of unavailable reference pictures to have a width and a height of the associated derived luma width and height value; ... deriving a number of components of the unavailable reference pictures comprising a relative dimension value for each of the components and a bit-depth value for each of the components; and generating each picture in the set of unavailable reference pictures having the number of components, the relative dimensions and the bit-depth according to the derived values; ... deriving a picture type value for each picture in the set of unavailable reference pictures and assigning the derived picture type values to each of an associated unavailable reference picture in the set of unavailable reference pictures; ... deriving a temporal identity value for each of the pictures in the set of unavailable reference pictures and assigning the derived temporal identity values to each of an associated unavailable reference picture in the set of unavailable reference pictures; ... deriving a layer identity value for each of the pictures in the set of unavailable reference pictures and assigning a derived layer identity value to each of an associated unavailable reference picture in the set of unavailable reference pictures; ... deriving at least one picture parameter set identifier for each of the pictures in the set of unavailable reference pictures and assigning the derived at least one picture parameter set identifier values to each of an associated unavailable reference picture in the set of unavailable reference pictures; and ... 
deriving a block size comprising a size of a coding tree unit, generating each picture in the set of unavailable reference pictures to have that block size, and assigning the block size to each of an unavailable reference picture in the set of unavailable reference pictures." (For example, "Other format and header data is inserted at appropriate locations of the encoded data to help identify individual pictures, blocks, etc., as well as other specifiable parameters." Uz, Column 3, lines 31-34 and Column 4, lines 27-30. Similarly see Chen, Paragraph 37 and statement of motivation in Claim 1.)

Regarding Claim 18: "The method of Claim 1, wherein generating the set of unavailable reference pictures comprises allocating or assigning memory to store values for each of the pictures in the set of unavailable reference pictures, wherein the stored values includes sample values for each component of each picture in the set of unavailable reference pictures." ("the P-frames are decoded (355) to recover I-slices as discussed above. The counter is incremented by n=n+1 (360). The refresh rate N is compared to n (365). If n is less than the refresh rate, the P-picture stream is stored 370 and the process is repeated until n is greater than the refresh rate N (until a complete refresh cycle has passed and all I-slices for a complete I-frame have been recovered). Once n is determined to be greater than n, the complete I-frame is encoded and placed into the stream in place of a P-frame (375)." Chen, Paragraph 40 and statement of motivation in Claim 1.)
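The allocation step recited in Claim 18 (allocating memory to hold sample values for each component of each generated picture) can be sketched minimally. The 4:2:0 chroma subsampling, three-component layout, and mid-gray default sample value are illustrative assumptions, not limitations from the claim.

```python
# Minimal sketch of generating one "unavailable" reference picture:
# allocate sample storage per component from derived width/height and
# bit depth, initialized to a mid-level sample value.
# Assumptions (illustrative only): 3 components, 4:2:0 subsampling,
# mid-gray initialization.

def generate_unavailable_picture(luma_w, luma_h, bit_depth, num_components=3):
    mid = 1 << (bit_depth - 1)  # mid-level sample value, e.g. 128 for 8-bit
    planes = []
    for c in range(num_components):
        # chroma planes at half resolution under the 4:2:0 assumption
        w, h = (luma_w, luma_h) if c == 0 else (luma_w // 2, luma_h // 2)
        planes.append([[mid] * w for _ in range(h)])
    return planes
```

A 16x8 8-bit picture thus gets one 16x8 luma plane and two 8x4 chroma planes, every sample set to 128 until real reference data is recovered.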
Regarding Claim 19: "The method of Claim 1, wherein the set of unavailable reference pictures comprises at least one unavailable reference picture and wherein generating a set of unavailable reference pictures comprises generating each of the pictures in the set of unavailable reference pictures, and (See recovering all reference picture information in Uz, Column 4, line 59 – Column 5, line 5 and Chen Paragraph 40 and treatment of recovering unavailable reference pictures in Claim 1.) wherein generating each of the pictures in the set of unavailable reference pictures comprises at least one of: ... setting a number of components for the picture in the set of unavailable reference pictures; ... setting a width and a height for each component of the picture in the set of unavailable reference pictures; ... setting a sample bit depth for each component of the picture in the set of unavailable reference pictures; ... setting a sample value for each sample in the picture in the set of unavailable reference pictures; ... assigning a PPS identifier to the picture in the set of unavailable reference pictures; ... assigning a SPS identifier to the picture in the set of unavailable reference pictures; ... assigning an identifier to the picture in the set of unavailable reference pictures, wherein the identifier comprises a picture order count value; ... marking the picture in the set of unavailable reference pictures as at least one of: a short-term picture, a long-term picture, and an unused for prediction; ... assigning a picture type to the picture in the set of unavailable reference pictures; ... assigning a temporal ID to the picture in the set of unavailable reference pictures; ... assigning a layer ID to the picture in the set of unavailable reference pictures; ... assigning a block size for each component of the picture in the set of unavailable reference pictures; and ...
marking the picture in the set of unavailable reference pictures as initialized." (For example, "a short-term picture, a long-term picture, and an unused for prediction" correspond to P, I, and B-pictures respectively, and these are standard types of pictures found in the first part of a GOP [that may become unavailable for reference] in an MPEG type encoding. See Uz, Column 4, lines 45-55.)

Regarding Claim 30: "The method of Claim 1, further comprising performing a random access operation at the GDR picture." (For example "starting decoding at any random picture in the sequence" Uz, Column 5, lines 2-3.)

Regarding Claim 31: "The method of Claim 1, wherein the recovery point indication and the GDR picture in the set of coded pictures belong to the same access unit, AU." (For example, where the access unit is a GOP, the starting picture can be in the same GOP as the first picture of the GOP or the I-picture of the GOP. Uz, Column 4, line 45 - Column 5, line 5.)

Regarding Claim 37: "A decoder configured to operate to decode a set of pictures from a video bitstream, comprising: a processor; and memory coupled with the processor, and storing instructions that cause the processor to implement the method of Claim 1." ("An illustrative conditional access decoding system 200 includes at least a video decoder 216 … illustratively implemented using one or more suitably programmed processors such as an AViA™ or a ZiVA™ video decoder processor …" Uz, Column 8, lines 59-67.)
Regarding Claim 38: "A computer program comprising program code to be executed by a processor of a decoder configured to operate to decode a set of coded pictures from a video bitstream, whereby execution of the program code causes the decoder to perform operations according to Claim 1." ("An illustrative conditional access decoding system 200 includes at least a video decoder 216 … illustratively implemented using one or more suitably programmed processors such as an AViA™ or a ZiVA™ video decoder processor …" Uz, Column 8, lines 59-67.)

Regarding Claim 39: "A method of encoding a recovery point indication into a video bitstream, the method comprising: (See reasons for rejection in Claim 1 and specific citations below.) encoding a first set of pictures to the video bitstream; (For example, "The first picture of each group of pictures is designated an I picture. The successive pictures are designated as a particular type based on a predefined pattern, e.g., IBBPBBPBBPBB." Uz, Column 4, lines 47-50.) determining a set of reference pictures that would be unavailable to a decoder if decoding started in the video bitstream after the first set of coded pictures; ("For example, one or more slices in non-contiguous macroblock rows may be selected in a first picture for intracoding. In the next picture, an equal number of slices offset by one macroblock row from the respective slices in the previous picture may be designated for intracoding, and so on. Over a sequence of pictures, each slice is intracoded once." Uz, Column 4, line 64 – Column 5, line 2. For example, each picture has a known set of preceding pictures in the GOP: "The first picture of each group of pictures is designated an I picture.
The successive pictures are designated as a particular type based on a predefined pattern, e.g., IBBPBBPBBPBB." Uz, Column 4, lines 47-50.) encoding a recovery point indication into a picture header of a gradual decoding refresh, GDR, picture that begins a second set of pictures that follows the first set of pictures, … wherein the recovery point indication includes a set of syntax elements indicating a position of a recovery point picture associated with the GDR picture, comprising a picture in the video bitstream at which or from where a decoding of the video bitstream would be fully refreshed, and information for the set of reference pictures that would be unavailable, ("format and header data is inserted at appropriate locations of the encoded data to help identify individual pictures, … MPEG-2 in designating pictures as I, P or B. … selected macroblocks are forced to be intracoded regardless of whether they are in P or B pictures and regardless of whether or not there is an adequate prediction macroblock therefor," thus P or B indications in the picture header designate GDR pictures. Uz, Column 4, lines 45-60. See full reason for rejection in Claim 1.) thereby providing information to enable a decoder to generate a picture in the set of unavailable reference pictures before parsing the second set of pictures when encoded as a second set of coded pictures in the bitstream, the second set of coded pictures including at least one coded picture that references a picture in the first set of pictures; and ("The first picture of each group of pictures is designated an I picture. The successive pictures are designated as a particular type based on a predefined pattern, e.g., IBBPBBPBBPBB.
Note that each B and P picture relies on the data of a preceding picture … starting decoding at any random picture in the sequence regardless of whether or not it is an I picture” thus being able to decode pictures after the start picture while the previous reference pictures are unavailable. Uz, Column 4, line 64 – Column 5, line 2.) wherein GDR picture comprises a picture that is not an intra-coded random access point, IRAP, picture in the video bitstream from where to begin a refresh of the decoding; and (See reasons for rejection in Claim 1.) information for generating a set of unavailable reference pictures; and (For example, the information can be “called intra slice refresh, selected slices are forced to be intracoded in respective pictures … This enables starting decoding at any random picture in the sequence regardless of whether or not it is an I picture although, several pictures may need to be decoded before an intelligible picture is produced.” See Uz, Column 4 lines 47-55 and Column 5, lines 1-5. Thus, the information of the missing I picture or any other missing reference picture can be generated using the information in the intra refresh slices. See similarly in Chen, Fig. 1 and statement of motivation below. Also note that Specification Paragraph 32 indicates that this feature is from the HEVC and AVC standards.) encoding the second set of pictures into the video bitstream.” (“The first picture of each group of pictures is designated an I picture. The successive pictures are designated as a particular type based on a predefined pattern, e.g., IBBPBBPBBPBB. Note that each B and P picture relies on the data of a preceding picture … starting decoding at any random picture in the sequence regardless of whether or not it is an I picture” thus the GOP encodes the reference pictures and the latter dependent pictures embodying the second set of pictures. Uz, Column 4, line 64 – Column 5, line 2.) 
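The intra slice refresh scheme the rejection quotes from Uz (a different slice forced to be intra-coded in each successive picture, so that every slice is refreshed once per cycle and decoding can start at any picture) can be sketched as follows. This is a minimal illustrative model only, not the claimed method or Uz's exact implementation; the function names and the uniform one-slice-per-picture rotation are assumptions for illustration.

```python
# Illustrative sketch (assumption: simplified model of Uz's "intra slice
# refresh", not the claimed invention): each picture forces one slice to be
# intra-coded, rotating by one slice per picture, so a full cycle of
# num_slices pictures refreshes every slice at least once.

def intra_refresh_schedule(num_slices: int, num_pictures: int) -> list[list[bool]]:
    """For each picture, mark which slices are forced to be intra-coded."""
    schedule = []
    for pic in range(num_pictures):
        # Slice index advances by one per picture and wraps around.
        forced = [slice_idx == pic % num_slices for slice_idx in range(num_slices)]
        schedule.append(forced)
    return schedule

def recovery_point_offset(num_slices: int) -> int:
    """Pictures to decode after a random-access start before every slice has
    been refreshed (the "recovery point"); the same from any start picture."""
    return num_slices  # one full rotation refreshes all slices

schedule = intra_refresh_schedule(num_slices=4, num_pictures=8)
assert schedule[0] == [True, False, False, False]   # picture 0 refreshes slice 0
assert schedule[5] == [False, True, False, False]   # picture 5 refreshes slice 1
assert recovery_point_offset(num_slices=4) == 4
```

After `recovery_point_offset` pictures, every region of the frame has been intra-coded at least once, which is the sense in which “several pictures may need to be decoded before an intelligible picture is produced.”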
Regarding Claim 43: “An encoder configured to operate to encode into a video bitstream a recovery point indication including information to enable a decoder to generate a picture in a set of unavailable reference pictures, comprising: a processor; and memory coupled with the processor and storing instructions that cause the processor to implement the method of Claim 39.” (See reasons for rejection in Claim 39 and “using one or more suitably programmed processors such as an AViA™ or a ZiVA™ video decoder processor, or a DV Expert™ video encoder processor, …” Uz, Column 8, lines 59-67.)

Claim 44 is rejected for reasons stated for Claims 43 and 38.

Claims 3, 17, 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 6351538 to Uz (“Uz”) in view of US 20010026677 to Chen (“Chen”) and in view of US 20130235152 to Hannuksela (“Hannuksela”).

Regarding Claim 3: “The method of Claim 1,” Uz and Chen do not teach “wherein the coded picture data includes all video coding layer, VCL, network abstraction layer, NAL, units included in the set of pictures in the video bitstream.” Hannuksela teaches the above claim feature in the context of encoding video under video coding standards such as MPEG: “NAL units can be categorized into Video Coding Layer (VCL) NAL units and non-VCL NAL units. VCL NAL units are either coded slice NAL units, coded slice data partition NAL units, or VCL prefix NAL units.” Hannuksela, Paragraph 117. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Uz and Chen to PERFORM_FUNCTION as taught in Hannuksela, in order to encode the video data into data packets to be transmitted over the network. See Hannuksela, Paragraph 117.
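The VCL / non-VCL categorization quoted from Hannuksela can be illustrated with HEVC-style NAL unit type numbering, where `nal_unit_type` values 0-31 are VCL (coded slice data) and 32-63 are non-VCL (parameter sets, SEI, etc.). A minimal sketch assuming HEVC numbering; the helper name is illustrative, and Hannuksela's own description is standard-generic.

```python
# Sketch of the VCL / non-VCL NAL unit split, assuming HEVC's numbering:
# nal_unit_type 0..31 carry coded slice data (VCL); 32..63 carry
# parameter sets, SEI messages, and other metadata (non-VCL).

VPS_NUT, SPS_NUT, PPS_NUT, PREFIX_SEI_NUT = 32, 33, 34, 39  # HEVC non-VCL types
IDR_W_RADL = 19                                             # HEVC VCL type (IDR slice)

def is_vcl_nal_unit(nal_unit_type: int) -> bool:
    """HEVC: VCL NAL unit types occupy the range 0..31."""
    return 0 <= nal_unit_type <= 31

assert is_vcl_nal_unit(IDR_W_RADL)           # coded IDR slice -> VCL
assert not is_vcl_nal_unit(SPS_NUT)          # sequence parameter set -> non-VCL
assert not is_vcl_nal_unit(PREFIX_SEI_NUT)   # SEI message -> non-VCL
```

Under this split, a recovery point SEI message (the AVC/HEVC mechanism the application distinguishes) travels in a non-VCL NAL unit, while the coded pictures themselves are VCL NAL units.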
Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.

Regarding Claim 17: “The method of Claim 1, wherein the decoded set of syntax elements is decoded from a recovery point indication in a non-video coding layer (non-VCL) network abstraction layer (NAL).” (Claim 1 indicates using syntax information in headers and supplemental information in other portions of the video. Further, Hannuksela indicates that this information can be coded in “A non-VCL NAL unit may be of one of the following types: a sequence parameter set, a picture parameter set [header information], a supplemental enhancement information (SEI) NAL unit … Parameter sets may be needed for the reconstruction of decoded pictures …” Hannuksela, Paragraph 118, and statement of motivation in Claim 3.)

Regarding Claim 20: “The method of Claim 1, wherein the recovery point indication is decoded from a non-video coding layer non-VCL NAL unit including a NAL unit type syntax element,” (Claim 1 indicates using syntax information in headers and supplemental information in other portions of the video. Further, Hannuksela indicates that this information can be coded in “A non-VCL NAL unit may be of one of the following types: a sequence parameter set, a picture parameter set [header information], a supplemental enhancement information (SEI) NAL unit … Parameter sets may be needed for the reconstruction of decoded pictures …” Hannuksela, Paragraph 118, and another embodiment in Paragraph 121. See statement of motivation in Claim 3.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIKHAIL ITSKOVICH, whose telephone number is (571) 270-7940. The examiner can normally be reached Mon. - Thu., 9am - 8pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MIKHAIL ITSKOVICH/
Primary Examiner, Art Unit 2483

Prosecution Timeline

Sep 10, 2021 - Application Filed
Sep 23, 2023 - Non-Final Rejection (§103)
Dec 28, 2023 - Response Filed
Mar 09, 2024 - Final Rejection (§103)
May 08, 2024 - Response after Non-Final Action
May 18, 2024 - Response after Non-Final Action
Jun 12, 2024 - Request for Continued Examination
Jun 17, 2024 - Response after Non-Final Action
Jul 13, 2024 - Non-Final Rejection (§103)
Oct 17, 2024 - Response Filed
Jan 16, 2025 - Final Rejection (§103)
Mar 21, 2025 - Response after Non-Final Action
Apr 11, 2025 - Request for Continued Examination
Apr 21, 2025 - Response after Non-Final Action
May 03, 2025 - Non-Final Rejection (§103)
Aug 05, 2025 - Response Filed
Nov 03, 2025 - Final Rejection (§103)
Jan 05, 2026 - Response after Non-Final Action
Jan 28, 2026 - Request for Continued Examination
Feb 01, 2026 - Response after Non-Final Action
Feb 07, 2026 - Non-Final Rejection (§103)
Mar 16, 2026 - Interview Requested
Apr 07, 2026 - Applicant Interview (Telephonic)
Apr 07, 2026 - Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548733 - Automating cryo-electron microscopy data collection (granted Feb 10, 2026; 2y 5m to grant)
Patent 12489911 - Image coding method, image decoding method, image coding apparatus, receiving apparatus, and transmitting apparatus (granted Dec 02, 2025; 2y 5m to grant)
Patent 12477146 - Encoding and decoding method, device and apparatus (granted Nov 18, 2025; 2y 5m to grant)
Patent 12452404 - Method for determining specific linear model and video processing device (granted Oct 21, 2025; 2y 5m to grant)
Patent 12432328 - System and method for rendering three-dimensional image content (granted Sep 30, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 35%
With Interview: 59% (+23.8%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
