Prosecution Insights
Last updated: April 19, 2026
Application No. 18/897,143

METHOD AND APPARATUS FOR ENCODING AND DECODING A LARGE FIELD OF VIEW VIDEO

Non-Final OA: §102, §103, §112
Filed
Sep 26, 2024
Examiner
HESS, MICHAEL J
Art Unit
2481
Tech Center
2400 — Computer Networks
Assignee
InterDigital Madison Patent Holdings, SAS
OA Round
1 (Non-Final)
Grant Probability: 44% (Moderate)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 52%

Examiner Intelligence

Career Allow Rate: 44% of resolved cases (183 granted / 418 resolved; -14.2% vs TC avg)
Interview Lift: +7.7% on resolved cases with interview (moderate lift)
Typical Timeline: 3y 1m average prosecution; 66 currently pending
Career History: 484 total applications across all art units

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 418 resolved cases

Office Action

Rejections under §102, §103, and §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner's Amendment

Claim 12. Cancelled.

Claim Rejections - 35 USC § 112(a)

The following is a quotation of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1–11 and 13–20 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description and enablement requirements. This rejection is about claim scope not being commensurate with Applicant's disclosure. Specifically, Applicant's claims recite an item of information associated with a coding module. Because this feature, as claimed, covers all possible associations between a piece of information and a coding module, the claims are quite broad and must be described and enabled to a corresponding degree of breadth. This is impermissible because the claims must be commensurate in scope with that enabled by the Specification. Sitrick v. Dreamworks, LLC, 516 F.3d 993, 999, 85 USPQ2d 1826, ____ (Fed. Cir. 2008) ("The scope of the claims must be less than or equal to the scope of the enablement to ensure that the public knowledge is enriched by the patent specification to a degree at least commensurate with the scope of the claims.") (quotation omitted). MPEP 2161.01(III).

Regarding written description, the claims must also be commensurate in scope with Applicant's disclosure. "[T]he description of one method for creating a seamless DWT does not entitle the inventor . . . to claim any and all means for achieving that objective." LizardTech, 424 F.3d at 1346, 76 USPQ2d at 1733. See MPEP 2161.01.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1–11 and 13–20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention. Specifically, the skilled artisan would not be reasonably certain what it means to claim an item of information associated with a coding module, and Examiner's review of Applicant's Specification did not yield any helpful guidance. Examiner recommends Applicant define within the claims what the association is or how the two entities are associated.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 20 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by a prior art DVD or similar. To be given patentable weight, the recording medium and the recited bitstream (i.e. descriptive material) must be in a functional relationship. When a claimed "computer-readable medium merely serves as a support for information or data, no functional relationship exists." MPEP § 2111.05(III). In this instance, because there is no patentable weight given to the non-functional descriptive material of the bitstream, a prior art DVD or similar manufacture reads on the claimed manufacture. Accordingly, claim 20 is unpatentable under 35 U.S.C. 102.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1–11 and 13–19 are rejected under 35 U.S.C. 103 as being unpatentable over Abbas (US 2017/0295356 A1) and Zhang (US 2014/0161186 A1). Examiner interprets Applicant's invention to be combining a cubic projection model with HEVC coding tools to code each of the six sides of the cube using conventional 3D-HEVC coding.
Applicant also rotates the cube map projection of some of the sides of the cube so that neighboring sides line up in the 2D plane. Applicant's Figs. 14A–16C demonstrate a cube map for 3D content. Abbas's paragraph [0111] teaches, "cube projections may be encoded using HEVC and/or other codecs." Abbas's Figs. 4A–4C teach rotating the sides of the cube map projection so that portions that would otherwise be adjacent in the 3D scene are likewise adjacent in the 2D plane. Examiner incorporates herein the findings and rationale provided in the Final Office Action dated 07/01/2024 and the Advisory Action dated 09/11/2024 for application no. 16/334,719.

This is a known problem and solution when projecting a 3D scene onto a 2D representation. Think of a world map projection (2D) meant to represent the earth (3D). As one can see from the below examples, policy choices are made for how to introduce discontinuities to maintain continuity elsewhere in the projection map.

[Four greyscale images of world map projections omitted.]

The skilled artisan knows the myriad projections to address continuities and discontinuities between 2D and 3D representations of data, and the cited prior art explains these concepts concretely with respect specifically to video and image coding using coding modules that rely on neighbors for predictions and filtering.

Claim 1 lists the same elements as claim 4, but in method form rather than apparatus form. Therefore, the rationale for the rejection of claim 4 applies to the instant claim.

Claim 2 lists the same elements as claim 5, but in method form rather than apparatus form. Therefore, the rationale for the rejection of claim 5 applies to the instant claim.

Claim 3 lists the same elements as claim 6, but in method form rather than apparatus form. Therefore, the rationale for the rejection of claim 6 applies to the instant claim.

Claim 4 lists the same elements as claim 13, but is drawn to the corresponding encoding apparatus rather than the decoding apparatus. Because encoders and decoders perform complementary or reciprocal operations, the rationale for the rejection of claim 13 applies to the instant claim.

Claim 5 lists the same elements as claim 14, but is drawn to the corresponding encoding apparatus rather than the decoding apparatus. Because encoders and decoders perform complementary or reciprocal operations, the rationale for the rejection of claim 14 applies to the instant claim.

Claim 6 lists the same elements as claim 19, but is drawn to the corresponding encoding apparatus rather than the decoding apparatus. Because encoders and decoders perform complementary or reciprocal operations, the rationale for the rejection of claim 19 applies to the instant claim.

Claim 7 lists the same elements as claim 13, but in method form rather than apparatus form. Therefore, the rationale for the rejection of claim 13 applies to the instant claim.

Claim 8 lists the same elements as claim 14, but in method form rather than apparatus form. Therefore, the rationale for the rejection of claim 14 applies to the instant claim.

Claim 9 lists the same elements as claim 15, but in method form rather than apparatus form. Therefore, the rationale for the rejection of claim 15 applies to the instant claim.

Claim 10 lists the same elements as claim 16, but in method form rather than apparatus form. Therefore, the rationale for the rejection of claim 16 applies to the instant claim.

Claim 11 lists the same elements as claim 17, but in method form rather than apparatus form.
Therefore, the rationale for the rejection of claim 17 applies to the instant claim.

Regarding claim 13, the combination of Abbas and Zhang teaches or suggests an apparatus for decoding a bitstream representative of a large field of view video, at least one picture of the large field of view video being represented as a surface, the surface being projected onto at least one 2D picture using a projection function (Examiner notes these projections from 3D space to 2D surfaces are well known in the art and have long been used in cartography; Examiner also notes Applicant's paragraphs [0062] and [0083] explain the projection is a cube projection yielding six sides; Abbas, Fig. 3: like Applicant's Figs. 14A and 14B, Abbas teaches the well-known technique of projecting a cube into a 3D spherical viewpoint model and then "unfolding the box" to yield a 2D model), the apparatus comprising one or more processors configured to, for a current region of the 2D picture:

determine, according to the projection function, at least one item of information associated with a decoding module, the at least one item of information defining a region of the 2D picture to be used as a neighboring region of the current region in replacement of an adjacent neighboring region of the current region in the 2D picture during decoding of blocks of the current region by the decoding module (Examiner notes original claims 16 and 17 explain the item of information is a rotation transform; Abbas, ¶¶ 0098–0099 and Figs. 4A and 4B: teach how region 410 is identified as a region of the 2D cube map that is used as a neighboring region to current region 404 so that object 420 lines up between the two regions; Abbas explains that region 442 is created by a transformation, i.e. a rotation, to coincide with the current region, and that standard prediction schemes like motion compensation or other video coding tools can be applied as in a traditional image; Abbas, ¶ 0102: teaches the transformation operations can be identified by one or more flags; Examiner notes the art uses flags to turn on and off a multitude of coding tools such that the skilled artisan would find it obvious to indicate whether a transform was used for a particular coding region; Examiner notes the skilled artisan recognizes that a table of index values mapped to transformation operations is how the art conventionally handles a group of coding options; specifically, the skilled artisan would find it obvious to recognize that rotation operations could comprise +90 degrees, -90 degrees, or 180 degrees, and that signaling an index value of 00, 01, or 10 (i.e. 0, 1, or 2) would indicate for the decoder which of those three rotations to apply when the decoder looks up the index in the mapping table; see also Abbas, claim 39: teaching an indication in the bitstream indicates the type of transformation to use);

determine, for a current block of the current region, a group of neighboring blocks using the at least one item of information (Abbas, ¶ 0100: explains that the coding is envisioned to include macroblocks, which the skilled artisan knows are generically called blocks by some in the art; Examiner finds that one skilled in the art would immediately know from the teachings of Abbas, especially in view of the description of using "encoding information (e.g. motion vectors)", that Abbas is suggesting any tools available in HEVC can be used once the rotation transform is applied to a non-neighboring region to create a neighboring region; Abbas's "encoding information" reads on Applicant's item of information that triggers determining a group of neighboring blocks; Examiner interprets Applicant's neighboring blocks consistent with Applicant's Figs. 26A and 26B; Abbas does not appear to provide details regarding how such neighboring blocks are used for predictions in the conventional video coding scheme; Examiner notes neighboring blocks are used for various processes including intra prediction, merge/skip modes, motion vector prediction, deblocking filtering, etc.; however, in the same field of endeavor, Zhang, ¶¶ 0026–0028 and Fig. 4: explain that 3D-HEVC utilizes conventional HEVC concepts but applies them across views, including coding tools like merge mode, skip mode, and AMVP; Zhang, ¶ 0010: teaches IDMVC, which uses disparity vectors from other views); and

decode the current block using the determined group of neighboring blocks, wherein the current block is decoded by processing the current block by the decoding module using the determined group of neighboring blocks (Abbas, e.g. ¶¶ 0100 and 0107: teach encoding the current blocks using neighboring blocks using encoding information; Examiner notes decoding is obvious in view of encoding in this art; see also Abbas, ¶¶ 0115 and 0116: teaching inter prediction between neighboring frames; Abbas does not appear to provide details regarding how such neighboring blocks are used for predictions in the conventional video coding scheme; Examiner notes neighboring blocks are used for various processes including intra prediction, merge/skip modes, motion vector prediction, deblocking filtering, etc.; however, in the same field of endeavor, Zhang, ¶¶ 0026–0028 and Fig. 4: explain that 3D-HEVC utilizes conventional HEVC concepts but applies them across views, including coding tools like merge mode, skip mode, and AMVP; Zhang, ¶ 0010: teaches IDMVC, which uses disparity vectors from other views; Zhang, ¶ 0236: teaches intra most probable mode; Examiner notes the process of deblocking is the process of smoothing the boundaries (edge pixels) between blocks, which uses both samples of the current block and neighboring blocks; Zhang, ¶ 0228: teaches deblocking filtering);

wherein the region defined by the at least one item of information depends on the decoding module (Abbas, e.g. ¶¶ 0100 and 0107: teach encoding the current blocks using neighboring blocks using encoding information; Examiner notes decoding is obvious in view of encoding in this art; see also Abbas, ¶¶ 0115 and 0116: teaching inter prediction between neighboring frames; this limitation is interpreted consistent with Applicant's Specification; substantial analysis of this feature was provided in the Advisory Action of the parent case, dated 09/11/2024; Examiner notes Applicant's Table 2 seems to describe that intra prediction always requires the rotational transformation if the regions are not adjacent (non-continuous); likewise, the prior art seems to clearly teach rotational transformation to maintain image continuities for the purpose of intra prediction (e.g. Abbas, Abstract); so, setting aside what may happen in the case of inter prediction (motion vectors), both Applicant's Table 2 and the prior art teach the same thing with respect to intra prediction; Applicant's Table 2 does support the concept of no rotations for motion vector prediction (i.e. inter prediction); Examiner finds there is commonality between Abbas's handling of non-adjacent regions for inter prediction and Applicant's handling; both recognize that 2D projections (e.g. the unfolded box) will create regions where there is no image data to reference, and that the solution to that problem is to move (geometrically transform) a region to be adjacent to the region undergoing coding, although perhaps without the need to also perform a rotation; while Applicant's Table 2 does list prior art encoding modules and relates them to regions, neighboring regions, and rotational transforms, neither the table nor any other part of the Specification appears to explain any particular relationship between any coding module and any rotation; the table explains intra prediction is the only encoding mode that requires there to be a rotation while any non-existent neighboring regions are replaced with real neighboring regions, which is obvious subject matter in view of Abbas's teachings; Examiner interprets the claimed item of information as a rotational transform or the like that is not performed by, or the result of the actions of, an encoding module per se; instead there is a non-specified dependency between a particular encoding module and a particular modification (rotation); Abbas's Fig. 8A shows a mapping between a 3D cube projection and a 2D representation wherein the skilled artisan can immediately recognize which border regions of the facets represent contiguous and non-contiguous portions of the laid-flat cube; therefore, the mapping of the 2D regions to 3D space and the geometric transforms necessary to move (translation and rotation) these image portions to create contiguous image regions to help support prior art coding tools such as inter prediction, intra prediction, deblocking filtering, etc. is taught by the prior art).
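The index-signaling scheme discussed for claim 13 (a signaled value of 0, 1, or 2, i.e. binary 00, 01, or 10, selecting a +90, -90, or 180 degree rotation from a mapping table) can be illustrated with a short Python sketch. The function and table names here are hypothetical illustrations, not drawn from Abbas or the application:

```python
# Hypothetical sketch of the index-to-rotation mapping discussed in the
# Office Action: a signaled index selects which rotation the decoder
# applies to a reference region before prediction. Names are illustrative.

def rot90_cw(region):
    """Rotate a 2D sample array 90 degrees clockwise."""
    return [list(row) for row in zip(*region[::-1])]

def rot90_ccw(region):
    """Rotate a 2D sample array 90 degrees counter-clockwise (-90)."""
    return [list(row) for row in zip(*region)][::-1]

def rot180(region):
    """Rotate a 2D sample array 180 degrees."""
    return [row[::-1] for row in region[::-1]]

# Index values 0, 1, 2 (binary 00, 01, 10) map to the three rotations
# mentioned in the Office Action: +90, -90, and 180 degrees.
ROTATION_TABLE = {0: rot90_cw, 1: rot90_ccw, 2: rot180}

def apply_signaled_rotation(index, region):
    """Look up the signaled index and transform the reference region."""
    return ROTATION_TABLE[index](region)

print(apply_signaled_rotation(0, [[1, 2], [3, 4]]))  # [[3, 1], [4, 2]]
```

A real codec would parse the index from the bitstream as a fixed-length or entropy-coded syntax element; here it is passed in directly for clarity.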
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Abbas with those of Zhang: because both references are drawn to the same field of endeavor, such that one wishing to practice the art of 3D video coding would be led to their teachings to build a workable 3D video coding solution; because each reference merely represents a description of the state of the art at the time of publication, such that the combination of features represents a mere combination of prior art elements, according to known methods (combining their teachings into software modules capable of achieving the described features), to yield a predictable result (a 3D coded video); and because Abbas itself explains that the skilled artisan would be motivated to code a 3D cube map using conventional HEVC coding tools, and Zhang merely describes those conventional tools developed by the 3D-HEVC standards body. For all these reasons, the skilled artisan would find their combination obvious. This rationale applies to all combinations of Abbas and Zhang used in this Office Action unless otherwise noted.
Regarding claim 14, the combination of Abbas and Zhang teaches or suggests the apparatus of claim 13, wherein the decoding module performs at least one of: determine a predicted block using at least one sample of a block belonging to the group of neighboring blocks (Zhang, ¶ 0026: teaches merge mode, skip mode, and AMVP, which are all tools that predict a current block from samples of neighboring blocks); determine a most probable mode list for encoding an intra prediction mode for the at least one current block (Zhang, ¶ 0236: teaches intra most probable mode); derive a motion vector predictor for encoding a motion vector for the at least one current block (Zhang, ¶ 0028: teaches advanced motion vector prediction (AMVP)); derive a motion vector for coding a motion vector for the at least one current block (Zhang, ¶ 0026: teaches merge mode, skip mode, and AMVP, which are all tools that derive a motion vector for coding a motion vector for a current block); deblocking filter between the at least one current block and a block belonging to the group of neighboring blocks (Examiner notes the process of deblocking is the process of smoothing the boundaries (edge pixels) between blocks, which uses both samples of the current block and neighboring blocks; Zhang, ¶ 0228: teaches deblocking filtering); or sample adaptive offset filter between at least one sample of the at least one current block and at least one sample of a block belonging to the group of neighboring blocks (Zhang, ¶ 0228: teaches SAO filtering).

Regarding claim 15, the combination of Abbas and Zhang teaches or suggests the apparatus of claim 13, wherein the at least one item of information is stored in a neighbor replacement table for the current region (Abbas explains that region 442 is created by a transformation, i.e. a rotation, to coincide with the current region; Abbas, ¶ 0102: teaches the transformation operations can be identified by one or more flags; Examiner notes the art uses flags to turn on and off a multitude of coding tools such that the skilled artisan would find it obvious to indicate whether a transform was used for a particular coding region; Examiner notes the skilled artisan recognizes that a table of index values mapped to transformation operations is how the art conventionally handles a group of coding options; specifically, the skilled artisan would find it obvious to recognize that rotation operations could comprise +90 degrees, -90 degrees, or 180 degrees, and that signaling an index value of 00, 01, or 10 (i.e. 0, 1, or 2) would indicate for the decoder which of those three rotations to apply when the decoder looks up the index in the mapping table; see also Abbas, claim 39: teaching an indication in the bitstream indicates the type of transformation to use; Examiner notes tables can either be preprogrammed or transmitted; Zhang, ¶ 0236: teaches tables can be transmitted in the bitstream; the prior art teaches Applicant's relationship between a 3D scene and a 2D representation of said scene and further explains how the different facets of a cube projection represent images spatially related to each other; for example, Abbas's Fig. 8A shows a mapping between a 3D cube projection and a 2D representation wherein the skilled artisan can immediately recognize which border regions of the facets represent contiguous and non-contiguous portions of the laid-flat cube; therefore, the mapping of the 2D regions to 3D space and the geometric transforms necessary to move (translation and rotation) these image portions to create contiguous image regions to help support prior art coding tools such as inter prediction, intra prediction, deblocking filtering, etc. is taught by the prior art; reading the claimed subject matter as a whole, this feature is merely putting these prior art spatial relationships between facets into a table; putting non-functional descriptive material already described in the prior art into a table does not represent a patentable contribution), and wherein the at least one item of information indicates at least one of: the adjacent neighboring region of the current region is a 2D spatially neighboring region of the current region that is available and the adjacent neighboring region is replaced by another region of the 2D picture defined according to the projection function (Abbas, Figs. 4B, 5B, 5C: illustrate replacing non-available spatially adjacent regions with neighboring regions); the adjacent neighboring region of the current region is a 2D spatially neighboring region of the current region that is not available and the adjacent neighboring region is replaced by another region of the 2D picture defined according to the projection function (Abbas, Figs. 4B, 5B, 5C: illustrate replacing non-available spatially adjacent regions with neighboring regions); or the adjacent neighboring region of the current region is a 2D spatially neighboring region of the current region that is available and the availability of the adjacent neighboring region of the current region is disabled for the current region (Abbas, Figs. 5B vs. 5C: illustrate that a blank region can be used rather than putting in a neighboring replacing region (see e.g. right of E); compare also 5A to 5B, for example).

Regarding claim 16, the combination of Abbas and Zhang teaches or suggests the apparatus of claim 13, wherein the at least one item of information is defined in association with a transformation to be applied to the other region of the 2D picture to be used as a neighboring region (see treatment of claim 13; Abbas, e.g. ¶¶ 0098 and 0100: teach the transformation is a rotation).
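Claims 15 and 18 concern storing the item of information in a "neighbor replacement table." Under the reading set out above, such a table could pair each face and side of the unfolded cube with the replacement region and the rotation to apply. The Python sketch below is a hypothetical illustration; the face names and table entries are invented for the example, not taken from the application or the cited prior art:

```python
# Hypothetical neighbor-replacement table: when the 2D-adjacent neighbor
# of a cube face is not its true 3D neighbor, the table supplies the
# replacement face and the rotation (in degrees) to apply to it.
# All entries here are illustrative only.

NEIGHBOR_REPLACEMENT = {
    # (current_face, side): (replacement_face, rotation_degrees)
    ("front", "top"): ("up", 0),
    ("right", "top"): ("up", -90),
    ("back", "top"): ("up", 180),
    ("left", "top"): ("up", 90),
}

def resolve_neighbor(face, side):
    """Return the replacement region and rotation for an unavailable
    2D neighbor; (None, 0) means the ordinary adjacent region is used."""
    return NEIGHBOR_REPLACEMENT.get((face, side), (None, 0))

print(resolve_neighbor("right", "top"))  # ('up', -90)
```

Such a table could be preprogrammed in the decoder or, as the Examiner notes with reference to Zhang ¶ 0236, transmitted in the bitstream.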
Regarding claim 17, the combination of Abbas and Zhang teaches or suggests the apparatus of claim 16, wherein the transformation is a rotation (Abbas, e.g. ¶¶ 0098 and 0100: teach the transformation is a rotation).

Regarding claim 18, the combination of Abbas and Zhang teaches or suggests the apparatus of claim 13, wherein the item of information is stored in a table (Abbas explains that region 442 is created by a transformation, i.e. a rotation, to coincide with the current region; Abbas, ¶ 0102: teaches the transformation operations can be identified by one or more flags; Examiner notes the art uses flags to turn on and off a multitude of coding tools such that the skilled artisan would find it obvious to indicate whether a transform was used for a particular coding region; Examiner notes the skilled artisan recognizes that a table of index values mapped to transformation operations is how the art conventionally handles a group of coding options; specifically, the skilled artisan would find it obvious to recognize that rotation operations could comprise +90 degrees, -90 degrees, or 180 degrees, and that signaling an index value of 00, 01, or 10 (i.e. 0, 1, or 2) would indicate for the decoder which of those three rotations to apply when the decoder looks up the index in the mapping table; see also Abbas, claim 39: teaching an indication in the bitstream indicates the type of transformation to use; Examiner notes tables can either be preprogrammed or transmitted; Zhang, ¶ 0236: teaches tables can be transmitted in the bitstream).

Regarding claim 19, the combination of Abbas and Zhang teaches or suggests the apparatus of claim 18, wherein the one or more processors are further configured to decode the table from a bitstream (see the treatment of claim 18; Examiner notes tables can either be preprogrammed or transmitted; Zhang, ¶ 0236: teaches tables can be transmitted in the bitstream; see also Abbas, claim 39: teaching an indication in the bitstream indicates the type of transformation to use).

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.

Kuzyakov and Pio, "Next-generation video encoding techniques for 360 video and VR," Video Engineering, Virtual Reality, Engineering at Meta, January 21, 2016. Accessible at https://engineering.fb.com/2016/01/21/virtual-reality/next-generation-video-encoding-techniques-for-360-video-and-vr/. The text and video on this page provide a helpful reference.

Raveendran (US 2017/0324951 A1) teaches a cube map projection and using conventional video coding tools to code the tiles of the resulting stitched canvas (e.g. ¶ 0051).

Boyce (US 2017/0347084 A1) teaches encoding regions of interest for a cube map projection using HEVC (e.g. ¶¶ 0004 and 0015).

Deng (US 2015/0172544 A1) teaches 3D video coding, projections, and rotation.

Lelescu (US 2004/0105597 A1) teaches projection onto a 2D plane of omni-directional video and using conventional coding techniques to code the images (e.g. Abstract and ¶ 0042).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael J. Hess, whose telephone number is (571) 270-7933. The examiner can normally be reached Mon-Fri, 9:00am-5:30pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8933.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL J HESS/
Examiner, Art Unit 2481

Prosecution Timeline

Sep 26, 2024
Application Filed
Dec 18, 2025
Non-Final Rejection — §102, §103, §112
Dec 18, 2025
Examiner Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12563195
Method And An Apparatus for Encoding and Decoding of Digital Image/Video Material
2y 5m to grant Granted Feb 24, 2026
Patent 12563208
PICTURE CODING METHOD, PICTURE CODING APPARATUS, PICTURE DECODING METHOD, AND PICTURE DECODING APPARATUS
2y 5m to grant Granted Feb 24, 2026
Patent 12556737
MOTION COMPENSATION FOR VIDEO ENCODING AND DECODING
2y 5m to grant Granted Feb 17, 2026
Patent 12556747
ARRAY BASED RESIDUAL CODING ON NON-DYADIC BLOCKS
2y 5m to grant Granted Feb 17, 2026
Patent 12549728
METHOD AND APPARATUS FOR CODING VIDEO DATA IN TRANSFORM-SKIP MODE
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 44%
With Interview: 52% (+7.7%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 418 resolved cases by this examiner. Grant probability derived from career allow rate.
