Prosecution Insights
Last updated: April 19, 2026
Application No. 18/981,947

MOVING IMAGE ENCODING DEVICE, MOVING IMAGE ENCODING METHOD, MOVING IMAGE ENCODING PROGRAM, MOVING IMAGE DECODING DEVICE, MOVING IMAGE DECODING METHOD, AND MOVING IMAGE DECODING PROGRAM

Non-Final OA (§102, §103)
Filed
Dec 16, 2024
Examiner
FINDLEY, CHRISTOPHER G
Art Unit
2482
Tech Center
2400 — Computer Networks
Assignee
Godo Kaisha IP Bridge 1
OA Round
1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 77% (580 granted / 752 resolved; +19.1% vs TC avg; above average)
Interview Lift: +11.8% (moderate, roughly +12%; based on resolved cases with and without interview)
Typical Timeline: 2y 7m average prosecution; 28 applications currently pending
Career History: 780 total applications across all art units

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§102: 25.5% (-14.5% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 752 resolved cases.

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

Patentable weight is given to data stored on a computer-readable medium when there exists a functional relationship between the data and its associated substrate. MPEP 2111.05 III. For example, if a claim is drawn to a computer-readable medium containing programming, a functional relationship exists if the programming "performs some function with respect to the computer with which it is associated." Id. However, if the claim recites that the computer-readable medium merely serves as a support for information or data, no functional relationship exists and the information or data is not given patentable weight. Id.

Claim 7 is directed to a non-transitory computer-readable medium storing a bitstream that is generated by an encoding method, wherein the method steps are listed in the claim. These steps are not performed by an intended computer, and the bitstream is not a form of programming that causes functions to be performed by an intended computer. This shows that the computer-readable medium merely serves as support for the bitstream and provides no functional relationship between the steps/elements that describe the generation of the bitstream and the intended computer system. Therefore, those claim elements are not given patentable weight.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 7 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Han et al. (US 20200169745 A1).

Re claim 7, Han discloses that video encoder 200 may generate a bitstream including encoded video data (Han: Fig. 1; paragraph [0039]). Source device 102 may then output the encoded video data via output interface 108 onto computer-readable medium 110 for reception and/or retrieval by, e.g., input interface 122 of destination device 116 (Han: Fig. 1; paragraph [0039]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (US 20200169745 A1) in view of Xu et al. (US 20200186819 A1).

Re claim 1, Han discloses a moving-picture coding device comprising:

a history-based predictor candidate list update unit configured to add motion information of a coded block to an end of a history-based motion vector predictor candidate list (Han: paragraph [0024]);

a spatial motion information candidate derivation unit configured to add spatial motion information candidates to a motion vector predictor candidate list, the spatial motion information candidates including a first spatial motion information candidate derived from motion information of a block neighboring a left side of a coding target block in a space domain and a second spatial motion information candidate derived from motion information of a block neighboring an upper side of the coding target block in a space domain (Han: paragraph [0022], the motion vector predictor list includes motion vector information of previously coded blocks, such as spatially neighboring blocks (e.g., blocks that neighbor the current block in the same picture as the current block) and collocated blocks (e.g., blocks that are located at particular locations in other pictures)); and

a history-based motion information candidate derivation unit configured to add a history-based motion information candidate to the motion vector predictor candidate list, the history-based motion information candidate derived from the history-based motion vector predictor candidate list (Han: paragraph [0025], the video encoder and the video decoder add HMVP candidates from the HMVP candidate history table into the motion vector predictor list),

wherein the history-based motion information candidate derivation unit derives the history-based motion information candidate by referring to motion information in the history-based motion vector predictor candidate list in order from a beginning, without making a comparison of the motion information in the history-based motion vector predictor candidate list with the motion information in the motion vector predictor candidate list (Han: paragraph [0084], in one or more examples, video encoder 200 and video decoder 300 may add HMVP candidates from the second subset of HMVP candidates without comparing the HMVP candidates from the second subset of HMVP candidates with entries in the motion vector predictor list).

Han does not specifically disclose wherein the spatial motion information candidate derivation unit adds the second spatial motion information candidate to the motion vector predictor candidate list when the second spatial motion information candidate is different from the first spatial motion information candidate.

However, Xu discloses that a number of redundancy checks are performed to construct the motion vector prediction list (Xu: paragraph [0045]). In an embodiment, the first number of redundancy checks does not include at least one of a comparison between (i) a possible spatial motion vector predictor and a first existing spatial motion vector predictor in the motion vector predictor list, (ii) a possible temporal motion vector predictor and a first existing temporal motion vector predictor in the motion vector predictor list, (iii) a possible history-based motion vector predictor and the first or another existing spatial motion vector predictor in the motion vector predictor list, and (iv) the possible history-based motion vector predictor and the first or another existing temporal motion vector predictor in the motion vector predictor list (Xu: paragraph [0046]).
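The list-construction scheme recited in claim 1 (a FIFO history update, a single redundancy check between the two spatial candidates, and history-based candidates appended without any comparison) can be sketched in Python. This is an illustrative reconstruction from the claim language quoted in the Office Action, not code from Han or Xu; the names `MotionInfo`, `update_hmvp_list`, and `build_mvp_list`, as well as the list sizes, are hypothetical.

```python
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class MotionInfo:
    """Hypothetical container: motion vector plus reference index."""
    mv: tuple       # (mvx, mvy)
    ref_idx: int

def update_hmvp_list(hmvp: deque, coded_block: MotionInfo) -> None:
    """Add the just-coded block's motion information to the END of the
    history-based MVP candidate list; deque(maxlen=...) drops the oldest
    entry automatically when the history is full."""
    hmvp.append(coded_block)

def build_mvp_list(left_cand, above_cand, hmvp, max_size=6):
    """Construct the motion vector predictor candidate list:
    1) add the first (left-neighboring) spatial candidate;
    2) add the second (above-neighboring) spatial candidate only if it
       differs from the first -- the lone spatial redundancy check;
    3) append history-based candidates in order from the BEGINNING of the
       history list, without comparing them to entries already in the list."""
    mvp = []
    if left_cand is not None:
        mvp.append(left_cand)
    if above_cand is not None and above_cand != left_cand:
        mvp.append(above_cand)
    for cand in hmvp:                # in order from the beginning
        if len(mvp) >= max_size:
            break
        mvp.append(cand)             # no redundancy check against mvp
    return mvp
```

Note the asymmetry the rejection turns on: the single comparison in step 2 (second spatial candidate vs. first) is the limitation Han is said to lack and Xu is cited to supply, while the unchecked history append in step 3 tracks Han's paragraph [0084].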
Thus, the comparison between (i) a possible spatial motion vector predictor and a first existing spatial motion vector predictor in the motion vector predictor list may be included in the first number of redundancy checks when one of the other redundancy checks is omitted.

Since Han and Xu both relate to constructing motion vector predictor candidate lists, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the redundancy checks of Xu with the system of Han in order to reduce the number of operations and simplify the construction process of the motion vector predictor list (Xu: paragraph [0129]).

Claim 2 recites the corresponding moving-picture coding method implemented by the coding device of claim 1. Therefore, arguments analogous to those presented for claim 1 are applicable to claim 2. Accordingly, claim 2 has been analyzed and rejected with respect to claim 1 above.

Claim 3 recites the corresponding decoding device for decoding the data encoded by the coding device of claim 1. Han discloses that its disclosure refers to a "coding" device as a device that performs coding (encoding and/or decoding) of data (Han: Fig. 1; paragraph [0038]). Thus, video encoder 200 and video decoder 300 represent examples of coding devices, in particular, a video encoder and a video decoder, respectively (Han: Fig. 1; paragraph [0038]). In some examples, devices 102, 116 may operate in a substantially symmetrical manner such that each of devices 102, 116 includes video encoding and decoding components (Han: Fig. 1; paragraph [0038]). Therefore, arguments analogous to those presented for claim 1 are applicable to claim 3. Accordingly, claim 3 has been analyzed and rejected with respect to claim 1 above.

Claim 4 recites the corresponding moving-picture decoding method implemented by the decoding device of claim 3. Therefore, arguments analogous to those presented for claim 3 are applicable to claim 4.
Accordingly, claim 4 has been analyzed and rejected with respect to claim 3 above.

Claim 5 recites the corresponding method of storing a bitstream on a computer-readable recording medium after encoding by the coding device of claim 1. Therefore, arguments analogous to those presented for claim 1 are applicable to claim 5. Additionally, Han discloses that video encoder 200 may generate a bitstream including encoded video data (Han: Fig. 1; paragraph [0039]). Source device 102 may then output the encoded video data via output interface 108 onto computer-readable medium 110 for reception and/or retrieval by, e.g., input interface 122 of destination device 116 (Han: Fig. 1; paragraph [0039]). Accordingly, claim 5 has been analyzed and rejected with respect to claim 1 above.

Claim 6 recites the corresponding method of transmitting a bitstream generated by the picture encoding device of claim 1. Therefore, arguments analogous to those presented for claim 1 are applicable to claim 6. Additionally, Han discloses that video encoder 200 may generate a bitstream including encoded video data (Han: Fig. 1; paragraph [0039]). Source device 102 may then output the encoded video data via output interface 108 onto computer-readable medium 110 for reception and/or retrieval by, e.g., input interface 122 of destination device 116 (Han: Fig. 1; paragraph [0039]). In one example, computer-readable medium 110 represents a communication medium to enable source device 102 to transmit encoded video data directly to destination device 116 in real-time, e.g., via a radio frequency network or computer-based network (Han: Fig. 1; paragraph [0041]). Accordingly, claim 6 has been analyzed and rejected with respect to claim 1 above.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER G FINDLEY, whose telephone number is (571) 270-1199. The examiner can normally be reached Monday-Friday, 9 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER G FINDLEY/
Primary Examiner, Art Unit 2482

Prosecution Timeline

Dec 16, 2024
Application Filed
Jan 10, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604018: CONVENTIONAL AND NEURAL NETWORK CODECS FOR RANDOM ACCESS VIDEO CODING (2y 5m to grant; granted Apr 14, 2026)
Patent 12590799: Systems and Methods for Estimating Depth from Projected Texture using Camera Arrays (2y 5m to grant; granted Mar 31, 2026)
Patent 12593031: IMAGE ENCODING/DECODING METHOD, DEVICE, AND RECORDING MEDIUM HAVING BITSTREAM STORED THEREIN (2y 5m to grant; granted Mar 31, 2026)
Patent 12574546: METHOD AND DEVICE FOR ENCODING OR DECODING IMAGE ON BASIS OF INTER MODE (2y 5m to grant; granted Mar 10, 2026)
Patent 12574504: IMAGE ENCODING/DECODING METHOD, DEVICE, AND RECORDING MEDIUM HAVING BITSTREAM STORED THEREIN (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77% (89% with interview, +11.8%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 752 resolved cases by this examiner. Grant probability derived from career allow rate.
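The headline projections follow from simple arithmetic on the examiner's career record cited above. A quick sanity check in Python, assuming (as the page's numbers suggest) that the interview lift is applied additively in percentage points:

```python
granted, resolved = 580, 752           # examiner's career record cited above
allow_rate = granted / resolved * 100  # career allow rate, in percent

print(round(allow_rate))                   # 77  (the baseline grant probability)

interview_lift = 11.8                      # percentage-point lift with interview
print(round(allow_rate + interview_lift))  # 89  (the "with interview" figure)
```

580/752 rounds to 77%, and adding the 11.8-point interview lift yields the 89% shown, confirming the page's figures are internally consistent.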
