Prosecution Insights
Last updated: April 19, 2026
Application No. 17/590,396

VISUAL EFFECTS PROCESSING FRAMEWORK

Status: Final Rejection (§103)
Filed: Feb 01, 2022
Examiner: LU, ZHIYU
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Netflix Inc.
OA Round: 4 (Final)

Grant Probability: 49% (Moderate)
Expected OA Rounds: 5-6
Projected Time to Grant: 3y 8m
Grant Probability With Interview: 63%

Examiner Intelligence

Career Allow Rate: 49% (374 granted / 759 resolved; -12.7% vs TC avg)
Interview Lift: +13.9% on resolved cases with interview (moderate, ~+14%)
Typical Timeline: 3y 8m average prosecution; 57 currently pending
Career History: 816 total applications across all art units

Statute-Specific Performance

§101: 2.9% (-37.1% vs TC avg)
§103: 66.6% (+26.6% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)
Deltas are relative to Tech Center average estimates • Based on career data from 759 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 01/22/2026 have been fully considered but are not persuasive.

Regarding the rejection of claim 1, applicant argued that Lee fails to teach "the second partial image stores a second plurality of bits that is disjoint from the first plurality of bits in each pixel of the input image," because Lee teaches dividing a 12-bit image into two sub-images (Fig. 1) and dividing a 12-bit image into five 8-bit sub-images (Figs. 9A-C). Applicant further argued that Lee fails to teach "modifying…" because, instead of using a masked version of a sub-image to generate the synthesized image, Lee applies weights to the unmasked version. The examiner respectfully disagrees.

First, Lee teaches a number of different embodiments. While applicant's argument focuses on embodiments such as the five 8-bit sub-images, Lee also teaches alternative embodiments with two sub-images that teach the argued limitation:

[0084] For example, when the original image acquired through the image sensor 110 has 10-bit, 12-bit, 16-bit, or 24-bit per pixel, the processor 130 may acquire the plurality of sub images having an 8-bit number per pixel may be acquired.

[0189] In other words, in the description of FIGS. 4 to 7, an exemplary embodiment of acquiring two sub images, that is, the MSB side image and the LSB side image from the original image, has been described, but as described in the description of FIGS. 9A to 9C, a synthesized image may be acquired based on at least two sub images among the five 8-bit sub images acquired from the 12-bit original image.

Though Lee uses a 12-bit image as an example (Fig. 1), it does not necessarily follow that Lee's embodiment has overlapping bits in the two sub-images.
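The disjoint MSB/LSB split at issue in this argument can be sketched in a few lines. This is a minimal illustration of the examiner's reading, not code from Lee or the claims; the function names are hypothetical.

```python
def split_pixel(value, total_bits=12, msb_bits=8):
    """Split one pixel value into an MSB part and an LSB part with no shared bits.

    The MSB part keeps the top `msb_bits`; the LSB part keeps the remaining
    low bits (zero-padded to the sub-image's bit width when stored).
    """
    lsb_bits = total_bits - msb_bits
    msb = value >> lsb_bits                # e.g. top 8 of 12 bits
    lsb = value & ((1 << lsb_bits) - 1)    # e.g. bottom 4 bits; upper bits are zero padding
    return msb, lsb

def reconstruct(msb, lsb, total_bits=12, msb_bits=8):
    """Recombine the two parts; lossless only because the bit sets are disjoint."""
    return (msb << (total_bits - msb_bits)) | lsb

# 12-bit example: bits 11..4 go to the MSB sub-image, bits 3..0 to the LSB sub-image.
m, l = split_pixel(0xABC, total_bits=12, msb_bits=8)
assert (m, l) == (0xAB, 0xC)
assert reconstruct(m, l, total_bits=12, msb_bits=8) == 0xABC

# 16-bit example: two 8-bit halves, exactly half the bits each.
m16, l16 = split_pixel(0xBEEF, total_bits=16, msb_bits=8)
assert (m16, l16) == (0xBE, 0xEF)
```

In the 16-bit case the two 8-bit halves share no bit positions at all, which is the "disjoint" reading the examiner applies.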
It is well known in the art that one possible embodiment for splitting a 12-bit image into MSB and LSB components is to take the 8 most significant bits for the first image and the 4 least significant bits, padded with zeros, for the second image. Moreover, in the case of a 16-bit image, the first (MSB-side) sub-image and the second (LSB-side) sub-image would have no overlap at all, i.e., they would be disjoint.

Second, the argued claim defines neither the "modify" operation nor the "processing result." Under the broadest reasonable interpretation, "modify" can be interpreted as any processing, and "processing result" as any result (not necessarily a new image or mask). Lee teaches applying edge detection to both sub-images, wherein various algorithms may be applied (paragraphs 0012, 0021, 0076), leading to identified information (e.g., a processing result) on the shapes of objects included in the sub-images (paragraph 0021). Edge detection inherently involves modifying bits of image data, because its primary function is to transform a detailed image into a simplified representation that highlights structural boundaries, which requires altering the original pixel values. Lee therefore teaches "modify… to generate… processing result," where "processing result" does not necessarily require saving or generating another image or mask.

Third, despite applicant's argument, the argued claim does not expressly limit how "a combination of the first partial image processing result, the second partial image processing result, a first weight associated with the first plurality of bits, and a second weight associated with the second plurality of bits" is executed. To one of ordinary skill in the art, the argued limitation may be serial processing, parallel processing, or both. Lee teaches acquiring a synthesized image based on a synthesized weight for each acquired block (S1205 of Fig. 12, S1307 of Fig. 13; paragraphs 0217, 0224), wherein the synthesized weight for each block of the MSB-side image and the LSB-side image may be acquired based on the identified information (S1204 of Fig. 12, S1306 of Fig. 13; paragraphs 0214, 0223). Under the broadest reasonable interpretation, Lee teaches the argued limitation. The rejection is therefore proper and maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-7, 9-13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US2022/0036522).

To claim 1, Lee teaches a computer-implemented method comprising: dividing an input image (10 of Fig. 1) into a group consisting of a first partial image and a second partial image (20 of Fig. 1), wherein: the first partial image stores a first plurality of bits in each pixel of the input image, and the second partial image stores a second plurality of bits that is disjoint from the first plurality of bits in each pixel of the input image (paragraphs 0056, 0084, 0189; MSB sub-image and LSB sub-image); modifying a first set of pixels in the first partial image to generate a first partial image processing result; modifying a second set of pixels in the second partial image to generate a second partial image processing result (paragraphs 0021, 0055-0057; while the interpretation of "modifying" is not limited, extracting/dividing a set of pixels to generate a partial image obviously comprises modifying, and edge-detection processing also involves modifying bits); and generating a combined image processing result associated with the input image (30 of Fig. 1) based on a combination of: the first partial image processing result, the second partial image processing result, a first weight associated with the first plurality of bits, and a second weight associated with the second plurality of bits (Fig. 8, synthesized image acquired based on a synthesized weight for each area of the two sub-images illustrated in Figs. 5A and 5B; paragraphs 0008-0024, 0132; weights associated with respective areas obviously associate with the respective pluralities of bits, wherein said weights are based on processing results from edge detection/shape-identification information).

To claim 11, Lee teaches one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform the steps (as explained for claim 1 above).

To claim 20, Lee teaches a system (as explained for claim 1 above).

To claims 6 and 12-13, Lee teaches claims 1 and 11. Lee teaches wherein generating the combined image processing result comprises: merging the first partial image processing result with the first partial image to generate a first merged image; merging the second partial image processing result with the second partial image to generate a second merged image (Fig. 1, after identifying processing); and generating the combined image processing result based on a first combination of the first weight and the first merged image and a second combination of the second weight and the second merged image (paragraphs 0058-0062).

To claim 7, Lee teaches claim 6. Though Lee does not expressly disclose wherein generating the combined image processing result further comprises upsampling the first and second partial image processing results to match a resolution of the input image prior to generating the first and second merged images, upsampling and resolution matching are image-combining techniques well known in the art, which would have been obvious to one of ordinary skill in the art to incorporate into the method of Lee; hence Official Notice is taken.

To claim 9, Lee teaches claim 1. Lee teaches wherein dividing the input image into the first partial image and the second partial image comprises: storing a set of most-significant bits from each pixel in the input image in the first partial image (Fig. 5A); and storing a set of least-significant bits from each pixel in the input image in the second partial image (Fig. 5B; paragraphs 0125-0129).

To claim 10, Lee teaches claim 9. Lee teaches wherein each of the set of most-significant bits and the set of least-significant bits comprises half of the bits in each pixel of the input image (obvious from paragraph 0084: the input image may have a 16-bit number per pixel, and the sub-images may have an 8-bit number per pixel).

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US2022/0036522) in view of Rossato et al. (US2009/0028432).
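Before turning to the Rossato combination, the combination step the examiner maps to claim 1 can be illustrated with a simplified sketch. Lee's actual scheme acquires a synthesized weight per block; the single global weight pair and the function name here are illustrative assumptions, not Lee's implementation.

```python
def synthesize(first_result, second_result, w1, w2):
    """Per-pixel weighted combination of two partial-image processing results.

    Simplified stand-in for the claim-1 combination step: each output pixel
    blends the MSB-side and LSB-side results by their associated weights.
    (Lee's scheme is per-block; global weights are used here for brevity.)
    """
    return [
        [w1 * a + w2 * b for a, b in zip(row1, row2)]
        for row1, row2 in zip(first_result, second_result)
    ]

msb_side = [[10, 20], [30, 40]]   # first partial image processing result
lsb_side = [[2, 4], [6, 8]]       # second partial image processing result
combined = synthesize(msb_side, lsb_side, w1=0.75, w2=0.25)
assert combined == [[8.0, 16.0], [24.0, 32.0]]
```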
To claim 2, Lee teaches claim 1. Lee teaches a mask process (paragraph 0076), but Lee does not expressly disclose further comprising: applying one or more dilation operations to a third set of pixels in a mask associated with the input image to generate an updated mask; and generating the first partial image processing result and the second partial image processing result based on the updated mask. Rossato teaches separating an input image into a first partial image and a second partial image (paragraphs 0023, 0101), applying one or more dilation operations to a third set of pixels in a mask associated with the input image to generate an updated mask, and generating the first and second partial image processing results based on the updated mask (paragraphs 0139-0156), which would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate into the method of Lee, in order to implement a particular application by design preference.

To claim 3, Lee and Rossato teach claim 2. Lee and Rossato teach wherein generating the first partial image processing result and the second partial image processing result based on the updated mask comprises: determining a target region of the input image based on the updated mask; modifying the first set of pixels corresponding to the target region to generate the first partial image processing result; and modifying the second set of pixels corresponding to the target region to generate the second partial image processing result (Rossato, 0170-0174).

Claims 8 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US2022/0036522) in view of Fernandez et al. (US2008/0077953).

To claims 8 and 18, Lee teaches claims 1 and 11.
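The dilation operation attributed to Rossato in the claims 2-3 discussion above can be sketched as a single 3x3 binary dilation pass. This is a minimal, generic illustration; Rossato's actual structuring element and parameters are not specified here.

```python
def dilate(mask):
    """One 3x3 binary dilation pass: a pixel turns on if any 8-neighbor (or
    itself) is on. Growing the mask this way gives the "updated mask" a margin
    around the originally marked pixels before the target region is selected.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                mask[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
            ))
    return out

mask = [
    [0, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
updated = dilate(mask)  # the single marked pixel grows into a 3x3 block
assert updated == [
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
]
```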
However, Lee does not expressly disclose further comprising: receiving the input image and one or more image processing parameters associated with the combined image processing result from a remote machine; and, after the combined image processing result is generated based on the input image and the one or more image processing parameters, transmitting the combined image processing result to the remote machine. Fernandez teaches receiving the input image and one or more image processing parameters associated with the combined image processing result from a remote machine, and, after the combined image processing result is generated based on the input image and the one or more image processing parameters, transmitting the combined image processing result to the remote machine (paragraphs 0064-0074: an advertising server intercepts a video stream in a video teleconference, replaces the background of the video stream with a warped version of advertising content, and rebroadcasts the modified video stream, wherein participants in the video teleconference may opt in or opt out of said advertising content campaign, which serves as a processing parameter to the advertising server), which would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate into the method of Lee, in order to implement a particular application by design preference.

To claim 19, Lee and Fernandez teach claim 18. Lee and Fernandez teach wherein the one or more image processing parameters comprise at least one of a first weight associated with the first partial image, a second weight associated with the second partial image, or a mask (Lee, paragraphs 0058-0062).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US2022/0036522) in view of Kim et al. (US2013/0308056).

To claim 21, Lee teaches claim 1. Though obvious, Lee does not expressly disclose wherein: the input image has a first color depth; the first partial image has a second color depth; the second partial image has a third color depth; and the first color depth is equal to a sum of the second color depth and the third color depth. However, Lee does teach separating an input image into an MSB sub-image and an LSB sub-image, which is obviously a color-depth separation, since color depth (also known as bit depth) refers to the number of bits used to represent the color of a single pixel in an image. Kim teaches that each color pixel of a frame may be separated into an MSB bit stream and an LSB bit stream based on the color depth (paragraphs 0009, 0056, 0068, which show the sum of the MSB and LSB depths to be the color depth of the original frame), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Lee, in order to further the implementation of color-depth separation.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension-of-time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU, whose telephone number is (571) 272-2837. The examiner can normally be reached weekdays, 8:30 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

ZHIYU LU
Primary Examiner, Art Unit 2669

/ZHIYU LU/
Primary Examiner, Art Unit 2665
February 20, 2026

Prosecution Timeline

Feb 01, 2022: Application Filed
Oct 25, 2024: Non-Final Rejection (§103)
Jan 27, 2025: Response Filed
May 02, 2025: Final Rejection (§103)
Jul 03, 2025: Response after Non-Final Action
Aug 06, 2025: Request for Continued Examination
Aug 07, 2025: Response after Non-Final Action
Oct 20, 2025: Non-Final Rejection (§103)
Jan 22, 2026: Response Filed
Feb 20, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601695: METHOD FOR MEASURING THE DETECTION SENSITIVITY OF AN X-RAY DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597268: METHOD AND DEVICE FOR DETERMINING LANE OF TRAVELING VEHICLE BY USING ARTIFICIAL NEURAL NETWORK, AND NAVIGATION DEVICE INCLUDING SAME (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596187: METHOD, APPARATUS, AND SYSTEM FOR WIRELESS SENSING MEASUREMENT AND REPORTING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592052: INFORMATION PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581142: APPROACHES FOR COMPRESSING AND DISTRIBUTING IMAGE DATA (granted Mar 17, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 49%
With Interview: 63% (+13.9%)
Median Time to Grant: 3y 8m
PTA Risk: High
Based on 759 resolved cases by this examiner. Grant probability derived from career allow rate.
