Prosecution Insights
Last updated: April 19, 2026
Application No. 18/492,836

Super-Resolution System Management Using Artificial Intelligence for Gaming Applications

Final Rejection §103
Filed: Oct 24, 2023
Examiner: SUMMERS, GEOFFREY E
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: MediaTek Inc.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (above average; 249 granted / 348 resolved; +9.6% vs TC avg)
Interview Lift: +35.4% (strong; based on resolved cases with interview)
Avg Prosecution: 2y 5m (typical timeline; 27 currently pending)
Total Applications: 375 (career history, across all art units)
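The headline allow rate follows directly from the counts shown above. A minimal sketch of that arithmetic (the whole-point rounding convention is an assumption about how the dashboard displays it):

```python
# Career allow rate from the dashboard's counts: 249 granted out of 348 resolved.
granted, resolved = 249, 348
allow_rate = round(100 * granted / resolved)  # percentage, rounded to whole points

# The interview lift is reported as the allow-rate difference between cases
# with and without an examiner interview; the underlying split is not shown,
# so only the headline rate is reproducible here.
print(allow_rate)
```

Running this reproduces the 72% shown in the widget (249/348 is about 71.6%, which rounds to 72).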

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 28.6% (-11.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 348 resolved cases
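The per-statute deltas are internally consistent with a single baseline: each rate minus its signed delta comes out to 40.0%, suggesting the "black line" Tech Center estimate is flat across statutes. A sketch of that back-out (the 40% figure is inferred from the numbers shown, not stated anywhere on the page):

```python
# Per-statute rates and their reported deltas vs the Tech Center average.
rates = {
    "101": (9.6, -30.4),
    "103": (41.0, +1.0),
    "102": (16.3, -23.7),
    "112": (28.6, -11.4),
}

# Back out the implied Tech Center baseline for each statute.
implied_baseline = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_baseline)  # every statute implies the same 40.0% baseline
```

If the dashboard used per-statute baselines, these values would differ; the fact that they agree suggests a single TC-wide estimate.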

Office Action

§103
DETAILED ACTION

Status of the Claims

Claims 1-20 were previously pending. Applicant’s amendment filed December 16, 2025, has been entered in full. Claims 1, 2, 8, 11, 12, 15, 17, and 18 are amended. Claims 6 and 16 are cancelled. No new claims are added. Accordingly, claims 1-5, 7-15, and 17-20 are now pending.

Response to Arguments

Applicant argues that the amendments have overcome the previous claim objection (Remarks filed December 16, 2025, hereinafter Remarks: Page 6). Examiner agrees. The previous objection is withdrawn.

Applicant argues that the amendments have overcome the previous rejection under 35 U.S.C. 112(b) (Remarks: Page 6). Examiner agrees. The previous rejection under 35 U.S.C. 112(b) is withdrawn.

Applicant traverses the previous rejections under 35 U.S.C. 103 (Remarks: Pages 6-9). Examiner respectfully disagrees. As an initial matter, Examiner notes that previously-presented claim 6 depended from claim 5, which depended from claim 1. Applicant has incorporated limitations from claim 6 into claim 1, but has not incorporated the limitations of intervening claim 5. Accordingly, the scope of currently-presented claim 1 is different from the scope of previously-presented claim 6. Substantially the same fact pattern applies to claims 16, 15, and 11.

Applicant argues that Yeo does not teach that “selecting the AI model is based on a power consumption estimate of the AI model and a power budget surplus of the computing system” (Remarks: Pages 7-8). In particular, Applicant argues:

[Applicant’s quoted argument appears as an image (media_image1.png) in the original document and is not reproduced here.]

Applicant is apparently arguing that “computing capability” does not fall within the scope of the claimed “power consumption”. First, Examiner has not identified any use of the specific term “computing capability” in Yeo. Instead, Yeo uses the term “computational power” – see, e.g., the quotation above, the title of Sec. 5.2 and Sec. 2, Under-utilization of client’s computation.
Second, as shown by Yeo’s use of the term, computational power is indeed a form of power that is consumed within an electronic device. This interpretation is not inconsistent with the specification. For example, par. [0026] of the published specification states that “the power required by the AI model may be indicated by the number of neural network nodes in the AI model.” The number of nodes in an AI model is a measure of computational power, such as described in Yeo, because each node requires a certain amount of computation and more nodes necessarily require greater computational power.

Third, Yeo plainly does estimate computational power consumption of AI models and does select one that fits within a power budget surplus. Yeo’s technique is fundamentally based on a recognition that “the current video delivery infrastructure under-utilizes client’s computational power” (Sec. 2, Under-utilization of client’s computation; emphasis in original) – i.e., that there is a power budget surplus in client devices. Yeo tests each of multiple AI model configurations to determine their inference time and selects “the largest (highest-quality) DNN that runs in real-time” (Sec. 5.2, Choosing a DNN from multiple options (client-side)). The inference time is an estimate of the computational power consumption of a given AI model – i.e., higher inference times correspond to higher computational power consumption and vice versa. The power budget surplus is the amount of DNN computation that can be completed in real-time. Yeo chooses an AI model that provides the best performance, subject to a condition that its computational power consumption estimate must fall within a computational power budget surplus of the computing system. For at least these reasons, Yeo does teach the limitations added to the independent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 8, 10-13, 18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over ‘Qin’ (US 2024/0311948 A1) in view of ‘Yeo’ (“Neural Adaptive Content-aware Internet Video Delivery,” 2018).

Regarding claim 1, Examiner notes that the claim recites a method that is substantially the same as the method performed by the system of claim 11. Qin in view of Yeo teaches the system of claim 11 (see below). Accordingly, claim 1 is also rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo for substantially the same reasons as claim 11.

Regarding claim 2, Examiner notes that the claim recites a method that is substantially the same as the method performed by the system of claim 12. Qin in view of Yeo teaches the system of claim 12 (see below). Accordingly, claim 2 is also rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo for substantially the same reasons as claim 12.

Regarding claim 3, Examiner notes that the claim recites a method that is substantially the same as the method performed by the system of claim 13. Qin in view of Yeo teaches the system of claim 13 (see below). Accordingly, claim 3 is also rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo for substantially the same reasons as claim 13.

Regarding claim 8, Examiner notes that the claim recites a method that is substantially the same as the method performed by the system of claim 18. Qin in view of Yeo teaches the system of claim 18 (see below). Accordingly, claim 8 is also rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo for substantially the same reasons as claim 18.

Regarding claim 10, Examiner notes that the claim recites a method that is substantially the same as the method performed by the system of claim 20. Qin in view of Yeo teaches the system of claim 20 (see below). Accordingly, claim 10 is also rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo for substantially the same reasons as claim 20.

Regarding claim 11, Qin teaches a computing system operative to perform artificial-intelligence (AI) super-resolution (SR) (e.g., Fig. 4), comprising: a plurality of processors including a graphics processing unit (GPU) (e.g., Fig. 4, GPU) and an AI processing unit (APU) (e.g., Fig. 4, NPU); and a memory (e.g., Fig. 4, memory) to store a plurality of AI models (The memory of Fig. 4 is capable of performing the intended use of storing a plurality of AI models; also, e.g., [0126], model is loaded into a memory; also see Note Regarding Models below), wherein the processors are operative to: detect an indication that loading of the GPU exceeds a threshold (e.g., [0084], [0086]-[0087], [0073]-[0074], if the required resolution exceeds a threshold – which indicates that loading of the GPU will be too high – then AI SR processing is enabled); reduce resolution of a video output from the GPU in response to the indication (e.g., [0084], “The GPU generates image data with a low resolution based on the rendering instruction”); select an AI model (e.g., [0125]-[0126], AI SR model is selected for use in upscaling) among the plurality of AI models based on graphics scenes in the video and respective power consumption estimates of the AI models (see Note Regarding Models below), wherein the AI model is selected based on a power consumption estimate of the AI model and a power budget surplus of the computing system (see Note Regarding Models below); and perform, by the APU, AI SR operations on the video using the selected AI model to restore the resolution of the video for display (e.g., [0084], “The NPU performs super-resolution rendering processing on the image data with the low resolution, to obtain image data with a high resolution” for display).

Note Regarding Models. Qin does not explicitly teach storing multiple AI models, that the selection of an AI model is based on graphics scenes in the video and respective power consumption estimates of the AI models, or that the AI model is selected based on a power consumption estimate of the AI model and a power budget surplus of the computing system.

However, Yeo does teach techniques for AI super-resolution that include storing a plurality of AI models (e.g., Table 2 summarizes the different AI SR models; e.g., page 651, left column, Training content-aware DNNs, a specific version of every model summarized in Table 2 is trained for each specific video; also see Sections 4.1-4.2) and selecting an AI model among the plurality of AI models based on graphics scenes in the video (e.g., Sec. 4.1, selection of AI models is limited to AI models trained for specific content – i.e., graphics scenes in the video) and respective power consumption estimates of the AI models (e.g., Sec. 5.2, Choosing a DNN from multiple options (client-side), computational power consumption is estimated for each of the AI models and one is selected for providing highest quality while fitting within available computational power budget), wherein the AI model is selected based on a power consumption estimate of the AI model (e.g., Sec. 5.2, Choosing a DNN from multiple options (client-side), computational power consumption is estimated for each of the AI models – i.e., information about each DNN is used to test their computational power consumption, with larger models requiring more power and thus taking longer to execute) and a power budget surplus of the computing system (e.g., Sec. 5.2, Choosing a DNN from multiple options (client-side), “the clients select the largest (highest-quality) DNN that runs in real-time”; i.e., the DNN with the largest power consumption estimation that still falls within a power budget surplus of the computing system is selected; As discussed at, e.g., Sec. 2, Under-utilization of client’s computation, this exploitation of surplus computational power budget is fundamental to Yeo’s technique).

Yeo recognizes that “[t]he available capacity of computing changes across time and space because of heterogeneity of client devices, changes in workloads, and multiplexing” (Sec. 4.2, 1st paragraph), but a single AI scaling model requiring a fixed amount of computing power cannot adapt to this time-varying capacity (Sec. 4.2, 1st par.). This means that using a single DNN of fixed complexity either under-utilizes the available computing capacity (compromising output image quality) or over-loads the available computing capacity, both of which are undesirable (e.g., Sec. 4.2, 1st par.). Yeo recognizes that this problem can be solved by offering multiple AI SR models with a range of options differing in computational requirements (and quality) and selecting one that best fits the available computational resources (e.g., Sec. 4.2, 2nd par.). Yeo also recognizes that selecting an AI SR model based on particular graphics scenes in video (i.e., based on content) is advantageous because it can provide better performance than more-generalized AI SR models (e.g., Sec. 4.1).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Qin with the AI model selection of Yeo in order to improve the system with the reasonable expectation that this would result in a system that better utilized available computing capacity and achieved better performance than using a single, more-generalized AI SR model. This technique for improving the system of Qin was within the ordinary ability of one of ordinary skill in the art based on the teachings of Yeo. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Qin and Yeo to obtain the invention as specified in claim 11.
Regarding claim 12, Qin in view of Yeo teaches the computing system of claim 11, and Qin further teaches that the AI model is selected such that increased system power consumption caused by the selected AI model is estimated to be less than reduced system power consumption caused by the reduced resolution (e.g., [0073]-[0074], Qin recognizes that rendering may not be able to be completed in real-time within a computational power budget for high-resolution images (i.e., “frame freezing during running”) and solves this problem by reducing resolution and using an NPU/APU to maintain real-time processing within an available computational power budget; Yeo selects an AI model so that its estimated power consumption is low enough to maintain real-time processing – e.g., Sec. 5.2, Choosing a DNN from multiple options (client-side); Accordingly, for a system of Qin in view of Yeo as applied above, the AI model is selected such that increased system power consumption caused by the AI model is estimated to be less than reduced system power consumption caused by the reduced resolution at least because the selected AI model can operate in real-time, while the power consumption was too great for real-time operation prior to the resolution reduction).

Regarding claim 13, Qin in view of Yeo teaches the computing system of claim 11, and Yeo further teaches that each power consumption estimate is based on a total count of nodes in a neural network represented by the AI model (e.g., Sec. 5.2, a specification of layers, channels, etc. for each AI model is used to randomly initialize a total count of all nodes in an AI model and count a total time required to execute all the nodes as a measure of computational power consumption; Also, in general, the higher the total count of nodes, the higher the inference time and therefore the higher the power consumption estimate).
Regarding claim 18, Qin in view of Yeo teaches the computing system of claim 11, and Qin further teaches that an increase in loading of the GPU is detected from an increase in graphics scene complexity in the video (e.g., [0086], GPU loading is detected as having increased, thereby triggering use of AI SR, based on an increase past a threshold of resolution in the video; e.g., [0073], higher-resolution video is more complex for a GPU to render, therefore increasing its loading).

Regarding claim 20, Qin in view of Yeo teaches the computing system of claim 11, and Qin further teaches that the APU is operative to perform the AI SR operations according to a whitelist that specifies a configuration of a plurality of functions used in rendering a plurality of graphics scenes in the video (e.g., [0083], [0128]).

Claim(s) 4 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo as applied above, and further in view of ‘Maghazeh’ (“Perception-Aware Power Management for Mobile Games via Dynamic Resolution Scaling,” 2015).

Regarding claim 4, Examiner notes that the claim recites a method that is substantially the same as the method performed by the system of claim 14. Qin in view of Yeo and Maghazeh teaches the system of claim 14 (see below). Accordingly, claim 4 is also rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo and Maghazeh for substantially the same reasons as claim 14.

Regarding claim 14, Qin in view of Yeo teaches the computing system of claim 11. Qin does not discuss frame rate (i.e., FPS). Yeo selects specific AI SR models in order to maintain the frame rate (i.e., FPS) of the video (e.g., Table 6, Sec. 7.4, Heterogeneous clients, video FPS is maintained at 30, with AI SR models being selected to provide highest quality while maintaining at least that FPS), thereby providing power saving (see various discussion in Qin and Yeo mapped above). Yeo does not explicitly teach increasing the FPS of the video without exceeding a power budget of the computing system when performance is prioritized over power saving. However, Maghazeh does teach that, when system performance is prioritized over power saving, the FPS of video may be increased (Sec. IV.B, Frame rate subsection at pages 615-616, GPU can run faster, thereby increasing frame rate/FPS) without exceeding a power budget of the computing system (Sec. IV.B, Frame rate subsection at pages 615-616, GPU can deliver higher frame rates up to the power budget/limit of the system; In the described example, power use is increased to 4.7 watts at 90 FPS, which is within the power budget/limit of the system). There is a fundamental tradeoff between resource (e.g., power) usage and performance that is recognized by each of Qin, Yeo, and Maghazeh. Maghazeh recognizes that “frame rate is an important parameter for user experience” so it may be preferable to prioritize performance as indicated by FPS (i.e., frame rate) over power consumption (Sec. IV.B, Frame rate subsection at pages 615-616, 2nd par.) in order to improve user experience. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Qin in view of Yeo as applied above with the FPS increase taught by Maghazeh in order to improve the system with the reasonable expectation that this would result in a system that advantageously improved user experience. This technique for improving the system of Qin in view of Yeo was within the ordinary ability of one of ordinary skill in the art based on the teachings of Maghazeh. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Qin, Yeo, and Maghazeh to obtain the invention as specified in claim 14.

Claim(s) 5, 7, 15, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo as applied above, and further in view of ‘Park’ (US 2015/0181117 A1).

Regarding claim 5, Examiner notes that the claim recites a method that is substantially the same as the method performed by the system of claim 15. Qin in view of Yeo and Park teaches the system of claim 15 (see below). Accordingly, claim 5 is also rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo and Park for substantially the same reasons as claim 15.

Regarding claim 7, Examiner notes that the claim recites a method that is substantially the same as the method performed by the system of claim 17. Qin in view of Yeo and Park teaches the system of claim 17 (see below). Accordingly, claim 7 is also rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo and Park for substantially the same reasons as claim 17.

Regarding claim 15, Qin in view of Yeo teaches the computing system of claim 11. Qin is focused on avoiding high temperatures and power consumption (e.g., [0073]-[0074]), but does not explicitly teach measuring temperature. As discussed above with respect to claim 11, Qin also does not explicitly teach multiple AI SR models. Yeo detects computing power consumption of the processors in the computing system (e.g., Sec. 5.2, last par., power consumption in terms of computing time is measured) and replaces the selected AI model with a different one of the AI models for the AI SR operations such that the power consumption stays within a power budget (e.g., Sec. 7.4, Temporal variation, Fig. 16, different models (each with a different number of optional blocks) are selected such that power consumption stays within a power budget available within a device, such as shown by the Ideal line in Fig. 16). Nevertheless, Yeo does not teach detecting a temperature of processors or considering the temperature in AI model selection. However, Park does teach detecting not only power consumption (e.g., Fig. 7, step 142, P) but also temperature (e.g., Fig. 7, step 142, T) of processors in a computing system. Park further teaches selecting an image processing parameter (e.g., Fig. 7, steps 146 and 152, parameter S) such that the power consumption stays within a power budget at the detected temperature (e.g., Fig. 7, the parameter S is being set to keep power P within its budget, including at the detected temperature). Park teaches that one such parameter that may be adjusted is image resolution (e.g., Figs. 5A-B, [0098]-[0102]), with lower image resolution being associated with both lower power consumption and lower quality (e.g., Figs. 5A-B, [0098]-[0102]). It is important to note that Yeo associates different AI SR models with different image resolutions (e.g., Table 2). Accordingly, in a system of Qin in view of Yeo and further modified with the temperature-based resolution adjustment of Park, temperature-based changes to the image resolution will result in changes to the selected AI model. Qin recognizes that it is desirable to avoid excessive computing power and temperature (e.g., [0073]-[0074]). The temperature-based resolution adjustment of Park helps avoid excessive computing power and temperature, while advantageously maximizing user experience (e.g., [0111]-[0112]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Qin in view of Yeo as applied above with the temperature-based resolution adjustment of Park in order to improve the system with the reasonable expectation that this would result in a system that advantageously avoided excessive power consumption or temperature while maximizing user experience. This technique for improving the system of Qin in view of Yeo was within the ordinary ability of one of ordinary skill in the art based on the teachings of Park. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Qin, Yeo, and Park to obtain the invention as specified in claim 15.

Regarding claim 17, Qin in view of Yeo teaches the computing system of claim 11. Qin teaches different scenarios where AI SR operations on the video are deactivated (e.g., [0125], deactivation due to application resolution being low or lack of NPU; e.g., [0083], deactivation due to absence from whitelist). Accordingly, one of the system parameters of Qin is whether or not AI SR operations are activated or deactivated. Qin further teaches that AI-based SR operations may be selected depending on resolution (e.g., [0077], NPU is used for specific resolutions). Nevertheless, Qin does not teach detecting temperature and power consumption of processors in the computing system, with the AI SR deactivation occurring when the power consumption reaches or exceeds a power budget at the detected temperature. Yeo also does not teach these features. However, Park does teach temperature sensors to detect a temperature (e.g., Fig. 7, step 142, temperature T) and power consumption (e.g., Fig. 7, step 142, power consumption P) of processors in the computing system, wherein the AI SR operations on the video are deactivated (e.g., Fig. 7, steps 146-148, parameter S is adjusted; As explained above, one of the parameters in Qin in view of Yeo is whether or not AI SR is activated/deactivated; Additionally, Park adjusts resolution – e.g., Figs. 5A-B and [0098]-[0102] – as one of its parameters, which affects whether AI SR is performed on Qin’s NPU as explained above) when the power consumption reaches or exceeds a power budget at the detected temperature (e.g., Fig. 7, YES path from step 144). Qin recognizes that it is desirable to avoid excessive computing power and temperature (e.g., [0073]-[0074]).
The temperature-based resolution adjustment of Park helps avoid excessive computing power and temperature, while advantageously maximizing user experience (e.g., [0111]-[0112]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Qin in view of Yeo as applied above with the temperature-based processing adjustment of Park in order to improve the system with the reasonable expectation that this would result in a system that advantageously avoided excessive power consumption or temperature while maximizing user experience. This technique for improving the system of Qin in view of Yeo was within the ordinary ability of one of ordinary skill in the art based on the teachings of Park. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Qin, Yeo, and Park to obtain the invention as specified in claim 17.

Claim(s) 9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo as applied above, and further in view of ‘Wang’ (US 2017/0116951 A1).

Regarding claim 9, Examiner notes that the claim recites a method that is substantially the same as the method performed by the system of claim 19. Qin in view of Yeo and Wang teaches the system of claim 19 (see below). Accordingly, claim 9 is also rejected under 35 U.S.C. 103 as being unpatentable over Qin in view of Yeo and Wang for substantially the same reasons as claim 19.

Regarding claim 19, Qin in view of Yeo teaches the computing system of claim 11. Qin recognizes that higher resolutions will cause over-loading of a GPU (e.g., [0073]), and so it uses resolution as an indication of the loading of the GPU (e.g., [0086]). Qin does not explicitly teach that the indication of the loading of the GPU is detected from one or more of: an operating frequency of the GPU, a utilization rate of the GPU, and unstable frame per second (FPS) of the video. Yeo also does not teach this feature. However, Wang does teach that an indication of the loading of the GPU is detected from one or more of: an operating frequency of the GPU, a utilization rate of the GPU, and unstable frame per second (FPS) of the video (e.g., [0003], [0037], unstable frame rate – i.e., FPS – of video is indicative of high GPU loading). Resolution is not the only factor that can affect GPU loading. As recognized by Wang, complexity of a rendered signal (e.g., a game) can also cause a GPU to become overloaded beyond its capacity, resulting in an unstable frame rate (i.e., FPS) (e.g., [0003]). It is a goal of Qin to avoid such overloading (e.g., [0073]-[0074]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Qin in view of Yeo as applied above with the frame-rate-based GPU loading detection of Wang in order to improve the system with the reasonable expectation that this would result in a system that could advantageously detect and mitigate high GPU loading caused by factors other than resolution, such as complexity. This technique for improving the system of Qin in view of Yeo was within the ordinary ability of one of ordinary skill in the art based on the teachings of Wang. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Qin, Yeo, and Wang to obtain the invention as specified in claim 19.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEOFFREY E SUMMERS whose telephone number is (571)272-9915. The examiner can normally be reached Monday-Friday, 7:00 AM to 3:30 PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEOFFREY E SUMMERS/Examiner, Art Unit 2669
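The selection logic the Office Action attributes to Yeo (benchmark each candidate DNN's inference time on the client, then keep the largest model that still runs in real-time, treating the real-time budget as the "power budget surplus") can be sketched as follows. The model names, timings, and 30 FPS frame budget are hypothetical, chosen only to illustrate the mechanism:

```python
def select_model(models, frame_budget_ms):
    """Pick the highest-quality model whose measured inference time fits the
    real-time frame budget. `models` is a list of (name, quality_rank,
    inference_ms) tuples, as would be benchmarked on the client device."""
    feasible = [m for m in models if m[2] <= frame_budget_ms]
    if not feasible:
        return None  # no DNN runs in real-time; fall back to plain upscaling
    # Among models that fit the budget, take the one with the highest quality.
    return max(feasible, key=lambda m: m[1])

# Hypothetical per-device benchmark results for four DNN sizes.
candidates = [
    ("tiny",   1,  6.0),
    ("small",  2, 12.0),
    ("medium", 3, 28.0),
    ("large",  4, 55.0),
]
print(select_model(candidates, frame_budget_ms=33.3))  # 30 FPS budget
```

With a 33.3 ms budget the "large" model is infeasible, so the sketch returns "medium", the biggest model that still fits; this mirrors the "largest (highest-quality) DNN that runs in real-time" selection the examiner quotes from Yeo Sec. 5.2.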

Prosecution Timeline

Oct 24, 2023: Application Filed
Sep 11, 2025: Non-Final Rejection — §103
Dec 16, 2025: Response Filed
Jan 12, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12586379: SYSTEM FOR DETECTING OCCURRENCE PERIOD OF CYCLICAL EVENT
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12561755: System and Method for Image Super-Resolution
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12555205: METHOD AND APPARATUS WITH IMAGE DEBLURRING
Granted Feb 17, 2026 • 2y 5m to grant
Patent 12541838: INSPECTION APPARATUS AND REFERENCE IMAGE GENERATION METHOD
Granted Feb 03, 2026 • 2y 5m to grant
Patent 12536682: METHOD AND SYSTEM FOR GENERATING A DEPTH MAP
Granted Jan 27, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+35.4%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 348 resolved cases by this examiner. Grant probability derived from career allow rate.
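The "with interview" figure is consistent with a simple additive model: the 72% base rate plus the 35.4-point interview lift overshoots 100%, and the dashboard appears to cap the display at 99%. This derivation is an assumption about how the number is produced, not something the page states:

```python
def interview_adjusted(base_pct, lift_pct, cap=99):
    # Assumed model: additive interview lift on the base grant probability,
    # capped at 99% since a grant is never displayed as certain.
    return min(round(base_pct + lift_pct), cap)

print(interview_adjusted(72, 35.4))
```

Here 72 + 35.4 = 107.4, which the assumed cap reduces to the displayed 99%.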
