Prosecution Insights
Last updated: April 19, 2026
Application No. 18/106,348

TECHNIQUES TO USE A NEURAL NETWORK TO EXPAND AN IMAGE

Status: Non-Final OA (§103)
Filed: Feb 06, 2023
Examiner: LIU, XIAO
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 5 (Non-Final)

Grant Probability: 89% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (above average), 257 granted / 290 resolved, +26.6% vs TC avg
Interview Lift: +11.5% among resolved cases with interview (moderate, ~+12% lift)
Typical Timeline: 2y 9m avg prosecution, 44 currently pending
Career History: 334 total applications across all art units
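The headline figures above are simple ratios over the examiner's resolved cases. A sketch of how they could be derived (the raw counts and the +11.5% lift come from this page; the rounding and the apparent 99% display cap are assumptions, since the page does not state its formula):

```python
# Derived examiner statistics; 257 granted / 290 resolved and the
# +11.5% interview lift are the figures shown on this page.
granted = 257
resolved = 290

allow_rate = granted / resolved              # career allow rate
print(f"Allow rate: {allow_rate:.1%}")       # ~88.6%, displayed as 89%

# Adding the lift to the 89% base exceeds 100%, so the displayed
# "99% with interview" presumably reflects a cap (assumption).
interview_lift = 0.115
with_interview = min(round(allow_rate, 2) + interview_lift, 0.99)
print(f"With interview: {with_interview:.0%}")
```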

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 290 resolved cases.
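Each statute row pairs the examiner's rate with a delta against the Tech Center average, so the baseline can be backed out (rates and deltas are the figures shown above; treating the deltas as simple differences is an assumption):

```python
# (examiner rate %, delta vs TC avg %) per statute, as shown above.
stats = {
    "101": (8.8, -31.2),
    "103": (50.9, +10.9),
    "102": (17.0, -23.0),
    "112": (17.4, -22.6),
}

# Implied Tech Center average = examiner rate - delta.
for statute, (rate, delta) in stats.items():
    print(f"§{statute}: TC avg ≈ {rate - delta:.1f}%")
```

All four rows back out the same 40.0% baseline, consistent with the note that the Tech Center average is a single estimate.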

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/27/2026 has been entered.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/20/2026 has been considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chaudhuri et al. (U.S. PG-PUB No. 20190045168 A1), hereinafter Chaudhuri, in view of Iqbal et al. (U.S. PG-PUB No. 20200061811 A1), hereinafter Iqbal.

Regarding claim 1, Chaudhuri discloses that a processor (FIG. 9, CPU 901, graphic processing unit 902; FIG. 10; [0106], “GPU”; [0027], “image signal processor 101”; FIG. 1) uses a neural network to generate an output image (Abstract; FIGS. 1-11), wherein, to generate the output image (FIGS. 1-5, 7-8), the neural network aggregates (FIG. 4; [0041], “decoder portion 460 to combine the extracted features (e.g., feature maps) using skip connections 443”; [0046]-[0049]; FIG. 5) and upsamples a plurality of feature maps weighted based on the input image (FIG. 4; [0041], “decoder network (e.g., decoder portion 460) to take the feature representations (e.g., feature maps) as input via skip connections 443, process them and produce an output”; [0042], “the label u indicates an up-sampling layer”; [0046]-[0047]; [0048], “The resultant feature maps from connection 438 are upsampled (e.g., 2×2 upsampled) at upsampling layer 439. The feature maps from upsampling layer 439 are combined, at connection 440, with the output feature maps from the final convolutional layer of convolutional layer grouping 411”; [0049]; FIG. 5), the plurality of feature maps generated using one or more convolutional layers (Abstract; FIG. 4, convolution layers 425, 427, 429 … 437, output of convolution layers; [0041], “encoder network (e.g., encoder portion 450) to map inputs to feature representations (e.g., feature maps)”; [0043], “Each convolutional layer may generate feature maps that are representative of extracted features”; FIG. 5). Chaudhuri further discloses that the neural network can be implemented on a dedicated graphics processor (e.g., a graphics processing unit (GPU)) (Chaudhuri: [0106]; see also Park: [0114]; [0222]; [0227]; [0241]).
Chaudhuri does not disclose a processor comprising a set of graphics cores to share a cache memory, wherein each of the graphics cores comprise: an instruction cache; a cache/shared memory; a texture unit; a set of registers; integer logic units; floating point logic units to perform 16-bit floating point operations; and matrix processing units (MPUs) to perform half-precision floating point and 8-bit integer operations; memory coupled with the sets of graphics cores, wherein the memory includes graphics double data rate (GDDR) memory; a memory controller; and a PCI Express host interface.

In the same field of endeavor, Iqbal teaches a processor including a set of graphics cores for a machine-learning system using neural networks (Iqbal: Abstract; FIGS. 3, 7-9b, 10, 12C-12D). Iqbal further teaches the processor comprising (Iqbal: FIGS. 20A-20B, GPGPU 2030): a set of graphics cores (Iqbal: FIG. 20B, clusters 2036A-2036H; [0293], “each include a set of graphics cores”) to share a cache memory (Iqbal: [0291], “share a cache memory 2038”), wherein each of the graphics cores comprise (Iqbal: FIG. 20A, graphics core 2000; [0288]): an instruction cache (Iqbal: FIG. 20A, cache 1902); a cache/shared memory (Iqbal: FIG. 20A, memory 1920); a texture unit (Iqbal: FIG. 20A, unit 1918); a set of registers (Iqbal: FIG. 20A, registers 2010A-2010N); integer logic units (Iqbal: FIG. 20A, units ALUs 2016-2016N); floating point logic units to perform 16-bit floating point operations (Iqbal: FIG. 20A, FPUs 2014A-2014N; [0289]); and matrix processing units (MPUs) to perform half-precision floating point and 8-bit integer operations (Iqbal: FIG. 20A, MPU 2017A-2017N; [0289]); memory coupled with the sets of graphics cores (Iqbal: FIG. 20B; [0292]), wherein the memory includes graphics double data rate (GDDR) memory (Iqbal: FIG. 20B; [0292], “graphics double data rate (GDDR) memory”); a memory controller (Iqbal: FIG. 20B, memory controllers 2042A-2042B); and a PCI Express host interface (Iqbal: FIG. 20B, interface 2032; [0291]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using the same or similar processor comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning (Iqbal: [0233]; [0289]; [0293]) in order to achieve fast neural network training and best performance of the trained neural network. (Please note: claim 1 recites a plurality of claim limitations related to a processor. However, these claim limitations are fully disclosed by Iqbal. Therefore, these claim limitations are not inventive concepts. Thus, claim 1 recites using known hardware to implement a neural network to perform up-sampling.)

Regarding claim 11, Chaudhuri discloses that a processor (FIG. 9, CPU 901, graphic processing unit 902; FIG. 10; [0106], “GPU”; [0027], “image signal processor 101”; FIG. 1) uses a neural network to generate an output image (Abstract; FIGS. 1-11), wherein, to generate the output image (FIGS. 1-5, 7-8), the neural network aggregates (FIG. 4; [0041], “decoder portion 460 to combine the extracted features (e.g., feature maps) using skip connections 443”; [0046]-[0049]; FIG. 5) and upsamples a plurality of feature maps weighted based on the input image (FIG. 4; [0041], “decoder network (e.g., decoder portion 460) to take the feature representations (e.g., feature maps) as input via skip connections 443, process them and produce an output”; [0042], “the label u indicates an up-sampling layer”; [0046]-[0047]; [0048], “The resultant feature maps from connection 438 are upsampled (e.g., 2×2 upsampled) at upsampling layer 439. The feature maps from upsampling layer 439 are combined, at connection 440, with the output feature maps from the final convolutional layer of convolutional layer grouping 411”; [0049]; FIG. 5), the plurality of feature maps generated using one or more convolutional layers (Abstract; FIG. 4, convolution layers 425, 427, 429 … 437, output of convolution layers; [0041], “encoder network (e.g., encoder portion 450) to map inputs to feature representations (e.g., feature maps)”; [0043], “Each convolutional layer may generate feature maps that are representative of extracted features”; FIG. 5). Chaudhuri further discloses that the neural network can be implemented on a dedicated graphics processor (e.g., a graphics processing unit (GPU)) (Chaudhuri: [0106]; see also Park: [0114]; [0222]; [0227]; [0241]).

Chaudhuri does not disclose a system on chip (SoC) comprising: a set of graphics cores to share a cache memory, wherein each of the graphics cores comprise: an instruction cache; a cache/shared memory; a texture unit; a set of registers; integer logic units; floating point logic units to perform 16-bit floating point operations; and matrix processing units (MPUs) to perform half-precision floating point and 8-bit integer operations; memory coupled with the sets of graphics cores, wherein the memory includes graphics double data rate (GDDR) memory; a memory controller; and a PCI Express host interface.

In the same field of endeavor, Iqbal teaches a system on chip (SoC) (Iqbal: FIGS. 13-14, 18-19B; [0287]) for a machine-learning system using neural networks (Iqbal: Abstract; FIGS. 3, 7-8, 10; [0281]). Iqbal further teaches the processor comprising (Iqbal: FIGS. 20A-20B, GPGPU 2030): a set of graphics cores (Iqbal: FIG. 20B, clusters 2036A-2036H; [0293], “each include a set of graphics cores”) to share a cache memory (Iqbal: [0291], “share a cache memory 2038”), wherein each of the graphics cores comprise (Iqbal: FIG. 20A, graphics core 2000; [0288]): an instruction cache (Iqbal: FIG. 20A, cache 1902); a cache/shared memory (Iqbal: FIG. 20A, memory 1920); a texture unit (Iqbal: FIG. 20A, unit 1918); a set of registers (Iqbal: FIG. 20A, registers 2010A-2010N); integer logic units (Iqbal: FIG. 20A, units ALUs 2016-2016N); floating point logic units to perform 16-bit floating point operations (Iqbal: FIG. 20A, FPUs 2014A-2014N; [0289]); and matrix processing units (MPUs) to perform half-precision floating point and 8-bit integer operations (Iqbal: FIG. 20A, MPU 2017A-2017N; [0289]); memory coupled with the sets of graphics cores (Iqbal: FIG. 20B; [0292]), wherein the memory includes graphics double data rate (GDDR) memory (Iqbal: FIG. 20B; [0292], “graphics double data rate (GDDR) memory”); a memory controller (Iqbal: FIG. 20B, memory controllers 2042A-2042B); and a PCI Express host interface (Iqbal: FIG. 20B, interface 2032; [0291]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using a system on chip comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning (Iqbal: [0233]; [0289]; [0293]) in order to achieve efficient and fast neural network training, and best performance of the trained neural network.

Regarding claims 2 and 12, Chaudhuri in view of Iqbal teaches the processor of claim 1 and the SoC of claim 11. Chaudhuri does not disclose wherein the memory controller is to provide access to a memory interface to access synchronous dynamic random-access memory (SDRAM) devices. In the same field of endeavor, Iqbal teaches a system on chip (SoC) (Iqbal: FIGS. 13-14, 18-19B; [0287]) for a machine-learning system using neural networks (Iqbal: Abstract; FIGS. 3, 7-8, 10; [0281]).
Iqbal further teaches wherein the memory controller is to provide access to a memory interface to access synchronous dynamic random-access memory (SDRAM) devices (Iqbal: [0279], “memory controller 1865 for access to SDRAM”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using a processor or system on chip comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning in order to achieve efficient and fast neural network training, and best performance of the trained neural network.

Regarding claims 3 and 13, Chaudhuri in view of Iqbal teaches the processor of claim 1 and the SoC of claim 11. Chaudhuri does not disclose wherein each of the graphics cores further comprise a scheduler to schedule one or more threads to be performed. In the same field of endeavor, Iqbal teaches a system on chip (SoC) (Iqbal: FIGS. 13-14, 18-19B; [0287]) for a machine-learning system using neural networks (Iqbal: Abstract; FIGS. 3, 7-8, 10; [0281]). Iqbal further teaches wherein each of the graphics cores further comprise a scheduler to schedule one or more threads to be performed (Iqbal: [0288], “a thread scheduler 2006A-2006N”; [0291], “execution threads”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using a processor or system on chip comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning in order to achieve efficient and fast neural network training, and best performance of the trained neural network.

Regarding claims 4 and 14, Chaudhuri in view of Iqbal teaches the processor of claim 1 and the SoC of claim 11.
Chaudhuri does not disclose further comprising an input/output hub to couple the processor or SoC to other processor instances. In the same field of endeavor, Iqbal teaches a system on chip (SoC) (Iqbal: FIGS. 13-14, 18-19B; [0287]) for a machine-learning system using neural networks (Iqbal: Abstract; FIGS. 3, 7-8, 10; [0281]). Iqbal further teaches further comprising an input/output hub to couple the processor or SoC to other processor instances (Iqbal: FIGS. 20A-20B; [0294], “I/O hub 2039 that couples GPGPU 2030 with a GPU link 2040 that enables a direct connection to other instances”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using a processor or system on chip comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning in order to achieve efficient and fast neural network training, and best performance of the trained neural network.

Regarding claims 5 and 15, Chaudhuri in view of Iqbal teaches the processor of claim 1 and the SoC of claim 11. Chaudhuri does not disclose further comprising a link to enable communication with other processor instances. In the same field of endeavor, Iqbal teaches a system on chip (SoC) (Iqbal: FIGS. 13-14, 18-19B; [0287]) for a machine-learning system using neural networks (Iqbal: Abstract; FIGS. 3, 7-8, 10; [0281]). Iqbal further teaches further comprising a link to enable communication with other processor instances (Iqbal: FIGS. 20A-20B; [0294], “GPU link 2040 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using a processor or system on chip comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning (Iqbal: [0233]; [0289]; [0293]) in order to achieve efficient and fast neural network training, and best performance of the trained neural network.

Regarding claim 6, Chaudhuri in view of Iqbal teaches the processor of claim 1. Chaudhuri does not disclose wherein the processor is to be included on a system on chip (SoC). In the same field of endeavor, Iqbal teaches a system on chip (SoC) (Iqbal: FIGS. 13-14, 18-19B; [0287]) for a machine-learning system using neural networks (Iqbal: Abstract; FIGS. 3, 7-8, 10; [0281]). Iqbal further teaches wherein the processor is to be included on a system on chip (SoC) (Iqbal: FIG. 18B; [0278]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using a processor or system on chip comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning in order to achieve efficient and fast neural network training, and best performance of the trained neural network.

Regarding claims 7 and 17, Chaudhuri in view of Iqbal teaches the processor of claim 1 and the SoC of claim 11. Chaudhuri does not disclose wherein each of the graphics cores further comprise a dispatcher to dispatch one or more threads to be performed. In the same field of endeavor, Iqbal teaches a system on chip (SoC) (Iqbal: FIGS. 13-14, 18-19B; [0287]) for a machine-learning system using neural networks (Iqbal: Abstract; FIGS. 3, 7-8, 10; [0281]).
Iqbal further teaches wherein each of the graphics cores further comprise a dispatcher to dispatch one or more threads to be performed (Iqbal: FIGS. 20A-20B; [0288], “a thread dispatcher 2008A-2008N”; FIG. 19A, [0285], “dispatch execution threads”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using a processor or system on chip comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning in order to achieve efficient and fast neural network training, and best performance of the trained neural network.

Regarding claims 8 and 18, Chaudhuri in view of Iqbal teaches the processor of claim 1 and the SoC of claim 11. Chaudhuri does not disclose further comprising floating point logic units to perform 32-bit floating point operations. In the same field of endeavor, Iqbal teaches a system on chip (SoC) (Iqbal: FIGS. 13-14, 18-19B; [0287]) for a machine-learning system using neural networks (Iqbal: Abstract; FIGS. 3, 7-8, 10; [0281]). Iqbal further teaches further comprising floating point logic units to perform 32-bit floating point operations (Iqbal: FIGS. 20A-20B; [0289]; [0293]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using a processor or system on chip comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning in order to achieve efficient and fast neural network training, and best performance of the trained neural network.

Regarding claim 16, Chaudhuri in view of Iqbal teaches the SoC of claim 11. Chaudhuri does not disclose comprising one or more processors optimized to perform one or more shader programs.
In the same field of endeavor, Iqbal teaches comprising one or more processors optimized to perform one or more shader programs (Iqbal: [0283], “optimized to execute fragment shader programs”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using a processor or system on chip comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning in order to achieve efficient and fast neural network training, and best performance of the trained neural network.

Regarding claims 9 and 19, Chaudhuri in view of Iqbal teaches the processor of claim 1 and the SoC of claim 11. The combination further teaches wherein the one or more convolutional layers are to generate the plurality of feature maps based, at least in part, on the input image, and wherein the input image is smaller than the output image (Chaudhuri: Abstract, “upscaling”; FIGS. 1-5, image 217 (input), image 114 (output); [0036]; [0055], “image super-resolution CNN 103, which gradually upscales downscaled intermediate image 217 to intermediate image 114”; [0056]; FIG. 8, step 805).

Regarding claims 10 and 20, Chaudhuri in view of Iqbal teaches the processor of claim 1 and the SoC of claim 11. The combination further teaches wherein the one or more convolutional layers are comprised in the neural network (Chaudhuri: FIGS. 1-5).

Response to Arguments

Applicant's arguments filed 02/27/2026 regarding the claim rejections under 35 U.S.C. 103 have been fully considered but are not persuasive.
Applicant argues that Chaudhuri does not disclose "the neural network uses one or more convolutional layers to determine feature maps corresponding to an input image, aggregates the feature maps, and upsamples the aggregated feature maps" as recited previously in claim 1 (Remarks: Page 10, last paragraph), because Chaudhuri does not disclose "upsampl[ing] ... based on one or more weights" (Remarks: Page 11, 1st paragraph) and does not disclose that the feature maps are weighted based on the input image (Remarks: Page 2, 2nd paragraph). Applicant further argues that "weighted" corresponds to the shift, paste, and assemble block in applicant’s FIG. 7, block 708, and "input image" to the self-similarity map in applicant’s FIG. 7, block 706 (Remarks: Page 11, last paragraph). Applicant also argues that the cited combination has not been shown to teach or otherwise render obvious such subject matter as recited in claim 1 (Remarks: Page 13, 2nd paragraph).

Regarding claim 1, and in response to applicant's argument that Chaudhuri fails to disclose upsampling a plurality of feature maps weighted based on the input image, see pages 4-5 of the Final Rejection dated 01/20/2026 and page 3 of this office action. Chaudhuri discloses upsampling feature maps in Chaudhuri’s FIGS. 4-5, etc. As shown in Chaudhuri’s FIG. 4 (see also Remarks: Page 11), the label “u” in layers 423, 427, 431, 435 and 439 indicates upsampling layers that upsample the feature maps at 422, 426, 430, 434 and 438, respectively. It is common knowledge that a feature map is weighted based on the input image, because a convolutional neural network is a deep neural network with many convolution layers and neural network weights. In addition, the feature maps at 422, 426, 430, 434 and 438 are combined feature maps of the convolution path and the skip path.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., shift, paste, assemble, self-similarity map, etc. in applicant’s FIG. 7, blocks 706, 708) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Chaudhuri discloses a neural network performing an upsampling function and Iqbal teaches a hardware platform for implementing neural networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chaudhuri with the teaching of Iqbal by using the same or similar processor comprising a set of graphics cores that are optimized to perform any amount and type of operations associated with machine learning (Iqbal: [0233]; [0289]; [0293]) in order to achieve fast neural network training and best performance of the trained neural network.

Please also note that the amended claims are similar to the applicant’s claim set dated 06/03/2024, which had narrower claim scope compared to the current claim set dated 02/27/2026.
The Final Rejection for the claim set dated 06/03/2024 was mailed on 08/27/2024.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO LIU, whose telephone number is (571) 272-4539. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:30-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAO LIU/
Primary Examiner, Art Unit 2664
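For orientation, the decoder behavior that the rejection maps onto Chaudhuri's FIG. 4 (2×2 upsampling of feature maps, then aggregation with encoder features via skip connections) can be sketched in a few lines of NumPy. This is a generic illustration of the technique, not Chaudhuri's or the applicant's actual network; the function names and shapes are ours.

```python
import numpy as np

def upsample_2x(fmap: np.ndarray) -> np.ndarray:
    """Nearest-neighbor 2x2 upsampling of an (H, W, C) feature map."""
    return fmap.repeat(2, axis=0).repeat(2, axis=1)

def decoder_step(fmap: np.ndarray, skip: np.ndarray) -> np.ndarray:
    """Upsample decoder features, then aggregate them with the
    encoder's skip-connection features by channel concatenation."""
    up = upsample_2x(fmap)
    return np.concatenate([up, skip], axis=-1)

# Toy shapes: a 4x4x8 decoder map meets an 8x8x8 encoder skip map.
out = decoder_step(np.zeros((4, 4, 8)), np.zeros((8, 8, 8)))
print(out.shape)  # (8, 8, 16)
```

In a real network a convolution (whose learned weights depend on training data, hence "weighted") would follow each such step; the sketch only shows the upsample-and-aggregate skeleton the rejection cites.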

Prosecution Timeline

Feb 06, 2023
Application Filed
Nov 28, 2023
Non-Final Rejection — §103
Feb 09, 2024
Interview Requested
Feb 16, 2024
Examiner Interview Summary
Feb 16, 2024
Applicant Interview (Telephonic)
Jun 03, 2024
Response Filed
Aug 21, 2024
Final Rejection — §103
Aug 21, 2024
Examiner Interview (Telephonic)
Nov 25, 2024
Examiner Interview Summary
Nov 25, 2024
Applicant Interview (Telephonic)
Feb 27, 2025
Notice of Allowance
Jun 27, 2025
Request for Continued Examination
Jun 30, 2025
Response after Non-Final Action
Jul 11, 2025
Non-Final Rejection — §103
Aug 14, 2025
Applicant Interview (Telephonic)
Aug 14, 2025
Examiner Interview Summary
Nov 17, 2025
Response Filed
Jan 16, 2026
Final Rejection — §103
Jan 30, 2026
Applicant Interview (Telephonic)
Jan 30, 2026
Examiner Interview Summary
Feb 27, 2026
Request for Continued Examination
Mar 02, 2026
Response after Non-Final Action
Mar 13, 2026
Non-Final Rejection — §103 (current)
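The cadence above can be checked mechanically. A sketch that computes the elapsed time from filing to each rejection (dates taken from the timeline; dividing by an average month length is an approximation):

```python
from datetime import date

filed = date(2023, 2, 6)
rejections = {
    "Non-Final #1": date(2023, 11, 28),
    "Final #1":     date(2024, 8, 21),
    "Non-Final #2": date(2025, 7, 11),
    "Final #2":     date(2026, 1, 16),
    "Non-Final #3": date(2026, 3, 13),
}

# Rough elapsed months from filing to each rejection mailing date.
months_out = {
    name: (mailed - filed).days / 30.44 for name, mailed in rejections.items()
}
for name, months in months_out.items():
    print(f"{name}: ~{months:.0f} months after filing")
```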

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603972: WIRELESS TRANSMITTER IDENTIFICATION IN VISUAL SCENES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12592069: OBJECT RECOGNITION METHOD AND APPARATUS, AND DEVICE AND MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12579834: Information Extraction Method and Apparatus for Text With Layout
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12576873: SYSTEM AND METHOD OF CAPTIONS FOR TRIGGERS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12573175: TARGET TRACKING METHOD, TARGET TRACKING SYSTEM AND ELECTRONIC DEVICE
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 89%
With Interview: 99% (+11.5%)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 290 resolved cases by this examiner. Grant probability derived from career allow rate.
