Prosecution Insights
Last updated: April 19, 2026
Application No. 18/219,594

VIDEO UPSAMPLING USING ONE OR MORE NEURAL NETWORKS

Status: Non-Final OA (§103)
Filed: Jul 07, 2023
Examiner: SALVUCCI, MATTHEW D
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
OA Rounds: 1-2
To Grant: 2y 12m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (above average; 348 granted / 485 resolved; +9.8% vs TC avg)
Interview Lift: +28.5% (strong; resolved cases with vs. without interview)
Typical Timeline: 2y 12m avg prosecution; 17 currently pending
Career History: 502 total applications across all art units
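The headline figures above follow directly from the raw counts. A minimal sketch (Python; the counts are copied from this page, the function name is illustrative):

```python
# Reproduce the headline examiner metrics from the raw counts shown above.
# Counts come from this page; the helper name is illustrative only.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career_rate = allow_rate(348, 485)   # 348 granted of 485 resolved
implied_tc_avg = career_rate - 9.8   # page reports +9.8% vs TC avg

print(f"Career allow rate: {career_rate:.0f}%")     # ~72%
print(f"Implied TC average: {implied_tc_avg:.1f}%")
```

The 72% shown in the dashboard is 348/485 rounded to the nearest whole percent.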

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 60.8% (+20.8% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 485 resolved cases.
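Each statute row pairs the examiner's rate with its offset from the Tech Center average, so the TC baseline can be recovered by subtraction. A small sketch (data copied from the table above; variable names are mine):

```python
# Recover the implied Tech Center baseline for each rejection statute
# from the (examiner rate, delta vs TC avg) pairs in the table above.
stats = {
    "101": (4.6, -35.4),
    "103": (60.8, +20.8),
    "102": (17.0, -23.0),
    "112": (14.3, -25.7),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)  # examiner rate minus its offset
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg}%")
```

On these numbers the implied baseline works out to 40.0% for every statute, which suggests the deltas are measured against a single pooled Tech Center figure rather than per-statute averages.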

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after allowance or after an Office action under Ex Parte Quayle, 25 USPQ 74, 453 O.G. 213 (Comm'r Pat. 1935). Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant's submission filed on 23 September 2025 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Puri et al. (US Pub.
2014/0328400), hereinafter Puri, in view of Vemulapalli et al. (US Pub. 2019/0206026), hereinafter Vemulapalli. Regarding claim 1, Puri discloses a system-on-a-chip (SoC), comprising: a central processing unit (CPU) (Paragraph [0372]: Processor 1710 may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). In various implementations, processor 1710 may be dual-core processor(s), dual-core mobile processor(s), and so forth); memory (Fig. 16; Paragraph [0358]: FIG. 16 is an illustrative diagram of example video coding system 1600, arranged in accordance with at least some implementations of the present disclosure. In the illustrated implementation, video coding system 1600 may include imaging device(s) 1601, video encoder 100, video decoder 200 (and/or a video coder implemented via logic circuitry 1650 of processing unit(s) 1620), an antenna 1602, one or more processor(s) 1603, one or more memory store(s) 1604, and/or a display device 1605); a Peripheral Component Interconnect (PCI) communication bus (Paragraph [0384]: drivers (not shown) may include technology to enable users to instantly turn on and off platform 1702 like a television with the touch of a button after initial boot-up, when enabled, for example. Program logic may allow platform 1702 to stream content to media adaptors or other content services device(s) 1730 or content delivery device(s) 1740 even when the platform is turned "off" In addition, chipset 1705 may include hardware and/or software support for 5.1 surround sound audio and/or high definition 7.1 surround sound audio, for example. Drivers may include a graphics driver for integrated graphics platforms. 
In various embodiments, the graphics driver may comprise a peripheral component interconnect (PCI) Express graphics card); an upsampler to infer a higher resolution image from an input frame, wherein the higher resolution image is to be blended with a prior frame (Paragraph [0362]: In some implementations, the video encoder may include an image buffer and a graphics processing unit. The graphics processing unit may be configured to motion compensate a previously generated super resolution frame to generate a motion compensated super resolution reference frame. The graphics processing unit may be further configured to upsample a currently decoded frame to generate an upsampled super resolution reference frame. The graphics processing unit may be further configured to blend the motion compensated super resolution reference frame and the upsampled super resolution reference frame to generate a current super resolution frame. The graphics processing unit may be further configured to de-interleave the current super resolution frame to provide a plurality of super resolution based reference pictures for motion estimation of a next frame. The graphics processing unit may be further configured to store the plurality of super resolution based reference pictures); and a graphics processing unit (GPU) (Paragraph [0375]: Graphics subsystem 1715 may perform processing of images such as still or video for display. Graphics subsystem 1715 may be a graphics processing unit (GPU) or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple graphics subsystem 1715 and display 1720. For example, the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. Graphics subsystem 1715 may be integrated into processor 1710 or chipset 1705. 
In some implementations, graphics subsystem 1715 may be a stand-alone device communicatively coupled to chipset 1705) including: a general processing cluster (GPC), where the GPC includes streaming multiprocessors (SMs) (Paragraph [0372]: Processor 1710 may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). In various implementations, processor 1710 may be dual-core processor(s), dual-core mobile processor(s), and so forth) comprising: an instruction cache (Paragraphs [0360]-[0372]: logic circuitry 1650 may be implemented via hardware, video coding dedicated hardware, or the like, and processor(s) 1603 may implemented general purpose software, operating systems, or the like. In addition, memory store(s) 1604 may be any type of memory such as volatile memory (e.g., Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), etc.) or non-volatile memory (e.g., flash memory, etc.), and so forth. In a non-limiting example, memory store(s) 1604 may be implemented by cache memory. In some examples, logic circuitry 1650 may access memory store(s) 1604 (for implementation of an image buffer for example). In other examples, logic circuitry 1650 and/or processing unit(s) 1620 may include memory stores (e.g., cache or the like) for the implementation of an image buffer or the like…Processor 1710 may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). 
In various implementations, processor 1710 may be dual-core processor(s), dual-core mobile processor(s), and so forth); a dispatch unit (Paragraph [0372]: Processor 1710 may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). In various implementations, processor 1710 may be dual-core processor(s), dual-core mobile processor(s), and so forth); cores (Paragraph [0372]: Processor 1710 may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). In various implementations, processor 1710 may be dual-core processor(s), dual-core mobile processor(s), and so forth); a load/store unit (LSU) (Paragraph [0396]: implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores" may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor); shared memory (Paragraph [0386]: system 1700 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system 1700 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum and so forth. 
When implemented as a wired system, system 1700 may include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and the like. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, and so forth); and an L1 cache (Paragraph [0360]: Video coding system 1600 also may include optional processor(s) 1603, which may similarly include application-specific integrated circuit (ASIC) logic, graphics processor(s), general purpose processor(s), or the like. In some examples, logic circuitry 1650 may be implemented via hardware, video coding dedicated hardware, or the like, and processor(s) 1603 may implemented general purpose software, operating systems, or the like. In addition, memory store(s) 1604 may be any type of memory such as volatile memory (e.g., Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), etc.) or non-volatile memory (e.g., flash memory, etc.), and so forth. In a non-limiting example, memory store(s) 1604 may be implemented by cache memory. In some examples, logic circuitry 1650 may access memory store(s) 1604 (for implementation of an image buffer for example). In other examples, logic circuitry 1650 and/or processing unit(s) 1620 may include memory stores (e.g., cache or the like) for the implementation of an image buffer or the like). Puri does not explicitly disclose upsampler including at least one neural network. However, Vemulapalli teaches video upsampling (Fig. 2; Fig. 6; Paragraphs [0031 ]-[0035]), further comprising an upsampler including at least one neural network (Fig. 2; Fig. 
6; Paragraphs [0031]-[0035]: the computing system can upsample the current low-resolution image frame to a high-resolution space of the warped previous estimated high-resolution image frame to map the warped previous estimated high-resolution image frame to the current low-resolution image-frame…some implementations, the computing system can input the warped previous estimated high-resolution image frame and the current low-resolution image frame into a machine-learned frame estimation model…the current estimated high-resolution image frame can be passed back for use as an input in the next iteration. That is, the current estimated high-resolution image frame can be used as the previous estimated high-resolution image frame at the next iteration, in which a next subsequent low resolution image frame is super-resolved…the machine-learned recurrent super-resolution model can include one or more neural networks (e.g., deep neural networks), support vector machines, decision trees, ensemble models, k-nearest neighbors models, Bayesian networks, or other types of models including linear models and/or non-linear models. Example neural networks can include feed-forward neural networks, convolutional neural networks, recurrent neural networks (e.g., long short-term memory (LSTM) recurrent neural networks, gated recurrent unit (GRU) neural networks), or other forms of neural networks). Vemulapalli teaches that this will allow for increased resolution of imagery (Abstract). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Puri with the features of above as taught by Vemulapalli so as to allow for increased resolution as presented by Vemulapalli. 
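The combination the rejection relies on (Puri's blending of a motion-compensated super-resolution reference frame with an upsampled current frame, per paragraph [0362], with Vemulapalli's neural upsampler substituted in) can be sketched as follows. This is an illustrative reconstruction, not code from either reference: motion compensation is reduced to an identity, and the neural upsampler is replaced by a nearest-neighbor stand-in.

```python
import numpy as np

def upsample_2x(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the neural upsampler taught by Vemulapalli;
    here a simple nearest-neighbor 2x enlargement."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

def blend_super_resolution(prev_sr: np.ndarray,
                           current_lr: np.ndarray,
                           alpha: float = 0.5) -> np.ndarray:
    """Puri-style blend (para. [0362]): combine the motion-compensated
    previous super-resolution frame with the upsampled current frame.
    Motion compensation is omitted (identity) for brevity."""
    upsampled = upsample_2x(current_lr)
    return alpha * prev_sr + (1.0 - alpha) * upsampled

prev_sr = np.zeros((4, 4))     # previously generated SR frame (toy size)
current_lr = np.ones((2, 2))   # currently decoded low-resolution frame
sr = blend_super_resolution(prev_sr, current_lr)
print(sr.shape)                # (4, 4)
```

The blend weight `alpha` is a hypothetical parameter; the references describe blending without specifying a fixed weighting here.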
Regarding claim 2, Puri, in view of Vemulapalli teaches the SoC of claim 1, Puri discloses wherein the prior frame is a prior inferred frame (Paragraph [0362]: In some implementations, the video encoder may include an image buffer and a graphics processing unit. The graphics processing unit may be configured to motion compensate a previously generated super resolution frame to generate a motion compensated super resolution reference frame. The graphics processing unit may be further configured to upsample a currently decoded frame to generate an upsampled super resolution reference frame. The graphics processing unit may be further configured to blend the motion compensated super resolution reference frame and the upsampled super resolution reference frame to generate a current super resolution frame. The graphics processing unit may be further configured to de-interleave the current super resolution frame to provide a plurality of super resolution based reference pictures for motion estimation of a next frame. The graphics processing unit may be further configured to store the plurality of super resolution based reference pictures). Regarding claim 3, Puri, in view of Vemulapalli teaches the SoC of claim 1, Puri discloses wherein the SMs further comprise a register file (Paragraph [0395]: Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. 
Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints). Regarding claim 4, Puri, in view of Vemulapalli teaches the SoC of claim 1, Puri discloses wherein the SMs further comprise one or more special function units (SFUs) (Paragraph [0267]: Process 1200 may continue at operation 1214, "Perform Transforms on Potential Coding Partitionings", where fixed or content adaptive transforms with various block sizes may be performed on various potential coding partitionings of partition prediction error data. For example, partition prediction error data may be partitioned to generate a plurality of coding partitions. For example, the partition prediction error data may be partitioned by a bi-tree coding partitioner module or a k-d tree coding partitioner module of coding partitions generator module 107 as discussed herein. In some examples, partition prediction error data associated with an F/B- or P-picture may be partitioned by a bi-tree coding partitioner module. In some examples, video data associated with an I-picture (e.g., tiles or super-fragments in some examples) may be partitioned by a k-d tree coding partitioner module. 
In some examples, a coding partitioner module may be chosen or selected via a switch or switches; Paragraph [0375]: Graphics subsystem 1715 may perform processing of images such as still or video for display. Graphics subsystem 1715 may be a graphics processing unit (GPU) or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple graphics subsystem 1715 and display 1720. For example, the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. Graphics subsystem 1715 may be integrated into processor 1710 or chipset 1705. In some implementations, graphics subsystem 1715 may be a stand-alone device communicatively coupled to chipset 1705; Paragraph [0386]: system 1700 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system 1700 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum and so forth). Regarding claim 5, Puri, in view of Vemulapalli teaches the SoC of claim 1, Puri discloses wherein the SMs each further comprise one or more interconnects (Paragraph [0384]: drivers (not shown) may include technology to enable users to instantly turn on and off platform 1702 like a television with the touch of a button after initial boot-up, when enabled, for example. Program logic may allow platform 1702 to stream content to media adaptors or other content services device(s) 1730 or content delivery device(s) 1740 even when the platform is turned "off" In addition, chipset 1705 may include hardware and/or software support for 5.1 surround sound audio and/or high definition 7.1 surround sound audio, for example. 
Drivers may include a graphics driver for integrated graphics platforms. In various embodiments, the graphics driver may comprise a peripheral component interconnect (PCI) Express graphics card). Regarding claim 6, Puri, in view of Vemulapalli teaches the SoC of claim 1, Puri discloses wherein the GPC further comprises a raster engine (Paragraph [0372]: Processor 1710 may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). In various implementations, processor 1710 may be dual-core processor(s), dual-core mobile processor(s), and so forth). Regarding claim 7, Puri, in view of Vemulapalli teaches the SoC of claim 1, Puri discloses further comprising a hub to interface with one or more GPU interconnects (Paragraph [0375]: Graphics subsystem 1715 may perform processing of images such as still or video for display. Graphics subsystem 1715 may be a graphics processing unit (GPU) or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple graphics subsystem 1715 and display 1720. For example, the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. Graphics subsystem 1715 may be integrated into processor 1710 or chipset 1705. In some implementations, graphics subsystem 1715 may be a stand-alone device communicatively coupled to chipset 1705). Regarding claim 8, Puri, in view of Vemulapalli teaches the SoC of claim 1, Puri discloses wherein the GPU further comprises an input/output (I/O) unit to interface with the PCI communication bus (Paragraph [0386]: system 1700 may be implemented as a wireless system, a wired system, or a combination of both. 
When implemented as a wireless system, system 1700 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum and so forth. When implemented as a wired system, system 1700 may include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and the like. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, and so forth). Regarding claim 9, Puri, in view of Vemulapalli teaches the SoC of claim 1, Puri discloses wherein the GPU further comprises a crossbar (Xbar) (Paragraph [0375]: Graphics subsystem 1715 may perform processing of images such as still or video for display. Graphics subsystem 1715 may be a graphics processing unit (GPU) or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple graphics subsystem 1715 and display 1720. For example, the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. Graphics subsystem 1715 may be integrated into processor 1710 or chipset 1705. 
In some implementations, graphics subsystem 1715 may be a stand-alone device communicatively coupled to chipset 1705; Paragraph [0384]: drivers (not shown) may include technology to enable users to instantly turn on and off platform 1702 like a television with the touch of a button after initial boot-up, when enabled, for example. Program logic may allow platform 1702 to stream content to media adaptors or other content services device(s) 1730 or content delivery device(s) 1740 even when the platform is turned "off" In addition, chipset 1705 may include hardware and/or software support for 5.1 surround sound audio and/or high definition 7.1 surround sound audio, for example. Drivers may include a graphics driver for integrated graphics platforms. In various embodiments, the graphics driver may comprise a peripheral component interconnect (PCI) Express graphics card). Regarding claim 10, Puri, in view of Vemulapalli teaches the SoC of claim 1, Puri discloses wherein the GPU further comprises a memory partition unit (Paragraph [0361]: video encoder 100 implemented via logic circuitry may include an image buffer (e.g., via either processing unit(s) 1620 or memory store(s) 1604)) and a graphics processing unit (e.g., via processing unit(s) 1620). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include video encoder 100 as implemented via logic circuitry 1650 to embody the various modules as discussed with respect to FIG. 1 and/or any other encoder system or subsystem described herein. For example, the graphics processing unit may include coding partitions generator logic circuitry, adaptive transform logic circuitry, content pre-analyzer, encode controller logic circuitry, adaptive entropy encoder logic circuitry, and so on. The logic circuitry may be configured to perform the various operations as discussed herein). 
Regarding claim 11, the limitations of this claim substantially correspond to the limitations of claim 1; thus they are rejected on similar grounds. Regarding claim 12, the limitations of this claim substantially correspond to the limitations of claim 2; thus they are rejected on similar grounds. Regarding claim 13, Puri, in view of Vemulapalli teaches the method of claim 11, Vemulapalli discloses wherein the prior frame comprises a higher resolution and is to be inferred by the at least one neural network (Fig. 2; Fig. 6; Paragraphs [0031]-[0035]: the computing system can upsample the current low-resolution image frame to a high-resolution space of the warped previous estimated high-resolution image frame to map the warped previous estimated high-resolution image frame to the current low-resolution image-frame…some implementations, the computing system can input the warped previous estimated high-resolution image frame and the current low-resolution image frame into a machine-learned frame estimation model…the current estimated high-resolution image frame can be passed back for use as an input in the next iteration. That is, the current estimated high-resolution image frame can be used as the previous estimated high-resolution image frame at the next iteration, in which a next subsequent low resolution image frame is super-resolved…the machine-learned recurrent super-resolution model can include one or more neural networks (e.g., deep neural networks), support vector machines, decision trees, ensemble models, k-nearest neighbors models, Bayesian networks, or other types of models including linear models and/or non-linear models. Example neural networks can include feed-forward neural networks, convolutional neural networks, recurrent neural networks (e.g., long short-term memory (LSTM) recurrent neural networks, gated recurrent unit (GRU) neural networks), or other forms of neural networks). 
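The recurrence the examiner quotes from Vemulapalli for claim 13 (each estimated high-resolution frame is fed back as the "previous" estimate when super-resolving the next low-resolution frame) can be sketched as a loop. All names here are hypothetical: the machine-learned frame estimation model is reduced to simple averaging, warping is omitted, and the upsampler is a nearest-neighbor stand-in.

```python
import numpy as np

def upsample_2x(lr: np.ndarray) -> np.ndarray:
    """Nearest-neighbor 2x stand-in for mapping the LR frame into HR space."""
    return lr.repeat(2, axis=0).repeat(2, axis=1)

def estimate_hr(prev_hr: np.ndarray, lr: np.ndarray) -> np.ndarray:
    """Stand-in for the machine-learned frame estimation model: it sees the
    (warped) previous HR estimate and the upsampled current LR frame."""
    return 0.5 * (prev_hr + upsample_2x(lr))

# Recurrent super-resolution over a short sequence of LR frames:
# each output is reused as the previous estimate at the next step.
lr_frames = [np.full((2, 2), v, dtype=float) for v in (1.0, 2.0, 3.0)]
prev_hr = upsample_2x(lr_frames[0])      # bootstrap the first estimate
for lr in lr_frames:
    prev_hr = estimate_hr(prev_hr, lr)   # feedback, as quoted for claim 13

print(prev_hr.shape)   # (4, 4)
```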
Regarding claim 14, Puri, in view of Vemulapalli teaches the method of claim 11, Puri discloses wherein the higher resolution image is to be blended with pixel values of a prior inferred frame (Paragraph [0350]: a process 2000 for video coding may further include generating, via the motion estimator module, motion data associated with a prediction partition of the next frame based at least in part one or more of the plurality of super resolution based reference pictures. Motion compensation may be performed, via the characteristics and motion compensated filtering predictor module, based at least in part on the motion data and the one or more of the plurality of super resolution based reference pictures to generate predicted partition data for the prediction partition. The predicted partition data may be differenced, via a differencer, with original pixel data associated with the prediction partition to generate a prediction error data partition. The prediction error data partition may be partitioned, via a coding partitions generator, to generate a plurality of coding partitions. A forward transform may be performed, via an adaptive transform module, on the plurality of coding partitions to generate transform coefficients associated with the plurality of coding partitions. The transform coefficients may be quantized, via an adaptive quantize module, to generate quantized transform coefficients. The quantized transform coefficients and the motion data may be entropy encoded, via an adaptive entropy encoder, into a bitstream; Paragraph [0362]: In some implementations, the video encoder may include an image buffer and a graphics processing unit. The graphics processing unit may be configured to motion compensate a previously generated super resolution frame to generate a motion compensated super resolution reference frame. 
The graphics processing unit may be further configured to upsample a currently decoded frame to generate an upsampled super resolution reference frame. The graphics processing unit may be further configured to blend the motion compensated super resolution reference frame and the upsampled super resolution reference frame to generate a current super resolution frame. The graphics processing unit may be further configured to de-interleave the current super resolution frame to provide a plurality of super resolution based reference pictures for motion estimation of a next frame. The graphics processing unit may be further configured to store the plurality of super resolution based reference pictures). Regarding claim 15, Puri, in view of Vemulapalli teaches the method of claim 11, Puri discloses wherein the GPU further comprises a scheduler unit (Paragraph [0293]: Various components of the systems described herein may be implemented in software, firmware, and/or hardware and/or any combination thereof. For example, various components of system 300 may be provided, at least in part, by hardware of a computing System-on-a-Chip (SoC) such as may be found in a computing system such as, for example, a smart phone. Those skilled in the art may recognize that systems described herein may include additional components that have not been depicted in the corresponding figures. For example, the systems discussed herein may include additional components such as bit stream multiplexer or de-multiplexer modules and the like that have not been depicted in the interest of clarity). Regarding claim 16, the limitations of this claim substantially correspond to the limitations of claim 6; thus they are rejected on similar grounds. Regarding claim 17, the limitations of this claim substantially correspond to the limitations of claim 12; thus they are rejected on similar grounds. 
Regarding claim 18, Puri, in view of Vemulapalli teaches the method of claim 11, Puri discloses wherein the SoC further comprises a network interface (Paragraph [0386]: system 1700 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system 1700 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum and so forth. When implemented as a wired system, system 1700 may include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and the like. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, and so forth). Regarding claim 19, Puri, in view of Vemulapalli teaches the method of claim 11, Puri discloses wherein the SoC further comprises one or more display devices (Fig. 16; Paragraph [0358]: FIG. 16 is an illustrative diagram of example video coding system 1600, arranged in accordance with at least some implementations of the present disclosure. In the illustrated implementation, video coding system 1600 may include imaging device(s) 1601, video encoder 100, video decoder 200 (and/or a video coder implemented via logic circuitry 1650 of processing unit(s) 1620), an antenna 1602, one or more processor(s) 1603, one or more memory store(s) 1604, and/or a display device 1605). 
Regarding claim 20, Puri, in view of Vemulapalli, teaches the method of claim 11. Vemulapalli discloses further comprising inferring the higher resolution image by the at least one neural network based, at least in part, on a lower resolution input frame (Fig. 2; Fig. 6; Paragraphs [0031]-[0035]: the computing system can upsample the current low-resolution image frame to a high-resolution space of the warped previous estimated high-resolution image frame to map the warped previous estimated high-resolution image frame to the current low-resolution image-frame…some implementations, the computing system can input the warped previous estimated high-resolution image frame and the current low-resolution image frame into a machine-learned frame estimation model…the current estimated high-resolution image frame can be passed back for use as an input in the next iteration. That is, the current estimated high-resolution image frame can be used as the previous estimated high-resolution image frame at the next iteration, in which a next subsequent low resolution image frame is super-resolved…the machine-learned recurrent super-resolution model can include one or more neural networks (e.g., deep neural networks), support vector machines, decision trees, ensemble models, k-nearest neighbors models, Bayesian networks, or other types of models including linear models and/or non-linear models. Example neural networks can include feed-forward neural networks, convolutional neural networks, recurrent neural networks (e.g., long short-term memory (LSTM) recurrent neural networks, gated recurrent unit (GRU) neural networks), or other forms of neural networks).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW D SALVUCCI whose telephone number is (571)270-5748. The examiner can normally be reached M-F, 7:30-4:00 PT.
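The recurrent super-resolution loop the examiner quotes from Vemulapalli (warp the previous high-resolution estimate, upsample the current low-resolution frame, feed both to a frame estimation model, and pass the output forward) can be sketched as below. This is a hedged toy, not Vemulapalli's system: the identity warp and the pixel-average "model" are placeholders for the optical-flow warp and trained network the reference describes.

```python
def warp(prev_hr, motion):
    # Motion-compensated warp of the previous HR estimate; identity here
    # (real systems warp using estimated optical flow).
    return [row[:] for row in prev_hr]

def upsample2x(lr):
    # Map the current LR frame into the HR space (nearest-neighbor, assumed).
    return [[px for px in row for _ in (0, 1)] for row in lr for _ in (0, 1)]

def frame_estimation_model(warped_prev_hr, up_lr):
    # Stand-in for the machine-learned frame estimation model: a pixel-wise
    # average of its two inputs instead of a trained network.
    return [[(a + b) / 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(warped_prev_hr, up_lr)]

def recurrent_super_resolve(lr_frames):
    prev_hr = upsample2x(lr_frames[0])  # bootstrap the recurrence
    hr_frames = []
    for lr in lr_frames:
        warped = warp(prev_hr, motion=None)
        hr = frame_estimation_model(warped, upsample2x(lr))
        hr_frames.append(hr)
        prev_hr = hr  # current estimate feeds the next iteration
    return hr_frames
```

The key structural point for claim 20 is the feedback edge: each current estimated high-resolution frame becomes the previous estimate when the next low-resolution frame is super-resolved.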
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, XIAO WU, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW SALVUCCI/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Jul 07, 2023: Application Filed
Sep 23, 2025: Request for Continued Examination
Nov 04, 2025: Non-Final Rejection — §103
Nov 04, 2025: Response after Non-Final Action
Jan 09, 2026: Interview Requested
Jan 20, 2026: Examiner Interview Summary
Jan 20, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597198: RAY TRACING METHOD AND APPARATUS BASED ON ATTENTION FOR DYNAMIC SCENES (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597207: Camera Reprojection for Faces (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579753: Phased Capture Assessment and Feedback for Mobile Dimensioning (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561899: Vector Graphic Parsing and Transformation Engine (granted Feb 24, 2026; 2y 5m to grant)
Patent 12548256: IMAGE PROCESSING APPARATUS FOR GENERATING SURFACE PROFILE OF THREE-DIMENSIONAL GEOMETRIC MODEL, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview (+28.5%): 99%
Median Time to Grant: 2y 12m
PTA Risk: Low
Based on 485 resolved cases by this examiner. Grant probability derived from career allow rate.
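The projection figures are internally consistent under a simple reading: 72% is the rounded career allow rate (348 granted of 485 resolved), and 99% follows if the +28.5% interview lift is added to it and capped. The sketch below is a hedged reconstruction of that arithmetic, not the tool's actual model; the additive lift and the 99% cap are assumptions.

```python
def career_allow_rate(granted, resolved):
    # Career allow rate as a rounded percentage: 348 of 485 resolved -> 72.
    return round(100 * granted / resolved)

def with_interview(base_pct, lift_pct, cap=99):
    # Assumed model: the interview lift is added to the base rate and
    # capped, since no projection reports certainty.
    return min(cap, round(base_pct + lift_pct))

base = career_allow_rate(348, 485)     # 72
adjusted = with_interview(base, 28.5)  # 99
```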
