Prosecution Insights
Last updated: April 19, 2026
Application No. 18/650,152

Context Aware Protocol for Single Video Memory Map

Non-Final OA §103
Filed: Apr 30, 2024
Examiner: GE, JIN
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: DELL PRODUCTS, L.P.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80%, above average (416 granted / 520 resolved; +18.0% vs TC avg)
Interview Lift: +18.0% (resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 38 currently pending
Career History: 558 total applications across all art units
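The headline figures above are internally consistent and easy to reproduce. A minimal sketch in Python, using only the numbers reported on this page (variable names are mine):

```python
# Career figures for this examiner, as reported above.
granted = 416
resolved = 520
interview_lift = 0.18  # reported lift for resolved cases with an interview

allow_rate = granted / resolved  # career allow rate

print(f"Career allow rate: {allow_rate:.0%}")                  # 80%
print(f"With interview: {allow_rate + interview_lift:.0%}")    # 98%
```

The 98% "with interview" figure is simply the 80% career allow rate plus the +18.0% interview lift, which is why the two projection cards below agree.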

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Deltas are relative to the estimated Tech Center average • Based on career data from 520 resolved cases
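The Tech Center baseline implied by these deltas can be recovered by simple arithmetic. A small sketch in Python, using only the figures reported above (the variable names are mine; the report does not define exactly what each per-statute rate measures):

```python
# Per-statute rates for this examiner and reported deltas vs the
# Tech Center average, taken from the figures above.
examiner = {"101": 9.0, "103": 60.2, "102": 12.0, "112": 11.0}
delta_vs_tc = {"101": -31.0, "103": +20.2, "102": -28.0, "112": -29.0}

# Implied TC average = examiner rate minus the reported delta.
tc_avg = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Every statute backs out to the same 40.0% baseline, which suggests the deltas were computed against a single Tech Center average rather than per-statute averages.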

Office Action

§103
DETAILED ACTION

Claims 1-12 are pending in the present application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 2 and 8 are objected to because of the following informalities: “an iGPU video buffer, a dPGU video buffer” should be “an integrated graphics processing unit (iGPU) video buffer, a discrete integrated graphics processing unit (dPGU) video buffer”. Claims 4 and 10 are objected to because of the following informalities: “Comprising Monitoring Rendered Video For video faults” should be “comprising monitoring rendered video for video faults”. Claims 6 and 12 are objected to because of the following informalities: “detecting A Refresh rate issue” should be “detecting a refresh rate issue”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2021/0329306 to Liu et al. in view of U.S. Patent 10,867,655 to Harms et al.

Regarding claim 1, Liu et al.
teach a method comprising (abstract): enumerating multiple video memory resources in an information handling system (par 0297, “although various multi-core processors 1605 and GPUs 1610 may be physically coupled to a particular memory 1601, 1620, respectively, and/or a unified memory architecture may be implemented in which a virtual system address space (also referred to as “effective address” space) is distributed among various physical memories. For example, processor memories 1601(1)-1601(M) may each comprise 64 GB of system memory address space and GPU memories 1620(1)-1620(N) may each comprise 32 GB of system memory address space resulting in a total of 256 GB addressable memory when M=2 and N=4. Other values for N and M are possible”, par 0329-0330, “ a unified memory is used, addressable via a common virtual memory address space used to access physical processor memories 1601(1)-1601(N) and GPU memories 1620(1)-1620(N). In this implementation, operations executed on GPUs 1610(1)-1610(N) utilize a same virtual/effective memory address space to access processor memories 1601(1)-1601(M) and vice versa, thereby simplifying programmability. In at least one embodiment, a first portion of a virtual/effective address space is allocated to processor memory 1601(1), a second portion to second processor memory 1601(N), a third portion to GPU memory 1620(1), and so on. 
In at least one embodiment, an entire virtual/effective memory space (sometimes referred to as an effective address space) is thereby distributed across each of processor memories 1601 and GPU memories 1620, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory”); configuring a virtual video memory buffer encompassing two or more of the video memory resources (par 0303, “accelerator integration circuit 1636 includes a memory management unit (MMU) 1639 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 1614”, par 0329, “As illustrated in FIG. 16F, in at least one embodiment, a unified memory is used, addressable via a common virtual memory address space used to access physical processor memories 1601(1)-1601(N) and GPU memories 1620(1)-1620(N). In this implementation, operations executed on GPUs 1610(1)-1610(N) utilize a same virtual/effective memory address space to access processor memories 1601(1)-1601(M) and vice versa, thereby simplifying programmability. In at least one embodiment, a first portion of a virtual/effective address space is allocated to processor memory 1601(1), a second portion to second processor memory 1601(N), a third portion to GPU memory 1620(1), and so on. 
In at least one embodiment, an entire virtual/effective memory space (sometimes referred to as an effective address space) is thereby distributed across each of processor memories 1601 and GPU memories 1620, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory”); determining a context of an object interface associated with an application program (par 0304, “a set of registers 1645 store context data for threads executed by graphics processing engines 1631(1)-1631(N) and a context management circuit 1648 manages thread contexts. For example, context management circuit 1648 may perform save and restore operations to save and restore contexts of various threads during contexts switches (e.g., where a first thread is saved and a second thread is stored so that a second thread can be execute by a graphics processing engine)”, par 0313, “a process handle may be an implementation-specific value provided to a host process when registering its context with graphics processing engine 1631(1)-1631(N) (that is, calling system software to add a process element to a process element linked list)”). Liu et al., however, do not teach invoking a runtime-adjustable video frame driver to access the virtual video memory buffer based at least in part on the context. In a related endeavor, Harms et al.
teach configuring a virtual video memory buffer encompassing two or more of the video memory resources(col 5:14-28, “receiving one or more configuration requests for the memory device, the memory device comprising a plurality of memory portions, each of the plurality of memory portions associated with respective one or more operational parameters; determining whether to grant the one or more configuration requests for the memory device”, col 19:18-31, “configuration registers can dynamically adjust the row decoder 816 and/or corresponding RW amp 822 so as to operate according to e.g., 1T1C or 2T2C operation for individual ones of the memory arrays 821”), determining a context of an object interface associated with an application program (col 7:17-32, “the various techniques described herein may be specifically tailored for a given application and a memory device's operating characteristics may be altered when another application is invoked. Specific operating examples are also disclosed in which these memory devices may be more suitable than prior memory device architectures. For example, specific operating examples are given in the context of video buffering applications, Internet of Things (IoT) applications, and fog networking implementations”, col 24:54-67, “a user space initiated configuration request, alternative embodiments may enable an operating system (OS) to initiate configuration/reconfiguration without being requested by e.g., a user space application. 
In such implementations, an OS may dynamically manage memory configurations based on current application loads and/or system configurations.”); and invoking a runtime-adjustable video frame driver to access the virtual video memory buffer based at least in part on the context (col 12:37-43, “While the foregoing example is presented in the context of DRAM refresh, artisans of ordinary skill in the related arts will readily appreciate that most dynamic memory technologies may be selectively modified to increase or decrease volatility (BER as a function of time) so as to trade-off other memory performances”, col 5:14-28, “determining whether to grant the one or more configuration requests for the memory device; in response to the determining, implementing the one or more configuration requests within the memory device, the implementing comprising dynamically reconfiguring, during run-time, respective one or more operational parameters associated with at least a portion of the plurality of memory portions of the memory device; and operating the memory device in accordance with the implementing”, col 21:35-43, “Referring back to FIG. 9, an application 902 (which is assumed to be untrusted) may make a request to, for example, dynamically configure a memory array 912 of the memory device 910 through an application programming interface (API)”, col 21:54-67, “If the request to dynamically configure the memory array 912 of the memory device 910 is allowed by the trust protocols of the computer system 900, then the request is sent to the memory driver 906. The memory driver 906 may then reconfigure the memory arrays 912 and/or associated logic of the memory device 910 to accommodate the request”, col 25:3-15, “the one or more operational parameters may be dynamically adjusted during operation without affecting the memory contents”). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Liu et al. to include invoking a runtime-adjustable video frame driver to access the virtual video memory buffer based at least in part on the context, as taught by Harms et al. (in response to the determining, implementing the one or more configuration requests within the memory device, the implementing comprising dynamically reconfiguring, during run-time, respective one or more operational parameters associated with at least a portion of the plurality of memory portions of the memory device), in order to perform dynamic adjustment of performance.

Regarding claim 2, Liu et al. as modified by Harms et al. teach all the limitations of claim 1, and Liu et al. further teach wherein the multiple video memory resources include: an iGPU video buffer, a dPGU video buffer; and a host video buffer (par 0188, par 0536: iGPU; par 0226: dPGU; par 0343: framebuffer).

Regarding claim 3, Liu et al. as modified by Harms et al. teach all the limitations of claim 1, and further teach wherein the enumerating of the multiple video memory resources is performed during a pre-boot phase of the information handling system (Liu et al.: par 0210; Harms et al.: col 24:54-67, “While the illustrated embodiment illustrates a user space initiated configuration request, alternative embodiments may enable an operating system (OS) to initiate configuration/reconfiguration without being requested by e.g., a user space application. In such implementations, an OS may dynamically manage memory configurations based on current application loads and/or system configurations.”).

Regarding claim 7, Liu et al. teach an information handling system comprising (abstract): a central processing unit; a memory including processor-executable instructions that, when executed by the processor, cause the system to perform operations (par 0267-0269).
The remaining limitations of the claim are similar in scope to claim 1 and are rejected under the same rationale.

Regarding claims 8-9, Liu et al. as modified by Harms et al. teach all the limitations of claim 7; claims 8-9 are similar in scope to claims 2-3 and are rejected under the same rationale.

Claims 4-6 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2021/0329306 to Liu et al. in view of U.S. Patent 10,867,655 to Harms et al., further in view of Chinese publication CN 117097883 to Sun et al.

Regarding claim 4, Liu et al. as modified by Harms et al. teach all the limitations of claim 1, but are silent regarding further comprising monitoring rendered video for video faults. In a related endeavor, Sun et al. teach monitoring rendered video for video faults (abstract, par 0062-0063, monitor frame loss). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Liu et al. as modified by Harms et al. to include monitoring rendered video for video faults, as taught by Sun et al., in order to conveniently and rapidly determine the cause of a frame loss fault by monitoring multi-frame images and thereby improve the user experience.

Regarding claim 5, Liu et al. as modified by Harms et al. and Sun et al. teach all the limitations of claim 4, and Sun et al. teach wherein the video faults comprise refresh rate faults (abstract, monitor frame loss; par 0062-0063, determine and report the frame generated by the frame loss fault and the frame loss fault value (namely the frame loss time length) to the cloud).

Regarding claim 6, Liu et al. as modified by Harms et al. and Sun et al. teach all the limitations of claim 4, and Harms et al.
teach responsive to detecting A Refresh rate issue, dynamically adjusting the virtual video memory buffer (abstract, “The adjusting of the performance for the partitioned memory includes one or more of enabling/disabling refresh operations, altering a refresh rate for the partitioned memory, enabling/disabling error correcting code (ECC) circuitry for the partitioned memory, and/or altering a memory cell architecture for the partitioned memory”, col 6:59-67 and col 7:1-16, “other applications may be tolerant to higher bit error rates associated with memory storage and hence, the memory refresh rates may be adjusted so as to occur less frequently”, col 12:11-26, “An application can intelligently use the memory performance characteristics to select a refresh rate that both minimizes memory bandwidth for refresh while still providing acceptable reliability. For example, a first memory array may use a refresh rate (e.g., 60 ms) that results in low bit error rates (e.g., 1×10.sup.18) for the first memory array; however, a second memory array may use a refresh rate (e.g., 90 ms) that results in a slightly higher bit error rate than the first memory array (e.g., 1×10.sup.−17).”, col 19:9-17, “configuration registers can dynamically adjust the refresh control circuit 814 so as to control the rate of refresh for individual ones of the memory arrays 821. For example, the refresh control circuit 814 may disable refresh operations for memory array 821a, may implement a first refresh rate for another memory array 821b, and may implement a second refresh rate (that differs from the first refresh rate) for yet another memory array 821c”, col 27:13-36, “A second application that has an intermediate level of priority requests and is granted a memory array that is refreshed at a standard refresh rate for error-free operation. Subsequently thereafter, a third application that has a lowest level of priority requests a reduced refresh rate”). Regarding claims 10-12, Liu et al. 
as modified by Harms et al. teach all the limitations of claim 7; claims 10-12 are similar in scope to claims 4-6 and are rejected under the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge, whose telephone number is (571) 272-5556. The examiner can normally be reached 8:00 to 5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN GE/
Primary Examiner, Art Unit 2619
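For orientation, claim 1 as characterized in the rejection recites four steps: enumerating video memory resources, configuring a virtual buffer spanning two or more of them, determining a context of an object interface, and invoking a runtime-adjustable video frame driver. The following Python sketch is purely illustrative of that claim structure; every name, type, and value here is invented, and nothing in it comes from the application or the cited references:

```python
from dataclasses import dataclass

# Hypothetical illustration of the four steps recited in claim 1.
# All identifiers and values are invented for this sketch.

@dataclass
class VideoMemoryResource:
    name: str     # e.g. iGPU buffer, dGPU buffer, host buffer
    size_mb: int

def enumerate_resources() -> list[VideoMemoryResource]:
    # Step 1: enumerate multiple video memory resources.
    return [VideoMemoryResource("iGPU buffer", 512),
            VideoMemoryResource("dGPU buffer", 8192),
            VideoMemoryResource("host buffer", 1024)]

def configure_virtual_buffer(resources: list[VideoMemoryResource]) -> dict:
    # Step 2: a virtual buffer encompassing two or more resources.
    return {"backing": resources,
            "size_mb": sum(r.size_mb for r in resources)}

def determine_context(app_name: str) -> str:
    # Step 3: determine a context associated with the application.
    return "high-refresh" if app_name == "game" else "default"

def invoke_driver(virtual_buffer: dict, context: str) -> str:
    # Step 4: invoke a runtime-adjustable video frame driver
    # based at least in part on the context.
    return f"driver accessing {virtual_buffer['size_mb']} MB in {context} mode"

resources = enumerate_resources()
vbuf = configure_virtual_buffer(resources)
print(invoke_driver(vbuf, determine_context("game")))
```

The rejection's dispute centers on step 4: the examiner concedes Liu does not teach it and relies on Harms's run-time memory reconfiguration to bridge the gap.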

Prosecution Timeline

Apr 30, 2024: Application Filed
Feb 09, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592024: QUANTIFICATION OF SENSOR COVERAGE USING SYNTHETIC MODELING AND USES OF THE QUANTIFICATION (2y 5m to grant; granted Mar 31, 2026)
Patent 12586296: METHODS AND PROCESSORS FOR RENDERING A 3D OBJECT USING MULTI-CAMERA IMAGE INPUTS (2y 5m to grant; granted Mar 24, 2026)
Patent 12579704: VIDEO GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 17, 2026)
Patent 12573164: DESIGN DEVICE, PRODUCTION METHOD, AND STORAGE MEDIUM STORING DESIGN PROGRAM (2y 5m to grant; granted Mar 10, 2026)
Patent 12573151: PERSONALIZED DEFORMABLE MESH BY FINETUNING ON PERSONALIZED TEXTURE (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview (+18.0%): 98%
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
