Prosecution Insights
Last updated: April 19, 2026
Application No. 18/174,438

GRAPHICAL MEMORY SHARING

Final Rejection §103
Filed: Feb 24, 2023
Examiner: VINCENT, ROSS MICHAEL
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: Netflix Inc.
OA Round: 2 (Final)

Grant Probability: 54% (Moderate)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 54% (grants 54% of resolved cases; 12 granted / 22 resolved; -0.5% vs TC avg)
Interview Lift: +35.9% (strong; resolved cases with vs. without interview)
Avg Prosecution: 3y 5m (typical timeline); 32 applications currently pending
Total Applications: 54 (career history, across all art units)

Statute-Specific Performance

§101: 22.7% (-17.3% vs TC avg)
§103: 57.4% (+17.4% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)

Tech Center average estimate used as the comparison baseline • Based on career data from 22 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1, 13, and 20 are currently amended. Claim 21 has been newly added. Claim 19 has been cancelled. Claims 1-18 and 20-21 are currently pending for examination.

Response to Arguments

In response to the applicant’s arguments, pgs. 8-9, that Johnson in view of Chauhan fails to disclose the limitation of “identifies a time span during which the identified content stored in the shared memory will be immutable,” from claim 1, the examiner finds these arguments persuasive. As such, the new grounds of rejection rely upon the combination of McMullen in view of Kiel in further view of Chauhan in further view of Johnson to teach this limitation. In response to the applicant’s arguments, pgs. 8-9, that Johnson in view of Chauhan fails to disclose the limitation of “instructs the requesting application instance to access the identified content from the specified location in shared memory during the identified time span” from claim 1, the examiner finds these arguments persuasive. As such, the new grounds of rejection rely upon the combination of McMullen in view of Kiel in further view of Chauhan in further view of Johnson to teach this limitation.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 13, 15, 17-18, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2).

As per claim 1, McMullen discloses: A computer-implemented method comprising: instantiating a memory management process that is configured to communicate with one or more graphics processing hardware components to control usage of shared memory by multiple different application instances; (“Embodiments of the present invention provide for, among other things, synchronized access to shared surfaces. Embodiment may include graphics processing unit (GPU) synchronization and/or central processing unit (CPU) synchronization. As used herein, the term "surface" refers to an allocation of video memory that is the target for rendering by a GPU. The video memory, for instance, may represent graphics that could be displayed by the computing system.
In embodiments of the present invention, a surface may be shared by multiple graphics devices in different processes/threads such that the graphics devices may render to the same surface (i.e., a shared surface).”, 0015 ; “In accordance with embodiments, access to a shared surface by multiple devices is synchronized.”, 0016 ; “If a non-owning rendering context wishes to acquire access to render to the shared surface, the non-owning rendering context makes an acquire call and waits until the surface is released by the currently owning device.”, 0003 ; Examiner Note: the processes running on the multiple devices, i.e., the rendering contexts, equate to different application instances) receives a request from at least one of the application instances indicating that content associated with a specific resource implemented by the application instance is to be stored in the shared memory (“If a non-owning rendering context wishes to acquire access to render to the shared surface, the non-owning rendering context makes an acquire call and waits until the surface is released by the currently owning device. When the currently owning device finishes rendering, it releases the shared surface. The rendering context that made the acquire call may then acquire access to the shared surface and begin rendering to the surface.” 0003 ; Examiner Note: the content being rendered equates to content associated with a specific resource implemented by the application) McMullen discloses the above limitations of claim 1, but does not disclose the identification of a time period wherein a region of memory will be immutable.
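For context, the acquire/release ownership protocol that the rejection reads onto the claimed memory management process (McMullen, [0003], [0016]) can be sketched in miniature as follows; the class and member names are illustrative only and do not appear in the cited reference.

```python
import threading

class SharedSurface:
    """Minimal sketch of McMullen-style synchronized access: exactly one
    rendering context "owns" the shared surface at a time, and any other
    context blocks in acquire() until the owner releases it. Illustrative
    names only; not code from the cited reference."""

    def __init__(self):
        self._cond = threading.Condition()
        self._owner = None  # rendering context currently owning the surface

    def acquire(self, context):
        with self._cond:
            # a non-owning context waits until the surface is released
            while self._owner is not None:
                self._cond.wait()
            self._owner = context

    def release(self, context):
        with self._cond:
            assert self._owner == context, "only the owner may release"
            self._owner = None
            self._cond.notify_all()  # wake any waiting contexts
```

Each rendering context would wrap its rendering work in an acquire()/release() pair, which is the behavior the examiner equates with application instances requesting use of shared memory.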
However, Kiel discloses: identifies a time span during which the identified content stored in the shared memory will be immutable, the time span being defined at least in part by one or more characteristics of the specific resource (“Knowledge of resource production and consumption by particular processors allows the frame debugger interception layer to know when synchronization must occur in order to produce correct results. Since the interception layer knows all the details about the application intended synchronization operations, it can determine if there are missing synchronization operations”, 0036 ; Examiner Note: the resource production and consumption equate to one or more characteristics of the specific resource) It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of McMullen and Kiel in order to provide for more efficient parallel processing while still maintaining sequential order and avoiding data hazards by using separate, non-blocking fence primitives (Kiel, [0041]). McMullen in view of Kiel discloses the above limitations of claim 1, but does not disclose the instructing of a requesting application to read from the specified location while it is immutable. However, Chauhan discloses: instructs the requesting application instance to access the identified content from the specified location in shared memory during the identified time span. 
(“In one embodiment, it may be determined whether the set of data should be stored using the WORM compliant hardware storage device or if the set of data should instead satisfy WORM compliance by modifying the file access permissions to prevent the stored set of data from being modified.”, 0099 ; “The WORM compliant hardware storage device may comprise a disk drive (e.g., an SSD or HDD), an optical or magnetic storage device, or a non-volatile semiconductor storage device (e.g., a Flash-based semiconductor memory) in which the stored data may not be erased, overwritten, or altered, but may be read or accessed with authorized access to the stored data.”, 0016 ; “The node 434 includes a virtual machine 435, a virtual disk 436, and virtual WORM compliant data storage 437 in which all data that is written to the virtual WORM compliant data storage 437 is locked via restricted read access permissions.”, 0095 ; Examiner Note: the time periods wherein the WORM compliant data storage is not locked equate to the identified time spans when the VM (requesting application) is instructed to access the identified content) The method of Chauhan within the context of McMullen in view of Kiel would provide a system predictably capable of instructing the requesting application instance to access the identified content from the specified location in shared memory during the identified time span. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of McMullen in view of Kiel with those of Chauhan, in order to provide improved performance of the data storage hardware (Chauhan, [0017]). McMullen in view of Kiel in further view of Chauhan discloses the above limitations of claim 1, but does not disclose the determining that the content identified in the request has been previously stored at a specified location in the shared memory. 
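The combined behavior the rejection attributes to Kiel and Chauhan, identifying a time span during which stored content is immutable and instructing the requester to read from the existing location during that span, can be sketched as follows. Every name here is hypothetical; this is one reading of the claim language, not code from any cited reference.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    location: int      # offset of the content in shared memory
    span_start: float  # content is immutable during [span_start, span_end)
    span_end: float

class MemoryManager:
    """Hypothetical sketch of the claimed behavior: track where content
    lives and when it is immutable, and redirect requesters to read it
    in place during that window."""

    def __init__(self):
        self._placements = {}  # content id -> Placement

    def store(self, content_id, location, span_start, span_end):
        self._placements[content_id] = Placement(location, span_start, span_end)

    def handle_request(self, content_id, now):
        p = self._placements.get(content_id)
        if p is not None and p.span_start <= now < p.span_end:
            # content already stored and currently immutable:
            # instruct the requester to read from the shared location
            return ("read_shared", p.location)
        # otherwise the requester must store its own copy
        return ("store_new", None)
```

Outside the immutability window the manager falls back to storing a new copy, which mirrors the distinction the claims draw between reading in place and storing the content.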
However, Johnson discloses: determines that the content identified in the request has been previously stored at a specified location in the shared memory (“If an identical graphics resource does not currently reside in the host graphics memory, a command is sent to the host GPU driver to store the graphics resource in the host graphics memory, but when an identical graphics resource resides in the host graphics memory, the graphics resource is not stored in the host graphics memory. Instead, the identical graphics resource is shared by the first VM and at least one other VM”, col.2, lines 7-13) It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of McMullen in view of Kiel in further view of Chauhan with those of Johnson in order to provide an efficient system wherein separate graphics resource instances may be consolidated in HGPU memory (Johnson, [col.3, lines 32-33]).

As per claim 2, McMullen in view of Kiel in further view of Chauhan in further view of Johnson fully discloses the limitations of claim 1. Furthermore, Johnson discloses: the memory management process instructs the requesting application instance to access the identified content from the specified location in shared memory during the identified time span instead of storing the content associated with the specified resource in the memory (“If an identical graphics resource does not currently reside in the host graphics memory, a command is sent to the host GPU driver to store the graphics resource in the host graphics memory, but when an identical graphics resource resides in the host graphics memory, the graphics resource is not stored in the host graphics memory.
Instead, the identical graphics resource is shared by the first VM and at least one other VM.”, 0017 ; Examiner Note: the system of Johnson in view of Chauhan would specify the identified time span during which the VM, or application, should access the memory instead of storing content) As per claim 13, McMullen discloses A system comprising: at least one physical processor; and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: instantiating a memory management process that is configured to communicate with one or more graphics processing hardware components to control usage of shared memory by multiple different application instances; (“In another embodiment, an aspect of the invention is directed to one or more computer-storage media having computer-useable instructions embodied thereon for performing a method for synchronizing access to a shared surface. “, 0019 ; “With reference to FIG. 1, computing device 100 includes a bus 110 that directly or indirectly couples the following devices: memory 112, one or more processors 114, one or more presentation components 116, input/output ports 118, input/output components 120, and an illustrative power supply 122.”, 0023 ; “Embodiments of the present invention provide for, among other things, synchronized access to shared surfaces. Embodiment may include graphics processing unit (GPU) synchronization and/or central processing unit (CPU) synchronization. As used herein, the term "surface" refers to an allocation of video memory that is the target for rendering by a GPU. The video memory, for instance, may represent graphics that could be displayed by the computing system. 
In embodiments of the present invention, a surface may be shared by multiple graphics devices in different processes/threads such that the graphics devices may render to the same surface (i.e., a shared surface).”, 0015 ; “In accordance with embodiments, access to a shared surface by multiple devices is synchronized.”, 0016 ; “If a non-owning rendering context wishes to acquire access to render to the shared surface, the non-owning rendering context makes an acquire call and waits until the surface is released by the currently owning device.”, 0003 ; Examiner Note: the processes running on the multiple devices, i.e., the rendering contexts, equate to different application instances) receives a request from at least one of the application instances indicating that content associated with a specific resource implemented by the application instance is to be stored in the shared memory (“If a non-owning rendering context wishes to acquire access to render to the shared surface, the non-owning rendering context makes an acquire call and waits until the surface is released by the currently owning device. When the currently owning device finishes rendering, it releases the shared surface. The rendering context that made the acquire call may then acquire access to the shared surface and begin rendering to the surface.” 0003 ; Examiner Note: the content being rendered equates to content associated with a specific resource implemented by the application) Furthermore, Kiel discloses: identifies a time span during which the identified content stored in the shared memory will be immutable, the time span being defined at least in part by one or more characteristics of the specific resource (“Knowledge of resource production and consumption by particular processors allows the frame debugger interception layer to know when synchronization must occur in order to produce correct results.
Since the interception layer knows all the details about the application intended synchronization operations, it can determine if there are missing synchronization operations”, 0036 ; Examiner Note: the resource production and consumption equate to one or more characteristics of the specific resource) Furthermore, Chauhan discloses: instructs the requesting application instance to access the identified content from the specified location in shared memory during the identified time span. (“In one embodiment, it may be determined whether the set of data should be stored using the WORM compliant hardware storage device or if the set of data should instead satisfy WORM compliance by modifying the file access permissions to prevent the stored set of data from being modified.”, 0099 ; “The WORM compliant hardware storage device may comprise a disk drive (e.g., an SSD or HDD), an optical or magnetic storage device, or a non-volatile semiconductor storage device (e.g., a Flash-based semiconductor memory) in which the stored data may not be erased, overwritten, or altered, but may be read or accessed with authorized access to the stored data.”, 0016 ; “The node 434 includes a virtual machine 435, a virtual disk 436, and virtual WORM compliant data storage 437 in which all data that is written to the virtual WORM compliant data storage 437 is locked via restricted read access permissions.”, 0095 ; Examiner Note: the time periods wherein the WORM compliant data storage is not locked equate to the identified time spans when the VM (requesting application) is instructed to access the identified content) The method of Chauhan within the context of McMullen in view of Kiel would provide a system predictably capable of instructing the requesting application instance to access the identified content from the specified location in shared memory during the identified time span. 
Furthermore, Johnson discloses: determines that the content identified in the request has been previously stored at a specified location in the shared memory (“If an identical graphics resource does not currently reside in the host graphics memory, a command is sent to the host GPU driver to store the graphics resource in the host graphics memory, but when an identical graphics resource resides in the host graphics memory, the graphics resource is not stored in the host graphics memory. Instead, the identical graphics resource is shared by the first VM and at least one other VM”, col.2, lines 7-13) As per claim 15, McMullen in view of Kiel in further view of Chauhan in further view of Johnson fully discloses the limitations of claim 13. Furthermore, Chauhan discloses: the memory management process preserves the specified resource in the shared memory for at least a specified amount of time after determining that the specified resource is no longer being used by the multiple application instances (“In step 513, the set of data is written to WORM compliant archival storage. The WORM compliant archival storage may comprise cloud-based archival storage. In one embodiment, upon detection that the set of data was written to the WORM compliant hardware storage device more than a threshold time ago (e.g., more than a week ago), the set of data may be written to the WORM compliant archival storage in order to maintain a secondary copy of the data to support disaster recovery.”, 0100 ; “The WORM compliance may require that the set of data be immutable for the data retention time (e.g., cannot be altered for six months).”, 0097 ; Examiner Note: detection that the data was written to over a threshold amount of time ago equates to determining that the specified resource is no longer in use) As per claim 17, McMullen in view of Kiel in further view of Chauhan in further view of Johnson fully discloses the limitations of claim 13. 
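The claim 15 behavior mapped onto Chauhan above, preserving a resource for at least a specified amount of time after it was last used, can be sketched as a retention check. The names are hypothetical; this is only a reading of the claim language.

```python
class ResourcePool:
    """Hypothetical sketch of the claim 15 retention behavior: a resource
    stays in shared memory for at least `retention` time units after its
    last recorded use before it becomes eligible for eviction."""

    def __init__(self, retention):
        self.retention = retention
        self._last_use = {}  # resource id -> time of last use

    def touch(self, resource_id, now):
        self._last_use[resource_id] = now

    def evictable(self, now):
        # only resources idle for at least the retention window may go
        return [r for r, t in self._last_use.items()
                if now - t >= self.retention]
```

This matches the examiner's mapping, in which detecting that data was written more than a threshold time ago stands in for determining that a resource is no longer in use.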
Furthermore, Johnson discloses: content identified as being mutable is shared by the memory management process in a pool among the multiple different application instances (“As will be described in more detail below, a graphics resource may be stored in HGPU memory 112 and shared by multiple VMs 102.”, col.3, lines 28-29 ; “If the graphics resource, prior to the modification, was shared by at least one other VM, then the table entry establishing the translation from the received object ID to the resource key is removed from resource table 136 and the newly generated resource key and corresponding object ID is stored in resource table 136. If the modified graphics resource is not already used by any other VMs, then it is created by calling a function in HGPU driver 128 to store the modified graphics resource to HGPU memory 112. The current VM 102 may then use the updated graphics resource where the updated graphics content is stored in HGPU memory 112.”, col.7, 22-33 ; Examiner Note: the graphics resource residing in the HGPU memory, which equates to a pool, is mutable- as evidenced by the modification process, before and after which the resource/content is still shared amongst VMs equating to application instances) As per claim 18, McMullen in view of Kiel in further view of Chauhan in further view of Johnson fully discloses the limitations of claim 17. Furthermore, McMullen discloses: the content identified as being mutable is associated with a fence synchronization object that is used to track when asynchronous tasks performed using the content are completed (“Embodiments of the present invention relate to providing multiple rendering contexts with synchronized access to a shared surface that comprises an allocation of video memory. 
In accordance with embodiments of the present invention, only one rendering context may "own" or have access to a shared surface at a given time, allowing that rendering context to read from and write to the shared surface.”, 0003 ; “For instance, in some embodiments, fence synchronization objects are provided to support the behavior of the GPU wait. As discussed previously, the acquire and release APIs may employ key values for determining which device will acquire the surface next. The key values may be arbitrary. For instance, a device could acquire on a key value of 1 and then release on a key value of 0. In contrast, the fence value is a monotonically increasing value in kernel that is mapped to the API key value. Any time a new device acquires, the new device receives the current fence value plus one.”, 0048 ; Examiner Note: acquiring the surface equates to performing asynchronous tasks using the content (video memory resources) as GPU operations are necessarily asynchronous with reference to the CPU. Increasing the value equates to tracking the performance of tasks) As per claim 20, it is a non-transitory computer-readable medium (Johnson : “One or more embodiments may be implemented as one or more computer programs or as one or more computer program modules embodied in one or more computer readable storage media. The term computer readable storage medium refers to any data storage device that can store data which can thereafter be input to a computer system—computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer. 
Examples of a non-transitory computer readable medium include a hard drive, network attached storage (NAS), read-only memory, random-access memory (e.g., a flash memory device), a CD (Compact Discs)—CD-ROM, a CD-R, or a CD-RW, a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices.”, 0031) claim with substantially the same limitations as claim 1, and as such, it is rejected for substantially the same reasons. As per claim 21, McMullen in view of Kiel in further view of Chauhan in further view of Johnson fully discloses the limitations of claim 1. Furthermore, Kiel discloses: the one or more characteristics of the specific resource comprises a signal from a fence synchronization object associated with the specific resource; (“A conventional mechanism for ordering or synchronizing operations with data dependencies across two or more processors (homogenous, heterogeneous, physical, logical, virtual, etc.) is to use synchronization objects or primitives. Such objects allow one processor to communicate with one or more other processors when a workload (set of tasks or operations) has completed. A fence object is an example of such a synchronization primitive… A fence typically encapsulates a value that can be observed by processors, allowing the processors or application to make decisions about what workloads to execute based on the current progress made by other processors as indicated by the fence value”, 0005 ; “Application operations that signal the fence end up operating on the underlying signaling fence, while application operations that observe or wait on the fence end up operating on the waiting fence. The interception layer is responsible for detecting completion of work as indicated by the signaling fence, and propagating this information to the waiting fence.”, 0008) and the time span is defined at least in part by a signal from a fence synchronization object associated with the specific resource. 
(“Correct replay of the application's commands as recorded in function bundles may be dependent on detecting when the application has made a decision by observing the value of a fence object. According to one or more embodiments, knowing the order of application specified commands relative to the time that a fence signal completes during capture mode allows the interception layer to maintain this ordering in replay mode”, 0033)

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Aissi (US 20140331279 A1). As per claim 3, McMullen in view of Kiel in further view of Chauhan in further view of Johnson fully discloses the limitations of claim 1, but does not disclose the memory management process monitoring the identified content. However, Aissi discloses the identified content stored in the shared memory is monitored by the memory management process. (“In some embodiments, the memory management services 340 may utilize a memory management table to monitor and adjust memory resources of the secure operating environment 110.", 0117) The system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Aissi would comprise a memory management process which monitors the resources stored in the shared memory. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of McMullen in view of Kiel in further view of Chauhan in further view of Johnson with those of Aissi in order to provide a means for determining whether or not the stored resources have been corrupted (Aissi, [0117]).

Claims 4 and 5 are rejected under 35 U.S.C.
103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Elliot (US 7765539 B1). As per claim 4, McMullen in view of Kiel in further view of Chauhan in further view of Johnson fully discloses the limitations of claim 1, but does not disclose the memory management process being embedded in the application process package file associated with the application that sent the request. However, Elliot discloses: the memory management process is embedded in an application process package file associated with the application instance that sent the request. (see fig.2 and fig.3 ; “FIG. 3 shows a view of exemplary software components of library routines 510. Such software components may include, for example: CPU libraries 510a, memory management unit libraries 510b, interrupt libraries 510c, video library 510d, sound library 510e, I/O library 510f, and other libraries 510g”, col.7, lines 10-19 ; “The C compiler and linker 508 may compile and/or link the source code with hardware emulation/mapping library routines 510 designed or written for the particular target platform. Such library routines 510 may, for example, simulate, emulate and/or "stub" certain functions available on the original platform that are not available on the new target platform (e.g., graphics capabilities such as 3D effects, rotation, scaling, texture mapping, blending, or any of a variety of other graphics related functionality, audio functionality such as sound generation based on proprietary sound formats, etc.). The resulting trans-compiled object file 512 is stored in some type of storage medium (e.g., an SDRAM card, memory stick, internal RAM, Flash memory, etc.) 
to be efficiently run on the target platform to provide a satisfying and interesting game play experience”, col.6-7, lines 62-9 ; “In the exemplary illustrative non-limiting example shown, the CPU library 510a provides functionality needed to allow the target CPU (which may be different from the original CPU) to run the game software or other application.”, col.7, lines 20-24 ; Examiner Note: the library routines, equating to an application process package file, include the memory management unit as well as the libraries required to provide the graphics capabilities for a game, equating to the application) The system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Elliot would contain the memory management unit within the application process package file for the application which sent the request for memory access. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of McMullen in view of Kiel in further view of Chauhan in further view of Johnson with those of Elliot in order to provide an application package containing an MMU which may be run in an efficient manner (Elliot, [col.7, lines 7-8]). As per claim 5, McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Elliot fully discloses the limitations of claim 4. 
Furthermore, Johnson discloses: the memory management process determines whether the specific resource is already being managed by the memory management process using one or more resource identifiers or resource characteristics obtained from the application process package file (“If an identical graphics resource does not currently reside in the host graphics memory, a command is sent to the host GPU driver to store the graphics resource in the host graphics memory, but when an identical graphics resource resides in the host graphics memory, the graphics resource is not stored in the host graphics memory. Instead, the identical graphics resource is shared by the first VM and at least one other VM”, col.2, lines 7-13 ; “ In operation 164, it is determined whether there are any other entries in resource table 136 with identical resource keys”, col.5, lines 25-27 ; “In operation 158, resource key generator 134 uses content from the graphics resource to generate the resource key. Resource key generator 134 may use a hash algorithm to generate the resource key from all or part of the graphics object contents so that identical resources will have identical keys.”, col.4, lines 58-63 ; Examiner Note: the resource key equates to the resource identifier, and being generated from the contents of the graphics object, or application package, equates to being obtained from the application process package file) Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Elliot (US 7765539 B1) in further view of Hung (US 20140258973 A1). As per claim 6, McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Elliot discloses the limitations of claim 4, but does not disclose the application process package file comprising a game engine for a video game. 
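Johnson's resource-key mechanism as characterized above, hashing a resource's contents so that identical resources map to identical keys and share one stored copy, can be sketched as follows; the structure is illustrative, not Johnson's actual implementation.

```python
import hashlib

class ResourceTable:
    """Illustrative sketch of content-keyed deduplication in the style the
    rejection attributes to Johnson: identical contents hash to identical
    keys, so a second requester shares the existing stored copy."""

    def __init__(self):
        self._by_key = {}   # resource key -> stored location
        self._next_loc = 0

    def store_or_share(self, contents: bytes):
        key = hashlib.sha256(contents).hexdigest()
        if key in self._by_key:
            # an identical resource already resides in memory: share it
            return ("shared", self._by_key[key])
        loc = self._next_loc
        self._next_loc += 1
        self._by_key[key] = loc
        return ("stored", loc)
```

Deriving the key from the resource contents is what lets the table decide, per the quoted passage, whether to issue a store command or to share the resource already in host graphics memory.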
However, Hung discloses: the application process package file comprises a game engine for a video game. ("In some embodiments, the user device can further edit game data corresponding to the subject application, and the application design platform generates a game engine in the application package according to the game data, wherein when the application package executes, the game engine controls game actions corresponding to a game.", 0015) It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Elliot with those of Hung in order to save manpower and time resources in development by using application packages (Hung, [0035]).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Forin (US 20090133042 A1). As per claim 7, McMullen in view of Kiel in further view of Chauhan in further view of Johnson fully discloses the limitations of claim 1, but does not disclose the memory management process being loaded dynamically along with the application which sent the request. However, Forin discloses: the memory management process is dynamically loaded along with the application instance that sent the request. (“Dynamic loading and unloading of components provides the flexibility that lets the system adapt to changing requirements.”, 0046 ; “A preferred embodiment of the invention is directed to a flexible system architecture that is suitable for a wide range of applications.
The system is built out of minimal but flexible components, which can be deployed as needed.”, 0045; "The virtual memory manager is a component like any other, and is loaded dynamically on demand", 0151; Examiner Note: the components which are dynamically loaded equate to application instances)

The combination of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin would be capable of dynamically loading the application which requests shared memory. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of McMullen in view of Kiel in further view of Chauhan in further view of Johnson with those of Forin in order to provide increased flexibility to the system, allowing it to adapt to changing requirements (Forin, [0045]).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Forin (US 20090133042 A1) in further view of Viggers (US 20170116702 A1).

As per claim 8, McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin fully discloses the limitations of claim 7.

Furthermore, Forin discloses: the memory management process is dynamically loaded using a Vulkan layer that allows dynamic interception of graphics application programming interface (API) calls (“Dynamic loading and unloading of components provides the flexibility that lets the system adapt to changing requirements.”, 0046; “A preferred embodiment of the invention is directed to a flexible system architecture that is suitable for a wide range of applications.
The system is built out of minimal but flexible components, which can be deployed as needed.”, 0045; "The virtual memory manager is a component like any other, and is loaded dynamically on demand", 0151)

Forin discloses the dynamic loading of the memory management process, but does not disclose a Vulkan layer performing this step. However, Viggers discloses: the memory management process is dynamically loaded using a Vulkan layer that allows dynamic interception of graphics application programming interface (API) calls (“Referring to FIG. 1B, there is a block diagram of a Vulkan driver architecture 150. The Vulkan driver architecture 150 comprises a dispatch module 166, a memory manager 152, an OS module 154, a render module 124, a carddata module 126, which sits on top of a GPU 125.”, 0031; “A system, method, and computer-readable medium are provided for translating OpenGL API calls to operations in a Vulkan graphics driver using an OpenGL-on-Vulkan driver architecture. An OpenGL-on-Vulkan driver receives an OpenGL context and render function, translates an OpenGL format to a Vulkan format, creates a Vulkan object and sets a Vulkan state, and generates a Vulkan command buffer corresponding to the OpenGL render function.”, abstract)

The system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers would be capable of dynamically loading the memory management module using the Vulkan layer which intercepts graphics API calls. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin with the Vulkan layer of Viggers in order to leverage the increased flexibility of Vulkan over its alternatives (Viggers, [0006]).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Forin (US 20090133042 A1) in further view of Viggers (US 20170116702 A1) in further view of Jiang (US 20230092902 A1).

As per claim 9, McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers fully discloses the limitations of claim 8.

Furthermore, Johnson discloses: the memory management process determines whether the specific resource is already being managed by the memory management process using one or more resource identifiers or resource characteristics obtained from at least one intercepted API call (“If an identical graphics resource does not currently reside in the host graphics memory, a command is sent to the host GPU driver to store the graphics resource in the host graphics memory, but when an identical graphics resource resides in the host graphics memory, the graphics resource is not stored in the host graphics memory. Instead, the identical graphics resource is shared by the first VM and at least one other VM”, col. 2, lines 7-13; “In operation 164, it is determined whether there are any other entries in resource table 136 with identical resource keys”, col. 5, lines 25-27)

Johnson discloses the determination of whether or not a specific resource is already being managed using one or more resource identifiers (resource keys), but does not disclose the resource identifier being obtained through an intercepted API call.
However, Jiang discloses: the memory management process determines whether the specific resource is already being managed by the memory management process using one or more resource identifiers or resource characteristics obtained from at least one intercepted API call (“Token interceptor 218 is configured to intercept the API call with HTTP Authorization headers and uniform resource identifier (URI) query parameters from client computer system 240, validate the token using information stored in token bucket 206, and pass through the request (i.e., API call with HTTP Authorization headers and URI query parameters) to backend service of backend services computer systems 280 or reject the request due to an expired token.”, 0076; Examiner Note: the uniform resource identifier equates to a resource identifier, and is obtained through the intercepted API call.)

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers with the intercepting of API calls of Jiang in order to provide the resource identifier of the application without sending a separate message contacting the application (Jiang, [0012]).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Forin (US 20090133042 A1) in further view of Viggers (US 20170116702 A1) in further view of Jiang (US 20230092902 A1) in further view of Scheifler (US 20090271472 A1).

As per claim 10, McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang fully discloses the limitations of claim 9, but does not disclose a UUID.
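For orientation only, the interception pattern cited for claims 8 and 9 above (a layer that wraps a graphics-API entry point and derives a resource identifier from the intercepted call) can be illustrated with a minimal sketch; nothing here is drawn from the cited references, and all names are hypothetical:

```python
# Hypothetical sketch: an intercepting layer harvests the resource identifier
# from each API call, so the memory manager can check whether the resource is
# already managed without a separate message to the application.
import functools

managed = set()  # identifiers of resources already under management

def intercepting_layer(api_call):
    """Wrap an API entry point; inspect the resource id before dispatch."""
    @functools.wraps(api_call)
    def wrapper(resource_id, *args, **kwargs):
        already_managed = resource_id in managed  # check from the call itself
        managed.add(resource_id)
        return api_call(resource_id, *args, **kwargs), already_managed
    return wrapper

@intercepting_layer
def create_texture(resource_id, data):
    return f"texture:{resource_id}"

_, seen_first = create_texture("tex-1", b"pixels")
_, seen_second = create_texture("tex-1", b"pixels")
assert seen_first is False and seen_second is True
```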
However, Scheifler discloses: at least one of the resource identifiers obtained from the intercepted API call comprises a universally unique identifier (UUID) (“As previously noted, each time a resource is created, the system (e.g., the platform or the grid) may assign a universally unique identifier (UUID) to that resource and all cross references within resources may be done by UUID rather than name”, 0082)

The combination of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang in further view of Scheifler would provide a system which could obtain the UUID of a resource from an intercepted API call (see Jiang, [0076]). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang with the UUIDs assigned to resources of Scheifler in order to provide a completely unique identifier to each resource, thereby avoiding the potential confusion of resources having the same name (Scheifler, [0082]).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Forin (US 20090133042 A1) in further view of Viggers (US 20170116702 A1) in further view of Jiang (US 20230092902 A1) in further view of Gary (US 7133912 B1) in further view of Fontaine (DE 112022003546 T5).

As per claim 11, McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang fully discloses the limitations of claim 9, but does not disclose an intercepted API call causing a count to be incremented, and returning a shared file descriptor pointing to a location in memory.
However, Gary discloses: the intercepted API call increments a reference count (“For instance, in certain implementations, the gateway may, upon performing a particular type of usage, call the API, which will increment a count for that particular type of usage.”, col. 4, lines 41-44)

The system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang in further view of Gary would increment a reference count for each intercepted API call. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang with the incrementing of a reference count of Gary, in order to provide a means for tracking different types of usage of resources (Gary, col. 4, lines 41-44).

McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang in further view of Gary discloses incrementing a reference count in response to intercepting an API call, but does not disclose the API call returning a file descriptor which points to a memory location.
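Gary's per-usage counting, as applied above, can be sketched in a few lines; this is a hypothetical illustration, and none of the names come from Gary:

```python
# Hypothetical sketch: each intercepted call increments a count for that
# particular type of usage, giving the manager a running tally per resource use.
from collections import defaultdict

usage_counts = defaultdict(int)

def counted(usage_type):
    """Decorator standing in for the interception point."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            usage_counts[usage_type] += 1  # increment on every intercepted call
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@counted("texture_bind")
def bind_texture(name):
    return f"bound {name}"

bind_texture("grass")
bind_texture("grass")
assert usage_counts["texture_bind"] == 2
```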
However, Fontaine discloses: the intercepted API call returns a shared file descriptor that points to a memory backing that was previously allocated for the resource (“In at least one embodiment, the Get-Semaphore-Signal-Node-Parameters API call 500 includes a return parameters parameter pointer specifying a memory location in which to store a data structure with return parameters.”, Par. 43, Detailed Description; Examiner Note: the parameter pointer specifying a memory location equates to a shared file descriptor that points to a memory backing)

The system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang in further view of Gary in further view of Fontaine would return a shared file descriptor pointing to the memory address that was previously allocated for the resource associated with the request. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang in further view of Gary with the method of Fontaine, in order to provide a more efficient method of referencing memory resources (Fontaine, Par. 18, Detailed Description).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Forin (US 20090133042 A1) in further view of Viggers (US 20170116702 A1) in further view of Jiang (US 20230092902 A1) in further view of Gary (US 7133912 B1) in further view of Fontaine (DE 112022003546 T5) in further view of LeMay (US 20180082057 A1).
As per claim 12, McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang in further view of Gary in further view of Fontaine fully discloses the limitations of claim 11, but does not disclose a shared file descriptor being used in the access of a resource.

However, LeMay discloses: the application instance uses the shared file descriptor when accessing the specific resource (“For a code region, one or more segment descriptors and associated segment selectors may indicate respective resources to be allocated to the code region so that the code region can access the resources during execution.”, 0040; Examiner Note: a segment descriptor equates to a shared file descriptor)

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Forin in further view of Viggers in further view of Jiang in further view of Gary in further view of Fontaine with the segment descriptor, or shared file descriptor, of LeMay, in order to provide a convenient means for accessing the necessary resources during execution (LeMay, [0040]).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Fernald (US 20120166778 A1).

As per claim 14, McMullen in view of Kiel in further view of Chauhan in further view of Johnson fully discloses the limitations of claim 13, but does not disclose the preloading of one or more resources for the application instance that sent the request.
However, Fernald discloses: the memory management process preloads one or more resources for the application instance that sent the request (“Pursuant to the technique 300, the memory manager 160 preloads (block 304) the prefetch buffer 254 with content associated with the alternative reset routine 290 and associates the address 294 of the original reset routine 296 with the alternative content using a corresponding address tag 258, pursuant to block 308.", 0026; Examiner Note: the content associated with the reset routine equates to resources for an application instance)

The system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Fernald would be capable of preloading the resources associated with the application instance that sent the request for memory access (see Johnson). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of McMullen in view of Kiel in further view of Chauhan in further view of Johnson with those of Fernald in order to provide a system wherein the processor does not have to load the required resources at runtime, thereby improving the efficiency of the system (Fernald, [0026]).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over McMullen (US 20090313622 A1) in view of Kiel (US 20170039124 A1) in further view of Chauhan (US 20200341659 A1) in further view of Johnson (US 9430258 B2) in further view of Hoffman (US 20200267449 A1).

As per claim 16, McMullen in view of Kiel in further view of Chauhan in further view of Johnson fully discloses the limitations of claim 15.
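The prefetch-buffer preloading Fernald is cited for under claim 14 above can be sketched, purely for illustration, as follows (all names are hypothetical, not Fernald's):

```python
# Hypothetical sketch: resources an application instance is expected to need
# are loaded into a prefetch buffer ahead of time, so a later access is served
# from the buffer instead of triggering a load at runtime.
class PrefetchBuffer:
    def __init__(self, loader):
        self.loader = loader  # stand-in for the slow runtime load path
        self.buffer = {}

    def preload(self, addresses):
        for addr in addresses:
            self.buffer[addr] = self.loader(addr)  # load before first use

    def fetch(self, addr):
        # Runtime hit: served from the buffer if the address was preloaded.
        if addr in self.buffer:
            return self.buffer[addr]
        return self.loader(addr)  # fallback: load on demand

buf = PrefetchBuffer(loader=lambda a: f"content@{a}")
buf.preload([0x100, 0x104])
assert 0x100 in buf.buffer  # preloaded, so no runtime load needed
```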
Furthermore, Chauhan discloses: the specified resource is preserved in the shared memory for the specified amount of time (“The WORM compliance may require that the set of data be immutable for the data retention time (e.g., cannot be altered for six months).”, 0097)

Chauhan discloses preserving a specified resource in memory for a specified amount of time based on WORM compliance standards, but does not disclose this decision being based on the amount of churn associated with the resource. However, Hoffman discloses: based on a determined amount of churn that is related to the specified resource (“At 112, the content recommendation application tracks or determines user churn. For example, the content recommendation application may query the user database of the media service provider to check which users (e.g., users from entries 106, 108, 110) have unsubscribed (churned) from the media service.”, 0017; “For example, the content recommendation application may identify that subset of users 120 (e.g., subset 116) may be associated with certain types of media content (e.g., sports and news) and with certain consumption time slots (e.g., weekend afternoons and evenings). For example, the content recommendation application may determine that these types of contents and timeslots may be atypical of not-churned subset 114 and typical of churned subset 116.”, 0018; Examiner Note: decisions regarding what to do with a media content item are made based upon the amount of churn associated with that resource)

The system of McMullen in view of Kiel in further view of Chauhan in further view of Johnson in further view of Hoffman would be capable of determining an amount of time to retain a resource in shared memory based upon a determined amount of churn associated with that resource.
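One purely hypothetical way to assign a numerical churn value and map it to a retention time, in the spirit of the combination discussed here, is a simple linear formula; the function, parameters, and constants below are illustrative assumptions, not drawn from Chauhan or Hoffman:

```python
# Hypothetical sketch: map a normalized churn value for a resource to a
# retention time in shared memory. Stable (low-churn) content is kept longer;
# rapidly changing (high-churn) content is evicted sooner.
def retention_seconds(churn: float, base: float = 3600.0, max_s: float = 86400.0) -> float:
    """Linear map from churn in [0, 1] to a retention time in seconds."""
    churn = min(max(churn, 0.0), 1.0)          # clamp to [0, 1]
    return base + (max_s - base) * (1.0 - churn)

assert retention_seconds(0.0) == 86400.0  # immutable content kept a full day
assert retention_seconds(1.0) == 3600.0   # volatile content kept one hour
```

Because the mapping is a fixed function of the churn value, its output is deterministic, which matches the predictable-results point made in the rationale.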
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of McMullen in view of Kiel in further view of Chauhan in further view of Johnson with those of Hoffman, as the elements being combined are combined using known methods and yield predictable results. Determining how long to retain a memory resource, and basing decisions about content on a determined amount of churn associated with that content, are disclosed explicitly by Chauhan and Hoffman, respectively. Assigning a numerical value to the churn and using an algorithm or equation to derive a retention time from that value would have been an obvious way to combine these systems. The results of implementing this technique would be deterministic, and thus predictable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Wilt (US 9547535 B1) – discloses a method of creating a process in a GPU that has access to memory buffers in the system memory of the computer system that are shared amongst a plurality of GPUs within the system.

Monteith (US 12182014 B2) – discloses a cost-effective storage management method comprising identifying one or more portions of one or more source objects stored in remote storage resources, and issuing a command to cause the storage resources to create a new object comprising the one or more portions of the one or more source objects.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROSS MICHAEL VINCENT whose telephone number is (703) 756-1408. The examiner can normally be reached Mon-Fri 8:30AM-5:30PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/R.M.V./
Examiner, Art Unit 2196

/APRIL Y BLAIR/
Supervisory Patent Examiner, Art Unit 2196

Prosecution Timeline

Feb 24, 2023: Application Filed
Aug 07, 2025: Non-Final Rejection — §103
Nov 13, 2025: Examiner Interview Summary
Nov 14, 2025: Response Filed
Jan 27, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530219: TIME-BOUND LIVE MIGRATION WITH MINIMAL STOP-AND-COPY (granted Jan 20, 2026; 2y 5m to grant)
Patent 12511158: TASK ALLOCATION METHOD, APPARATUS, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM (granted Dec 30, 2025; 2y 5m to grant)
Patent 12493493: METHOD AND SYSTEM FOR ALLOCATING GRAPHICS PROCESSING UNIT PARTITIONS FOR A COMPUTER VISION ENVIRONMENT (granted Dec 09, 2025; 2y 5m to grant)
Patent 12481529: CONTROLLER FOR COMPUTING ENVIRONMENT FRAMEWORKS (granted Nov 25, 2025; 2y 5m to grant)
Patent 12430170: QUANTUM COMPUTING SERVICE WITH QUALITY OF SERVICE (QoS) ENFORCEMENT VIA OUT-OF-BAND PRIORITIZATION OF QUANTUM TASKS (granted Sep 30, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 90% (+35.9%)
Median Time to Grant: 3y 5m
PTA Risk: Moderate
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
