Detailed Action
1. This Office action is in response to the communication filed January 6, 2026. Claims 1, 4-9, and 12-16 are currently pending; claims 1 and 9 are the independent claims.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
3. This Office Action is in response to the applicant’s remarks and arguments filed on January 6, 2026.
Claims 1, 4, 7, 9, 12, and 15 were amended. Claims 2-3 and 10-11 have been cancelled. No claims are new. Claims 1, 4-9, and 12-16 remain pending in the application. Claims 5-6, 8, 13-14, and 16, filed on September 19, 2025, are being considered on the merits along with amended claims 1, 4, 7, 9, 12, and 15.
Response to Arguments
4. Applicant's arguments filed January 6, 2026 have been fully considered but they are not persuasive.
The Applicant asserts on pages 6-8 of the Remarks that independent claims 1 and 9 have been amended to include “wherein the synchronously loaded page represents information about data synchronously loaded into the local memory so as to be immediately referenced from a current execution context of a central processing unit (CPU), and is maintained in the local preloading metadata for determining whether execution of the local preloading task is unnecessary.” The Applicant respectfully submits that none of Meier, Acharya, and Sukegawa, alone or in combination, discloses or suggests the amended limitation.
The Examiner respectfully disagrees. As set forth in section 5 below, the Examiner relies on the combination of Acharya and DiVincenzo to reject the newly added limitation of “wherein the synchronously loaded page represents information about data synchronously loaded into the local memory so as to be immediately referenced from a current execution context of a central processing unit (CPU), and is maintained in the local preloading metadata for determining whether execution of the local preloading task is unnecessary.” As such, the Examiner maintains the rejections of independent claims 1 and 9 along with the rejections of dependent claims 4-8 and 12-16.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1, 4-6, 8-9, 12-14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Meier et al. (U.S. Patent No. 8,761,008) – hereinafter “Meier”, in view of Acharya et al. (U.S. Patent No. 9,582,603) – hereinafter “Acharya”, DiVincenzo et al. (U.S. Pub. No. 2017/0054800) – hereinafter “DiVincenzo”, and Sukegawa et al. (U.S. Patent No. 5,778,429) – hereinafter “Sukegawa”.
Regarding independent claim 1, Meier discloses a method for preloading data comprising:
registering a local preloading task corresponding to the local preloading target in local preloading metadata stored in the local memory; (Col. 20, Lines 33-43 “In one embodiment, persistence component 535 provides a pre-fetch configuration interface for defining a pre-fetch configuration. For example, a user at gateway 120 or at a GIG node 110 may interact with the pre-fetch configuration interface. A pre-fetch configuration includes, for example, a request schedule and data criteria and/or metadata elements associated with data content. A request schedule defines a target time at which data content corresponding to a pre-fetch configuration are to be accessed. For example, a request schedule may indicate data content are to be accessed at 8:00 am each weekday.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the pre-fetch configuration determines the data to be preloaded and the time by which the preloading must occur.
checking whether the local preloading target is redundant … prior to asynchronously starting the local preloading task; and (Col. 19, Lines 32-36 “If persistence component 535 determines, based on the query response, that data content meeting the criteria are available in the data repository, persistence component 535 publishes a data retrieval response message to pub/sub engine 505, including the data content corresponding to the criteria.” and Col. 20, Lines 44-55 “Persistence component 535 retrieves data content matching the data criteria and/or metadata elements of a pre-fetch configuration according to the pre-fetch schedule. For example, if the data are to be accessed at 8:00 am, persistence component 535 may retrieve the data at any time between the previous retrieval and 8:00 am. A pre-fetch schedule may include a data age constraint (e.g., four hours), in which case persistence component 535 retrieves the data content based on the target access time and the data age constraint. For example, with a target access time of 8:00 am and a data age constraint of four hours, persistence component 535 would retrieve the data no earlier than 4:00 am.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the persistence component determines that the data is already available in the repository, thus sending the data to the pub/sub engine and not requiring the preloading task that happens asynchronously.
asynchronously starting the local preloading task at a preset time based on the local preloading metadata, (Col. 20, Lines 44-55 “Persistence component 535 retrieves data content matching the data criteria and/or metadata elements of a pre-fetch configuration according to the pre-fetch schedule. For example, if the data are to be accessed at 8:00 am, persistence component 535 may retrieve the data at any time between the previous retrieval and 8:00 am. A pre-fetch schedule may include a data age constraint (e.g., four hours), in which case persistence component 535 retrieves the data content based on the target access time and the data age constraint. For example, with a target access time of 8:00 am and a data age constraint of four hours, persistence component 535 would retrieve the data no earlier than 4:00 am.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, preloading of the data that matches the pre-fetch configuration criteria is started at a preset time.
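By way of illustration only, the cited scheduling behavior of Meier may be sketched in Python as follows. The sketch and every identifier in it are hypothetical aids supplied by the Examiner for the convenience of the applicant; they are not drawn from Meier's disclosure.

from datetime import datetime, timedelta

# Hypothetical sketch: Meier's pre-fetch schedule pairs a target access
# time with a data age constraint, and retrieval may occur no earlier
# than the target time minus that constraint (Col. 20, Lines 44-55).
def earliest_retrieval_time(target_access, data_age_constraint):
    return target_access - data_age_constraint

# A target access time of 8:00 am with a four-hour age constraint yields
# an earliest retrieval time of 4:00 am, matching Meier's example.
target = datetime(2026, 1, 6, 8, 0)
print(earliest_retrieval_time(target, timedelta(hours=4)))  # 04:00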
Meier does not explicitly disclose:
A computer-implemented method for preloading data in a distributed computing system including multiple computers connected over a network, the method comprising:
executing, by a processor of a computer among the multiple computers, instructions stored in a memory to perform operations comprising:
selecting a local preloading target to be preloaded into local memory of the computer;
checking whether the local preloading target is redundant based on a synchronously loaded page stored in the local preloading metadata … and
wherein the local preloading metadata is stored in a first page of the local memory different from a second page in which remote preloading metadata for managing a remote preloading task is stored, and
wherein the synchronously loaded page represents information about data synchronously loaded into the local memory so as to be immediately referenced from a current execution context of a central processing unit (CPU) and is maintained in the local preloading metadata for determining whether execution of the local preloading task is unnecessary.
However, Acharya discloses:
A computer-implemented method for preloading data in a distributed computing system including multiple computers connected over a network, the method comprising: (Col. 6, Lines 36-52 “FIG. 1 is a network diagram that illustrates an example embodiment of a data preload manager system that manages preloading of data for supported client computing systems. In particular, in the illustrated embodiment, a preload manager system 140 is illustrated, and it performs automated operations to support the preloading of data on one or more example client computing systems 100 and 105. In addition, one or more content data source systems 160 are illustrated and are available to provide various data to client systems, including multiple data groups 165 provided by the example content data source system 160a. In addition, various CDN edge server devices 170a-170n are also illustrated at different locations within a network 190, and one or more other computing systems 180 are also illustrated and may be configured to provide various additional types of functionality (e.g., to manage operations of a network of the edge server devices 170 of the content delivery network).”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the preload manager system is part of a network including a multitude of client and other computing systems.
executing, by a processor of a computer among the multiple computers, instructions stored in a memory to perform operations comprising: (Col. 14, Lines 28-48 “FIG. 3 is a block diagram illustrating an example embodiment of a computer system suitable for performing techniques to manage the preloading of data for supported client systems. In particular, FIG. 3 illustrates a server computer system 300 suitable for executing an embodiment of a data preload manager system 340, as well as various computer systems 350, content data source systems 360, optionally edge server devices 370, and other computing systems 380. In the illustrated embodiment, the computer system 300 has components that include one or more hardware CPU processors 305, various I/O components 310, storage 320, and memory 330, with the illustrated I/O components including a display 311, a network connection 312, a computer-readable media drive 313, and other I/O devices 315 (e.g., a keyboard, a mouse, speakers, etc.). In other embodiments, the computer system 300 may have more or less components than are illustrated, and the local storage 320 may optionally be provided by one or more non-volatile storage devices that are included within or otherwise locally attached to the computer system 300.”)
selecting a local preloading target to be preloaded into local memory of the computer; (Col. 5, Lines 7-12 “In at least some embodiments, particular data groups to preload for a client computing system may be stored on one or more devices proximate to the client system, whether instead of or in addition to storing those data groups on a local persistent cache or other local storage of the client system.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the data group to preload for a client computing system is stored on multiple devices.
wherein the synchronously loaded page represents information about data synchronously loaded into the local memory so as to be immediately referenced from a current execution context of a central processing unit (CPU) … (Col. 4, Lines 3-12 “For example, the initial portions of multiple audio/video data files may be preloaded on a client system, such as by preloading one or more data groups for each such data file—such preloaded data groups may then be used to act as a data buffer for a data file that is selected (e.g., by a user of the client system), so that the initial portion of the data file is locally available to be presented (e.g., substantially immediately) while additional data groups corresponding to some or all of the remainder of the data file are downloaded.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the preloaded data groups on the client system are available to be immediately referenced to cover the time while the remaining data groups are downloaded and presented.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add a computer-implemented method for preloading data in a distributed computing system including multiple computers connected over a network, the method comprising: executing, by a processor of a computer among the multiple computers, instructions stored in a memory to perform operations comprising: selecting a local preloading target to be preloaded into local memory of the computer and wherein the synchronously loaded page represents information about data synchronously loaded into the local memory so as to be immediately referenced from a current execution context of a central processing unit (CPU), as seen in Acharya’s invention, into Meier’s invention because these modifications allow the selection of data to be preloaded across multiple computers in an “obvious to try” manner, such that any one of the computers can handle a given task and the invention is not limited to a single device or CPU.
In addition, DiVincenzo discloses:
checking whether the local preloading target is redundant based on a synchronously loaded page stored in the local preloading metadata … ([0024] “As part of request clustering introduced above, in cases when a received request is not the first request for a segment in a particular prefetch interval, the prefetching process will have already commenced in response to the first received request. Accordingly, it is likely that the segment being requested has already been prefetched and is locally cached within the distribution server memory. When cached, the process responds to the user request by serving the requested segment from cache without initiating a request to the origin server. There may be instances when prefetching has commenced in response to a first request, but a second request is received prior to receiving the prefetched segments from the origin server. In such cases, request clustering stops the distribution server from retrieving redundant copies of the same segment from the origin server by initiating a new connection and submitting a new request for the segment when the segment is already being prefetched.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the request clustering stops the process of retrieving redundant copies of the same data if the prefetching process has already commenced/occurred and the preloading target is locally cached.
… data synchronously loaded into the local memory … is maintained in the local preloading metadata for determining whether execution of the local preloading task is unnecessary. ([0024] “As part of request clustering introduced above, in cases when a received request is not the first request for a segment in a particular prefetch interval, the prefetching process will have already commenced in response to the first received request. Accordingly, it is likely that the segment being requested has already been prefetched and is locally cached within the distribution server memory. When cached, the process responds to the user request by serving the requested segment from cache without initiating a request to the origin server. There may be instances when prefetching has commenced in response to a first request, but a second request is received prior to receiving the prefetched segments from the origin server. In such cases, request clustering stops the distribution server from retrieving redundant copies of the same segment from the origin server by initiating a new connection and submitting a new request for the segment when the segment is already being prefetched.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the request clustering stops the process of retrieving redundant copies of the same data if the prefetching process has already commenced/occurred and the preloading target is locally cached.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add checking whether the local preloading target is redundant based on a synchronously loaded page stored in the local preloading metadata and data synchronously loaded into the local memory … is maintained in the local preloading metadata for determining whether execution of the local preloading task is unnecessary, as seen in DiVincenzo’s invention, into Meier’s invention because these modifications apply a known technique to a known device ready for improvement: skipping redundant preloading tasks yields the predictable result of less wasted resource usage.
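By way of illustration only, the redundancy check described in DiVincenzo at paragraph [0024] may be sketched as follows. All names are hypothetical and supplied by the Examiner for clarity; they are not drawn from DiVincenzo's disclosure.

# Hypothetical sketch: a prefetch is skipped when the target is already
# loaded (e.g., synchronously loaded and recorded in the metadata) or an
# equivalent prefetch is already in flight, so execution of the local
# preloading task is unnecessary.
local_preloading_metadata = {
    "synchronously_loaded": set(),  # pages already in local memory
    "in_flight": set(),             # prefetch tasks already started
}

def should_start_prefetch(page_id):
    meta = local_preloading_metadata
    if page_id in meta["synchronously_loaded"] or page_id in meta["in_flight"]:
        return False                # redundant; do not start the task
    meta["in_flight"].add(page_id)
    return True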
In addition, Sukegawa discloses:
wherein the local preloading metadata is stored in a first page of the local memory different from a second page in which remote preloading metadata for managing a remote preloading task is stored. (Col. 3, Lines 19-27 “The processor 20, 120 is connected to its shared memory 60, 160 via a local-remote divided cache memory subsystem 30, 130, which contains a local data area 40, 140 separately from a remote data area 70, 170. As the names suggest, local data area 40, 140 stores local data, and remote data area 70, 170 stores remote data. Access to the local data area 40, 140 is controlled separately from access to the remote data area 70, 170.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the local data and remote data are stored in the same local-remote divided cache memory subsystem, but in separate areas/pages.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add wherein the local preloading metadata is stored in a page other than a page in which remote preloading metadata for managing a remote preloading task is stored, as seen in Sukegawa’s invention, into Meier’s invention because these modifications apply a known technique to a known device ready for improvement to yield predictable results: the clear separation of remote data from local data allows a system to handle remote preloading and local preloading separately, yielding the predictable result of more efficient data storage.
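By way of illustration only, the separation Sukegawa teaches may be sketched as follows. The two dictionaries are hypothetical stand-ins for two distinct memory pages supplied by the Examiner; they are not drawn from Sukegawa's disclosure.

# Hypothetical sketch: local preloading metadata lives in one memory
# region ("first page") and remote preloading metadata in a separate
# region ("second page"), mirroring Sukegawa's divided local/remote
# cache areas, so each kind of task is managed independently.
local_metadata_page = {"tasks": {}, "synchronously_loaded": set()}
remote_metadata_page = {"tasks": {}}

def register_local_task(task_id, target_page):
    local_metadata_page["tasks"][task_id] = target_page

def register_remote_task(task_id, remote_host):
    remote_metadata_page["tasks"][task_id] = remote_host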
Regarding claim 4, Meier discloses the method of claim 1, but does not explicitly disclose:
stopping the local preloading task when the local preloading target is redundant with the synchronously loaded data.
However, DiVincenzo discloses:
stopping the local preloading task when the local preloading target is redundant with the synchronously loaded data. ([0024] “Accordingly, it is likely that the segment being requested has already been prefetched and is locally cached within the distribution server memory. When cached, the process responds to the user request by serving the requested segment from cache without initiating a request to the origin server. There may be instances when prefetching has commenced in response to a first request, but a second request is received prior to receiving the prefetched segments from the origin server. In such cases, request clustering stops the distribution server from retrieving redundant copies of the same segment from the origin server by initiating a new connection and submitting a new request for the segment when the segment is already being prefetched.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the data the preloading task is targeting is already locally cached, so it stops the prefetch request from retrieving redundant data.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add stopping the local preloading task when the local preloading target is redundant with the synchronously loaded data, as seen in DiVincenzo’s invention, into Meier’s invention because these modifications apply a known technique to a known device ready for improvement: stopping redundant preloading tasks yields the predictable result of less wasted resource usage.
Regarding claim 5, Meier discloses the method of claim 1, wherein the remote preloading task is executed within a memory region of a remote computer that is connected to the computer over the network. (Col. 21, Lines 53-62 “In an alternative embodiment, persistence component 535 publishes to pub/sub engine 505 a data pre-fetch message addressed to the other gateway 120, including the second data content. Device management component 510 is programmed to subscribe for data pre-fetch messages and transmit the data pre-fetch message to the other gateway 120. Persistence component 535 at the other gateway 120 receives the data pre-fetch message (e.g., via device management component 510 and pub/sub engine 505) and stores the data content from the data pre-fetch message in its data repository.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the pre-fetch message is addressed to another gateway, so data is transferred across systems to remote computers.
Meier does not explicitly disclose:
wherein the local preloading task is executed within a local memory region of the computer …
However, Acharya discloses:
wherein the local preloading task is executed within a local memory region of the computer … (Col. 7, Lines 19-29 “In the illustrated embodiment, the preload manager system 140 may receive an indication to preload data on the client computing system 100, such as in response to a request from a program 110 on the client computing system, in response to a request from a particular content data source system 160, etc. In this example, the preload manager system 140 obtains information about the data cache 130 available on local storage 120 of the client computing system, such as to determine an amount of storage capacity that the data cache may hold, and selects one or more data groups to store in the data cache 130.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the indication to preload data on the client computing system is meant to obtain data from the local storage to store in the data cache.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add wherein the local preloading task is executed within a local memory region of the computer, as seen in Acharya’s invention, into Meier’s invention because these modifications use a known technique to improve similar devices in the same way, allowing preloading tasks involving local data and remote data to be handled uniformly, thereby improving the method.
Regarding claim 6, Meier discloses the method of claim 1, but does not explicitly disclose:
wherein each of the multiple computers individually generates and manages local preloading metadata in a local memory page and remote preloading metadata in a separate memory page.
However, Acharya discloses:
wherein each of the multiple computers individually generates and manages local preloading metadata in a local memory page and remote preloading metadata in a separate memory page. (Col. 5, Lines 7-12 “In at least some embodiments, particular data groups to preload for a client computing system may be stored on one or more devices proximate to the client system, whether instead of or in addition to storing those data groups on a local persistent cache or other local storage of the client system.” and Col. 17, Line 66 – Col. 18, Line 10 “After block 455, the routine continues to block 460 to select a preload storage target to use. In the illustrated example, only one target is selected at a given time, although in other embodiments particular selected data groups may be stored on multiple targets. In addition, as described in greater detail elsewhere, a variety of factors may be used to select a particular preload storage target to use, including in a manner specific to a particular client system, such as to determine whether to use a local storage cache on the client system or to use storage accessible and proximate to the client system (e.g., on a particular selected edge server device).”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the preload storage target may be any computer and may receive preloading metadata from a local storage cache or from devices proximate to the system.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add wherein each of the multiple computers individually generates and manages local preloading metadata in a local memory page and remote preloading metadata in a separate memory page, as seen in Acharya’s invention, into Meier’s invention because these modifications present an “obvious to try” solution with a reasonable expectation of success: each computer maintains its own preloading metadata so that the computers do not merely hold redundant copies of the same data, which is an obvious way to make efficient use of limited storage.
Regarding claim 8, Meier discloses the method of claim 1, wherein information about the synchronously loaded page is stored for a preset period. (Col. 20, Lines 17-23 “Persistence component 535 also purges cached data from the data repository. For example, cached data content may be associated with metadata elements in the data repository, and persistence component 535 may delete the cached data based on a time at which the data content were retrieved and/or stored, a time at which the data content were produced, and/or a time at which the cached data were last accessed.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the cached data associated with metadata elements are deleted after a period of time based on how recently the data was stored.
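By way of illustration only, the purge behavior cited from Meier may be sketched as follows. The retention period and all identifiers are hypothetical assumptions of the Examiner, not values from Meier's disclosure.

import time

RETENTION_SECONDS = 4 * 60 * 60   # assumed preset period of four hours

# Hypothetical sketch: entries whose stored-at timestamps are older than
# the preset period are purged, so information about a loaded page is
# retained only for that period (Meier, Col. 20, Lines 17-23).
def purge_expired(cache):
    # cache maps a page identifier to the time.time() at which it was stored
    now = time.time()
    for page_id in [p for p, stored in cache.items()
                    if now - stored > RETENTION_SECONDS]:
        del cache[page_id]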
Regarding claim 9, it is an apparatus claim having the same limitations as cited in method claim 1. Thus, claim 9 is also rejected under the same rationale as addressed in the rejection of claim 1 above.
Regarding claim 12, it is an apparatus claim having the same limitations as cited in method claim 4. Thus, claim 12 is also rejected under the same rationale as addressed in the rejection of claim 4 above.
Regarding claim 13, it is an apparatus claim having the same limitations as cited in method claim 5. Thus, claim 13 is also rejected under the same rationale as addressed in the rejection of claim 5 above.
Regarding claim 14, it is an apparatus claim having the same limitations as cited in method claim 6. Thus, claim 14 is also rejected under the same rationale as addressed in the rejection of claim 6 above.
Regarding claim 16, it is an apparatus claim having the same limitations as cited in method claim 8. Thus, claim 16 is also rejected under the same rationale as addressed in the rejection of claim 8 above.
6. Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Meier et al. (U.S. Patent No. 8,761,008) – hereinafter “Meier”, in view of Acharya et al. (U.S. Patent No. 9,582,603) – hereinafter “Acharya”, DiVincenzo et al. (U.S. Pub. No. 2017/0054800) – hereinafter “DiVincenzo”, and Sukegawa et al. (U.S. Patent No. 5,778,429) – hereinafter “Sukegawa”, and further in view of Li et al. (U.S. Pub. No. 2023/0176980) – hereinafter “Li”.
Regarding claim 7, Meier discloses the method of claim 1, but does not explicitly disclose:
wherein the local preloading target is selected so as to correspond to a page physically adjacent to a currently referenced page in consideration of the current execution context of the central processing unit (CPU).
However, Li discloses:
wherein the local preloading target is selected so as to correspond to a page physically adjacent to a currently referenced page in consideration of the current execution context of the central processing unit (CPU). ([0019] “Correspondingly, in the page swap-in process, for the to-be-swap-in target page, according to a prefetch principle, the target page and the adjacent page of the target page may be swapped into the memory device, or the target page is swapped in from the swap device to the memory device, and the adjacent page of the target page is preloaded. In the preloading process, preloading may be implemented by the memory manager by using a cache (cache). Details are not described herein again. This process can ensure that a swap-in page is a logically related page of a same application, so that prefetch accuracy in the swap-in process is improved. This improves running performance of the system.”) The citation is interpreted to read on the claimed invention because under broadest reasonable interpretation, the page adjacent to the target page is preloaded because it is a logically related page of the same application, allowing execution to continue.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add wherein the local preloading target is selected so as to correspond to a page physically adjacent to a currently referenced page in consideration of the current execution context of the central processing unit (CPU), as seen in Li’s invention, into Meier’s invention because these modifications use a known technique to improve similar devices in the same way: the page adjacent to the target is a logically related page, so processor execution can continue without halting, which is a known technique for improving execution performance.
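By way of illustration only, the adjacent-page selection Li describes may be sketched as follows. The page size and function name are hypothetical assumptions of the Examiner, not drawn from Li's disclosure.

PAGE_SIZE = 4096   # assumed page size in bytes

# Hypothetical sketch: when an address in the current page is referenced,
# both that page and the physically adjacent next page are selected as
# preload targets, per the prefetch principle of Li ([0019]).
def pages_to_preload(referenced_address):
    page = referenced_address // PAGE_SIZE
    return [page, page + 1]   # current page plus the adjacent page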
Regarding claim 15, it is an apparatus claim having the same limitations as cited in method claim 7. Thus, claim 15 is also rejected under the same rationale as addressed in the rejection of claim 7 above.
Conclusion
7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Such prior art includes Ingle et al. (U.S. Pub. No. 2009/0228688), which discloses prefetching/preloading data from memory into a register in a conventional manner, a general format of prefetching that many later patents follow. Additionally, Kalogeropulos et al. (U.S. Pub. No. 2014/0195788) discloses merging/removing duplicate prefetch instructions before they are executed so as not to waste resources, which is similar to the newly added limitation in the independent claims.
The Examiner has cited particular columns, paragraphs, sections, and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. The applicant, in preparing responses, is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
When responding to the Office action, applicant is advised to clearly point out the patentable novelty the claims present in view of the state of the art disclosed by the reference(s) cited or the objections made. A showing of how the amendments avoid such references or objections must also be present. See 37 C.F.R. 1.111(c).
When responding to this Office action, applicant is advised to provide the line and page numbers in the application and/or reference(s) cited to assist in locating the appropriate paragraphs.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL B TRAINOR whose telephone number is (571) 272-3710. The examiner can normally be reached Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Vital can be reached at (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.T./Examiner, Art Unit 2198
/PIERRE VITAL/Supervisory Patent Examiner, Art Unit 2198