Prosecution Insights
Last updated: April 19, 2026
Application No. 18/751,088

Intelligent, Predictive Memory Management System and Method

Non-Final OA (§103, §112)
Filed: Jun 21, 2024
Examiner: BIRKHIMER, CHRISTOPHER D
Art Unit: 2138
Tech Center: 2100 — Computer Architecture & Software
Assignee: Mext Corporation
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 75%, above average (370 granted / 496 resolved; +19.6% vs TC avg)
Interview Lift: +7.8% (moderate; based on resolved cases with interview)
Avg Prosecution: 2y 11m (typical timeline); 30 applications currently pending
Total Applications: 526 (career history, across all art units)

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§112: 27.2% (-12.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 496 resolved cases.
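Whatever the underlying per-statute metric, the listed rates and deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline. A minimal sketch, assuming the deltas are expressed in percentage points:

    # Hedged sketch: back-derive the Tech Center average for each statute from
    # the examiner's rate and the "vs TC avg" delta shown above, assuming the
    # delta is in percentage points (examiner_rate - tc_avg = delta).
    rates = {"§101": (2.8, -37.2), "§103": (43.1, +3.1),
             "§102": (21.6, -18.4), "§112": (27.2, -12.8)}

    for statute, (examiner_rate, delta) in rates.items():
        tc_avg = examiner_rate - delta  # e.g. §101: 2.8 - (-37.2) = 40.0
        print(f"{statute}: examiner {examiner_rate:.1f}%, TC avg ≈ {tc_avg:.1f}%")

Under that assumption, every row implies the same 40.0% Tech Center baseline, which suggests the dashboard measures each statute against a common estimated average.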

Office Action

Rejections: §103, §112
DETAILED ACTION

The current Office Action is in response to the papers submitted 06/21/2024. Claims 1 - 38 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 5, 9, and 11 are objected to because of the following informalities:

Claim 5 is worded oddly to the examiner. The phrase “in comprising predicting” should be “

Claim 9 is worded oddly to the examiner. The phrase “is that a score that the process will refer to other pages within a number of subsequent memory access operations exceeds a threshold score” is unclear. It appears the phrase is supposed to be similar to one of the ways the score is calculated in paragraph 0041. It is suggested to use similar language as what is in paragraph 0041.

Claim 11 is worded oddly to the examiner. The phrase “is that a score that predicted pages are likely to be needed by the process before other, colder pages resident in the relatively faster memory exceeds a threshold score” is unclear. It appears the phrase is supposed to be similar to one of the ways the score is calculated in paragraph 0041. It is suggested to use similar language as what is in paragraph 0041.

Appropriate correction is required.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

The specification is objected to as failing to provide proper antecedent basis for the claimed subject matter. See 37 CFR 1.75(d)(1) and MPEP § 608.01(o). Correction of the following is required: The claims disclose a software appliance, most-likely-to-be-missed pages, most-likely-to-be accessed pages, common hardware platform, process behavior information, system utilization statistics cache misses, thread identifier, the process’ name, offset of a page in a process virtual address space section, pressure stall information metrics, a page swap-out time, a time of most recent use of a respective page, process address space size upon swap-out, process cumulative page fault data when upon swap-out, process cumulative runtime upon swap-out of a memory block, I/O waiting time upon memory block swap-out, process working set size at swap-out, page sharing by more than one process/thread at swap-out, page unaccessed time exceeding an access time threshold, page accessed before becoming unaccessed, page accessed shortly after swap-out, identification of a number of pages accessed by context before a most recent page miss on a respective page, and a time at which a page block was first accessed in a virtual memory of a context. There is no specific disclosure of these limitations in the specification.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the software appliance must be shown or the feature(s) canceled from the claim(s). No new matter should be entered. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application.
Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claims 1 – 38 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1 – 3, 7, 15, 17, 26 – 29, 32, and 34 disclose the use of a software appliance, most-likely-to-be-missed pages, most-likely-to-be accessed pages, common hardware platform, process behavior information, system utilization statistics cache misses, thread identifier, the process’ name, offset of a page in a process virtual address space section, pressure stall information metrics, a page swap-out time, a time of most recent use of a respective page, process address space size upon swap-out, process cumulative page fault data when upon swap-out, process cumulative runtime upon swap-out of a memory block, I/O waiting time upon memory block swap-out, process working set size at swap-out, page sharing by more than one process/thread at swap-out, page unaccessed time exceeding an access time threshold, page accessed before becoming unaccessed, page accessed shortly after swap-out, identification of a number of pages accessed by context before a most recent page miss on a respective page, and/or a time at which a page block was first accessed in a virtual memory of a context. The original specification fails to disclose the use of these terms. All remaining claims are rejected for being dependent on a rejected base claim.

Claims 1 - 38 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The terms “relatively faster”, “relatively slower”, “most-likely-to-be-missed pages”, and “most-likely-to-be accessed pages” in claims 1 and 27 are relative terms which render the claims indefinite. These terms are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claims 1, 11, 16, 27, and 32 recite the limitations “relatively faster memory” and/or “relatively slower memory” in multiple locations. The use of the term “relatively” indicates the memory is compared to something. However, the limitations fail to indicate what the faster and slower memories are compared to. It is also unclear what aspect of the memory is considered faster or slower. This could mean the speed of data reads, writes, data access in general including both reads and writes, the speed of the memory to be accessible after start up, or any other number of aspects that can be considered faster or slower. This makes the limitation and claim indefinite since the scope of what is considered “relatively faster” and “relatively slower” is unclear. For examination the limitations will be considered as referring to each other, which means the computer system has a plurality of memories where a first memory of the plurality of memories is considered faster than a second memory of the plurality of memories in terms of general data access.
Claims 1 – 3, 7, 15, 17, 26 – 29, 32, and 34 disclose a software appliance, most-likely-to-be-missed pages, most-likely-to-be accessed pages, common hardware platform, process behavior information, system utilization statistics cache misses, thread identifier, the process’ name, offset of a page in a process virtual address space section, pressure stall information metrics, a page swap-out time, a time of most recent use of a respective page, process address space size upon swap-out, process cumulative page fault data when upon swap-out, process cumulative runtime upon swap-out of a memory block, I/O waiting time upon memory block swap-out, process working set size at swap-out, page sharing by more than one process/thread at swap-out, page unaccessed time exceeding an access time threshold, page accessed before becoming unaccessed, page accessed shortly after swap-out, identification of a number of pages accessed by context before a most recent page miss on a respective page, and/or a time at which a page block was first accessed in a virtual memory of a context. There is no disclosure of these terms or what these terms are in the original specification. This makes the scope of the terms indefinite, thereby making the scope of the claims indefinite also.

Claims 1 and 27 disclose the limitations “most-likely-to-be-missed pages” and “most-likely-to-be accessed pages”. As indicated above, the specification and drawings fail to provide proper support for the limitations. This makes the scope of the limitations indefinite.

Claims 1 and 27 recite the limitation "the most-likely-to-be-accessed pages". There is no previous mention of most-likely-to-be-accessed pages in the claims. There is insufficient antecedent basis for this limitation in the claim.

Claims 2 and 28 recite the limitation “relative fast memory”. The use of the term “relative” indicates the memory is compared to something. However, the limitation fails to indicate what the fast memory is compared to. It is also unclear what aspect of the memory is considered fast. This could mean the speed of data reads, writes, data access in general including both reads and writes, the speed of the memory to be accessible after start up, or any other number of aspects that can be considered fast. This makes the limitation and claim indefinite since the scope of what is considered “relatively fast” is unclear.

The terms “relatively faster”, “relatively slower”, and “relatively fast” in claims 1 – 2, 11, 16, 27 - 28, and/or 32 are relative terms which render the claims indefinite. The terms are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. There is no indication what aspect of memory is considered fast, faster, or slower. The claims also fail to define what degree of “relative” is required to be considered fast, faster, or slower.

The term “local” in claims 2 and 28 is a relative term which renders the claims indefinite. The term “local” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. There is no indication what the relatively fast memory is local to and what defines the meaning of local as compared to another memory that is not local.
Claims 3 and 29 recite that the software appliance and the OS run on a common hardware platform. There is no mention of a common hardware platform in the original specification. It is unclear what is meant by “common”. Common has multiple meanings that could apply to the claim. One meaning is that the hardware platform that the software appliance and OS run on is the same hardware platform. Another valid meaning would be that the hardware the software appliance and the OS run on is common in the art but might be different from each other. This makes the scope of the claim indefinite. For examination the common hardware platform limitation will be treated as referring to a hardware platform that both the software appliance and the OS run on together.

Claims 4 and 30 recite the limitation that the pages to be moved are moved independently of any transfer request by the process. Claim 4 is dependent on claim 1 and claim 30 is dependent on claim 27. Both claims 1 and 27 disclose inputting information corresponding to events associated with a process running on the operating system. The inputted information is input into a machine learning component that is used to configure prefetching. This shows that in the independent claims the moving of pages in a prefetch operation is based on requests of the process. Requests of a process running on an operating system simplify down to read and write requests in a computer system. Read and write requests are transfer requests. This makes it unclear how the movement of pages is independent of any transfer request when the prefetching that moves data is based on transfer requests. For examination the limitation will be treated as meaning the movement of certain pages is performed before a request for the certain pages is received from the process.

Claim 4 recites the limitation "the pages to be moved" in line 1. There is no previous mention of pages to be moved in the claim or any base claim. There is insufficient antecedent basis for this limitation in the claim. For examination the pages to be moved will be treated as referring back to the pages that are moved from the relatively slower memory to the relatively faster memory in base claim 1.

The term “near” in claim 7 is a relative term which renders the claim indefinite. The term “near” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. There is no indication what defines near with regard to the near future as compared to another version of the future.

Claim 8 appears to be missing some limitations. The claim does not read coherently from lines 3 – 4. The claim fails to finish detailing what is adjusted in line 3. Line 4 is not a complete limitation; it is unclear how the dimensionality and the page access model are related or interact with each other. The scope of claim 8 is indefinite at this time. For examination claim 8 will be treated as generating a list of pages ranked, choosing a cutoff of the list of ranked pages that is adjustable in real time, and the page access model can change in real time.

Claim 9 recites the limitation “the access prediction criterion is that a score that the process will refer to other pages within a number of subsequent memory access operations exceeds a threshold score”. The wording of the limitation is unclear.
It reads as if the score is a value that exceeds a threshold all the time and also represents the chance a process will refer to other pages within a number of memory access operations. It is unclear what the other pages refer to exactly and how a score would always exceed a threshold score. For examination the limitation will be treated as meaning the access prediction criterion is based on a score of pages where the score is an indication that the pages will be accessed within a number of memory accesses and the score is compared to a threshold score.

Claim 11 recites the limitation “the access prediction criterion is that a score that predicted pages are likely to be needed by the process before other, colder pages resident in the relatively faster memory exceeds a threshold score”. The wording of the limitation is unclear. It reads as if the score is a value that exceeds a threshold all the time and also represents the chance predicted pages will be needed before cold pages in the faster memory. For examination the limitation will be treated as meaning the access prediction criterion is a score that indicates predicted pages are likely to be needed before other data in the faster memory and the score is compared to a threshold.

Claim 17 discloses process behavior information. There is no disclosure of process behavior information or what process behavior information is in the original specification. This makes the scope of the process behavior information indefinite, thereby making the scope of the claim indefinite also. For examination the process behavior information will be treated as any information related to the behavior of a process in any fashion that is not page miss information.

All remaining claims are rejected for being dependent on a rejected base claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1 – 15, 17 – 18, 23 – 32, 34 – 35, and 37 - 38 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dias et al. (Pub. No.: US 2020/0250096), referred to as Dias, in view of Zhuang et al. (Pub. No.: US 2016/0239423), referred to as Zhuang, in view of BenHanokh et al. (Pub. No.: US 2022/0164313), referred to as BenHanokh.
Regarding claim 1, Dias teaches a memory management method [Figs 2 – 4], for a computing system [Figs 1 and 8 – 9], in which the computing system [Figs 1 and 8 – 9] includes an operating system (OS) [Paragraph 0018; An operating system is the software that runs on the system hardware allowing the user to interact with the hardware] that supports virtual memory [130-1 and 130-N, Fig 1; Logical units show the use of virtual memory] and that accesses a relatively faster memory [150, Fig 1; Paragraph 0018; Cache 150 is considered faster than other memory in the system such as 125 in figure 1] and at least one relatively slower memory [125, Fig 1; Paragraph 0018; The storage 125 is considered slower than the cache], the memory management method [Figs 2 – 4] comprising:

inputting from the OS [Paragraph 0018] to a learning component [Figs 2 – 4; The adaptive prefetching process is a component that performs prefetching that learns based on training data as in step 410] information [Paragraphs 0037 – 0039; The hit information and I/O traces are information input to the learning component] corresponding to events associated with a process running on the OS [210, Fig 2; 410, Fig 4; The processes that cause the flow charts to be activated are run on the operating system of the system], in which the component is configured within a software appliance [120, Fig 1; Figs 2 – 4; The adaptive prefetching process is configured in a software appliance that is running on 120];

in the learning component [Figs 2 – 4; The adaptive prefetching process is a component that performs prefetching that learns based on training data as in step 410], synthesizing a page access model from at least one sequence of the events inputted from the OS [310 and 320, Fig 3; Paragraphs 0037 – 0039; The look ahead window is a page access model used for prefetching based on I/O traces from the OS];

identifying patterns in the at least one sequence of the events [315, Fig 3; Paragraphs 0037 – 0039; The look ahead window is optimized based on the traversal of the address space, which is patterns of usage];

in real time, predicting page misses by the process in the relatively faster memory that are likely to happen by the process and identifying most-likely-to-be-missed pages that the process may attempt to access in the relatively faster memory [315, Fig 3; Paragraphs 0037 – 0039; The simulation engine predicts pages that will be missed based on I/O traces and adjusts the look ahead window accordingly in real time as the system is running];

and moving at least some of the most-likely-to-be accessed pages [Fig 6; The blocks between 1069 – 1255 and 5349 – 5511; These blocks are most likely to be accessed in the future and are prefetched] from the relatively slower memory [125, Fig 1] to the relatively faster memory [150, Fig 1], whereby the pages the process will attempt to access in at least one relatively faster memory [150, Fig 1] are predictively moved to and made available to the process in at least one relatively faster memory [150, Fig 1] before the process attempts access [Fig 6; Paragraphs 0046 - 0047; The blocks between 1069 – 1255 and 5349 – 5511; The blocks loaded to the cache include blocks predicted to be accessed outside of the actual requested blocks].

However, Dias may not specifically disclose the limitation of the component being a machine learning component that is configured within a software appliance that is logically separate from the OS.
Zhuang discloses the component that manages the prefetching is configured within a software appliance that is logically separate from the OS [Paragraph 0024; Claim 1; The prefetching rules are separate from the operating system]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Zhuang in Dias, because it allows prefetching rules to be applied to an operating system that does not support prefetching, and allows the prefetching rules to be updated based on data correlations associated with applications without having to update the operating system [Paragraph 0024].

However, Dias in view of Zhuang may not specifically disclose the limitation of the component being a machine learning component. BenHanokh discloses the learning component is a machine learning component [Paragraphs 0015 – 0016 and 0032]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate BenHanokh in Dias in view of Zhuang, because the use of machine learning allows the prefetching to be tuned and adjusted based on complex and non-linear access patterns, adapt to irregular workloads, and generally adapt to the behavior of the system.

Regarding claim 2, Dias teaches the software appliance [120, Fig 1; Figs 2 – 4] accesses a local relatively fast memory [150, Fig 1; Paragraph 0018]; the software appliance [120, Fig 1; Figs 2 – 4] and the computing system [Figs 1 and 8 – 9] communicate over a network [Figs 1 and 8 – 9; The lines indicate data paths of a network that allows the software appliance and computer system to communicate with different devices]; and the most-likely-to-be accessed pages are moved over the network [Figs 1 and 6; The data that is loaded into the cache is moved along the lines from the cache 150 to logical units 125 along the network of data lines].

Regarding claim 3, Dias teaches the software appliance [120, Fig 1; Figs 2 – 4] and the OS [Paragraph 0018] run on a common hardware platform [100, Fig 1; The operating system and prefetch software run on the same hardware in figure 1]; and the most-likely-to-be accessed pages are moved into a memory space [150, Fig 1] shared by the software appliance [120, Fig 1; Figs 2 – 4] and the OS [Paragraph 0018; The operating system accesses the cache for requests and the software prefetch application accesses the cache to perform prefetch operations].

Regarding claim 4, Dias teaches the pages to be moved are moved independently of any transfer request by the process [Fig 6; The blocks moved to cache outside the actual request are moved independent of a request for the blocks]. Zhuang discloses pages to be moved are moved independently of any page miss handler controlled by the OS [Paragraph 0024; The prefetch rules and operations are separate from the operating system, showing the moving is performed independently of operations of the operating system].

Regarding claim 5, Dias teaches in comprising predicting the page misses according to an access prediction criterion [310, Fig 3; Paragraphs 0036 – 0039 and 0049 - 0051; The look ahead window predicts which pages will be missed in the future based on access prediction criteria based on the locality of current requests].
Regarding claim 6, Dias teaches in which the access prediction criterion is a function of an output of the page access model [310 and 320, Fig 3; Paragraphs 0049 – 0051; The output of the page access model is fed back into the model to set the criterion when the model is not optimized].

Regarding claim 7, Dias teaches generating the output as a list of pages currently residing in the software appliance estimated to be needed by the process within a near future [320, Fig 3; The unseen I/O traces represent pages in the software appliance that will be needed in the future].

Regarding claim 8, Dias teaches generating the list of pages as a ranked list; and choosing a cutoff of the ranked list that is adjustable in real time in order to adjust a whereby a dimensionality of the page access model [Fig 6; Paragraphs 0037 – 0039 and 0046 - 0047; The look ahead window is a list of pages that are ranked as being optimal pages to be prefetched with regard to a request. The size of the look ahead window has a cutoff that can be adjusted by the page access model as needed].

Regarding claim 9, Dias teaches the access prediction criterion is that a score that the process will refer to other pages within a number of subsequent memory access operations exceeds a threshold score [Fig 6; Paragraphs 0046 – 0051; The blocks that are prefetched outside of the blocks actually requested are scored above a threshold indicating they are likely to be accessed by the process within a number of accesses from the actual request based on time].

Regarding claim 10, Dias teaches the score is a probability [310, Fig 3; Paragraphs 0046 – 0051; The look ahead window is equivalent to a probability score that indicates so many blocks beyond the actual requested blocks are likely to be requested and are thus prefetched into the cache].

Regarding claim 11, Dias teaches the access prediction criterion is that a score that predicted pages are likely to be needed by the process before other, colder pages resident in the relatively faster memory [150, Fig 1; Paragraph 0018] exceeds a threshold score [310, Fig 3; Paragraphs 0046 – 0051; The look ahead window is an indication of predicted pages that are likely to be needed before other pages in the system including pages in the cache. The size of a window is based on a threshold score indicating how much extra data to prefetch based on the operation of the system].

Regarding claim 12, Dias teaches the score is a probability [310, Fig 3; Paragraphs 0046 – 0051; The look ahead window is equivalent to a probability score that indicates so many blocks beyond the actual requested blocks are likely to be requested and are thus prefetched into the cache].

Regarding claim 13, Dias teaches including in the access prediction criterion a threshold score that predicted pages are likely to be needed; and dynamically adjusting the threshold score to change how many pages are designated as the most-likely-to-be accessed pages [320, Fig 3; Fig 6; Paragraphs 0046 – 0051; The locality of blocks to actually requested blocks is a threshold score that is dynamically adjusted, changing the number of blocks prefetched].

Regarding claim 14, Dias teaches the score is a probability [310, Fig 3; Paragraphs 0046 – 0051; The look ahead window is equivalent to a probability score that indicates so many blocks beyond the actual requested blocks are likely to be requested and are thus prefetched into the cache].
Regarding claim 15, Dias teaches the software appliance [120, Fig 1; Figs 2 – 4; The adaptive prefetching process is configured in a software appliance that is running on 120] directs the learning component [Figs 2 – 4; The adaptive prefetching process is a component that performs prefetching that learns based on training data as in step 410] to predict the page misses upon detection of at least one trigger event [Fig 6; Paragraphs 0046 – 0047; The pages predicted to be missed are predicted based on the actual request being received]. BenHanokh discloses the learning component is a machine learning component [Paragraphs 0015 – 0016 and 0032].

Regarding claim 17, Dias teaches the at least one trigger event includes process behavior information in addition to page misses [710, Fig 7; Paragraphs 0029, 0036 – 0039, and 0052; Prefetching is performed and controlled by trace information, which is behavior information, and a miss occurring].

Regarding claim 18, Dias teaches the process is one of a plurality of processes running concurrently on the OS [210, Fig 2; 410, Fig 4; The processes that cause the flow charts to be activated are run on the operating system of the system. Operating systems run multiple processes concurrently]; and the page access model is synthesized specific to the process, independent of behavior of any other of the plurality of processes [220, Fig 2; 315, Fig 3; The look ahead window is the page access model and is synthesized in the simulation based on the process that requested data].

Regarding claim 23, Dias teaches scanning blocks of the virtual memory [130-1 and 130-N, Fig 1; Logical units show the use of virtual memory] to sample accesses by the process [315, Fig 3; Paragraphs 0037 – 0038; The look ahead window is based on a hit ratio, which indicates a scan of the blocks of the virtual memory to determine hits and misses to obtain the hit ratio]; and inputting resulting scanning information [Paragraphs 0037 – 0039; The hit information and I/O traces are information input to the learning component] to the learning component [Figs 2 – 4; The adaptive prefetching process is a component that performs prefetching that learns based on training data as in step 410]. BenHanokh discloses the learning component is a machine learning component [Paragraphs 0015 – 0016 and 0032].

Regarding claim 24, Dias teaches the sequence of events includes at least one event chosen from the group of events comprising a page miss, detection of contextual embedding actions including process/thread scheduling, the creation and destruction of a virtual address space, page hits, and page swapping [310 and 320, Fig 3; Paragraphs 0037 – 0039; The I/O traces result in either page hits or misses, which allows the system to determine the hit ratio].

Regarding claim 25, Dias in view of Zhuang in view of BenHanokh teaches carrying out the steps of claim 1 [Refer back to the rejection of claim 1] independent of specific hardware support in the computing system [Dias, Paragraphs 0079 – 0086; Any type of hardware that supports the functions of claim 1 can implement the steps of claim 1].
Regarding claim 26, Dias teaches the information [Paragraphs 0037 – 0039; The hit information and I/O traces are information input to the learning component] input to the learning component [Figs 2 – 4; The adaptive prefetching process is a component that performs prefetching that learns based on training data as in step 410] includes at least one of the information items including hardware performance counters, software counters, system utilization statistics cache misses, translation lookaside-buffer (TLB) misses, CPU load, I/O activity [Paragraphs 0037 – 0039; The I/O trace information is I/O activity], a thread identifier, the process’ name, offset of a page in a process virtual address space section, pressure stall information metrics, a page swap-out time, a time of most recent use of a respective page, process address space size upon swap-out, process cumulative page fault data when upon swap-out, process cumulative runtime upon swap-out of a memory block, I/O waiting time upon memory block swap-out, process working set size at swap-out, page sharing by more than one process/thread at swap-out, page unaccessed time exceeding an access time threshold, page accessed before becoming unaccessed, page accessed shortly after swap-out, identification of a number of pages accessed by context before a most recent page miss on a respective page, and a time at which a page block was first accessed in a virtual memory of a context.

Claims 27 – 32, 34 - 35, and 37 – 38 are system claims corresponding to claims 1 – 5, 15, 17 – 18, and 19 – 24 and are rejected based on the same prior art and similar reasoning. Dias teaches the memory management system [Figs 1 and 8 – 9].

Claim(s) 19, 21 – 22, and 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dias et al. (Pub. No.: US 2020/0250096), referred to as Dias, in view of Zhuang et al. (Pub. No.: US 2016/0239423), referred to as Zhuang, in view of BenHanokh et al. (Pub. No.: US 2022/0164313), referred to as BenHanokh, as applied to claims 1 and 27 above, and further in view of Mayur Jain (Sampling in Machine Learning: A Beginner’s Guide), referred to as Jain.

Regarding claim 19, Dias teaches including page addresses [Paragraph 0039; The trace information indicating how the memory is traversed shows the use of address information to know the traversal of the memory] in the information input from the OS [Paragraph 0018; The operating system inputs requests for data to the prefetching process] to the learning component [Figs 2 – 4; The adaptive prefetching process is a component that performs prefetching that learns based on training data as in step 410]. BenHanokh discloses the learning component is a machine learning component [Paragraphs 0015 – 0016 and 0032]. However, Dias in view of Zhuang in view of BenHanokh may not specifically disclose the limitation(s) of reducing the number of page addresses used as inputs by sampling. Jain discloses reducing the number of page addresses used as inputs by sampling [What is Sampling?, Pages 2 – 3; The use of sampling reduces the amount of any data by only using a small sample of the larger overall data]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Jain in Dias in view of Zhuang in view of BenHanokh, because it reduces computational costs and saves time by only inputting, transferring, and analyzing a smaller representative sample of a larger group of data.
Regarding claim 21, Dias teaches including page addresses [Paragraph 0039; The trace information indicating how the memory is traversed shows the use of address information to know the traversal of the memory] in the information input from the OS [Paragraph 0018; The operating system inputs requests for data to the prefetching process] to the learning component [Figs 2 – 4; The adaptive prefetching process is a component that performs prefetching that learns based on training data as in step 410]. Jain discloses sampling input data [What is Sampling?, Pages 2 – 3; The use of sampling reduces the amount of any data by only using a small sample of the larger overall data].

Regarding claim 22, Dias teaches including page addresses [Paragraph 0039; The trace information indicating how the memory is traversed shows the use of address information to know the traversal of the memory] in the information input from the OS [Paragraph 0018; The operating system inputs requests for data to the prefetching process] to the learning component [Figs 2 – 4; The adaptive prefetching process is a component that performs prefetching that learns based on training data as in step 410]. Jain discloses sampling all input data [What is Sampling?, Pages 2 – 3; The use of sampling reduces the amount of any data by only using a small sample of the larger overall data].

Claim 36 is a system claim corresponding to claim 19 and is rejected based on the same prior art and similar reasoning. Dias teaches the memory management system [Figs 1 and 8 – 9].

Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dias et al. (Pub. No.: US 2020/0250096), referred to as Dias, in view of Zhuang et al. (Pub. No.: US 2016/0239423), referred to as Zhuang, in view of BenHanokh et al. (Pub. No.: US 2022/0164313), referred to as BenHanokh, in view of Mayur Jain (Sampling in Machine Learning: A Beginner’s Guide), referred to as Jain, as applied to claim 19 above, and further in view of IT Articles (The importance of sampling in Machine Learning), referred to as IT Articles.

Regarding claim 20, Jain discloses the sampling of data [What is Sampling?, Pages 2 – 3; The use of sampling reduces the amount of any data by only using a small sample of the larger overall data]. However, Dias in view of Zhuang in view of BenHanokh in view of Jain may not specifically disclose the limitation(s) of sampling being a function of an accuracy rate of the machine learning component. IT Articles discloses sampling is a function of an accuracy rate of the machine learning component [Why is it important to choose carefully a sampling technique in Machine Learning, Pages 1 – 3; Performance comparison between different sample techniques, Pages 10 – 14; The sample that is used is a function of the performance of the machine learning]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate IT Articles in Dias in view of Zhuang in view of BenHanokh in view of Jain, because it allows the sampling to be adjusted to reduce overfitting and handle imbalances in the source dataset [Why is it important to choose carefully a sampling technique in Machine Learning?, Pages 1 – 3].
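For orientation, claims 19 – 22 turn on thinning the page-address stream fed to the learning component by sampling. Below is a minimal sketch of that idea under a uniform-random-sampling assumption; it quotes neither the application nor the cited Jain reference, and the 10% rate and function name are hypothetical:

    import random

    # Minimal sketch of the input-sampling idea behind claims 19 - 22: reduce
    # the number of page addresses fed to a learning component by uniform
    # random sampling. The 10% rate and all names here are hypothetical.
    def sample_page_addresses(addresses: list[int], rate: float = 0.1,
                              seed: int | None = None) -> list[int]:
        rng = random.Random(seed)
        k = max(1, int(len(addresses) * rate))
        return rng.sample(addresses, k)

    trace = list(range(0x1000, 0x1000 + 500))  # stand-in for an OS page-address trace
    print(len(sample_page_addresses(trace)))   # -> 50 (10% of 500 addresses)

Claim 20 would then make the rate itself a function of the model's accuracy, which in this sketch would simply mean adjusting the rate argument between training rounds.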
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER D BIRKHIMER whose telephone number is (571)270-1178. The examiner can normally be reached 8-5, hoteling.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones, can be reached at 571-272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Christopher D Birkhimer/
Primary Examiner, Art Unit 2138
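For readers tracing the §103 mapping above: the examiner repeatedly reads claims 8 – 13 onto a scored, threshold-gated ranked list of prefetch candidates. The sketch below is a neutral illustration of that scheme only; it is not code from Dias, Zhuang, or BenHanokh, and every name in it is hypothetical:

    from typing import Callable

    # Illustrative sketch of the threshold-gated, ranked-list prefetch scheme
    # the Office Action reads onto claims 8 - 13. All names are hypothetical.
    class Prefetcher:
        def __init__(self, score_fn: Callable[[int], float],
                     threshold: float, cutoff: int):
            self.score_fn = score_fn    # page access model: page -> P(access soon)
            self.threshold = threshold  # claims 9/11: score must exceed a threshold
            self.cutoff = cutoff        # claim 8: adjustable cutoff on the ranked list

        def select_pages(self, candidate_pages: list[int]) -> list[int]:
            # Rank candidates by predicted access score (claim 8's ranked list).
            ranked = sorted(candidate_pages, key=self.score_fn, reverse=True)
            # Keep only pages whose score exceeds the threshold, up to the cutoff.
            picked = [p for p in ranked if self.score_fn(p) > self.threshold]
            return picked[: self.cutoff]

        def adjust(self, threshold: float | None = None, cutoff: int | None = None):
            # Claim 13: dynamically adjust the threshold (and cutoff) in real time
            # to change how many pages count as most-likely-to-be-accessed.
            if threshold is not None:
                self.threshold = threshold
            if cutoff is not None:
                self.cutoff = cutoff

    # Toy usage with a stand-in scoring function.
    scores = {0x10: 0.92, 0x11: 0.85, 0x12: 0.40, 0x2f: 0.78}
    pf = Prefetcher(score_fn=lambda page: scores.get(page, 0.0),
                    threshold=0.5, cutoff=2)
    print(pf.select_pages(list(scores)))  # -> [16, 17] (the two highest-scoring pages)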

Prosecution Timeline

Jun 21, 2024: Application Filed
Jan 24, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585384: METHOD AND DEVICE FOR MANAGING VEHICLE DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586637: IN-MEMORY COMPUTATION DEVICE FOR PERFORMING A SIGNED MAC OPERATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572283: WRITE BUFFER CONTROL IN MANAGED MEMORY SYSTEM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572293: SPEEDING CACHE SCANS WITH A BYTEMAP OF ACTIVE TRACKS WITH ENCODED BITS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566547: HYBRID DESIGN FOR LARGE SCALE BLOCK DEVICE COMPRESSION USING FLAT HASH TABLE (granted Mar 03, 2026; 2y 5m to grant)
Study what changed in order to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 82% (+7.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 496 resolved cases by this examiner. Grant probability derived from career allow rate.
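The projection figures reconcile with the career data shown above: 370 granted of 496 resolved is 74.6%, displayed as 75%, and the +7.8 point interview lift yields the 82% shown. A quick check, assuming the lift is additive in percentage points (the page does not state the model):

    # Sanity check of the projection arithmetic, assuming the interview lift is
    # additive in percentage points (an assumption; the dashboard does not say).
    granted, resolved = 370, 496
    base = 100 * granted / resolved          # 74.6% -> shown rounded as 75%
    with_interview = base + 7.8              # 82.4% -> shown rounded as 82%
    print(f"base {base:.1f}%, with interview {with_interview:.1f}%")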
