Prosecution Insights
Last updated: April 19, 2026
Application No. 18/206,068

DATA MANAGEMENT SCHEME IN VIRTUALIZED HYPERSCALE ENVIRONMENTS

Non-Final Office Action — §102, §103, Double Patenting
Filed: Jun 05, 2023
Examiner: RONES, CHARLES
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 23% (At Risk)
Projected OA Rounds: 3-4
Time to Grant: 4y 3m
Grant Probability with Interview: 57%

Examiner Intelligence

Grants only 23% of cases
Career Allow Rate: 23% (10 granted / 44 resolved; -32.3% vs TC avg)
Interview Lift: +34.5% for resolved cases with interview
Typical timeline: 4y 3m average prosecution; 10 currently pending
Career history: 54 total applications across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 46.0% (+6.0% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 44 resolved cases
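The headline examiner figures are simple ratios over the career counts shown above. A minimal sketch reproduces them; the Tech Center average used for the delta is an assumption inferred from the reported -32.3% gap, not a figure from the page:

```python
# Reproduce the dashboard's examiner statistics from raw counts.
# granted/resolved come from the page above; tc_avg is an assumed
# Tech Center average implied by the reported delta.

granted = 10
resolved = 44

allow_rate = granted / resolved      # career allow rate
tc_avg = 0.553                       # assumption: implied TC average
delta_vs_tc = allow_rate - tc_avg    # gap vs Tech Center average

print(f"Career allow rate: {allow_rate:.1%}")   # 22.7%, shown as 23% after rounding
print(f"Delta vs TC avg: {delta_vs_tc:+.1%}")   # about -32.6% (page reports -32.3%)
```

The small mismatch in the delta suggests the dashboard computes it from unrounded figures.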

Office Action

§102 · §103 · Double Patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. This Office action is in response to papers filed 12/04/2025. This application is a Continuation of application 16/897,264, filed 06/09/2020, now U.S. Patent No. 11,966,581, which is a Continuation of application 16/231,229, filed 12/21/2018, now U.S. Patent No. 10,725,663, which is a Continuation of application 14/729,026, filed 06/02/2015, now U.S. Patent No. 10,282,100, which is a Continuation-in-Part of application 14/561,204, filed 12/04/2014, now U.S. Patent No. 10,437,479.

Response to Amendment

The amendment filed on 12/03/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

2. The submissions of Information Disclosure Statements filed October 19, 2023; February 26, 2024; and September 06, 2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, they have been reviewed and considered by the Examiner.

Claim Rejections - Double Patenting

3. The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir.
1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a non-statutory double patenting ground, provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

4. Claims 1-12 and 15-19 are rejected on the ground of non-statutory obviousness-type double patenting as being unpatentable over claims 1-2, 5-12, and 15-19 of U.S. Patent No. 10,725,663. Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1-2, 5-12, and 15-19 in U.S. Patent No. 10,725,663 fully encompass claims 1-12 and 15-19 in the instant application, and thus claims 1-12 and 15-19 of the instant application are an obvious anticipation of claims 1-2, 5-12, and 15-19 in U.S. Patent No. 10,725,663 based on the anticipation doctrine of In re Goodman; see the table below.

Current Application / Patent 10,725,663

1.
An apparatus comprising: a memory controller configured to: interface with a memory system comprising a first storage medium of a first type and a second storage medium of a second type, wherein the first storage medium is associated with a first performance characteristic, and the second storage medium is associated with a second performance characteristic; receive a data access for the memory system; select the first storage medium to perform the data access, based on a category of data associated with the data access and the first performance characteristic associated with the first storage medium; and route the data access to the first storage medium.

2. The apparatus of claim 1, wherein the memory controller is configured to move, based on receiving the data access, data associated with the data access from the first storage medium to the second storage medium.

1. An apparatus comprising: a memory management unit configured to: interface with a heterogeneous memory system that comprises a plurality of types of storage mediums, wherein each type of storage medium is based upon a respective memory technology and is associated with one or more performance characteristics; receive, from a virtual machine, a data access for the heterogeneous memory system; determine at least one of the storage mediums of the heterogeneous memory system to service the data access, wherein the target storage medium is selected based, at least in part, upon at least one performance characteristic associated with the target storage medium and a quality of service tag that is associated with the virtual machine and that indicates one or more of the at least one performance characteristic; and route the data access by the virtual machine to the at least one of the storage mediums.

3. The apparatus of claim 2, wherein the memory controller is configured to move the data associated with the data access based on an access pattern of the data.

4. The apparatus of claim 2, wherein the memory controller is configured to move the data associated with the data access based on a modification of a performance characteristic.

2. The apparatus of claim 1, wherein the memory management unit is configured to, in response to a triggering event, move data associated with the virtual machine from a first storage medium to a second storage medium.

5. The apparatus of claim 1, wherein the memory controller is configured to: receive the data access from a virtual machine; and select the first storage medium based on a performance characteristic associated with the virtual machine.

6. The apparatus of claim 5, wherein the memory controller is configured to select the first storage medium based on a range of a performance characteristic associated with the virtual machine.

5. The apparatus of claim 1, wherein the quality of service tag includes at least two portions; wherein a first portion of the quality of service tag indicates a performance characteristic guaranteed by the virtual machine; and wherein a second portion of the quality of service tag indicates a range of values for the performance characteristic guaranteed by the virtual machine.

11. A method comprising: receiving a data access for a memory system, wherein the memory system comprises a first storage medium of a first type and a second storage medium of a second type, wherein the first storage medium is associated with a first performance characteristic, and the second storage medium is associated with a second performance characteristic; selecting, by a memory controller, the first storage medium for the data access based on a category of data associated with the data access and the first performance characteristic associated with the first storage medium and routing, by the memory controller, the data access to the first storage medium.

6. The apparatus of claim 1, wherein the memory management unit is configured to: maintain a count of an amount of allocable storage space associated with each storage medium; and route a data access by the virtual machine to at least one of the storage mediums based, at least in part, upon the amount of allocable storage space associated with each respective storage medium, and the quality-of-service tag.

7. The apparatus of claim 1, wherein the memory controller is configured to allocate data associated with the data access to the first storage medium and the second storage medium.

7. The apparatus of claim 6, wherein the memory management unit is configured to allocate data associated with the virtual machine across two or more of the storage mediums.

8. The apparatus of claim 1, wherein the memory controller is configured to allocate one or more memory pages of the data access to the first storage medium and the second storage medium, wherein the first storage medium and the second storage medium share an address space.

8. The apparatus of claim 6, wherein the memory management unit is configured to allocate memory pages of the virtual machine across two or more storage devices, wherein the two or more storage devices share a same, physical address space.

9. The apparatus of claim 1, wherein: the memory controller is configured to receive the data access from a virtual machine; and the virtual machine is configured to: execute a first application that is associated with a first quality of service based on a first performance characteristic associated with the virtual machine, and execute a second application that is associated with a second quality of service based on a second performance characteristic associated with the virtual machine.

9. The apparatus of claim 1, wherein the virtual machine is configured to execute a plurality of applications; and wherein each of the applications is associated with a quality-of-service tag that indicates one or more performance characteristics guaranteed by the virtual machine.

10. The apparatus of claim 1, wherein the first storage medium of the first type comprises a volatile storage medium and second storage medium of the second type comprises a non-volatile storage medium.

10. The apparatus of claim 1, wherein the heterogeneous memory system comprises both volatile and non-volatile storage mediums.

11. A method comprising: receiving a data access for a memory system, wherein the memory system comprises a first storage medium of a first type and a second storage medium of a second type, wherein the first storage medium is associated with a first performance characteristic, and the second storage medium is associated with a second performance characteristic; determining an amount of available storage space associated with the first storage medium; selecting, by a memory controller, the first storage medium for the data access based on a category of data associated with the data access and the first performance characteristic associated with the first storage medium and the amount of available storage space associated with the first storage medium; and routing, by the memory controller, the data access to the first storage medium.

11. A method comprising: receiving, from a virtual machine that is executed by a processor, a data access for a heterogeneous memory system, wherein the heterogeneous memory system comprises a plurality of types of storage mediums, wherein each type of storage medium is based upon a respective memory technology and is associated with one or more performance characteristic; determining, by a memory management unit, a target storage medium of the heterogeneous memory system for the data access based, at least in part, upon at least one performance characteristic associated with the target storage medium and a quality of service tag that is associated with the virtual machine and that indicates one or more of the at least one performance characteristic guaranteed by the virtual machine; and routing, by the memory management unit, the data access, at least partially, between the processor and the target storage medium.

12. The method of claim 11, further comprising moving, based on receiving the data access, data from the first storage medium to the second storage medium.

12. The method of claim 11, further comprising, in response to a triggering event, moving data associated with the virtual machine from a first storage medium to a second storage medium.

15. The method of claim 11, wherein the memory controller is configured to: receive the data access from a virtual machine; and select the first storage medium based on a performance characteristic associated with the virtual machine.

15. (Original) The method of claim 11, wherein the quality-of-service tag includes at least two portions; wherein a first portion of the quality-of-service tag indicates a performance characteristic guaranteed by the virtual machine; and wherein a second portion of the quality-of-service tag indicates a range of values for the performance characteristic guaranteed by the virtual machine.

16. The method of claim 15, wherein the memory controller is configured to select the first storage medium based on a range of a performance characteristic associated with the virtual machine.

16. The method of claim 11, wherein determining a target storage medium comprises: maintaining a count of an amount of allocable storage space associated with each storage medium; and selecting a target storage medium based, at least in part, upon the amount of allocable storage space associated with each respective storage medium, and the quality-of-service tag.

17. The method of claim 11, wherein the selecting the first storage medium comprises determining an application associated with the data access.

17. The method of claim 11, wherein the virtual machine is configured to execute a plurality of applications, wherein each of the applications is associated with a quality-of-service tag that indicates one or more performance characteristics guaranteed by the virtual machine; and wherein determining a target storage medium comprises determining which application executed is associated with the data access.

18. An apparatus comprising: a first interface configured to receive a data access for a memory system; at least one circuit configured to: access a memory system comprising a first storage medium of a first type and a second storage medium of a second type, wherein the first storage medium is associated with a first performance characteristic, and the second storage medium is associated with a second performance characteristic, select the first storage medium for the data access based on a category of data associated with the data access and the first performance characteristic associated with the first storage; and a second interface configured to route the data access to the first storage medium.

18. An apparatus comprising: a processing-side interface configured to receive a data access of a memory system; a memory router configured to: determine if the memory access targets a heterogeneous memory system that comprises a plurality of types of storage mediums, wherein each type of storage medium is based upon a respective memory technology and is associated with one or more performance characteristic, and if the memory access targets a heterogeneous memory system, select a target storage medium of the heterogeneous memory system for the data access based, at least in part, upon at least one performance characteristic associated with the target storage medium and a quality of service tag that is associated with the data access and that indicates one or more of the at least one performance characteristic; and a heterogeneous memory system interface configured to, if the memory access targets a heterogeneous memory system, route the data access, at least partially, to the target storage medium.

19. The apparatus of claim 18, wherein the at least one circuit is configured to move, based on receiving the data access, data from the first storage medium to the second storage medium.

19. The apparatus of claim 18, wherein the memory router is configured to, in response to a triggering event, move data associated with the virtual machine from a first storage medium to a second storage medium.

Note: it would have been obvious to a person of ordinary skill in the art at the time the invention was made to modify or to omit the additional elements of claims 1-2, 5-12, and 15-19 of U.S. Patent No. 10,725,663 to arrive at claims 5-7, 13-15, 18 and 20 of the instant application because the person would have realized that the remaining elements would perform the same functions as before. "Omission of element and its function in combination is obvious expedient if the remaining elements perform same functions as before."
See In re Karlson, 136 USPQ 184 (CCPA), decided Jan. 16, 1963, Appl. No. 6857, U.S. Court of Customs and Patent Appeals.

Claims 1-8 are rejected on the ground of non-statutory obviousness-type double patenting as being unpatentable over claims 1-8 of U.S. Patent No. 10,437,479. Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1-8 in U.S. Patent No. 10,437,479 fully encompass claims 1-8 in the instant application, and thus claims 1-8 of the instant application are an obvious anticipation of claims 1-8 in U.S. Patent No. 10,437,479 based on the anticipation doctrine of In re Goodman; see the table below.

Current Application / Patent 10,282,100

1. An apparatus comprising: a memory controller configured to: interface with a memory system comprising a first storage medium of a first type and a second storage medium of a second type, wherein the first storage medium is associated with a first performance characteristic, and the second storage medium is associated with a second performance characteristic; receive a data access for the memory system; select the first storage medium to perform the data access, based on a category of data associated with the data access and the first performance characteristic associated with the first storage medium; and route the data access to the first storage medium.
An apparatus comprising: a memory management unit comprising configured to: a first memory interface configured to communicate interface with a heterogeneous memory system that comprises a plurality of types of storage mediums, wherein each type of storage medium is based upon a respective memory technology and is associated with one or more performance characteristics; a second memory interface configured to a receive, from a virtual machine, a data access for the heterogeneous memory system; and a controller configured to: determine a target storage medium from the at least one of the storage mediums of the heterogeneous memory system to service the data access, wherein the target storage medium is selected based, at least in part, upon at least one performance characteristic associated with the target storage medium and a quality of service tag, wherein the quality of service tag is associated with the virtual machine and that indicates one or more desired storage medium performance characteristics that the virtual machine desires to be met as part of a fulfillment of the data accessing and locally route the data access by the virtual machine through the memory management unit to the target storage medium.

2. The apparatus of claim 1, wherein the memory controller is configured to move, based on receiving the data access, data associated with the data access from the first storage medium to the second storage medium.

2. The apparatus of claim 1, wherein the memory management unit is configured to, in response to a triggering event, move data associated with the virtual machine from a first storage medium to a second storage medium.

4. The apparatus of claim 2, wherein the memory controller is configured to move the data associated with the data access based on a modification of a performance characteristic.

4. The apparatus of claim 2, wherein the triggering event includes relaxing one or more of the performance characteristics guaranteed by the virtual machine.

5. The apparatus of claim 1, wherein the memory controller is configured to: receive the data access from a virtual machine; and select the first storage medium based on a performance characteristic associated with the virtual machine.

5. The apparatus of claim 1, wherein the quality-of-service tag includes at least two portions; wherein a first portion of the quality-of-service tag indicates a performance characteristic guaranteed by the virtual machine; and wherein a second portion of the quality-of-service tag indicates a range of values for the performance characteristic guaranteed by the virtual machine.

6. The apparatus of claim 5, wherein the memory controller is configured to select the first storage medium based on a range of a performance characteristic associated with the virtual machine.

6. The apparatus of claim 1, wherein the memory management unit is configured to: maintain a count of an amount of allocable storage space associated with each storage medium; and route a data access by the virtual machine to at least one of the storage mediums based, at least in part, upon the amount of allocable storage space associated with each respective storage medium, and the quality-of-service tag.

7. The apparatus of claim 1, wherein the memory controller is configured to allocate data associated with the data access to the first storage medium and the second storage medium.

7. The apparatus of claim 6, wherein the memory management unit is configured to allocate data associated with the virtual machine across two or more of the storage mediums.

8. The apparatus of claim 1, wherein the memory controller is configured to allocate one or more memory pages of the data access to the first storage medium and the second storage medium, wherein the first storage medium and the second storage medium share an address space.

8.
The apparatus of claim 6, wherein the memory management unit is configured to allocate memory pages of the virtual machine across two or more storage devices, wherein the two or more storage devices share a same, physical address space.

Note: it would have been obvious to a person of ordinary skill in the art at the time the invention was made to modify or to omit the additional elements of claims 1-20 of U.S. Patent No. 10,437,479 to arrive at claims 5-7, 13-15, 18 and 20 of the instant application because the person would have realized that the remaining elements would perform the same functions as before. "Omission of element and its function in combination is obvious expedient if the remaining elements perform same functions as before." See In re Karlson, 136 USPQ 184 (CCPA), decided Jan. 16, 1963, Appl. No. 6857, U.S. Court of Customs and Patent Appeals.

Claims 1 and 13 are rejected on the ground of non-statutory obviousness-type double patenting as being unpatentable over claims 11 and 13 of U.S. Patent No. 11,036,397. Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1 and 13 in U.S. Patent No. 11,036,397 fully encompass claims 1 and 13 in the instant application, and thus claims 1 and 13 of the instant application are an obvious anticipation of claims 1 and 13 in U.S. Patent No. 11,036,397 based on the anticipation doctrine of In re Goodman; see the table below.

Current Application / Patent 11,036,397

1.
An apparatus comprising: a memory controller configured to: interface with a memory system comprising a first storage medium of a first type and a second storage medium of a second type, wherein the first storage medium is associated with a first performance characteristic, and the second storage medium is associated with a second performance characteristic; receive a data access for the memory system; select the first storage medium to perform the data access, based on a category of data associated with the data access and the first performance characteristic associated with the first storage medium; and route the data access to the first storage medium.

13. (Original) The method of claim 11, wherein receiving the data access includes receiving an indication of a data category associated with the data access; and wherein routing includes preferentially routing the data to one of the plurality of types of storage mediums based upon the data category.

11. (Original) A method comprising: receiving, from a processor, a data access for a heterogeneous memory system, wherein the heterogeneous memory system comprises a plurality of types of storage mediums, and wherein the heterogeneous memory system, wherein each type of storage medium is associated with one or more performance characteristic; determining, by a memory interconnect, a target storage medium of the heterogeneous memory system for the data access, wherein determining is based, at least in part, upon at least one performance characteristic associated with the target storage medium; and locally routing, by the memory interconnect, the data access, at least partially, between the processor and the target storage medium.

13. (Original) The method of claim 11, wherein receiving the data access includes receiving an indication of a data category associated with the data access; and wherein routing includes preferentially routing the data to one of the plurality of types of storage mediums based upon the data category.

Note: it would have been obvious to a person of ordinary skill in the art at the time the invention was made to modify or to omit the additional elements of claims 1-20 of U.S. Patent No. 11,036,397 to arrive at claims 5-7, 13-15, 18 and 20 of the instant application because the person would have realized that the remaining elements would perform the same functions as before. "Omission of element and its function in combination is obvious expedient if the remaining elements perform same functions as before." See In re Karlson, 136 USPQ 184 (CCPA), decided Jan. 16, 1963, Appl. No. 6857, U.S. Court of Customs and Patent Appeals.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-8 and 10-20 are rejected under 35 U.S.C.
102(a)(2) as being anticipated by Lee et al. (US 2014/0223098 A1, filed 02-04-2013, published 08-07-2014), hereinafter "Lee."

As to claims 1, 11, and 18, Lee discloses:

a memory controller configured to: interface with a memory system comprising a first storage medium of a first type and a second storage medium of a second type, wherein the first storage medium is associated with a first performance characteristic, and the second storage medium is associated with a second performance characteristic; See: [0008].

[0008] An apparatus embodiment includes one or more processors and a first processor readable memory having a first performance characteristic. The apparatus also includes a second processor readable memory having a second performance characteristic. The first performance characteristic is better than the second performance characteristic. The one or more processors execute processor readable instructions of an OS to determine whether one or more software applications request usage of the first processor readable memory and an amount of processor readable memory the one or more applications uses. The one or more processors executes the processor readable instructions of the OS to allow at least one of the one or more applications access to the first processor readable memory in response to the request for usage of the first processor readable memory and the amount of processor readable memory the one or more applications uses.

receive a data access for the memory system; See: [0007]; [0023]; [0098].

[0007] A method embodiment allocates a type of memory to an application that is processed by a computing device. The method includes determining the types of integrated circuit memory available in the computing device. The types of integrated circuit memory available include a first high performance type of memory and a second type of memory that is not high-performance memory. A request from the application to use the high-performance memory is received.
The high-performance memory is allocated to the application in response to the request. [0023] One or more processors of a SoC may also have access to different types of memory that have different types of memory characteristics. Memory characteristics or performance parameters may include, but not limited to, bandwidth, memory latency, power consumption, number of writes before wear-out and/or heat generation. High performance memory, such as a memory that has higher bandwidth (or that may transmit or receive more data per period of time than other memory), may be more costly and may not be as available as memory that does not have a particular high-performance characteristic. [0098] FIG. 8 is a functional block diagram of the gaming and media system 1000 and shows functional components of the gaming and media system 1000 in more detail. The console 1002 has a CPU 1100, and a memory controller 1102 that facilitates processor access to various types of memory, including a flash ROM 1104, a RAM 1106, a hard disk drive or solid-state drive 1108, and the portable media drive 1006. In one implementation, the CPU 1100 includes a level 1 cache 1110 and a level 2 cache 1112, to temporarily store data and hence reduce the number of memory access cycles made to the hard drive 1108, thereby improving processing speed and throughput. select the first storage medium to perform the data access, based on a category of data associated with the data access and the first performance characteristic associated with the first storage medium; See: [Lee: Claim 4]; [0003-0004]; [0023]; [0044]; and Lee: Claim 4: 4. The method of claim 1, wherein the determining includes accessing performance characteristics of the first and second types of integrated circuit memory from a list of memory performance characteristics. 
[0003] In an embodiment, an OS allocates the higher performing memory to certain applications having particular workloads or functions (for example, ray tracing, frame/video buffering, NUI (natural user interface) data buffering). The OS may transfer data from a higher performing memory when new data needs to occupy the higher performing memory. The OS and one or more processors, along with the memory controller logic hardware and/or software, also performs error correction to preserve data integrity. An online (web) processor readable catalog of memory characteristics may be accessed by the OS, for the purpose of determining capabilities and/or performance characteristics of different types of memory. [0004] In an embodiment, applications have an attribute flag or information in the application manifest that indicates to the OS that the particular application benefits from using the high-performance memory. The OS may not allow access to the high-performance memory when the requesting application is not on the applications manifest, or when the requesting application requires more amount of high-performance memory than is available. In an embodiment, the OS monitors the execution of the application and keeps track of the memory location accesses and usage patterns. In embodiments, the OS may pass the attribute flag or information to virtual or physical memory allocators, such as memory controllers or memory managers. [0023] One or more processors of a SoC may also have access to different types of memory that have different types of memory characteristics. Memory characteristics or performance parameters may include, but not limited to, bandwidth, memory latency, power consumption, number of writes before wear-out and/or heat generation. 
High-performance memory, such as a memory that has higher bandwidth (or that may transmit or receive more data per period of time than other memory), may be more costly and may not be as available as memory that does not have a particular high-performance characteristic.

[0044] Memory type 310 is responsible for determining what types of memory are available in a computing environment. In an embodiment, memory type 310 queries the computing environment to determine what types of memory are available. In an embodiment, memory type 310 determines whether any high-performance memory is available. In an embodiment, memory type 310 accesses an online (web) catalog of application characteristics via the Internet, for the purpose of determining capabilities and/or performance characteristics for types of memory that are associated with the listed applications, in a computing environment, such as computing device 100. When a particular memory has certain capabilities and/or performance characteristics that are appropriate for the applications, such as having a bandwidth higher than a predetermined threshold, memory type 310 will assign that memory as high-performance memory. When the memory does not have a particular capability and/or performance characteristic that meets a predetermined threshold, memory type 310 does not assign the memory as high performance. In an embodiment, the online catalog of memory capabilities and/or performance characteristics is updated/modified as new memory devices become available. In an embodiment, an OS tracks and measures performance characteristics of a memory, as related to memory identification, and uploads the measured performance characteristics into the online catalog via the Internet.

In an embodiment, a user may enter input to console 1002 by way of gesture, touch or voice. In an embodiment, optical I/O interface 1135 receives and translates gestures of a user.
In another embodiment, console 1002 includes a natural user interface (NUI) to receive and translate voice and gesture inputs from a user. In an alternate embodiment, front panel subassembly 1142 includes a touch surface and a microphone for receiving and translating a touch or voice, such as a voice command, of a user. In still a further embodiment, a catalog of memory capabilities and/or performance characteristics is stored locally in persistent memory.

route the data access to the first storage medium; See: [0003-0004]; [0031]; [0035-0036]; [0044].

[0031] In an embodiment, high performance memory 102 has at least one or more memory characteristics, such as bandwidth, memory latency, heat generation, number of writes before wear-out and/or power consumption, that are better in performance than memory 104. For example, high performance memory 102 may be a Wide I/O DRAM having a higher bandwidth than memory 104. Memory 104 may be Low Power Double Data Rate 3 dynamic random-access memory (LPDDR3 DRAM) memory (also known as Low Power DDR, mobile DDR (MDDR) or mDDR). In an embodiment, memory interface 102a is a Wide I/O DRAM interface transmitting and receiving signals on signal path 106, while memory interface 104a is an LPDDR3 DRAM interface transmitting and receiving signals on signal path 105.

[0035] In embodiments, signal paths 105/106 are media that transfer a signal, such as an interconnect, conducting element, contact, pin, region in a semiconductor substrate, wire, metal trace/signal line, or photoelectric conductor, singly or in combination. In an embodiment, multiple signal paths may replace a single signal path illustrated in the figures, and a single signal path may replace multiple signal paths illustrated in the figures. In embodiments, a signal path may include a bus and/or point-to-point connection. In an embodiment, a signal path includes control and data signal lines.
In an alternate embodiment, a signal path includes data signal lines or control signal lines. In still other embodiments, signal paths are unidirectional (signals that travel in one direction), bidirectional (signals that travel in two directions), or combinations of both unidirectional and bidirectional signal lines.

[0036] FIG. 2 is a high-level block diagram of an exemplary software architecture 200 to access different types of memory. OS 205, and in particular dynamic management of heterogeneous memory (DMHM) 308, determines, among other functions, which applications 202-204 are allocated high performance memory 208 and which applications 202-204 are allocated memory 209. In embodiments, high performance memory 208 corresponds to high performance memory 102 and memory 209 corresponds to memory 104 described herein and shown in FIG. 1. DMHM 308 determines which of applications 202-204 will have access to high performance memory 208 based on at least whether one of applications 202-204 requests high performance memory by way of an attribute flag or information. Once a determination is made that a particular application will be allocated a particular memory type (either high performance memory 208 or memory 209), the appropriate device driver 206 is used with OS 205.

As to claims 2, 12, and 19, Lee discloses: wherein the memory controller is configured to move, based on receiving the data access, data associated with the data access from the first storage medium to the second storage medium; See: [0044].

As to claims 3, 13, and 20, Lee discloses: wherein the memory controller is configured to move the data associated with the data access based on an access pattern of the data; See: [0003-0004]; [0044].

As to claims 4 and 14, Lee discloses: wherein the memory controller is configured to move the data associated with the data access based on a modification of a performance characteristic; See: [0003-0004]; [0044].
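The allocation policy the examiner cites from Lee [0003]-[0004] and [0036] — grant high-performance memory only to an application whose manifest carries the attribute flag, and only when enough high-performance memory remains — reduces to a simple admission gate. A minimal sketch, in which the function name, manifest entries, and byte amounts are all hypothetical illustrations rather than anything from the reference:

```python
# Illustrative sketch of the manifest-gated allocation in Lee [0003]-[0004]:
# deny when the app lacks the attribute flag or the request exceeds
# remaining high-performance (HP) memory. All names/values are hypothetical.

def allocate_hp_memory(app_name, requested_bytes, manifest, hp_free_bytes):
    """Return (decision, remaining_hp_bytes)."""
    if not manifest.get(app_name, False):   # app not flagged in its manifest
        return ("denied", hp_free_bytes)
    if requested_bytes > hp_free_bytes:     # more HP memory than is available
        return ("denied", hp_free_bytes)
    return ("granted", hp_free_bytes - requested_bytes)

manifest = {"ray_tracer": True, "text_editor": False}
print(allocate_hp_memory("ray_tracer", 256, manifest, 1024))   # ('granted', 768)
print(allocate_hp_memory("text_editor", 64, manifest, 1024))   # ('denied', 1024)
print(allocate_hp_memory("ray_tracer", 2048, manifest, 1024))  # ('denied', 1024)
```

Per [0004], on a denial the OS would fall back to ordinary memory (memory 209 in FIG. 2) rather than failing the request outright.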
As to claims 5 and 15, Lee discloses: wherein the memory controller is configured to: receive the data access from a virtual machine; See [Lee: Claim 7]; [0005]; [0046]; [0077]; "virtual caches and virtual memory allocators are deemed to be virtual machines"; and select the first storage medium based on a performance characteristic associated with the virtual machine; See: [Lee: Claim 7]; [0042-0046].

Lee: Claim 7: 7. The method of claim 1, wherein the allocating includes transferring the request to at least one of a virtual memory allocator or physical memory allocator of a memory controller that manages the allocation of memory pages to physical memory areas in the computing device.

[0046] In an embodiment, allocate 311 may pass the attribute information to a virtual memory allocator, which manages the allocation of memory pages to physical memory areas. In another embodiment, allocate 311 may pass the attribute information to a physical memory allocator via a memory controller.

[0077] Step 501 illustrates determining whether cache memory is available. In an embodiment, cache management 314 determines whether cache memory is available and the amount of cache memory that is available. Step 501 then determines whether high performance memory as virtual cache memory will increase the performance of the computing device. In an embodiment, cache management 314 compares the amount of cache memory available to a predetermined threshold value. When the amount of cache memory available is less than the predetermined threshold value, cache management 314 then assigns the high-performance memory as virtual cache memory as illustrated in step 502. In an alternate embodiment, cache management 314 assigns high performance memory as virtual cache memory when a particular application that may benefit from such assignment requests service from OS 302.
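The step 501/502 decision quoted from Lee [0077] — assign high-performance memory as virtual cache when available cache falls below a threshold, or (in the alternate embodiment) when a benefiting application requests it — can be sketched as a single predicate. Names and units below are hypothetical:

```python
# Sketch of the decision in Lee [0077]: back a virtual cache with
# high-performance memory when real cache is scarce, or when a benefiting
# application requests service. Hypothetical names and units.

def assign_virtual_cache(cache_free_kb, threshold_kb, app_requests_hp=False):
    """Return True when high-performance memory should be assigned
    as virtual cache memory (step 502)."""
    return cache_free_kb < threshold_kb or app_requests_hp

print(assign_virtual_cache(128, 512))                         # True: cache below threshold
print(assign_virtual_cache(1024, 512))                        # False: enough cache available
print(assign_virtual_cache(1024, 512, app_requests_hp=True))  # True: alternate embodiment
```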
As to claims 6 and 16, Lee discloses: wherein the memory controller is configured to select the first storage medium based on a range of a performance characteristic associated with the virtual machine; See: Fig. 6A; [0006]; [0042-0046].

[0006] The OS or the memory controller may also interrogate the different types of memory to obtain memory operational details as well as periodically interrogate the different types of memory for health and performance information in embodiments. The OS or the memory controller may also manage the power consumption state (range of different temperatures) of the different types of memory.

[FIG. 6A image omitted]

As to claims 7 and 17, Lee discloses: wherein the memory controller is configured to allocate data associated with the data access to the first storage medium and the second storage medium; See: [0003-0006].

As to claim 8, Lee discloses: wherein the memory controller is configured to allocate one or more memory pages of the data access to the first storage medium and the second storage medium; See: [0046]; wherein the first storage medium and the second storage medium share an address space; See: [0053].

[0046] In an embodiment, allocate 311 may pass the attribute information to a virtual memory allocator, which manages the allocation of memory pages to physical memory areas. In another embodiment, allocate 311 may pass the attribute information to a physical memory allocator via a memory controller.

[0053] In an embodiment, high performing memory is used in order to create a virtualized holding space or virtual cache for L1/L2/L3 cache memory. This enables cache memory to be larger and allows L1/L2/L3 cache memory to pool available memory space for its own purpose. In an embodiment, cache management 315 stores data likely to be used by L1/L2/L3 cache memory in high performance memory (virtual cache) using speculative fetching.
As to claim 10, Lee discloses: wherein the first storage medium of the first type comprises a volatile storage medium and the second storage medium of the second type comprises a non-volatile storage medium; See: [0032].

[0032] In embodiments, high performance memory 102 and memory 104 include one or more arrays of memory cells in an IC disposed on separate semiconductor substrates. In an embodiment, high performance memory 102 and memory 104 are included in respective integrated monolithic circuits housed in separately packaged devices. In embodiments, high performance memory 102 and memory 104 may include volatile and/or non-volatile memory.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 5, 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2014/0223098 A1, filed 02-04-2013, published 08-07-2014), hereinafter "Lee," in view of Zhu et al.
(CA 2801473 C, filed 01-10-2013, published 04-19-2016), hereinafter "Zhu."

As to claim 9, Lee discloses: wherein the memory controller is configured to receive the data access from a virtual machine; See [Lee: Claim 7]; [0005]; [0046]; [0077]; and Lee does not disclose: the virtual machine is configured to: execute a first application that is associated with a first quality of service based on a first performance characteristic associated with the virtual machine, and execute a second application that is associated with a second quality of service based on a second performance characteristic associated with the virtual machine.

However, Zhu discloses: the virtual machine is configured to: execute a first application that is associated with a first quality of service based on a first performance characteristic associated with the virtual machine; See Zhu: [0042-0043]; [0092]; and execute a second application that is associated with a second quality of service based on a second performance characteristic associated with the virtual machine; See Zhu: [0042-0043]; [0092].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lee to incorporate the teachings of Zhu to teach the virtual machine is configured to: execute a first application that is associated with a first quality of service based on a first performance characteristic associated with the virtual machine; See Zhu: [0042-0043]; [0092]; and execute a second application that is associated with a second quality of service based on a second performance characteristic associated with the virtual machine; See Zhu: [0042-0043]; [0092]. Doing so would allow the system to adapt immediately and efficiently to changes in the amount (capacity) and types (characteristics) of resources available to which the affiliation rules map the compute demands, and to identify the resource usage and cost.
[0042] In contrast to the WPPI system 102, current virtualized environment (e.g., a web server farm) monitoring systems (e.g., a VMware system) monitor workload performance in real-time without the benefit of off-line analytics provided by the workload profile as proposed. These current virtualized environment monitoring systems react in real-time and make adjustments (e.g., re-balancing workloads in a reactionary fashion) based on demand. However, after current virtualized environment monitoring systems rebalance workloads, the provider may still observe that the utilization of resources (e.g., virtual machines (VMs)) changes for the workloads over time (e.g., time series factors) without anticipating such changes automatically and/or sufficiently in advance to proactively make adjustments. Accordingly, current virtualized environment monitoring systems do not provide the same level of resource provisioning as offered by the WPPI system 102.

[0043] The WPPI system 102 tunes resource estimations in real-time (e.g., where the workload profile changes when the application is executed on-line). Workloads may include web server applications, database servers, application servers, and batch jobs. The WPPI system 102, in on-line mode, initiates the deployment of submitted workloads (118, 120), and the WPPI system 102 applies the models (e.g., the resource estimation profiler model 114, the performance interference model 130, influence matrix 116, and the affiliation rules 122) to initiate execution of the workloads (118, 120) that are then tuned in real-time using the historical resource estimation profile 166 adjusted by a real-time characterization of the workloads (118, 120). During the on-line mode, a workload's profile (e.g., resource usage profile estimation 166) is recalibrated using real-time data and the workload signature may be revised and/or updated accordingly.
The resource estimation profile 166 for the workloads and the resources (e.g., hardware infrastructure resources, such as when a server fails over to another server) used by the workloads (118, 120) may change during the on-line mode. Accordingly, during the on-line mode, the affiliation rules 122 map (e.g., virtual machine to physical host assignments 140) resources (144, 146, 148, 150, 160) in real-time to a set of compute demands (e.g., the workloads' demand for number of CPUs, RAM and cache memory and disk storage, and network bandwidth). The resources (144, 146, 148, 150, 160) may change in the amount (capacity) and types (characteristics) of resources available to which to map the compute demands. However, because the WPPI system 102 pre-computes, during the off-line mode, the variations of resources to which to map the workloads (e.g., compute demands), the WPPI system 102 adapts immediately and efficiently to changes in the amount (capacity) and types (characteristics) of resources available to which the affiliation rules map the compute demands.

[0092] Figure 16 shows soft deadlines 1600 (e.g., QoS guarantees 126) for cloud consumer 136 submitted applications (e.g., workloads 1602, 1604, 1606). Cloud consumer 136 submits each application (e.g., workload 1602, 1604, 1606) with the QoS guarantees 126 (e.g., response time), including a deadline, either hard or soft (1602, 1604, 1606), and a priority ranking (1608, 1610, 1612) of the importance that the application (e.g., workload) complete on time (1614, 1616, 1618). The WPPI system 102 provides cloud providers 134 a way to minimize resource utilization costs 142 and maximize revenue 128 (1620). For example, the WPPI system 102 may analyze three applications submitted to two cloud providers, evaluate random assignments versus model-based assignments, execute the applications (e.g., workloads), and display the observations (1622, 1624, 1626) (e.g., CPU, disk, memory, network utilizations).
The WPPI system 102 identifies for each provider the resource usage and resource cost.

Alternatively, as to claims 5 and 15, Lee does not disclose: wherein the memory controller is configured to: receive the data access from a virtual machine; and select the first storage medium based on a performance characteristic associated with the virtual machine. However, Zhu discloses: wherein the memory controller is configured to: receive the data access from a virtual machine; See Zhu: [0042-0043]; [0092]; and select the first storage medium based on a performance characteristic associated with the virtual machine; See Zhu: [0042-0043]; [0092].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lee to incorporate the teachings of Zhu to teach: wherein the memory controller is configured to: receive the data access from a virtual machine; and select the first storage medium based on a performance characteristic associated with the virtual machine. Doing so would allow the system to adapt immediately and efficiently to changes in the amount (capacity) and types (characteristics) of resources available to which the affiliation rules map the compute demands, and to identify the resource usage and cost.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES RONES, whose telephone number is (571) 272-4085. The examiner can normally be reached M-F, 9:00 am-5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cordelia Zecher, can be reached at 571-272-7771.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CHARLES RONES/Supervisory Patent Examiner, Art Unit 2168

Prosecution Timeline

Jun 05, 2023
Application Filed
Sep 06, 2024
Response after Non-Final Action
Feb 22, 2025
Non-Final Rejection — §102, §103, §DP
Jun 27, 2025
Response Filed
Aug 05, 2025
Non-Final Rejection — §102, §103, §DP
Dec 02, 2025
Applicant Interview (Telephonic)
Dec 02, 2025
Examiner Interview Summary
Dec 03, 2025
Response Filed
Feb 07, 2026
Non-Final Rejection — §102, §103, §DP
Apr 08, 2026
Examiner Interview Summary
Apr 08, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585564
METHODS FOR CONFIGURING SPAN OF CONTROL UNDER VARYING TEMPERATURE
2y 5m to grant Granted Mar 24, 2026
Patent 10996865
APPLICATION-SPECIFIC MEMORY SCALING IN MULTI-DEVICE SYSTEMS
2y 5m to grant Granted May 04, 2021
Patent 10990284
ALERT CONFIGURATION FOR DATA PROTECTION
2y 5m to grant Granted Apr 27, 2021
Patent 10978169
PAD DETECTION THROUGH PATTERN ANALYSIS
2y 5m to grant Granted Apr 13, 2021
Patent 10971241
PERFORMANCE BASED METHOD AND SYSTEM FOR PATROLLING READ DISTURB ERRORS IN A MEMORY UNIT
2y 5m to grant Granted Apr 06, 2021
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
23%
Grant Probability
57%
With Interview (+34.5%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 44 resolved cases by this examiner. Grant probability derived from career allow rate.
