DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Note
It is noted that any citations to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP § 2123.
Information Disclosure Statement
An information disclosure statement (IDS) was submitted on 18 November 2022. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hom (US 20170123735 A1) in view of Gao (US 20220156152 A1).
Referring to claims 1, 8, and 14, taking claim 1 as exemplary, Hom teaches
A computer-implemented method for optimizing memory configuration of a computer system, (intended use, however [Hom abstract, 0003, 0018] in a computer system includes pre-allocating, by a real storage manager, a pool of large memory frames. An operating system manages virtual memory of a computer, such as a multiprocessor system. The multiprocessor system executes multiple applications simultaneously. The operating system allocates each application a corresponding address space in the virtual memory) the computer-implemented method comprising: determining available online real storage assigned to the computer system; ([Hom 0051-0052, Figs. 6, 6A] The operating system 130, at startup, may access a parameter that specifies the amount of real memory to allocate for the large frame area. For example, the amount of real memory to allocate for the large frame area may be specified as a number of large pages, or a percentage of total real memory available, or as a specified amount of memory, or in any other manner.) a Large Frame Area (LFAREA) value to support large pages used by one or more applications executing on the computer system, wherein the large pages are memory pages larger than a predetermined value; ([Hom 0049-0051, Figs. 6, 6A] to facilitate selected applications to improve performance using large pages, the operating system 130 provides a separate large frame area. The large frame area includes a pool of large memory frames. A large page, also referred to as a ‘huge’ page or a ‘super’ page, is a page that has a second predetermined size larger than the predetermined page size. The large frame area is used for the large pages of predetermined sizes, such as 1 MB, 2 GB or any other predetermined size.)
and dynamically updating the LFAREA value of the computer system to the determined LFAREA value to support the large pages of the computer system ([Hom 0049-0052] Accordingly, to facilitate selected applications to improve performance using large pages, the operating system 130 provides a separate large frame area. The large frame area includes a pool of large memory frames. The large frame area is used for the large pages of predetermined sizes, such as 1 MB, 2 GB or any other predetermined size. The operating system 130, at startup may access a parameter that specifies the amount of real memory to allocate for the large frame area. For example, the amount of real memory to allocate for the large frame area may be specified as a number of large pages, or a percentage of total real memory available, or as a specified amount of memory, or in any other manner.).
Hom does not explicitly disclose computing, using a machine learning model.
Gao teaches computing, using a machine learning model, ([Gao 0163-0166] The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation.).
Hom and Gao are analogous art because they are from the same field of endeavor in storage systems. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Hom and Gao before him or her, to modify the computer system of Hom to include the machine learning of Gao, such that the computer system incorporates machine learning. The suggestion and/or motivation for doing so would be obtaining the advantage of allowing the computer system to have more automated analysis and iterative learning to maximize reward in a particular situation, as suggested by Gao. It is known to combine prior art elements according to known methods to yield predictable results. Therefore, it would have been obvious to combine Hom with Gao to obtain the invention as specified in the instant application claims.
With regard to the non-exemplary limitations of claim 8, Hom teaches a memory device; and one or more processing units coupled with the memory device, (intended use, however [Hom abstract, 0003, 0018-0019, Fig. 1] in a computer system includes pre-allocating, by a real storage manager, a pool of large memory frames. An operating system manages virtual memory of a computer, such as a multiprocessor system. The multiprocessor system executes multiple applications simultaneously. The operating system allocates each application a corresponding address space in the virtual memory. The multiprocessor 105 includes a plurality of processors P1-Pn 105A-105N.).
Hom does not explicitly disclose the one or more processing units configured to optimize memory configuration of a logical partition (LPAR) of a mainframe system.
Gao teaches the one or more processing units configured to optimize memory configuration of a logical partition (LPAR) of a mainframe system ([Gao 0093-0094, 0111-0113, 0310] authorities 168 operate to determine how operations will proceed against particular logical elements. As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities 168. FIG. 2E is a blade 252 hardware block diagram, showing a control plane 254, compute and storage planes 256, 258, and authorities 168 interacting with underlying physical resources, using embodiments of the storage nodes 150 and storage units 152 of FIGS. 2A-C in the storage server environment of FIG. 2D. The control plane 254 is partitioned into a number of authorities 168 which can use the compute resources in the compute plane 256 to run on any of the blades 252. The storage plane 258 is partitioned into a set of devices, each of which provides access to flash 206 and NVRAM 204 resources. Embodiments described herein may be practiced with various computer system configurations including hand-held devices, tablets, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like).
Hom and Gao are analogous art because they are from the same field of endeavor in storage systems. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Hom and Gao before him or her, to modify the computer system of Hom to include the mainframe logical elements of Gao, such that the computer system incorporates mainframe logical elements. The suggestion and/or motivation for doing so would be obtaining the advantage of allowing the computer system to have more scalable system support to grow with customer demands, as suggested by Gao. It is known to combine prior art elements according to known methods to yield predictable results. Therefore, it would have been obvious to combine Hom with Gao to obtain the invention as specified in the instant application claims.
As per the non-exemplary claim(s), this/these claim(s) has/have similar limitations and is/are rejected based on the reasons given above.
Referring to claims 2 and 15, taking claim 2 as exemplary, Hom in view of Gao teaches
The computer-implemented method of claim 1, wherein the computer system is a logical partition (LPAR) of a mainframe system ([Gao 0093-0094, 0111-0113, 0310] authorities 168 operate to determine how operations will proceed against particular logical elements. As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities 168. FIG. 2E is a blade 252 hardware block diagram, showing a control plane 254, compute and storage planes 256, 258, and authorities 168 interacting with underlying physical resources, using embodiments of the storage nodes 150 and storage units 152 of FIGS. 2A-C in the storage server environment of FIG. 2D. The control plane 254 is partitioned into a number of authorities 168 which can use the compute resources in the compute plane 256 to run on any of the blades 252. The storage plane 258 is partitioned into a set of devices, each of which provides access to flash 206 and NVRAM 204 resources. Embodiments described herein may be practiced with various computer system configurations including hand-held devices, tablets, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like).
As per the non-exemplary claim(s), this/these claim(s) has/have similar limitations and is/are rejected based on the reasons given above.
Referring to claims 3, 9, and 16, taking claim 3 as exemplary, Hom in view of Gao teaches
The computer-implemented method of claim 1, wherein the computer system is monitored continuously to compare one or more parameters associated with the LFAREA value with a knowledge base ([Gao 0163-0166, 0168] Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. Examples of such AI applications can include IBM Watson™, Microsoft Oxford™, Google DeepMind™, Baidu Minwa™, and others. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation.).
As per the non-exemplary claim(s), this/these claim(s) has/have similar limitations and is/are rejected based on the reasons given above.
Referring to claims 4, 10, and 17, taking claim 4 as exemplary, Hom in view of Gao teaches
The computer-implemented method of claim 1, wherein the LFAREA value is computed in response to the available online real storage being greater than a predetermined threshold ([Hom 0032-0033, Fig. 3] The user addressable virtual address spaces 310, 312, and 314 are each divided into two sections by a second memory threshold 360. In the illustrated case, the second memory threshold 360 is at 16 megabytes (MB). The second memory threshold 360 divides the user addressable virtual address space 310 into a first section 310A and a second section 310B.).
As per the non-exemplary claim(s), this/these claim(s) has/have similar limitations and is/are rejected based on the reasons given above.
Referring to claims 5, 11, and 18, taking claim 5 as exemplary, Hom in view of Gao teaches
The computer-implemented method of claim 4, wherein the LFAREA value is determined by subtracting the predetermined threshold from a predetermined portion of the available online real storage at initial program load (IPL) of the computer system ([Hom 0032-0034, Fig. 3] The first section 310A of the virtual storage space includes a common area 320 and a private area 330 and a common PSA area 320B. The second section 310B includes an extended common area 322 and an extended private area 332.).
As per the non-exemplary claim(s), this/these claim(s) has/have similar limitations and is/are rejected based on the reasons given above.
Referring to claims 6, 12, and 19, taking claim 6 as exemplary, Hom in view of Gao teaches
The computer-implemented method of claim 5, wherein the predetermined portion of the available online real storage is computed by the machine learning model ([Gao 0158, 0170-0171] Readers will appreciate that various performance aspects of the cloud-based storage system 318 may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system 318 can be scaled-up or scaled-out as needed. A full scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Readers will appreciate that each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems). Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns—from small, metadata-heavy to large files, from random to sequential access patterns, and from low to high concurrency.).
As per the non-exemplary claim(s), this/these claim(s) has/have similar limitations and is/are rejected based on the reasons given above.
Referring to claims 7, 13, and 20, taking claim 7 as exemplary, Hom in view of Gao teaches
The computer-implemented method of claim 1, wherein the computer system comprises a plurality of computer systems, and a respective LFAREA value is computed for each computer system ([Hom abstract, 0018-0019, claim 11, Fig. 1] An operating system manages virtual memory of a computer, such as a multiprocessor system. The multiprocessor system executes multiple applications simultaneously. A pool of large memory frames is pre-allocated by a real storage manager. The multiprocessor 105 includes a plurality of processors P1-Pn 105A-105N.).
As per the non-exemplary claim(s), this/these claim(s) has/have similar limitations and is/are rejected based on the reasons given above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANCISCO A GRULLON, whose telephone number is (571) 272-8318. The examiner can normally be reached Monday - Friday, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hosain Alam, can be reached at (571) 272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANCISCO A GRULLON/Primary Examiner, Art Unit 2132