DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending in this application.
Response to Arguments
Applicant’s arguments regarding the rejections of claims 1-7 under 35 U.S.C. 112(a) have been fully considered and are persuasive. The rejections have been withdrawn. However, new 35 U.S.C. 112(a) rejections are applied to claims 1-20 based on the amendments.
Applicant’s arguments regarding the rejections of claims 1-20 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejections have been withdrawn. However, new 35 U.S.C. 112(b) rejections are applied to claims 1-20 based on the amendments.
Applicant's arguments regarding the 35 U.S.C. 103 rejections of claims 1-20 have been fully considered but are moot in view of the new references being applied in the current rejection.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
As per claim 1:
Lines 13-14 recite “providing, by the learning module in the computer, a suggestion for a heap allocation in response to learning the most possible memory access order” but this is not supported by the specification. The specification recites in [0025] “Learning module 112 may learn a most possible memory access order for each function in application 104. Learning module 112 may provide a report of suggestion for a heap memory allocation to heap memory 106 for functions in application 104”. The specification does not recite that the learning module provides the suggestion for a heap allocation in response to learning the most possible memory access order; it merely recites that the learning module may learn a most possible memory access order and, separately, may provide a suggestion for a heap allocation.
As per claims 8 and 15 (line numbers refer to claim 8):
Lines 14-15 recite “program instructions to provide a suggestion for a heap allocation in response to learning the most possible memory access order” but this is not supported by the specification. The specification recites in [0033-0034] “In step 206, performance improvement module 110 clusters recorded data accesses for each function based on a distance between data elements accessed in sequence in heap memory 104. Performance improvement module 110 may generate a report for each function including a distance between data areas accessed in succession and a data area clustering result…In step 208, performance improvement module 110 allocates, based on the data element clustering result, the data elements in a same cluster into a same memory unit in heap memory 106.” Therefore, the specification supports allocating heap memory in response to determining a distance between data elements accessed in sequence, but not providing a suggestion for a heap allocation in response to learning the most possible memory access order.
Claims 2-7, 9-14, and 16-20 are dependent claims of claims 1, 8, and 15 and fail to resolve the deficiencies of claims 1, 8, and 15, so they are rejected for the same reasons.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As per claims 1, 8, and 15 (line numbers refer to claim 1):
Lines 13-14 recite “providing, by the learning module in the computer, a suggestion for a heap allocation in response to learning the most possible memory access order” but it is unclear what data the heap is allocated to.
Claims 2-7, 9-14, and 16-20 are dependent claims of claims 1, 8, and 15 and fail to resolve the deficiencies of claims 1, 8, and 15, so they are rejected for the same reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chang et al. (US 20180150225 A1, hereinafter Chang) in view of Uramoto (US 20150278089 A1), in view of Sela et al. (US 10929032 B1, hereinafter Sela), in view of Hirono et al. (JP2000099351A, hereinafter Hirono), in view of Narasayya et al. (US 20110225164 A1, hereinafter Narasayya), in view of Dahlstedt et al. (US 20080021939 A1, hereinafter Dahlstedt), and further in view of Gall (US 6480862 B1).
Chang, Uramoto, Sela, Hirono, Narasayya, and Dahlstedt were cited in a previous Office action.
As per claim 1, Chang teaches the invention substantially as claimed including a computer-implemented method comprising: monitoring, by a performance improvement module in a computer, one or more data accesses to one or more data elements in memory from an application, wherein the application has a plurality of functions ([0007] The request message may include a data array having information on data splits of the target transaction group, and address lists of the target transaction group; [0012] The information on data splits of the target transaction group of the data array may have been start addresses and address sizes for respective data splits of the target transaction group; [0091] After setting a transaction group which is accessible in an application; [0100] a record of access to the data of the target transaction group (for example, read or write history of the data of the target transaction group; [0124] Referring to FIG. 10, the data processing system 6200 may include a memory device 6230 having one or more nonvolatile memories and a memory controller 6220 for controlling the memory device 6230. The data processing system 6200 illustrated in FIG. 10 may serve as a storage medium);
recording, by a performance improvement module in the computer, the one or more data accesses into a monitoring element table, wherein the record for each data access includes an identity, a start address, and a memory page number ([0012] The information on data splits of the target transaction group of the data array may have been start addresses and address sizes for respective data splits of the target transaction group; [0007] The request message may include a data array having information on data splits of the target transaction group, and address lists of the target transaction group; [0100] a record of access to the data of the target transaction group (for example, read or write history of the data of the target transaction group); [0108] The memory system 110 identifies data distribution of the target transaction group through the header 810…The controller 130 identifies the locations of the respective data splits of the target transaction group in the memory device 150 through the header 810; [0124] The data processing system 6200 illustrated in FIG. 10 may serve as a storage medium);
clustering, by a performance improvement module in the computer, remaining data elements in the monitoring element table from the one or more data elements corresponding to recorded data accesses according to each function of the plurality of functions of the application based on a distance between the remaining data elements accessed ([0086] Therefore, a management operation such as data defragmentation and data remapping operations are performed in the memory device 150 in order to recover the memory device 150 from the data fragmentation and data split; [0100] a record of access to the data of the target transaction group (for example, read or write history of the data of the target transaction group); [0007] The request message may include a data array having information on data splits of the target transaction group, and address lists of the target transaction group; [0087] Furthermore, by the data fragmentation and data split, data corresponding to the same transaction group, the same task group or the same file group may be stored by being randomly distributed in the memory device 150, and accordingly, data access efficiency to the data stored in the memory device 150 may be degraded markedly. Therefore, a management operation, that is, data defragmentation and data remapping operations are performed for the data stored in the memory device 150. Hereinbelow, in the memory system in accordance with an embodiment, data management operations, that is, data defragmentation and data remapping operations, for the data stored in the memory device 150, to maximize data access efficiency to the memory device 150; [0110] Also, the controller 130 performs data defragmentation and data remapping operations for the respective data splits of the target transaction group in consideration of types of respective memory blocks (e.g., SLC, MLC, TLC and QLC memory blocks) in the memory device 150. 
For example, in the hot mode indicated by the flag 802 of the header 810, the controller 130 performs data defragmentation and data remapping operations for the data splits 0 to 3 of the target transaction group, which are randomly distributed over the memory blocks of the memory dies 610 to 670, to SLC memory blocks of the memory die 0 610; [0124] The data processing system 6200 illustrated in FIG. 10 may serve as a storage medium; Fragmentation is when there is distance between data that is accessed. Defragmentation clusters data that is originally far apart so that they are closer.);
allocating, by a performance improvement module in the computer, the remaining data elements into the memory based on the clustering, such that the remaining data elements in a same cluster, based on the clustering, are allocated into a same memory unit in the memory ([0102] In the hot mode, data defragmentation and data remapping operations are performed to the target transaction group in consideration of interleaving for the data of the target transaction group. For example, data defragmentation and data remapping operations for the metadata and user data of the target transaction group are performed to an SLC memory block region in the memory device 150; [0110] Also, the controller 130 performs data defragmentation and data remapping operations for the respective data splits of the target transaction group in consideration of types of respective memory blocks (e.g., SLC, MLC, TLC and QLC memory blocks) in the memory device 150. For example, in the hot mode indicated by the flag 802 of the header 810, the controller 130 performs data defragmentation and data remapping operations for the data splits 0 to 3 of the target transaction group, which are randomly distributed over the memory blocks of the memory dies 610 to 670, to SLC memory blocks of the memory die 0 610; [0124] The data processing system 6200 illustrated in FIG. 10 may serve as a storage medium).
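For illustration only (not part of the record), the distance-based clustering concept mapped above can be sketched as follows; the function name, threshold, and addresses are hypothetical:

```python
# Hypothetical sketch: group recorded data accesses into clusters when the
# address distance between elements accessed in sequence is small, so that
# elements in the same cluster can be placed in the same memory unit.

def cluster_by_distance(accesses, threshold):
    """accesses: list of (identity, start_address) in access order."""
    clusters = [[accesses[0]]]
    for prev, cur in zip(accesses, accesses[1:]):
        if abs(cur[1] - prev[1]) <= threshold:
            clusters[-1].append(cur)   # close in memory: same cluster
        else:
            clusters.append([cur])     # far apart: start a new cluster
    return clusters

accesses = [("a", 0x1000), ("b", 0x1040), ("c", 0x9000), ("d", 0x9020)]
print(cluster_by_distance(accesses, 0x100))
```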
Chang fails to teach recording during runtime, the one or more data accesses into a monitoring element table, wherein the record for each data access includes an identity, a start address, an end address; enabling, by a learning module in the computer, a learning mode to determine that a target heap element of the one or more data elements is included in the monitoring element table and a most possible memory access order for each function of the plurality of functions in the application in response to recording the one or more data accesses into the monitoring element table; providing, by the learning module in the computer, a suggestion for a heap allocation in response to learning the most possible memory access order; marking, by the learning module in the computer, a memory page in heap memory; issuing, by an operating system of the computer, a signal in response to the marked memory page being accessed; updating, by the learning module in the computer, a heap free routine and clearing the marked memory page; removing, by the learning module in the computer, a specific record of the one or more data accesses recorded in the monitoring element table corresponding to the target heap element in response to determining that the target heap element is included in the monitoring element table; clustering, by the performance improvement module in the computer, remaining data elements from the one or more data elements corresponding to recorded data accesses according to each function based on a distance between the remaining data elements accessed in sequence and memory accessing times of the remaining data elements, wherein the remaining data elements comprise the one or more data elements that exclude the target heap element; and optimizing, by the performance improvement module in the computer, the allocated remaining data elements using a heuristic algorithm; and executing, by the computer, the application based on the optimized allocated remaining data elements.
However, Uramoto teaches recording during runtime, the one or more data accesses into a monitoring element table, wherein the record for each data access includes an identity, a start address, an end address ([0039] For example, when the program 13 calls a runtime library, the program 13 notifies the runtime library of an address of the first memory region that is to be accessed (for example, a start address and an end address thereof); [0076] When calling, the user program 123 notifies the library 124 of an address (for example, a start address and an end address) of a user-defined region that is to be accessed; [0086] The region management table 136 is a collection of information representing a start address of a user-defined region, an end address of a user-defined region, a unit type, an allocation flag, a start address of a hidden region, and a number of units in a hidden region.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Chang with the teachings of Uramoto to reduce a burden (see Uramoto [0046] when a program is modified to process a longer character code, its influence might extend widely, increasing burden of modification work. In contrast, the execution control apparatus 10 is used to reduce modification of a program and to facilitate processing of characters of other character encoding schemes by the program.).
Chang and Uramoto fail to teach enabling, by a learning module in the computer, a learning mode to determine that a target heap element of the one or more data elements is included in the monitoring element table and a most possible memory access order for each function of the plurality of functions in the application in response to recording the one or more data accesses into the monitoring element table; providing, by the learning module in the computer, a suggestion for a heap allocation in response to learning the most possible memory access order; marking, by the learning module in the computer, a memory page in heap memory; issuing, by an operating system of the computer, a signal in response to the marked memory page being accessed; updating, by the learning module in the computer, a heap free routine and clearing the marked memory page; removing, by the learning module in the computer, a specific record of the one or more data accesses recorded in the monitoring element table corresponding to the target heap element in response to determining that the target heap element is included in the monitoring element table; clustering, by the performance improvement module in the computer, remaining data elements from the one or more data elements corresponding to recorded data accesses according to each function based on a distance between the remaining data elements accessed in sequence and memory accessing times of the remaining data elements, wherein the remaining data elements comprise the one or more data elements that exclude the target heap element; and optimizing, by the performance improvement module in the computer, the allocated remaining data elements using a heuristic algorithm; and executing, by the computer, the application based on the optimized allocated remaining data elements.
However, Sela teaches determine a most possible memory access order for each function of the plurality of functions in the application in response to recording the one or more data accesses into the monitoring element table (Col. 3 lines 60-63 For example, the host applications may write host application data to the storage array and read host application data from the storage array in order to perform various host application functions; Col. 7 lines 7-9 Certain host application processes are known to be associated with generation of host application data that is sequentially accessed; Col. 8 lines 7-11 Storing related data, e.g. service use records for each subscriber, sequentially in one or more OAUs on HDDs corresponding to the sequential order of the thinly provisioned production volume facilitates subsequent retrieval of that data; Col. 2 lines 3-6 monitor the host application to detect generation of data that is likely to be sequentially accessed by the host application with associated data; Col. 8 lines 27-34 FIG. 4 illustrates a technique for generating and using sequential access hints. As indicated in block 400 the host application is monitored for activity associated with processes that generate sequentially accessed data. If data related to such a process is being created and written to the production volume presented by the storage array as determined in block 402 then the monitoring application prompts the MPIO driver to generate a sequential access hint);
clustering, by the performance improvement module in the computer, remaining data elements from the one or more data elements corresponding to recorded data accesses according to each function based on a distance between the remaining data elements accessed in sequence and memory accessing times of the remaining data elements (claim 1 host computer to detect generation of data that is likely to be non-sequentially written to a logical volume relative to associated data and later sequentially accessed by a host application with the associated data based on the predetermined subset of host application processes being known to be associated with temporally non-sequential generation of data that is sequentially accessed; generating a hint indicating that the data associated with the predetermined subset of host application processes is likely to be sequentially accessed by the host application with the associated data; sending the data and the hint to the storage array; responsive to the hint, allocating sequential storage space on a hard disk drive in the storage array for the data and the associated data; and writing the data to the allocated sequential storage space on the hard disk drive in the storage array; Col. 3 lines 14-17 It will be apparent to those of ordinary skill in the art that the computer-implemented steps may be stored as computer-executable instructions on a non-transitory computer-readable medium; Col. 4 lines 30-36 when the data that is being accessed is sequentially located on the HDD it is not necessary to perform long, time consuming movement of the drive head between individual accesses. Sequential access of sequentially located data thus requires less time than sequential access of non-sequentially located data.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Chang and Uramoto with the teachings of Sela so that data can be accessed faster (see Sela Col. 4 lines 34-36 Sequential access of sequentially located data thus requires less time than sequential access of non-sequentially located data.).
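As a purely illustrative aside (the names and data are hypothetical, not from any cited reference), learning a most possible memory access order per function could amount to selecting the most frequently observed access sequence:

```python
from collections import Counter

def most_possible_order(recorded_sequences):
    """Return the most frequently observed access sequence."""
    counts = Counter(tuple(seq) for seq in recorded_sequences)
    return list(counts.most_common(1)[0][0])

runs = [["x", "y", "z"], ["x", "y", "z"], ["y", "x", "z"]]
print(most_possible_order(runs))  # -> ['x', 'y', 'z']
```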
Chang, Uramoto, and Sela fail to teach enabling, by a learning module in the computer, a learning mode to determine that a target heap element of the one or more data elements is included in the monitoring element table; providing, by the learning module in the computer, a suggestion for a heap allocation in response to learning the most possible memory access order; marking, by the learning module in the computer, a memory page in heap memory; issuing, by an operating system of the computer, a signal in response to the marked memory page being accessed; updating, by the learning module in the computer, a heap free routine and clearing the marked memory page; removing, by the learning module in the computer, a specific record of the one or more data accesses recorded in the monitoring element table corresponding to the target heap element in response to determining that the target heap element is included in the monitoring element table; wherein the remaining data elements comprise the one or more data elements that exclude the target heap element; and optimizing, by the performance improvement module in the computer, the allocated remaining data elements using a heuristic algorithm; and executing, by the computer, the application based on the optimized allocated remaining data elements.
However, Hirono teaches enabling, by a learning module in the computer, a learning mode to determine that a target heap element of the one or more data elements is included in the monitoring element table; marking, by the learning module in the computer, a memory page in heap memory; updating, by the learning module in the computer, a heap free routine and clearing the marked memory page; removing, by the learning module in the computer, a specific record of the one or more data accesses recorded in the monitoring element table corresponding to the target heap element in response to determining that the target heap element is included in the monitoring element table; wherein the remaining data elements comprise the one or more data elements that exclude the target heap element ([0017] Subsequently, a storage area (hereinafter, referred to as a "mark table") of a mark corresponding to each object allocated to an area (hereinafter, referred to as a "heap area") which is a target of garbage collection on the memory is cleared (s202). Subsequently, an object referred to by any of the objects is detected on the basis of the information indicating the reference relationship of the objects allocated in the heap area, and a process of marking a position on the mark table corresponding to the object is performed (s203). As a result of this processing, an object that is not referred to by any object is an object that is no longer used, and a mark is not added to the corresponding mark table. Therefore, the unmarked object is extracted as an area to which a new object can be allocated, that is, a free area (s204); [0007] Mark Table (mark table): A table corresponding one-to-one to existing objects to check whether an object is referenced from anywhere. If it is determined that an object has a reference, the table column corresponding to the object is marked.
When all the reference relations are checked, the object having no mark is unnecessary and can be removed; [0010] Sweeping: a process of removing an unnecessary object in the heap; [0066] when sweeping or clearing the mark table; [0122] FIG. 38 is a diagram showing a procedure of mark assignment by tree search. As shown in (A), a reference relation represented by a tree structure is traced from a root node 10 to each node and a mark is given to a node (object) in the reference relation. Specifically, a bit at a corresponding position on the mark table is set. This tree structure is formed by the contents of variables provided in an object, which indicate which other object is referred to by a certain object, for example, and tracing the reference relationship of this object, that is, tracing the tree; [0027] Therefore, another object of the present invention is to enable incremental GC; [0070] FIG. 1 is a block diagram showing a hardware configuration of an apparatus. The device is basically composed of a CPU 1, a memory 2 for storing a heap area for generating an object group, a mark table, etc., and an I/O 3 for performing input/output with the outside. When a necessary program is loaded into the memory from the outside, the CD-ROM reading interface 4 is used to read a CD-ROM (ROM 5) in which the program is written in advance. The CD-ROM corresponds to a program recording medium according to the present invention; [0013] (26) Reference (reference): information specifying an object B in order for a certain object A to access another specific object B. Specifically, a pointer or index pointing to object B.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Chang, Uramoto, and Sela with the teachings of Hirono to promote resource use efficiency (see Hirono [0031] improve the use efficiency of memory).
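For illustration only, the mark-table mechanism quoted from Hirono (marking objects reachable through the reference relation, then treating unmarked objects as free areas) follows the classic mark-and-sweep pattern; this sketch uses hypothetical names and data:

```python
def mark(refs, root):
    """Mark every object reachable from root via the reference relation."""
    marked, stack = set(), [root]
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(refs.get(obj, []))  # follow references (the tree search)
    return marked

def sweep(heap, marked):
    """Unmarked objects are unnecessary and become free areas."""
    return [obj for obj in heap if obj not in marked]

heap = ["root", "a", "b", "garbage"]
refs = {"root": ["a"], "a": ["b"]}
print(sweep(heap, mark(refs, "root")))  # -> ['garbage']
```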
Chang, Uramoto, Sela, and Hirono fail to teach providing, by the learning module in the computer, a suggestion for a heap allocation in response to learning the most possible memory access order; issuing, by an operating system of the computer, a signal in response to the marked memory page being accessed; optimizing, by the performance improvement module in the computer, the allocated remaining data elements using a heuristic algorithm; and executing, by the performance improvement module in the computer, the application based on the optimized allocated remaining data elements.
However, Narasayya teaches optimizing, by the performance improvement module in the computer, the allocated remaining data elements using a heuristic algorithm; and executing, by the performance improvement module in the computer, the application based on the optimized allocated remaining data elements ([0102] In other words, since less than all of the candidates may be defragmented under the defragmentation budget, the candidate(s) that have a relatively high (e.g., potentially the highest) estimated benefit on the I/O performance, when defragmented, can be selected. In some embodiments, a greedy heuristic can be utilized to make the selection; [0039] Index defragmentation: Two commonly used index defragmenting approaches are performing an index rebuilding operation or performing an index reorganization operation. An index rebuilding operation can be similar to creating an index and thus can involve fully scanning and sorting rows on which an index is defined. Individual leaf pages of the index can be allocated, written out on disc, and then filled so that there is little or no internal fragmentation. In addition, individual leaf pages can be written to file sequentially in logical order such that there is little or no external fragmentation; [0017] For example, in some embodiments, a workload and a defragmentation budget can be received. Target index(es) or index range(s) that would be scanned as a result of the statement(s) of the workload being executed can then be identified as candidates to potentially be defragmented…Based on the workload, the defragmentation budget, and/or each candidate's estimated benefit (i.e., the estimated benefits), a set of the candidates can be selected and/or recommended for defragmentation. The set of recommended candidates can include any number of the candidates (i.e., zero recommended candidates, one recommended candidate, or multiple recommended candidates).
In some embodiments, the selection can be accomplished by utilizing a greedy heuristic; [0028] Recommendation module 116 can be configured to automatically provide recommendations of which index(es) or index range(s) of indexes 106 (if any) to defragment for a given workload and for a given defragmentation budget. To accomplish this, recommendation module 116 may leverage a benefit analysis performed by, for example, benefit analysis module 114. Alternatively or additionally, recommendation module 116 may be configured to perform all or part of the benefit analysis).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Chang, Uramoto, Sela, and Hirono with the teachings of Narasayya to provide the greatest benefits (see Narasayya [0102] In other words, since less than all of the candidates may be defragmented under the defragmentation budget, the candidate(s) that have a relatively high (e.g., potentially the highest) estimated benefit on the I/O performance, when defragmented, can be selected. In some embodiments, a greedy heuristic can be utilized to make the selection).
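For illustration only, the greedy-heuristic selection under a defragmentation budget described in Narasayya can be sketched as follows; the candidate names, costs, and benefit figures are hypothetical:

```python
def greedy_select(candidates, budget):
    """candidates: list of (name, cost, estimated_benefit).
    Take the best benefit-per-cost candidates that fit within the budget."""
    chosen, spent = [], 0
    for name, cost, benefit in sorted(
            candidates, key=lambda c: c[2] / c[1], reverse=True):
        if spent + cost <= budget:  # candidate fits in remaining budget
            chosen.append(name)
            spent += cost
    return chosen

cands = [("idx1", 4, 10), ("idx2", 3, 9), ("idx3", 5, 4)]
print(greedy_select(cands, 7))  # -> ['idx2', 'idx1']
```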
Chang, Uramoto, Sela, Hirono, and Narasayya fail to teach providing, by the learning module in the computer, a suggestion for a heap allocation in response to learning the most possible memory access order; issuing, by an operating system of the computer, a signal in response to the marked memory page being accessed.
However, Dahlstedt teaches issuing, by an operating system of the computer, a signal in response to the marked memory page being accessed ([0036] By extending an operating system with a page fault optimizer it is possible to use page faults as efficiently as possible to trap memory accesses between threads within a single process. The page fault optimizer should enable the JVM to mark pages as only readable by a single thread, all other accesses should generate a signal to a signal handler. In accordance with an embodiment, the system uses this possibility to mark the thread local heap as readable only by its owning thread.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Chang, Uramoto, Sela, Hirono, and Narasayya with the teachings of Dahlstedt to promote efficiency (see Dahlstedt [0011] Thread local heaps can potentially increase efficiency both by avoiding collecting the global heap and by lowering the pause times for each thread and to reduce the number of stop-the-world pauses).
Chang, Uramoto, Sela, Hirono, Narasayya, and Dahlstedt fail to teach providing, by the learning module in the computer, a suggestion for a heap allocation in response to learning the most possible memory access order.
However, Gall teaches providing, by the learning module in the computer, a suggestion for a heap allocation in response to learning the most possible memory access order (Col. 6 lines 24-67 The embodiments described herein generally operate by organizing data objects in an object heap based upon access relationships between the data objects. An object heap can include any segment of a computer memory that represents the working memory for a computer program, typically any segment of memory in which data is dynamically allocated, used and managed during execution of a computer program… In certain embodiments discussed hereinafter, multiple data objects that fall within the same access chains--that is, objects that are referenced in various sequences after a particular root object is accessed--can logically be grouped together in memory (e.g., within a continuous memory segment in an object heap) based upon the temporal proximity between the objects in such a group. Moreover, in certain embodiments, the specific objects within such a group may be arranged relative to one another based upon relative access frequencies and/or the order in which the objects are accessed; claim 21 An apparatus, comprising: (a) a memory; (b) an object heap, resident in the memory, the object heap configured to store a plurality of data objects; and (c) a program, resident in the memory, the program configured to arrange at least first and second data objects in the object heap based upon an access relationship therebetween).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Chang, Uramoto, Sela, Hirono, Narasayya, and Dahlstedt with the teachings of Gall to optimize performance (see Gall Col. 7 lines 1-5 The specific embodiments described hereinafter focus on a particular application of the invention in optimizing the performance of computer programs executed in the Java programming environment developed by Sun Microsystems.).
As per claim 2, Chang, Uramoto, Sela, Hirono, Narasayya, Dahlstedt, and Gall teach the computer-implemented method of claim 1. Chang teaches further comprising: receiving a notification that the one or more data elements in the memory is accessed by the application ([0100] a record of access to the data of the target transaction group (for example, read or write history of the data of the target transaction group); [0091] After setting a transaction group which is accessible in an application; [0113] At step 720, the memory system 110 receives a transaction group access message from the host 102. That is to say, the host 102 transmits a transaction group access message to the memory system 110 for access to the data of the target transaction group); and locating a return address for the application which accesses the one or more data elements in the memory ([0091] After setting a transaction group which is accessible in an application of the host 102, the host 102 transmits a rearrangement request message corresponding to the transaction group, to a file system of the host 102, and the file system of the host 102 transmits a rearrangement request message including address lists, for example, LBA (logical block address) lists, of the transaction group, to the memory system 110.).
As per claim 3, Chang, Uramoto, Sela, Hirono, Narasayya, Dahlstedt, and Gall teach the computer-implemented method of claim 2. Chang teaches further comprising calculating an offset in response to locating the return address for the application which accesses the one or more data elements in the memory ([0105] In the data array 850 of the rearrangement request message 800, there is included information on the data splits of the target transaction group. In other words, in the data array 850, there is included location information on the data splits of the target transaction group that is indicated by the count 806 of the header 810. In the data array 850, there is included location information which indicate start addresses and address sizes of the LBAs for the respective data splits in correspondence to the LBA lists of the target transaction group; [0106] As exemplified in FIG. 8, in the data array 850, there are included start addresses 0 to 3 (represented by reference numerals 860, 870, 880 and 890 of the figure) and address sizes 0 to 3 (represented by reference numerals 865, 875, 885 and 895 of the figure) of the LBAs for respective data splits 0 to 3 of the transaction group; [0091] After setting a transaction group which is accessible in an application of the host 102, the host 102 transmits a rearrangement request message corresponding to the transaction group, to a file system of the host 102, and the file system of the host 102 transmits a rearrangement request message including address lists, for example, LBA (logical block address) lists, of the transaction group, to the memory system 110).
Additionally, Dahlstedt teaches wherein the notification is a signal generated from an operating system when an event to access the memory has occurred ([0036] By extending an operating system with a page fault optimizer it is possible to use page faults as efficiently as possible to trap memory accesses between threads within a single process. The page fault optimizer should enable the JVM to mark pages as only readable by a single thread, all other accesses should generate a signal to a signal handler. In accordance with an embodiment, the system uses this possibility to mark the thread local heap as readable only by its owning thread.).
As per claim 4, Chang, Uramoto, Sela, Hirono, Narasayya, Dahlstedt, and Gall teach the computer-implemented method of claim 1. Chang teaches wherein clustering the recorded data accesses comprises analyzing memory accessing data and aggregation memory elements ([0100] The cold mode and the hot mode are set by the file system of the host 102 according to a record of access to the data of the target transaction group (for example, read or write history of the data of the target transaction group); [0101] In the cold mode, data defragmentation and data remapping operations are performed to the target transaction group in consideration of a wear leveling operation to the memory device 150. For example, data defragmentation and data remapping operations are performed to an MLC memory block region, a TLC memory block region or a QLC memory block region in the memory device 150; [0110] Also, the controller 130 performs data defragmentation and data remapping operations for the respective data splits of the target transaction group in consideration of types of respective memory blocks (e.g., SLC, MLC, TLC and QLC memory blocks) in the memory device 150. For example, in the hot mode indicated by the flag 802 of the header 810, the controller 130 performs data defragmentation and data remapping operations for the data splits 0 to 3 of the target transaction group, which are randomly distributed over the memory blocks of the memory dies 610 to 670, to SLC memory blocks of the memory die 0 610; [0103] In the total size 804 of the header 810, there is included an information on the total size of the target transaction group to which data defragmentation and data remapping operations are to be performed.).
Additionally, Sela teaches further comprising choosing a system operation time based on an optimization of the allocated remaining data elements (Col. 4 lines 30-36 when the data that is being accessed is sequentially located on the HDD it is not necessary to perform long, time consuming movement of the drive head between individual accesses. Sequential access of sequentially located data thus requires less time than sequential access of non-sequentially located data).
As per claim 5, Chang, Uramoto, Sela, Hirono, Narasayya, Dahlstedt, and Gall teach the computer-implemented method of claim 1. Chang teaches further comprising: determining which function is to be optimized to access data in the memory ([0007] The request message may include a data array having information on data splits of the target transaction group, and address lists of the target transaction group. [0008] The request message may further include a header having a flag indicating type information of the data defragmentation and data remapping operations. [0009] The type information may indicate at last one among a general mode, a fast mode, an optimized mode, a cold mode and a hot mode; [0087] by the data fragmentation and data split, data corresponding to the same transaction group, the same task group or the same file group may be stored by being randomly distributed in the memory device 150, and accordingly, data access efficiency to the data stored in the memory device 150 may be degraded markedly. Therefore, a management operation, that is, data defragmentation and data remapping operations are performed for the data stored in the memory device 150; [0110] the controller 130 performs data defragmentation and data remapping operations for the data splits 0 to 3 of the target transaction group, which are randomly distributed over the memory blocks of the memory dies 610 to 670, to SLC memory blocks of the memory die 0 610).
Additionally, Hirono teaches enabling a learning mode in runtime; calling a heap allocation routine during the enabled learning mode ([0027] Therefore, another object of the present invention is to enable incremental GC; [0017] Subsequently, a storage area (hereinafter, referred to as a "mark table") of a mark corresponding to each object allocated to an area (hereinafter, referred to as a "heap area") which is a target of garbage collection on the memory is cleared (s202). Subsequently, an object referred to by any of the objects is detected on the basis of the information indicating the reference relationship of the objects allocated in the heap area, and a process of marking a position on the mark table corresponding to the object is performed (s203). As a result of this processing, an object that is not referred to by any object is an object that is no longer used, and a mark is not added to the corresponding mark table. Therefore, the unmarked object is extracted as an area to which a new object can be allocated, that is, a free area (s204); [0015] dynamic storage management has been conventionally performed in which a memory of a single address space is dynamically allocated at the time of execution of a program; [0100] the thread 1 and the thread 3 are threads for which the real-time property is required, and the thread 2 is a thread (normal thread) for which the real-time property is not required. The thread 3 is a GC thread. In a normal state in which the free area of the memory is large, when the event 1 occurs while the thread 2 is being executed, the process is transferred to the thread 1, and when the process of the thread 1 caused by the process of the event 1 ends, the process returns to the thread 2. Similarly, when the event 2 occurs, the process proceeds to the thread 3.
If the free area is reduced to a predetermined warning level by the processing of the thread 2, the processing of the thread 2 is interrupted and the GC thread of the thread 3 is executed. When the free area is secured by this process, the process returns to the process of the thread 2; [0072] When objects are created in the heap area, the reference relationship from one object to another object forms a tree structure extending from the root node as shown in the figure.).
As per claim 6, Chang, Uramoto, Sela, Hirono, Narasayya, Dahlstedt, and Gall teach the computer-implemented method of claim 1. Hirono teaches wherein the memory is a heap memory used by a programming language to store global variables for the application ([0033] Incremental Garbage Collection of Concurrent Objects for Real-Time Application; [0072] FIG. 3 is a diagram showing a reference relationship of an object generated in a heap area of a memory and a relationship with a stack. When objects are created in the heap area, the reference relationship from one object to another object forms a tree structure extending from the root node as shown in the figure. For example, if a global variable is declared, an object corresponding to the variable is generated. In addition, a stack for storing information such as an argument area, a return address, a local variable, and a work area is generated for each thread, and, for example, a reference relationship from a local variable on the stack to a global variable on the tree as indicated by an arrow in the drawing is also stored.).
As per claim 7, Chang, Uramoto, Sela, Hirono, Narasayya, Dahlstedt, and Gall teach the computer-implemented method of claim 1. Chang teaches wherein the monitoring element table is associated with a monitoring function table including the plurality of functions from the application ([0100] The cold mode and the hot mode are set by the file system of the host 102 according to a record of access to the data of the target transaction group (for example, read or write history of the data of the target transaction group); [0091] After setting a transaction group which is accessible in an application;).
As per claim 8, it is a computer program product claim of claim 1, so it is rejected for similar reasons. Additionally, Chang teaches a computer program product comprising: one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions comprising: program instructions ([0124] Referring to FIG. 10, the data processing system 6200 may include a memory device 6230 having one or more nonvolatile memories and a memory controller 6220 for controlling the memory device 6230. The data processing system 6200 illustrated in FIG. 10 may serve as a storage medium such as a memory card (CF, SD, micro-SD or the like) or USB device; [0056] When the memory device 150 is a flash memory or specifically a NAND flash memory, the NFC 142 may generate a control signal for the memory device 150 and process data to be provided to the memory device 150 under the control of the processor 134; [0126] The CPU 6221 may control overall operations on the memory device 6230, for example, read, write, file system management and bad page management operations. The RAM 6222 may be operated according to control of the CPU 6221, and used as a work memory, buffer memory or cache memory.).
As per claims 9, 10, 11, 12, 13, and 14, they are computer program product claims of claims 2, 3, 4, 5, 6, and 7, so they are rejected for similar reasons.
As per claim 15, it is a computer system claim of claim 1, so it is rejected for similar reasons. Additionally, Chang teaches a computer system comprising: one or more computer processors, one or more computer readable storage media, and program instructions stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, the program instructions comprising: program instructions ([0124] Referring to FIG. 10, the data processing system 6200 may include a memory device 6230 having one or more nonvolatile memories and a memory controller 6220 for controlling the memory device 6230. The data processing system 6200 illustrated in FIG. 10 may serve as a storage medium such as a memory card (CF, SD, micro-SD or the like) or USB device; [0056] When the memory device 150 is a flash memory or specifically a NAND flash memory, the NFC 142 may generate a control signal for the memory device 150 and process data to be provided to the memory device 150 under the control of the processor 134; [0126] The CPU 6221 may control overall operations on the memory device 6230, for example, read, write, file system management and bad page management operations. The RAM 6222 may be operated according to control of the CPU 6221, and used as a work memory, buffer memory or cache memory.).
As per claims 16, 17, 18, 19, and 20, they are computer system claims of claims 2, 3, 4, 5, and 6, so they are rejected for similar reasons.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HSING CHUN LIN whose telephone number is (571)272-8522. The examiner can normally be reached Mon - Fri 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.L./Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195