DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/24/2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 7, 9, 10, 16, 17, and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vax et al. (U.S. Publication No.: US 20210150056 A1) hereinafter Vax, in view of van Opdorp (U.S. Patent No.: US 7343377 B1) hereinafter van Opdorp, and further in view of Jagtap et al. (U.S. Publication No.: US 20210390204 A1) hereinafter Jagtap.
As to claim 1:
Vax discloses:
A computer implemented method, comprising: extracting, by at least one processor, a plurality of identifiers of data from a data source, wherein an identifier uniquely identifies a data sample in the data source [Paragraph 0038 teaches personal information rules may include definition rules mapping to a unique identifier, a display name, country of resident attributes to be associated with specific personal information attributes (e.g., social security numbers or phone numbers) and/or combinations of such attributes. Paragraph 0098 teaches the sample scan results may include metadata, such as but not limited to: data source information corresponding to the tables that were scanned, the number of rows scanned, the specific rows scanned, the number of findings detected, correlated personal information and/or other information. Note: Scanning data sources, wherein the scan results include personal information containing unique identifiers, reads on the claims.]; wherein the data source is a Not-only structured query language (NoSQL) data source having a flexible schema [Paragraph 0050 teaches exemplary primary data sources may include, for example, structured databases (e.g., SQL), unstructured file shares, semi-structured Big Data and NoSQL repositories (e.g., Apache Hadoop).]
extracting, by the at least one processor, a plurality of data samples from the data source using the plurality of identifiers [Paragraph 0220 teaches a list of searchable values to be used as input for the scanners 1950, based on the personal information rules; (3) searching for a matching data subject, upon receiving personal information findings from one or more scanners; and (4) when a match is found, creating a personal information record, including data subject name, unique data subject ID, attribute name, data source, and/or data link and storing the same in the shared database 1940 (e.g., in the personal information table 1942 and/or the data subjects table 1941). Note: Using (extracting) identifiers that identify data (an identifier uniquely identifies a data sample) reads on the claims.]; wherein each data sample of the plurality of data samples comprises metadata and data [Paragraph 0098 teaches the sample scan results may include metadata, such as but not limited to: data source information corresponding to the tables that were scanned, the number of rows scanned, the specific rows scanned, the number of findings detected, correlated personal information and/or other information.]; extracting, by the at least one processor, metadata from each data sample of the plurality of data samples [Paragraph 0231 teaches only metadata summaries may be uploaded to the management server 2012 so that personal information does not reach the server.];
and wherein the metadata excludes the data stored in the data sample [Paragraph 0231 teaches only metadata summaries may be uploaded to the management server 2012 so that personal information does not reach the server.];
hashing, by the at least one processor, each extracted metadata excluding the data of each respective data sample to generate a respective hashed metadata associated with each respective data sample of the plurality of data samples [Paragraph 0069 teaches each of the findings may comprise metadata associated with the found potential personal information, including… an attribute value (which may be hashed for privacy reasons). Paragraph 0221 teaches where possible, the system may only store hashed values of such attributes.];
Vax discloses some of the limitations as set forth in claim 1 but does not appear to expressly disclose comparing, by the at least one processor, the respective hashed metadata with other hashed metadata corresponding to different data samples extracted from the data source to identify one or more unique hashed metadata, wherein the metadata comprises schema indicative of one or more attributes of each respective data sample, identifying, by the at least one processor, one or more unique schemas corresponding to the one or more unique hashed metadata, and storing, by the at least one processor, the one or more unique schemas in a data store.
van Opdorp discloses:
comparing, by the at least one processor, the respective hashed metadata with other hashed metadata corresponding to different data samples extracted from the data source to identify one or more unique hashed metadata [Column 4 Lines 59-61 teach a good hash function will produce a difficult to forge representation which uniquely identifies the schema metadata. Column 8 Lines 66-67 and Column 9 Lines 1-4 teach that the version 1.2 client applications 93 and 94 on the desktops 96 and 97 extract schema metadata in steps 103 and 104 from the database 92, compute in steps 105 and 106 the hash values 107 and 108 of the schema metadata, and compare in steps 109 and 110 the computed hash values 107 and 108 to the stored hash values 101 and 102. Note: Comparing hash values of extracted metadata from different data samples of a database (data source), such as hash values 101, 102, 107, and 108, to identify matching unique hashed metadata, wherein the extracted metadata is separated from the database, reads on the claims.]
data source having a flexible schema and wherein the metadata comprises schema indicative of one or more attributes of each respective data sample [Column 2 Lines 58-66 teach a plurality of applications adapted to store a plurality of previously calculated reduced representations of schema metadata for one or more databases, to extract a plurality of schema metadata from one or more databases, to newly calculate a plurality of reduced representations from the plurality of extracted schema metadata, and to compare each of the plurality of previously calculated reduced representations with its corresponding newly calculated reduced representation. Column 3 Lines 59-62 teach schema metadata includes tables, columns in tables, datatypes of columns, lengths of columns, custom database data types, foreign keys, constraints, stored procedures, views, triggers, indices, and scheduled jobs. Note: Each of the extracted plurality of schema metadata from one or more databases includes tables, columns in tables, datatypes of columns, lengths of columns, custom database data types, foreign keys, constraints, stored procedures, views, triggers, indices, and scheduled jobs (attributes), wherein the cited plurality of schemas is interpreted to read on the claimed flexible schema. The specification (see Paragraph 0044) appears to describe a flexible schema to be one of a plurality of schemas.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax, by incorporating comparing hash values of extracted metadata from a database, wherein the metadata includes various attributes describing the data, as taught by van Opdorp (see Column 2 Lines 58-66, Column 3 Lines 59-62, Column 4 Lines 59-61, Column 8 Lines 66-67, and Column 9 Lines 1-4), because both applications are directed to metadata analysis; incorporating comparing hash values of extracted metadata from a database, wherein the metadata includes various attributes describing the data, potentially eliminates runtime errors (see Column 9 Lines 19-23).
Vax and van Opdorp disclose some of the limitations as set forth in claim 1 but do not appear to expressly disclose identifying, by the at least one processor, one or more unique schemas corresponding to the one or more unique hashed metadata, wherein the metadata comprises schema indicative of one or more attributes of each respective data sample.
Jagtap discloses:
identifying, by the at least one processor, one or more unique schemas corresponding to the one or more unique hashed metadata [Paragraph 0021 teaches the algorithm used to generate globally unique identifiers should ensure that identifiers are not re-used. The life of the data in fixed content storage system can easily exceed the lifetime of the underlying computing hardware and identifier generation schemes must maintain uniqueness over long periods of time even when older hardware resources are replaced with newer technologies. Paragraph 0025 teaches this procedure can create a unique hash value (e.g., a fingerprint) for a given described schema... receive the new schema, calculate its fingerprint, and store the schema in a cache indexed by the fingerprint. Note: Schemas that are identified by a unique hash value or fingerprint read on the claims.]; and storing, by the at least one processor, the one or more unique schemas in a data store [Paragraph 0025 teaches this procedure can create a unique hash value (e.g., a fingerprint) for a given described schema... receive the new schema, calculate its fingerprint, and store the schema in a cache indexed by the fingerprint. Note: Storing the schema identified by a unique hash value reads on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax and van Opdorp, by incorporating identifying and storing schemas based on a unique hash value or fingerprint, as taught by Jagtap (see Jagtap Paragraph 0021 and 0025), because the three applications are directed to metadata analysis; incorporating identifying and storing schemas based on a unique hash value or fingerprint is beneficial for processing data (see Jagtap Paragraph 0005).
Claims 10 and 17 recite similar limitations as in claim 1. Therefore claims 10 and 17 are rejected for the same reasons as set forth above. See claim 1 for analysis.
As to claim 7:
Vax, van Opdorp, and Jagtap disclose all of the limitations as set forth in claim 1.
Jagtap also discloses:
The method of claim 1, further comprising: prior to identifying the unique schema, formatting the extracted metadata [Paragraph 0025 teaches this procedure can create a unique hash value (e.g., a fingerprint) for a given described schema... receive the new schema, calculate its fingerprint, and store the schema in a cache indexed by the fingerprint. Paragraph 0038 teaches metadata scanner 110 can be used to provide enhanced schema information about tables and columns in the Data Source 105... Metadata Scanner 110 can extract metadata for all relevant objects… The metadata can be used to populate corresponding registrations in Metadata Registry 115... use these metadata registrations to augment schema data received in Schema Stream 145, and to decode the events received in the Transaction Stream 150. Note: Using the metadata scanner to format metadata into metadata registrations prior to enhancing and augmenting schema data, such as identifying a unique hash value for a schema, reads on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax and van Opdorp, by incorporating identifying and storing schemas based on a unique hash value or fingerprint, as taught by Jagtap (see Paragraph 0021 and 0025), because the three applications are directed to metadata analysis; incorporating identifying and storing schemas based on a unique hash value or fingerprint is beneficial for processing data (see Paragraph 0005).
As to claim 9:
Vax, van Opdorp, and Jagtap disclose all of the limitations as set forth in claim 1.
Jagtap also discloses:
The method of claim 1, further comprising: outputting the unique schema to a metadata management and data governance framework [Paragraph 0007 teaches determining a schema(s) for each of the events, determining a data quality for each of the events, generating second information by decomposing each of the events based on the schema(s), and storing the second information in a database(s). Paragraph 0024 teaches schema information for each table can be stored in a database. Thus, when processing data for a particular table, the database can be accessed to retrieve the specific schema information for that database (e.g., the row and columns for each table). Each row and/or column can have its own data type, and can include various different types of metadata (e.g., what each column includes or stands for). Note: Determining a unique schema (outputting the unique schema) for use in a system that determines data quality based on the schema and metadata reads on the claims.], wherein the unique schema is stored in a standardized format [Paragraph 0025 teaches this procedure can create a unique hash value (e.g., a fingerprint) for a given described schema... receive the new schema, calculate its fingerprint, and store the schema in a cache indexed by the fingerprint. Note: Storing the schema identified by a unique hash value reads on the claims because the examiner interprets the cited storing of the schema, indexed by its fingerprint, to be a standardized storage format within the context of the reference for the uniquely identified schema.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax and van Opdorp, by incorporating identifying and storing schemas based on a unique hash value or fingerprint, as taught by Jagtap (see Paragraph 0021 and 0025), because the three applications are directed to metadata analysis; incorporating identifying and storing schemas based on a unique hash value or fingerprint is beneficial for processing data (see Paragraph 0005).
Claim 16 recites similar limitations as in claim 9. Therefore claim 16 is rejected for the same reasons as set forth above. See claim 9 for analysis.
As to claim 21:
Vax, van Opdorp, and Jagtap disclose all of the limitations as set forth in claim 1.
van Opdorp also discloses:
The method of claim 1, wherein the metadata comprises the one or more attributes; and wherein the one or more attributes are hashed and the value of each attribute of the one or more attributes are not hashed [Column 3 Lines 56-62 teach schema metadata is information that describes the structure and other features of a database and is agnostic to the actual data stored in the database. Schema metadata includes tables, columns in tables, datatypes of columns, lengths of columns, custom database data types, foreign keys, constraints, stored procedures, views, triggers, indices, and scheduled jobs. Column 8 Lines 66-67 and Column 9 Lines 1-4 teach that the version 1.2 client applications 93 and 94 on the desktops 96 and 97 extract schema metadata in steps 103 and 104 from the database 92, compute in steps 105 and 106 the hash values 107 and 108 of the schema metadata, and compare in steps 109 and 110 the computed hash values 107 and 108 to the stored hash values 101 and 102. Note: Hashing schema metadata that includes various attributes and is agnostic to (separate from) the actual data stored in the database reads on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax, by incorporating hashing schema metadata that includes various attributes and is agnostic to (separate from) the actual data stored in the database, as taught by van Opdorp (see Column 3 Lines 56-62, Column 8 Lines 66-67, and Column 9 Lines 1-4), because both applications are directed to metadata analysis; incorporating comparing hash values of extracted metadata from a database, wherein the metadata includes various attributes describing the data, potentially eliminates runtime errors (see Column 9 Lines 19-23).
Claim(s) 2, 3, 11, 12, 18, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vax et al. (U.S. Publication No.: US 20210150056 A1) hereinafter Vax, in view of van Opdorp (U.S. Patent No.: US 7343377 B1) hereinafter van Opdorp, in view of Jagtap et al. (U.S. Publication No.: US 20210390204 A1) hereinafter Jagtap, and further in view of Chari et al. (U.S. Publication No.: US 20130097103 A1) hereinafter Chari.
As to claim 2:
Vax, van Opdorp, and Jagtap disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose wherein the data samples are extracted in batches from the data source and wherein a number of batches is a function of a total count of records in the data source and a sampling percentage.
Chari discloses:
The method of claim 1, wherein the data samples are extracted in batches from the data source [Paragraph 0029 teaches a small set of data (e.g., from about 5% to about 10% of the desired training data set), is selected (sampled) from Data Set U… this given percentage of the desired training size (also referred to herein as "a batch") is selected, this amount of data (batch size) will be added to the training sample set iteratively.]; and wherein a number of batches is a function of a total count of records in the data source and a sampling percentage [Paragraph 0029 teaches a small set of data (e.g., from about 5% to about 10% of the desired training data set), is selected (sampled) from Data Set U… this given percentage of the desired training size (also referred to herein as "a batch") is selected, this amount of data (batch size) will be added to the training sample set iteratively. Paragraph 0031 teaches the remaining samples to be labeled are picked in an iterative fashion, where each iteration produces a fraction of the desired sample size. Note: Iterations that include adding batches (a number of batches) of data (records) up to an amount totaling the remaining samples and satisfying a percentage, such as 5% or 10% of the desired data, read on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax, van Opdorp, and Jagtap, by incorporating data samples using batches and iterations, as taught by Chari (see Paragraph 0029 and 0031), because the four applications are directed to data analysis; incorporating data samples using batches and iterations provides improved techniques for generating training samples for predictive modeling (see Chari Paragraph 0001).
Claims 11 and 18 recite similar limitations as in claim 2. Therefore claims 11 and 18 are rejected for the same reasons as set forth above. See claim 2 for analysis.
As to claim 3:
Vax, van Opdorp, and Jagtap disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose wherein the data samples are extracted from the data source using a random distribution technique.
Chari discloses:
The method of claim 1, wherein the data samples are extracted from the data source using a random distribution technique [Paragraph 0003 teaches random sampling, a low-cost approach, produces a subset of the data which has a distribution similar to the original data set, producing skewed results for imbalanced data. Paragraph 0011 teaches random sampling at each iteration.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax, van Opdorp, and Jagtap, by incorporating extracting data samples using random sampling at each iteration, as taught by Chari (see Paragraph 0003 and 0011), because the four applications are directed to data analysis; incorporating random sampling provides improved techniques for generating training samples for predictive modeling (see Chari Paragraph 0001).
Claims 12 and 19 recite similar limitations as in claim 3. Therefore claims 12 and 19 are rejected for the same reasons as set forth above. See claim 3 for analysis.
Claim(s) 4, 5, 6, 13, 14, 15, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vax et al. (U.S. Publication No.: US 20210150056 A1) hereinafter Vax, in view of van Opdorp (U.S. Patent No.: US 7343377 B1) hereinafter van Opdorp, in view of Jagtap et al. (U.S. Publication No.: US 20210390204 A1) hereinafter Jagtap, in view of Chari et al. (U.S. Publication No.: US 20130097103 A1) hereinafter Chari, and in further view of Panda et al. (U.S. Publication No.: US 20160034372 A1) hereinafter Panda.
As to claim 4:
Vax, van Opdorp, and Jagtap disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose wherein scanning the data comprises: determining a number of records per dataset based on a total count of records in the data source and a number of datasets, assigning a start offset value and an end offset value for each dataset, and determining offset values to be extracted for each dataset based on the total count of records in the data source.
Chari discloses:
The method of claim 1, wherein scanning the data comprises: determining a number of records per dataset based on a total count of records in the data source and a number of datasets [Paragraph 0045 teaches determine how many samples one ideally wishes to draw from each class in this iteration from the total B samples to draw.];
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax, van Opdorp, and Jagtap, by incorporating data samples using batches and iterations, as taught by Chari (see Paragraph 0029 and 0031), because the four applications are directed to data analysis; incorporating data samples using batches and iterations provides improved techniques for generating training samples for predictive modeling (see Chari Paragraph 0001).
Vax, van Opdorp, Jagtap, and Chari disclose all of the limitations as set forth in claim 1 and some of claim 4 but do not appear to expressly disclose assigning a start offset value and an end offset value for each dataset and determining offset values to be extracted for each dataset based on the total count of records in the data source.
Panda discloses:
assigning a start offset value and an end offset value for each dataset [Paragraph 0062 teaches a LU block size may be represented by ‘X’, a start offset may be represented by ‘Y’, a final end offset may be represented by ‘Z’, an iteration value may be represented by ‘I’. Note: Each iteration representing a dataset with a starting offset value and end offset value reads on the claims.]; and determining offset values to be extracted for each dataset based on the total count of records in the data source [Paragraph 0055 teaches a starting offset (e.g., a value indicating a location of a storage portion or logical block) associated with one or more packets, a data transfer length (e.g., in logical blocks) associated with one or more packets, and/or a total amount of data (e.g., logical blocks) transferred in a capture period. Note: Associating offset values based on a total amount of data (total count of data records) reads on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax, van Opdorp, Jagtap, and Chari, by incorporating starting and ending offset values based on a total amount, as taught by Panda (see Paragraph 0055 and 0062), because the five applications are directed to data analysis; incorporating starting and ending offset values based on a total amount improves generation and/or scaling of workloads for testing or other purposes (see Panda Paragraph 0106).
Claims 13 and 20 recite similar limitations as in claim 4. Therefore claims 13 and 20 are rejected for the same reasons as set forth above. See claim 4 for analysis.
As to claim 5:
Vax, van Opdorp, and Jagtap disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose wherein scanning the data comprises: (a) determining a number of datasets per thread, (b) retrieving a previous offset value, wherein the previous offset value is indicative of a record of the data source where a previous scan has ended, (c) determining the number of records per dataset based on the previous offset value and a total count of records in the data source, (d) assigning a start offset value and an end offset value for each dataset, and (e) generating a random value for each dataset based on the assigned start offset value and end offset value.
Panda discloses:
The method of claim 1, wherein scanning the data comprises: (a) determining a number of datasets per thread [Paragraph 0062 teaches an iteration value may be represented by ‘I’, a total number of logical blocks transferred in a workload or workload segment (e.g., all messages in Table 2) may be represented by ‘X’, and a total transfer length may be represented by ‘M’. In this example, a total number of logical blocks to transfer may be computed as M/X.]; (b) retrieving a previous offset value, wherein the previous offset value is indicative of a record of the data source where a previous scan has ended [Paragraph 0063 teaches a start offset for each message, where the start offset may continue to change for each iteration of a workload or workload segment executed, e.g., until (M/X) blocks have been transferred or until a new start offset for a message lies beyond Z (e.g., a final end offset). Paragraph 0064 teaches a workload may be generated and/or scaled that comprises commands for writing and/or reading 10 GBs of data associated with the selected LUN. Note: A new starting offset that must include a determination of the previous final end offset (previous offset value) from a read (scan) command ending at 10 GBs (has ended) reads on the claims.]; (c) determining the number of records per dataset based on the previous offset value and a total count of records in the data source [Paragraph 0062 teaches a start offset may be represented by ‘Y’, a final end offset may be represented by ‘Z’, an iteration value may be represented by ‘I’, a total number of logical blocks transferred in a workload or workload segment (e.g., all messages in Table 2) may be represented by ‘X’, and a total transfer length may be represented by ‘M’. In this example, a total number of logical blocks to transfer may be computed as M/X. Note: X being the total count of data and M being the length of data per iteration, as separated by a starting offset and ending offset, reads on the claims.]; (d) assigning a start offset value and an end offset value for each dataset [Paragraph 0062 teaches a LU block size may be represented by ‘X’, a start offset may be represented by ‘Y’, a final end offset may be represented by ‘Z’, an iteration value may be represented by ‘I’. Note: Each iteration representing a dataset with a starting offset value and end offset value reads on the claims.]; and (e) generating a random value for each dataset based on the assigned start offset value and end offset value [Paragraph 0062 teaches a LU block size may be represented by ‘X’, a start offset may be represented by ‘Y’, a final end offset may be represented by ‘Z’, an iteration value may be represented by ‘I’. Paragraph 0079 teaches a scaling algorithm may use workload segments 502, configuration information, and/or a pseudo-random selection process to generate a scaled workload (e.g., a workload that is larger or smaller than an original or base workload). For example, each workload segment 502 in a scaled workload may be unique based on configuration information and/or a selection process which ensures a pseudo-random distribution of one or more message characteristics (e.g., start offset values). Note: Randomly generated distribution of an iteration value (random value) based on the start offset and end offset reads on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax, van Opdorp, and Jagtap, by incorporating starting and ending offset values based on a total amount of data associated with a randomly generated value, as taught by Panda (see Paragraphs 0062-0064), because the four applications are directed to data analysis; incorporating starting and ending offset values based on a total amount of data associated with a randomly generated value improves generation and/or scaling of workloads for testing or other purposes (see Panda Paragraph 0106).
Claim 14 recites similar limitations as in claim 5. Therefore claim 14 is rejected for the same reasons as set forth above. See claim 5 for analysis.
As to claim 6:
Vax, van Opdorp, Jagtap, and Panda disclose all of the limitations as set forth in claims 1 and 5.
Panda also discloses:
The method of claim 5, further comprising: iteratively repeating steps (a) through (e) [Paragraph 0062 teaches an iteration value may be represented by ‘I’, a total number of logical blocks transferred in a workload or workload segment (e.g., all messages in Table 2) may be represented by ‘X’, and a total transfer length may be represented by ‘M’. In this example, a total number of logical blocks to transfer may be computed as M/X. Paragraph 0064 teaches a workload may be generated and/or scaled that comprises commands for writing and/or reading 10 GBs of data associated with the selected LUN. Paragraph 0065 teaches an iteration value ‘I’ may start at 0 and may increment sequentially, e.g., for each iteration of a workload or workload segment. Paragraph 0079 teaches a scaling algorithm may use workload segments 502, configuration information, and/or a pseudo-random selection process to generate a scaled workload (e.g., a workload that is larger or smaller than an original or base workload). For example, each workload segment 502 in a scaled workload may be unique based on configuration information and/or a selection process which ensures a pseudo-random distribution of one or more message characteristics (e.g., start offset values).]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Vax, van Opdorp, Jagtap, and Chari, by incorporating iteratively repeated starting and ending offset values based on a total amount of data associated with a randomly generated value, as taught by Panda (see Paragraphs 0062-0064), because the five applications are directed to data analysis; incorporating starting and ending offset values based on a total amount of data associated with a randomly generated value improves generation and/or scaling of workloads for testing or other purposes (see Panda Paragraph 0106).
Claim 15 recites similar limitations as in claim 6. Therefore claim 15 is rejected for the same reasons as set forth above. See claim 6 for analysis.
Response to Arguments
Applicant’s arguments with respect to the 103 rejection of claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EARL LEVI ELIAS whose telephone number is (571)272-9762. The examiner can normally be reached Monday - Friday (IFP).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EARL LEVI ELIAS/Examiner, Art Unit 2169
/SHERIEF BADAWI/Supervisory Patent Examiner, Art Unit 2169