DETAILED ACTION
This office action is in response to applicant's communication filed on 01/05/2026.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/05/2026 has been entered.
Response to Amendment
The Applicant's remarks and amendments, in response to the last Office Action, have been considered with the results that follow:
Claims 1, 11 and 20 are amended.
Claims 2-3 and 12-13 were previously canceled.
Claims 1, 4-11 and 14-20 are now pending in this application.
Response to Arguments
Applicant's arguments filed 01/05/2026 have been fully considered but they are moot because new grounds of rejection have been issued in view of Applicant's amendments to the claims, and the arguments do not apply to the new combination of references being used in the current rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-11 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hersans (US 2019/0114438 A1) in view of Vidic (US 11675552 B1) and Rai (US 2007/0236724 A1).
Regarding claim 1,
Hersans teaches A computer-implemented method comprising: receiving, by a processor of a computer system, via a network, a first package for processing as part of a batch job; determining, by the processor, that the first package includes data that is linked, via a link, to additional data stored in a data store; unpacking, by the processor, the first package to include additional data linked to the first package, wherein unpacking the first package comprises obtaining, via the link, the additional data from the data store, the first package including the data and the additional data forms a first unpacked package; *see FIGS.1,4-5,11, paras14-16(“...tenant may change an encryption scheme…tenant or the database may switch active encryption keys for multiple data records stored in the database…For each data record group, the database server may perform a mass encryption estimation process to determine relevant metrics associated with the mass encryption process…”), para51(“…to perform a mass encryption, decryption, key rotation, or scheme modification process, the database server 310 may utilize the PL/SQL layer for data processing and may use Java code or the Java layer for actual encryption and decryption…user may continue to access these unlocked data records throughout the mass encryption process...310 may retrieve the locked data records [‘obtaining, via the link, the additional data…’], and may encrypt or decrypt the records according to the latest parameters…store the updated data records in memory”), paras59-85(“ FIG. 4…mass encryption estimation process 400 that supports mass encryption management…data center 405 may include one or more database servers 410 and databases 415, which may communicate using link 420…410 may perform the mass encryption estimation process 400 using a partitioning component 425 and an estimation component 445…415 may store a large number…of data records. 
When the database server 410 identifies an encryption job, such as a mass encryption job [‘receiving a first package for processing…batch job’]…related to many record groups stored at the database…server 410 or database 415 may identify a total set of data records 435 corresponding to the encryption job [‘additional data at data store’]…410 or database 415 may search the database 415 storage based on a tenant identifier associated with the encryption job, a specific data object or field associated with the encryption job, or a data encryption scheme [identifier/object/field part of ‘first package’, determining…data that is linked]…410 or database 415 may identify the total set of data records 435 [‘first unpacked package’] for the encryption process…estimation may be performed by an estimation component 440 …identify information related to the entire encryption job or to an encryption job for a specific record group 435 [‘unpacking…include additional data linked…’]…430, a partitioning component 425 may partition the identified total set data records 435 into one or more data record groups 440 (e.g., record groups 440-a, 440-b, 440-c, and 440-d), which may be referred to as data chunks, for processing...estimation component 445 may utilize PL/SQL queries, or may utilize other forms of queries or data retrieval methods [‘obtaining additional data’]…445 and a mass encryption component may share similar or identical logic for determining which fields, data objects, or tenants are affected by an encryption process, and/or for determining the encryption options to use for the encryption process…445 and mass encryption component may share a code path for generating message queue (MQ) message payloads…applications may differ in data chunk sizes for processing. 
The mass encryption component may send one or more queries (e.g., PL/SQL queries) to the database 415 to retrieve data records or record groups 440 for encryption [teaches additional records obtained resulting in retrieval of total set, i.e.: ‘obtaining, via the link, the additional data from the data store, the first package including the data and the additional data forms a first unpacked package’]…425 or estimation component 445 may leverage the same queries, and may perform a count based on the queried data records…”), paras86-89, paras123-130(“FIG. 11…mass encryption management…receive…encryption request to perform an encryption process on a set of records stored at the database server…partition the set of records into a plurality of record groups based at least in part on a default group size…”)
determining, by the processor, whether the first unpacked package is less than a package threshold size; *see paras123-130(“FIG.11…mass encryption management…receive, at a database server, an encryption request to perform an encryption process on a set of records stored at the database server… partition the set of records into a plurality of record groups based at least in part on a default group size…calculate a size of each of the plurality of record groups and a total size of the plurality of record groups…determine to perform the encryption process on the plurality of record groups if the total size of the plurality of record groups is less than a threshold size [‘checking…package is less than the package threshold size’]…”), para56(“...database server 310 may perform a preliminary data analysis on the data records stored in the database 315 that are marked for encryption, and may dynamically configure chunk sizes for the data record groups…based on the total number of data records, the total size of data records, an amount of available memory or processing power for the background job, or some combination of these or other parameters associated with the encryption process, the database server 310 may determine a range of data chunk sizes [‘package threshold size’] for the background encryption job. 
The range of data chunk sizes may be based on a data size of the data chunk or a number of data records in the data chunk, and the database server 310 may partition the total set of data records for mass encryption, decryption, or re-encryption into data chunks either equal or varying in size within the determined range [‘checking…less than the package threshold size’]…”), para101(“...Data size component 745 may calculate a size of each of the set of record groups and a total size of the set of record groups, and may determine to perform the encryption process on the set of record groups if the total size of the set of record groups is less than a threshold size [‘checking…’]…745 may dynamically determine a size range for the set of record groups based on the default group size, where the size of each of the set of record groups is within the size range…”)
in response to the first unpacked package being less than the package threshold size, processing the first unpacked package to form a first output; and … rescaling, by the processor, the first package to satisfy the package threshold size before additional unpacking and processing is performed on the first package to prevent processing of the first package from taking too long to complete, timing out, or flagging an error. *see paras59-85(“…database server 410 or database 415 may identify the total set of data records 435 for the encryption process…410 may estimate the performance of the encryption job. This estimation may be performed by an estimation component…identify information related to the entire encryption job or to an encryption job for a specific record group 435…partitioning component 425 may partition the identified total set data records 435 into one or more data record groups 440 (e.g., record groups 440-a, 440-b, 440-c, and 440-d), which may be referred to as data chunks, for processing…425 may perform the data chunking based on a default record group size (e.g., 100,000 data records) or based on a dynamically selected record group size. 
Each record group 440 may have a same number of data records, or may have a number of data records or a data size within a data size range [‘package threshold size’]....”), paras86-89, paras123-130(“FIG.11…mass encryption management…database server may receive, at a database server, an encryption request to perform an encryption process on a set of records stored at the database server…partition the set of records into a plurality of record groups based at least in part on a default group size…calculate a size of each of the plurality of record groups and a total size of the plurality of record groups…determine to perform the encryption process on the plurality of record groups if the total size of the plurality of record groups is less than a threshold size [‘first unpacked package…less than a package threshold size’] …perform the encryption process on a first record group of the plurality of record groups based at least in part on the encryption request [‘processing…form a first output’]…”), para56(“...database server 310 may perform a preliminary data analysis on the data records stored in the database 315 that are marked for encryption, and may dynamically configure chunk sizes for the data record groups… based on the total number of data records, the total size of data records, an amount of available memory or processing power for the background job, or some combination of these or other parameters associated with the encryption process, the database server 310 may determine a range of data chunk sizes [‘package threshold size’] for the background encryption job. 
The range of data chunk sizes may be based on a data size of the data chunk or a number of data records in the data chunk, and the database server 310 may partition the total set of data records for mass encryption, decryption, or re-encryption into data chunks either equal or varying in size within the determined range [‘rescaling the first package to satisfy the package threshold size before additional unpacking and processing is performed on the first package’]…310 may configure other parameters for encryption based on the type of mass encryption…associated data records, the tenant…”), para101(“...Data size component 745 may calculate a size of each of the set of record groups and a total size of the set of record groups, and may determine to perform the encryption process on the set of record groups if the total size of the set of record groups is less than a threshold size…745 may dynamically determine a size range [‘package threshold size’] for the set of record groups based on the default group size, where the size of each of the set of record groups is within the size range…size of a record group includes a number of records associated with the record group…total size of the set of record groups includes a number of record groups for performing the encryption process based on the encryption request, a total number of records associated with the set of record groups, or both…”)
However, Hersans does not explicitly teach “… storing the first output in temporary storage such that the first output is usable during processing of a subsequent package; and in response to the first unpacked package being more than the package threshold size, rescaling, by the processor, the first package to satisfy the package threshold size before additional unpacking and processing is performed on the first package…”
Vidic teaches … storing the first output in temporary storage such that the first output is usable during processing of a subsequent package; and *see col9(“…in other exemplary arrangements, the exemplary central system circuitry may enable the uploading of compressed files such as a ZIP file or other type format file, that includes large or multiple content records. In such arrangements the central system circuitry may operate in accordance with its circuit executable instructions to receive such records from the record provider circuit and to unpack and store each of such records [storing first output in temporary storage] in the at least one data store associated with the central system circuitry. Of course it should be understood that these approaches are exemplary and in other arrangements other approaches to facilitate the uploading of printable content records may be used…Once the record provider has caused all of the electronic content records that will be printed for designated record recipients to be sent to the central system circuitry, the record provider is enabled to instruct the central system circuitry to proceed to the next step by providing at least one input to an input device of the remote record provider circuit …” teaches storing first output usable during processing of subsequent package)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hersans to incorporate the teachings of Vidic and enable Hersans to store the first output in temporary storage such that the output is usable during processing of a subsequent package, as doing so would enable the uploading and processing of compressed files, such as a ZIP file or other format file, that include large or multiple content records (Vidic, col9).
Rai teaches …in response to the first unpacked package being more than the package threshold size, rescaling, by the processor, the first package to satisfy the package threshold size before additional unpacking and processing is performed on the first package… *see paras09-11(“…managing the size of a job to be processed in a print shop…job size management system includes: (a) a scheduling tool for generating a list including a plurality of jobs…assigning a job size related value to each one of the plurality of jobs so that the plurality of jobs are corresponded respectively with a set of job size related values, (ii) using the set of job size related values to calculate a control limit [‘package threshold size’], and (iii) for each job size related value exceeding the control limit, splitting the job corresponding with the job size related value exceeding the control limit into n number of sub-jobs for processing at the plurality of autonomous cells [‘rescaling the first package…’]…system for managing the size of a job to be processed in a document management workflow. 
The job size management system includes a queue for listing a plurality of jobs and a processor operatively associated with the queue…(i) assigns a value to each one of the plurality of jobs so that the plurality of jobs are corresponded respectively with a set of values, (ii) uses the set of values to calculate a threshold, and, (iii) for each value exceeding the threshold, splits the job corresponding with the value exceeding the threshold into n number of sub-jobs… method for managing job size…providing a list including a plurality of jobs; assigning a value to each one of the plurality of jobs so that the plurality of jobs are corresponded respectively with a set of values; setting a threshold; and, for each value exceeding the threshold, splitting the job corresponding with the value exceeding the threshold into n number of sub-jobs [‘in response to…being more than the package threshold size, rescaling…’]”), para25(“FIG. 14…showing the number of splits per jobs that are outside of a preset threshold or control limit”), para43(“…FIGS. 10A, 10B, 11A and 11B, individual and moving-range (I-MR) plots …corresponds with a distribution for 490 jobs, and includes a mean as well as an upper control limit (UCL)…UCL can be used advantageously to determine how a job should be split…”), paras61-62(“…control limits or thresholds can be set in such a way that job splitting is optimized and job size CV can be kept below a pre-selected level or threshold…technique can advantageously be applied to the scheduling tool on job input streams before processing them by the hierarchical scheduling algorithm…job may be split precisely with a formula accommodating for one of several values, such as a job size or takt-rate related value…”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hersans to incorporate the teachings of Rai and enable Hersans to rescale the package/data to satisfy a threshold before additional unpacking and processing is performed on the first package, as doing so would reduce time delays and enable optimal splitting of print jobs (Rai, paras04,61).
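For orientation only, the threshold-driven flow recited in claim 1 as mapped above (unpack a package by following its links, compare the unpacked size against a threshold, then either process and stash the output or rescale/split) can be sketched as follows. This is an illustrative sketch, not code from Hersans, Vidic, or Rai; all names (unpack, rescale, handle, PACKAGE_THRESHOLD) and the threshold value are hypothetical.

```python
# Illustrative sketch only; names and the threshold value are hypothetical.
PACKAGE_THRESHOLD = 4  # arbitrary package threshold size (records per package)

def unpack(package, data_store):
    # Follow each link in the package and pull the linked additional data,
    # forming the 'first unpacked package'.
    linked = [data_store[link] for link in package.get("links", [])]
    return package["data"] + linked

def rescale(unpacked, threshold):
    # Split an oversized unpacked package into threshold-sized sub-packages.
    return [unpacked[i:i + threshold] for i in range(0, len(unpacked), threshold)]

def handle(package, data_store, temp_storage):
    unpacked = unpack(package, data_store)
    if len(unpacked) < PACKAGE_THRESHOLD:
        # Process to form a first output and keep it in temporary storage,
        # so it remains usable while processing subsequent packages.
        output = [record.upper() for record in unpacked]  # placeholder processing
        temp_storage.append(output)
        return output
    # Otherwise rescale before any additional unpacking/processing is performed.
    return rescale(unpacked, PACKAGE_THRESHOLD)
```

A package below the threshold is processed directly and its output retained; one at or above the threshold is returned as sub-packages for further handling.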
Regarding claim 4,
Hersans as modified by Vidic and Rai teaches all the claimed limitations as set forth in the rejection of claim 1 above.
Rai further teaches The computer-implemented method of claim 1, wherein in response to the unpacking of the first package not being linked to the additional data, processing the first package to provide a second output. *see paras33-34(“...FIG. 2 illustrates several of the software modules employed in the printing workflow system 2. The printing workflow system 2 includes a workflow-mapping module 12 that determines the workflow for selected document processing jobs…workflow identifies the operational steps needed to complete a document-processing job. The workflow also identifies the sequence of these operational steps…a print job is received [‘first package’] and a workflow for it is developed by the workflow mapping module 12. The job decomposition module may split the job into sub-jobs [teaches ‘unpacking of the first package’; here the package is not linked to any additional data, and instead the received package is processed/split into sub-jobs, which teaches the ‘second output’]. The sub-jobs or job are then assigned to cells for completion by the cell assignment module 18. The sub-jobs may be sent to product cell controller 16 of the assigned cells, where each sub-job may be further sub divided...”)
Regarding claim 5,
Hersans as modified by Vidic and Rai teaches all the claimed limitations as set forth in the rejection of claim 1 above.
Hersans and Rai further teach The computer-implemented method of claim 1, wherein the rescaling further comprises splitting, based on the package threshold size, the first package into a plurality of packages comprising a second package and a third package. *see Hersans:para60(“… 430, a partitioning component 425 may partition the identified total set data records 435 into one or more data record groups 440 (e.g., record groups 440-a, 440-b, 440-c, and 440-d) [‘rescaling…splitting’], which may be referred to as data chunks, for processing…partitioning component 425 may perform the data chunking based on a default record group size (e.g., 100,000 data records) or based on a dynamically selected record group size. Each record group 440 may have a same number of data records, or may have a number of data records or a data size within a data size range [‘based on the package threshold size’’]...”), para56(“...database server 310 may perform a preliminary data analysis on the data records stored in the database 315 that are marked for encryption, and may dynamically configure chunk sizes for the data record groups…based on the total number of data records, the total size of data records, an amount of available memory or processing power for the background job, or some combination of these or other parameters associated with the encryption process, the database server 310 may determine a range of data chunk sizes for the background encryption job. 
The range of data chunk sizes may be based on a data size of the data chunk or a number of data records in the data chunk, and the database server 310 may partition the total set of data records for mass encryption, decryption, or re-encryption into data chunks either equal or varying in size within the determined range [‘rescaling…splitting…package threshold size’]…”), para101(“...Data size component 745 may calculate a size of each of the set of record groups and a total size of the set of record groups, and may determine to perform the encryption process on the set of record groups if the total size of the set of record groups is less than a threshold size…data size component 745 may dynamically determine a size range for the set of record groups based on the default group size, where the size of each of the set of record groups is within the size range…size of a record group includes a number of records associated with the record group…total size of the set of record groups includes a number of record groups for performing the encryption process based on the encryption request, a total number of records associated with the set of record groups…”); Rai:paras09-11(“…for each job size related value exceeding the control limit, splitting the job corresponding with the job size related value exceeding the control limit into n number of sub-jobs for processing at the plurality of autonomous cells…system for managing the size of a job to be processed in a document management workflow…(i) assigns a value to each one of the plurality of jobs so that the plurality of jobs are corresponded respectively with a set of values, (ii) uses the set of values to calculate a threshold, and, (iii) for each value exceeding the threshold, splits the job corresponding with the value exceeding the threshold into n number of sub-jobs [‘rescaling…splitting…second/third package’]…method for managing job size… providing a list including a plurality of jobs; assigning a value to each one of 
the plurality of jobs so that the plurality of jobs are corresponded respectively with a set of values; setting a threshold; and, for each value exceeding the threshold, splitting the job corresponding with the value exceeding the threshold into n number of sub-jobs [‘second/third package’]”), para25(“FIG. 14…showing the number of splits per jobs that are outside of a preset threshold or control limit”), paras33-34(“...print job is received [‘first package’] and a workflow for it is developed by the workflow mapping module 12. The job decomposition module may split the job into sub-jobs [‘splitting…second package and a third package’]. The sub-jobs or job are then assigned to cells for completion by the cell assignment module 18. The sub-jobs may be sent to product cell controller 16 of the assigned cells, where each sub-job may be further sub divided...”), paras43,61-62
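Rai's control-limit splitting, as relied on in the claim 5 mapping above, can be sketched as follows. This is an illustrative reading only: the I-MR-style limit (mean plus three standard deviations) and the near-even split formula are assumptions supplied for illustration, not text from Rai.

```python
import statistics

def upper_control_limit(job_sizes):
    # Mean + 3 sigma, in the style of an I-MR chart UCL (assumed formula).
    return statistics.mean(job_sizes) + 3 * statistics.pstdev(job_sizes)

def split_job(size, limit):
    # Split a job whose size exceeds the limit into n near-equal sub-jobs
    # (e.g., a 'second package' and a 'third package'), each within the limit.
    n = -(-size // int(limit))  # ceiling division: minimum number of sub-jobs
    base, extra = divmod(size, n)
    return [base + (1 if i < extra else 0) for i in range(n)]
```

For example, a job of size 10 against a limit of 4 yields three sub-jobs, none exceeding the limit and together preserving the original size.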
Regarding claim 6,
Hersans as modified by Vidic and Rai teaches all the claimed limitations as set forth in the rejection of claim 5 above.
Rai further teaches The computer-implemented method of claim 5, further comprising: unpacking the second package; and in response to the unpacking of the second package being less than the package threshold size, processing the second package to form a corresponding output. *see paras09-11(“…for each job size related value exceeding the control limit, splitting the job corresponding with the job size related value exceeding the control limit into n number of sub-jobs for processing at the plurality of autonomous cells…system for managing the size of a job to be processed in a document management workflow… (i) assigns a value to each one of the plurality of jobs so that the plurality of jobs are corresponded respectively with a set of values, (ii) uses the set of values to calculate a threshold, and, (iii) for each value exceeding the threshold, splits the job corresponding with the value exceeding the threshold into n number of sub-jobs [teaches that the sizes of jobs and sub-jobs (second/third packages) are compared against the threshold]…method for managing job size…splitting the job corresponding with the value exceeding the threshold into n number of sub-jobs”), para25(“FIG. 14…showing the number of splits per jobs that are outside of a preset threshold or control limit”), paras33-34(“...workflow also identifies the sequence of these operational steps. A job decomposition module 14 is included for splitting the document processing jobs into sub-jobs and for sending the sub-jobs to cells for completion. A product cell controller (PCC) 16 may be provided at given cells for receiving at least one sub-job to further split the sub-job to be processed by a printing device in the cell. Lastly, a cell assignment module 18 is provided for assigning sub-jobs to be processed by a cell…print job is received and a workflow for it is developed by the workflow mapping module 12. The job decomposition module may split the job into sub-jobs [‘second/third package’]. 
The sub-jobs or job are then assigned to cells for completion by the cell assignment module 18. The sub-jobs may be sent to product cell controller 16 of the assigned cells, where each sub-job may be further sub divided...”. It is understood, based on all cited paragraphs, that each sub-job [second/third packages] may be divided further if its size exceeds the threshold and, if not, is processed by the assigned cell to form the corresponding ‘output’), paras43,61-62
Regarding claim 7,
Hersans as modified by Vidic and Rai teaches all the claimed limitations as set forth in the rejection of claim 5 above.
Rai further teaches The computer-implemented method of claim 5, further comprising: unpacking the third package; and in response to the unpacking of the third package being less than the package threshold size, processing the third package to provide a corresponding output. *see paras09-11(“…for each job size related value exceeding the control limit, splitting the job corresponding with the job size related value exceeding the control limit into n number of sub-jobs for processing at the plurality of autonomous cells…system for managing the size of a job to be processed in a document management workflow… (i) assigns a value to each one of the plurality of jobs so that the plurality of jobs are corresponded respectively with a set of values, (ii) uses the set of values to calculate a threshold, and, (iii) for each value exceeding the threshold, splits the job corresponding with the value exceeding the threshold into n number of sub-jobs [teaches that the sizes of jobs and sub-jobs (second/third packages) are compared against the threshold]…method for managing job size…splitting the job corresponding with the value exceeding the threshold into n number of sub-jobs”), para25(“FIG. 14…showing the number of splits per jobs that are outside of a preset threshold or control limit”), paras33-34(“...workflow also identifies the sequence of these operational steps. A job decomposition module 14 is included for splitting the document processing jobs into sub-jobs and for sending the sub-jobs to cells for completion. A product cell controller (PCC) 16 may be provided at given cells for receiving at least one sub-job to further split the sub-job to be processed by a printing device in the cell. Lastly, a cell assignment module 18 is provided for assigning sub-jobs to be processed by a cell…print job is received and a workflow for it is developed by the workflow mapping module 12. The job decomposition module may split the job into sub-jobs [‘second/third package’]. 
The sub-jobs or job are then assigned to cells for completion by the cell assignment module 18. The sub-jobs may be sent to product cell controller 16 of the assigned cells, where each sub-job may be further sub divided...”. It is understood, based on all cited paragraphs, that each sub-job [second/third packages] may be divided further if its size exceeds the threshold and, if not, is processed by the assigned cell to form the corresponding ‘output’), paras43,61-62
Regarding claim 8,
Hersans as modified by Vidic and Rai teaches all the claimed limitations as set forth in the rejection of claim 5 above.
Hersans and Rai further teach The computer-implemented method of claim 5, wherein the second package and the third package are processed in parallel to form corresponding outputs. *see Hersans, para45(“...To handle the above encryption processes in a database 315 storing large amounts of data… database server 310 may perform a data chunking process to partition the data records marked for encryption into separate groups. The database server 310 may run the background job in parallel to other processes on a group of the data records…User device 305 may continue to interact with the database 315 during mass encryption…based on this data chunking and parallelization”), para83(“... estimation component 445 may generate a single email for the total set of data records 435…estimation component 445 may process each record group 440 of the set of record groups asynchronously, in parallel [‘second package and the third package are processed in parallel’], or all together, and may aggregate statistics and information about each record group 440 and the total set of data records...”);
Rai, para26(“FIG. 15 is a partial list from FIG. 14 displaying an incoming job split into three sub-jobs with the same arrival time and due date” teaches that the second and third packages are processed in parallel), para36(“...job decomposition module 14 splits a document processing job into sub-jobs to be sent to various autonomous cells for processing. The cells in the network are autonomous and can produce their respective product entirely by themselves…shown in FIG. 4, a document processing job is split into sub-jobs 48 and 50 that are sent to cells 32 and 40, respectively. The product cell controllers 34 and 42 send the sub-jobs 48 and 50 to devices 36 a, 36 b, 36 c and 44 a, 44 b, 44 c in the respective cells 32 and 40 for processing” teaches different jobs/packages being processed in parallel), paras47-49(“...each job is split into sub-jobs with the same arrival and due date as the original jobs in a manner such that each sub-job has a takt-rate…three sub-jobs are distributed respectively across the three cells of the exemplary print shop or factory...”)
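The parallel processing relied on in the claim 8 mapping above (Hersans para83; Rai paras26, 36) can be sketched as below; the thread-pool mechanism and the placeholder process function are illustrative assumptions only and do not appear in the cited references.

```python
from concurrent.futures import ThreadPoolExecutor

def process(package):
    # Placeholder per-package processing (e.g., encrypting a record group).
    return [record.upper() for record in package]

def process_in_parallel(packages):
    # Process the second, third, etc. packages concurrently; map() preserves
    # input order, so each corresponding output aligns with its package.
    with ThreadPoolExecutor(max_workers=len(packages)) as pool:
        return list(pool.map(process, packages))
```

Serial processing (claim 9) would simply replace the pool with a loop over the packages in order.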
Regarding claim 9,
Hersans as modified by Vidic and Rai teaches all the claimed limitations as set forth in the rejection of claim 5 above.
Hersans and Rai further teach The computer-implemented method of claim 5, wherein the second package and the third package are processed serially to form corresponding outputs. *see Hersans, para45(“...To handle the above encryption processes in a database 315 storing large amounts of data… database server 310 may perform a data chunking process to partition the data records marked for encryption into separate groups. The database server 310 may run the background job in parallel to other processes on a group of the data records…User device 305 may continue to interact with the database 315 during mass encryption… based on this data chunking and parallelization”), para83(“... estimation component 445 may generate a single email for the total set of data records 435…estimation component 445 may process each record group 440 of the set of record groups asynchronously, in parallel, or all together [processed together teaches ‘second/third packages processed serially’], and may aggregate statistics and information about each record group 440 and the total set of data records...”); Rai, para04(“… When a new print job arrives, the print job sequentially passes through each department until the print job is completed” teaches a conventional approach where jobs/packages are processed serially), para04(“... One of the reasons for this performance may be that when a large job is released to a document production framework for processing within a cell, even though the job is split into batches in the cell, the batches are all processed in sequence with no pre-emption [‘second/third package processed serially’]...”)
Regarding claim 10,
Hersans as modified by Vidic and Rai teaches all the claimed limitations as set forth in the rejection of claim 1 above.
Hersans and Rai further teach The computer-implemented method of claim 1 further comprising: receiving, at a job execution system, the batch job associated with at least the first package. *see Hersans: paras14-16(“...database may receive an indication of a change to the encryption settings for a tenant. In one aspect, the tenant may select a data record, data field, data object, or data object type for encryption or decryption. In a second aspect, the tenant may change an encryption scheme (e.g., between probabilistic and deterministic encryption schemes) for multiple data records. In a third aspect, the tenant or the database may switch active encryption keys for multiple data records stored in the database. In any of these cases, a database server (e.g., a single server or a server cluster) may identify the data records affected by the change in encryption settings, and may partition the identified data records into one or more data record groups. Each data record group may have a similar size (e.g., within a threshold size range) based on a default group size, a tenant-specific size, available memory or processing power, or some other parameter related to handling batch encryption jobs [‘receiving…batch job comprising the job associated with at least the first package’] on these record groups…database server may perform a mass encryption estimation process to determine relevant metrics associated with the mass encryption process…”); Rai, paras33-34(“…FIG. 2 illustrates several of the software modules employed in the printing workflow system 2. The printing workflow system 2 includes a workflow-mapping module 12 that determines the workflow for selected document processing jobs…In general, a print job is received and a workflow for it is developed by the workflow mapping module 12. The job decomposition module may split the job into sub-jobs. The sub-jobs or job are then assigned to cells for completion by the cell assignment module 18. The sub-jobs may be sent to product cell controller 16 of the assigned cells, where each sub-job may be further sub divided...”)
Regarding claim 11,
Claim 11 recites substantially the same claim limitations as claim 1, and is rejected for the same reasons.
Regarding claim 14,
Claim 14 recites substantially the same claim limitations as claim 4, and is rejected for the same reasons.
Regarding claim 15,
Claim 15 recites substantially the same claim limitations as claim 5, and is rejected for the same reasons.
Regarding claim 16,
Claim 16 recites substantially the same claim limitations as claim 6, and is rejected for the same reasons.
Regarding claim 17,
Claim 17 recites substantially the same claim limitations as claim 7, and is rejected for the same reasons.
Regarding claim 18,
Claim 18 recites substantially the same claim limitations as claim 8, and is rejected for the same reasons.
Regarding claim 19,
Claim 19 recites substantially the same claim limitations as claim 9, and is rejected for the same reasons.
Regarding claim 20,
Claim 20 recites substantially the same claim limitations as claim 1, and is rejected for the same reasons.
Conclusion
The prior art made of record on the attached form PTO-892 and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANUGEETHA KUNJITHAPATHAM whose telephone number is (408)918-7510. The examiner can normally be reached M-F 9-5 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.K./Examiner, Art Unit 2165
/ALEKSANDR KERZHNER/Supervisory Patent Examiner, Art Unit 2165