Prosecution Insights
Last updated: April 19, 2026
Application No. 17/982,667

SOFTWARE ASSESSMENT TOOL FOR MIGRATING COMPUTING APPLICATIONS USING MACHINE LEARNING

Final Rejection: §101, §103, §112
Filed: Nov 08, 2022
Examiner: EVANS, KIMBERLY L
Art Unit: 3629
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: CDW LLC
OA Round: 2 (Final)

Grant Probability: 12% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 7y 0m
Grant Probability With Interview: 26%

Examiner Intelligence

Career Allow Rate: 12% (44 granted / 362 resolved), -39.8% vs TC avg
Interview Lift: +13.4% among resolved cases with interview (moderate)
Typical Timeline: 7y 0m avg prosecution; 27 applications currently pending
Career History: 389 total applications across all art units

Statute-Specific Performance

§101: 30.6% (-9.4% vs TC avg)
§103: 39.8% (-0.2% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)

Deltas are vs. the Tech Center average estimate. Based on career data from 362 resolved cases.

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Final action is in reply to the arguments/amendments filed 11/4/2025. Claims 1, 3, 8, 9, 10, 14, 15-17 and 20 have been amended. Claims 1-20 are pending.

Response to Arguments/Amendments

Applicant's amendments to claim 20 overcome the claim objection; it has been withdrawn.

With respect to claims 1-20 and the 35 USC 112(a) rejection, specifically the claim limitations "process the content migration project parameters, the resource migration parameters, the services parameters, and the respective signal values to determine costs, profits and pricing information corresponding to the migration of the tenant environment" and "wherein the processing includes applying at least one multiplier determined by a trained machine learning model", applicant's amendments overcome the rejection; therefore, the rejection is withdrawn.

With respect to the 35 USC 112(b) rejection and claims 1-20, specifically the limitations "scan a tenant computing environment to identify, for each of a plurality of schemas, one or more respective signal values" and "process the content migration project parameters, the resource migration parameters, the services parameters, and the respective signal values to determine costs, profits and pricing information corresponding to the migration of the tenant environment", applicant's amendments overcome the rejection; therefore, the rejection is withdrawn.

Applicant's amendments regarding the 35 USC 112(b) rejection and claims 3, 10 and 17 containing the trademark/trade name Microsoft™ and/or SharePoint™ overcome the rejection; it has been withdrawn.

With respect to the 35 USC 101 rejection, applicant's arguments have been fully considered, but they are unpersuasive.
Examiner maintains that claims 1-20 are directed toward the abstract idea of aggregating respective signal values associated with site resources, content resources and user-specified project parameters to generate predicted costs, profits and pricing information for the migration of a tenant environment, and displaying the costs, profits and pricing information in a computing environment. Further, the limitations as claimed pertain to (i) commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising; marketing; or sales activities or behaviors; business relations); and (ii) managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The steps for receiving, crawling, generating, storing, aggregating, and displaying (data) are directed to certain methods of organizing human activity, since the steps to "receive … one or more services parameters of a user"; "initiate … a crawl of the tenant computing environment … to discover resources and content items"; and "generate and store respective signal values" are related to (i) commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations), because the claim limitations are directed to receiving various data/parameters for determining and displaying costs, profits and pricing information (contracts, marketing or sales activities, behaviors, business relations) relating to migration of a tenant environment.
The claim limitations also pertain to (ii) managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions): "receive … one or more content migration project parameters of a user"; "receive … one or more resource migration project parameters of a user"; "receive … one or more services parameters of a user"; "initiate … a crawl of the tenant computing environment … to discover resources and content items"; "apply one or more formulas … to generate predicted costs, profits, and pricing information for the migration of the tenant environment"; "cause the costs, profits and pricing information to be displayed". The claim limitations fail to operate the recited "one or more processors", "a memory", "scanner module", "signal values", "tenant computing environment", and "trained machine learning model" [claims 1 and 8], or "non-transitory computer readable medium" [claim 15] (which are merely standard computer technology and hardware/software components) in any exceptional manner, and there is no evidence in the disclosure to suggest an actual improvement in the computer functionality itself, or an improvement in any specific computer technology, other than utilizing ordinary computing components as a tool to automate and perform the abstract idea recited above in a computing environment. Further, applicant's additional element of a machine learning model also fails to integrate the abstract idea into a practical application because the specification only generally recites using a machine learning model(s) (see ¶1, 14, 19, 24, 46, 51). Moreover, there is no improvement to the machine learning model(s) itself, and the additional element is merely used generically to further process and/or analyze received data via common computing components.
Accordingly, even when considered as a whole, the claims do not transform the abstract idea into a patent-eligible invention since the claim limitations do not amount to a practical application or significantly more than an abstract idea. Therefore, the abstract idea fails to integrate into any practical application. In view of the above, Examiner maintains this rejection.

As it relates to the 35 USC 103 rejection, applicant's amendments necessitated new grounds of rejection; therefore, the arguments are moot. Examiner has modified the rejection to further explain how the limitations are being interpreted and has addressed each of applicant's claim limitations in the Final rejection as noted below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION. —The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 1, 8 and 15 recite "as set forth in the schema definitions"; there is insufficient antecedent basis for this limitation in the claim. The respective dependent claims do not remedy this flaw; therefore, they are also rejected. Appropriate correction is requested.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1-20 are directed to a process (an act, or series of acts or steps) [claims 1-7]; a machine/system [claims 8-14]; and a manufacture or composition of matter (non-transitory computer readable medium) [claims 15-20]. Thus, each of the claims falls within one of the four statutory categories.

Step 2A-Prong 1: Claim 1 recites in part, "one or more processors; and a memory having stored thereon instructions that, when executed by the one or more processors, cause the system to: receive, via the one or more processors, one or more content migration project parameters of a user; receive, via the one or more processors, one or more resource migration project parameters of a user; receive, via the one or more processors, one or more services parameters of a user; initiate, by operation of a scanner module, a crawl of the tenant computing environment to discover resources and content items associated with sites, webs, lists, pages, users, and mailboxes; and for each type of resource or content item corresponding to a schema, generate and store respective signal values including at least one of usage count, storage amount, permissions, or creation/modification timestamp, as set forth in the schema definitions; aggregate, using at least one schema, the respective signal values associated with site resources, content resources, and user-specified project parameters, and apply one or more formulas specified in the schema to generate predicted costs, profits, and pricing information for the migration of the tenant environment, wherein the formulas include values output by a trained machine learning model based on historical migration data; and cause the costs, profits and pricing information to be displayed on a display device".

Applicant's specification emphasizes methods and systems for migrating computing applications using machine learning, and techniques for identifying site resources and/or content resources and generating migration predictions using one or more trained machine learning models. The specification also discusses that the techniques include aspects directed to scanning one or more tenant environments to identify site resources and/or content resources, as well as generating, by processing the site signals and content signals using one or more trained ML models, predicted migration outputs, and causing the migration prediction outputs to be acted upon (e.g., via displaying the outputs). The disclosure teaches that the tenant environment may include one or more tenant applications having different resource utilization profiles (¶1, ¶5, ¶14, ¶18, ¶20, ¶39-¶43).

The limitations above demonstrate that independent claim 1 is directed toward the abstract idea of aggregating respective signal values associated with site resources, content resources and user-specified project parameters to generate predicted costs, profits and pricing information for the migration of a tenant environment and displaying the costs, profits and pricing information in a computing environment. Representative claim 1 is considered an abstract idea because the limitations as claimed pertain to (i) commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising; marketing; or sales activities or behaviors; business relations); and (ii) managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
The steps for receiving, crawling, generating, storing, aggregating, and displaying (data) are directed to certain methods of organizing human activity, since the steps to "receive … one or more services parameters of a user"; "initiate … a crawl of the tenant computing environment … to discover resources and content items"; and "generate and store respective signal values" are related to (i) commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations), since the claim limitations are directed to receiving various data/parameters for determining and displaying costs, profits and pricing information (contracts, marketing or sales activities, behaviors, business relations) relating to migration of a tenant environment. The claim limitations also pertain to (ii) managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions): "receive … one or more content migration project parameters of a user"; "receive … one or more resource migration project parameters of a user"; "receive … one or more services parameters of a user"; "initiate … a crawl of the tenant computing environment … to discover resources and content items"; "apply one or more formulas … to generate predicted costs, profits, and pricing information for the migration of the tenant environment"; "cause the costs, profits and pricing information to be displayed". Hence, the claim limitations are directed to certain methods of organizing human activity and recite an abstract idea; see MPEP 2106.04(II). Independent claims 8 and 15 recite substantially similar limitations as independent claim 1 and therefore also recite the same abstract idea.
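For readers tracking the claim language, the recited pipeline (crawl and discover resources, generate per-schema signal values, aggregate them, and apply a schema formula whose multiplier is said to come from a trained machine learning model) can be sketched roughly as follows. All names, formulas, and numbers here are invented illustrations, not the applicant's actual implementation.

```python
# Hypothetical sketch of the pipeline recited in claim 1; every identifier
# and constant below is an invented illustration.
from dataclasses import dataclass

@dataclass
class SignalValues:
    usage_count: int = 0
    storage_bytes: int = 0

def scan_tenant(resources):
    """Group discovered resources by schema and record signal values."""
    signals = {}
    for r in resources:
        s = signals.setdefault(r["schema"], SignalValues())
        s.usage_count += 1
        s.storage_bytes += r.get("size", 0)
    return signals

def predict_pricing(signals, params, ml_multiplier):
    """Aggregate signal values and apply an illustrative schema formula."""
    base_cost = sum(s.storage_bytes for s in signals.values()) * params["rate_per_byte"]
    cost = base_cost * ml_multiplier  # multiplier stands in for the ML output
    price = cost * (1 + params["margin"])
    return {"cost": cost, "profit": price - cost, "price": price}

signals = scan_tenant([{"schema": "site", "size": 100},
                       {"schema": "list", "size": 50}])
result = predict_pricing(signals, {"rate_per_byte": 2, "margin": 0.5}, 1.5)
```

The point of the sketch is only that each recited step maps onto an ordinary data-gathering and arithmetic operation, which is what the examiner's Prong 1 characterization turns on.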
Step 2A-Prong 2: This judicial exception is not integrated into a practical application because the additional elements ("one or more processors", "a memory", "scanner module", "signal values", "tenant computing environment", "trained machine learning model" [claims 1 and 8]; "non-transitory computer readable medium" [claim 15]) are used for data gathering and analysis (receiving, crawling, generating, storing, aggregating, and displaying data) to merely provide instructions for organizing human activity, and to implement the abstract idea recited above using those elements as a tool to perform the abstract idea of aggregating respective signal values associated with site resources, content resources and user-specified project parameters to generate predicted costs, profits and pricing information for the migration of a tenant environment and displaying the costs, profits and pricing information in a computing environment, and generally link the abstract idea to a particular technological environment. See MPEP 2106.05(f)-(h). These additional elements do not impose any meaningful limits on practicing the abstract idea; see MPEP 2106.05(g).
Independent claim 1 fails to operate the recited additional elements ("one or more processors", "a memory", "scanner module", "signal values", "tenant computing environment", "trained machine learning model" [claims 1 and 8]; "non-transitory computer readable medium" [claim 15]), which are merely standard computer technology and hardware/software components, in any exceptional manner, and there is no evidence in the disclosure to suggest an actual improvement in the computer functionality itself, or an improvement in any specific computer technology, other than utilizing ordinary computational tools to automate and perform the abstract idea recited above in a computing environment; see MPEP 2106.05(a). Further, applicant has not shown an improvement or practical application under the guidance of MPEP 2106.04(d) or 2106.05(a). Applicant's limitations as recited above do nothing more than supplement the abstract idea using generic processing and networking components performing generic computer functions (receiving, crawling, generating, storing, aggregating, and displaying data), such that they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)) and link the use of the judicial exception to a particular technological environment or field of use as discussed in MPEP 2106.05(h).

Dependent claims 2-7, 9-14 and 16-20 fail to cure the deficiencies of the above-noted independent claims from which they depend and are therefore rejected under the same grounds. The dependent claims further recite the abstract idea without imposing any meaningful limits on practicing it. Dependent claims 2-7, 9-14 and 16-20 recite additional data gathering and processing steps.
For example, dependent claims 2 and 16 recite in part, "process the one or more respective signal values"; claims 3 and 10 recite in part, "wherein the tenant environment includes"; claims 4, 11 and 18 recite in part, "generate a schema corresponding to"; claims 5 and 12 recite in part, "scan the tenant environment to identify at least one of"; claims 6 and 19 recite in part, "train the machine learning model by processing"; claims 7, 14 and 20 recite in part, "generate one or more visualizations corresponding to"; claims 9 and 13 recite in part, "wherein processing the content migration project parameters, the resource migration parameters, the services parameters, and the respective signal values to determine costs, profits and pricing information corresponding to the migration of the tenant environment includes"; and claim 17 recites in part, "crawl a root site or entry point uniform resource locator". These limitations are still directed toward the abstract idea identified previously and are no more than mere instructions to apply the exception using a computer or computing components. The additional elements in the dependent claims only serve to further limit the abstract idea utilizing the recited "one or more processors", "a memory", "scanner module", "signal values", "tenant computing environment", and "trained machine learning model" [claims 1 and 8], and "non-transitory computer readable medium" [claim 15] as a tool, and generally link the use of the abstract idea to a particular technological environment; hence, they are nonetheless directed toward fundamentally the same abstract idea as their respective independent claims, since they fail to impose any meaningful limits on practicing the abstract idea. Therefore, the abstract idea fails to integrate into any practical application. Thus, under Step 2A-Prong Two, the claims are directed to an abstract idea.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, and giving the broadest reasonable interpretation of the claim limitations in light of the specification, the "one or more processors", "a memory", "scanner module", "signal values", "tenant computing environment", and "trained machine learning model" [claims 1 and 8], and "non-transitory computer readable medium" [claim 15] amount to no more than applying the judicial exception using generic computing components, linking the use of the judicial exception to a computing environment. In this case, these elements are generically used to further process and store received data; see ¶1: "migrating computing applications using machine learning, … generating migration predictions using one or more trained machine learning models"; ¶14: "the present techniques include aspects directed to scanning one or more tenant environments to identify site resources and/or content resources"; ¶19: "The tenant scanner may scan other services/resources, such as databases, email servers, etc. of other third-party providers and in-house tools, services and/or resources to discover capabilities and gauge usage"; ¶24: "The tenant computing devices 102a, 102b each include a processor and a network interface controller (NIC). The processor may include any suitable number of processors and/or processor types, such as CPUs and one or more graphics processing units (GPUs). Generally, the processor is configured to execute software instructions stored in a memory.
The memory may include one or more persistent memories (e.g., a hard drive/solid state memory) and stores one or more sets of computer executable instructions/modules"; ¶46: "In general, a computer program or computer based product, application, or code (e.g., the model(s), such as machine learning models, or other computing instructions described herein) may be stored on a computer usable storage medium, or tangible, non-transitory computer-readable medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having such computer-readable program code or computer instructions embodied therein, wherein the computer-readable program code or computer instructions may be installed on or otherwise adapted to be executed by the processor(s) 150 (e.g., working in connection with the respective operating system in memory 154) to facilitate, implement, or perform the machine readable instructions, methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein". Further, applicant's "trained machine learning model" is recited at a high level of generality such that it amounts to no more than mere instructions/rules logic for gathering and processing data/information, amounts to no more than applying the judicial exception using generic computing components, and generically links the use of the judicial exception to a computing/technical environment (see ¶51: "Machine learning model(s) may be created and trained based upon example data (e.g., "training data") inputs or data (which may be termed "features" and "labels") in order to make valid and reliable predictions for new inputs, such as testing level or production level data or inputs.
In supervised machine learning, a machine learning program operating on a server, computing device, or otherwise processor(s), may be provided with example inputs (e.g., "features") and their associated, or observed, outputs (e.g., "labels") in order for the machine learning program or algorithm to determine or discover rules, relationships, patterns, or otherwise machine learning "models" that map such inputs (e.g., "features") to the outputs (e.g., "labels"), for example, by determining and/or assigning weights or other metrics to the model across its various feature categories. Such rules, relationships, or otherwise models may then be provided subsequent inputs in order for the model, executing on the server, computing device, or otherwise processor(s), to predict, based on the discovered rules, relationships, or model, an expected output"). The machine learning model also fails to integrate the abstract idea into a practical application because the specification only generally recites using a machine learning model(s). Further, there is no improvement to the machine learning model(s) itself; the additional element is merely used generically to further process and/or analyze received data via common computing components.

Accordingly, even when considered as a whole, the claims do not transform the abstract idea into a patent-eligible invention, since the claim limitations do not amount to a practical application or significantly more than an abstract idea of aggregating respective signal values associated with site resources, content resources and user-specified project parameters to generate predicted costs, profits and pricing information for the migration of a tenant environment and displaying the costs, profits and pricing information in a computing environment. Hence, claims 1-20 are directed to non-statutory subject matter and are rejected as ineligible subject matter under 35 USC 101. See the 2019 PEG and MPEP 2106.
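The supervised-learning pattern quoted from ¶51 (example features and labels used to fit weights, which are then applied to new inputs) is the textbook workflow. A minimal, stdlib-only sketch of that pattern, using one-feature least squares with invented data (the specification describes the pattern only generically):

```python
# Minimal illustration of the supervised-learning pattern quoted from ¶51:
# fit a weight and bias from (feature, label) training pairs, then apply
# the fitted model to a new input. Data and model form are invented.

def fit(features, labels):
    """One-feature least squares; returns (weight, bias)."""
    n = len(features)
    mx, my = sum(features) / n, sum(labels) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(features, labels))
    var = sum((x - mx) ** 2 for x in features)
    weight = cov / var
    return weight, my - weight * mx

def predict(model, x):
    """Apply the discovered rule (weights) to a new input, per ¶51."""
    weight, bias = model
    return weight * x + bias

model = fit([1, 2, 3], [2, 4, 6])  # training data: labels are exactly 2x
```

This is the "training stage, then testing stage" split that Barker's ¶33 (quoted later in the §103 rejection) also describes.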
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 4-9, 11-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Barker et al., US Patent Application Publication No. US 2013/0085742 A1, in view of Bar-Or et al., US Patent Application Publication No. US 2017/0351511 A1.

With respect to claims 1, 8 and 15, Barker discloses: one or more processors; and a memory having stored thereon instructions that, when executed by the one or more processors, cause the system to (¶24: "Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. A computer-usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device"; ¶25: "A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus.
The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution");

receive, via the one or more processors, one or more content migration project parameters of a user; receive, via the one or more processors, one or more resource migration project parameters of a user; receive, via the one or more processors, one or more services parameters of a user (¶7: "An SLA may specify metrics of guaranteed service, such as system uptime and query latency. A provider balances these competing goals within the resources of a given multitenant server"; ¶10: "building an analytical model for each of a set of migration methods based on database characteristics"; Fig 1, Fig 6, ¶23: "information regarding client SLAs, database characteristics, current workloads, and predicted workloads are entered as input. This information characterizes the database system as it will appear during migration, such that block 104 can predict how a migration will perform according to each of a set of migration methods… Block 106 further selects optimal migration parameters that correspond to the selected migration method… The migration method and parameters are employed in block 108 to migrate, e.g., a tenant from a first database system to a second database system"; ¶28: "Block 202 builds an analytical model for each migration method to represent the resources consumed by that model"; ¶30: "Block 204 uses machine learning and historic data to update the analytical models. Each model is then employed in block 206 to predict migration performance for each of the methods based on current data and predicted future workloads.
This predicted performance includes the cost of SLA violations as well as migration costs such as resource usage and migration duration"; ¶31: "the ML model will look at the size of the files that need to be transferred (migrated) to the other server, the current network bandwidth, the IOPS (IO per second) that can be used for the transfer, etc"; ¶35: "Block 402 derives an acceptable performance level based on existing SLAs. Based on this acceptable performance level, block 404 calculates slack resources"; ¶38: "System monitors 614 tracks the resources being used at each server 602/604, providing PID controller 612 and processor 606 with information regarding processor usage, memory usage, and network usage"; Fig 6, Fig 7, ¶39: "At each timestep, the current process variable 716 is compared to the desired setpoint 702. The new output of the controller 612 is determined by three component paths of the error 706, defined as the degree to which the process variable 716 differs from the setpoint 702 at comparator 704").

Applicant's disclosure generically describes, ¶20: "the resources within the tenant environment(s) may include tenant resources, tenant content and/or tenant services. For example, in the context of a SharePoint-related migration, the tenant environment may include tenant resources (e.g., SharePoint sites, SharePoint webs, Info Paths, web permissions, site groups, etc.), tenant content (e.g., SharePoint lists, SharePoint pages, workflows, etc.), tenant services (e.g., apps, teams, team channels, team tabs, team members, OneDrive users/user groups, mailboxes), etc."; ¶40: "the sites, webs, and content types discussed herein are for exemplary purposes only, and in production environments, many (e.g., thousands or more) additional resources/services and content may be mapped by the module 166 and the module 168.
The resource/services scanner module 166 and/or the content scanner module may organize the signals data into one or more data schemas reflective of the tenant environment"; ¶41: "the schemas may include respective schemas directed to a number of different signals"; ¶64: "the user may enter content migration parameters such as the size of data to migrate, a ShareGate server count (from 0-3, for example), a size of mailbox data to migrate, a number of tenant messages to migrate, etc."; ¶65: "The user may enter resource migration parameters".

Barker discloses a method/system for controlling and allocating resources for the migration of multitenant database platforms while preserving tenant service level agreements. Barker further discloses building an analytical model for each of a set of migration methods based on database characteristics; predicting performance of the set of migration methods using the respective analytical model with respect to tenant service level agreements (SLAs) and current and predicted tenant workloads; and selecting the best migration method according to the respective migration speeds and SLA violation severities. Barker also teaches using/training machine learning models by incorporating real-time data into historic data to predict the overhead cost of each migration method. Giving the broadest reasonable interpretation of the claim limitations in light of the specification, Examiner interprets the machine learning analytical model and migration methods based on various database characteristics with respect to tenant service level agreements, predicted workloads, and resource usage, including the PID controller for determining migration speed and selecting an appropriate (lower cost) migration method, as taught by Barker, as teaching applicant's content migration project parameters, resource migration parameters, services parameters, and the respective signal values.
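Barker's selection step, as characterized above (predict each migration method's overhead from system characteristics such as data size and bandwidth, then choose the lowest-cost method), reduces to an argmin over predicted costs. A sketch under that reading; the method names, rates, and the duration formula are invented for illustration, not taken from Barker:

```python
# Sketch of the method-selection logic attributed to Barker (¶31-¶32):
# estimate each migration method's overhead from data size and bandwidth,
# then pick the lowest-cost method. Names and rates are invented.

def estimate_duration_s(size_gb, bandwidth_gbps):
    """Rough transfer time in seconds (8 bits per byte)."""
    return size_gb * 8 / bandwidth_gbps

def choose_method(cost_per_second, size_gb, bandwidth_gbps):
    """cost_per_second maps method name -> overhead rate.
    Returns the (name, total_cost) pair with the lowest predicted cost."""
    duration = estimate_duration_s(size_gb, bandwidth_gbps)
    costs = {name: rate * duration for name, rate in cost_per_second.items()}
    return min(costs.items(), key=lambda kv: kv[1])

best = choose_method({"stop_and_copy": 2.0, "live": 3.0},
                     size_gb=10, bandwidth_gbps=10)
```

In Barker, the per-method cost comes from the trained analytical/ML models rather than a fixed rate table, and SLA-violation penalties enter the cost as well; the argmin structure is the same.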
aggregate, using at least one schema, the respective signal values associated with site resources, content resources, and user-specified project parameters, and apply one or more formulas specified in the schema to generate predicted costs, profits, and pricing information for the migration of the tenant environment, wherein the formulas include values output by a trained machine learning model based on historical migration data (¶7: “To maximize profits, providers wish to maximize the number of tenants on each server. Tenants wish to be guaranteed a certain level of performance, however, as specified by service level agreements (SLAs). An SLA may specify metrics of guaranteed service, such as system uptime and query latency. A provider balances these competing goals within the resources of a given multitenant server”); ¶10: “predicting performance of the set of migration methods using the respective analytical model with respect to tenant service level agreements (SLAs) and current and predicted tenant workloads, wherein said prediction includes a migration speed and an SLA violation severity”; ¶23: “information regarding client SLAs, database characteristics, current workloads, and predicted workloads are entered as input. This information characterizes the database system as it will appear during migration, such that block 104 can predict how a migration will perform according to each of a set of migration methods… Block 106 chooses a best method by selecting a method that will perform the migration in the shortest time without generating any SLA violations. If all migration methods tested in block 104 will generate SLA violations, then block 106 selects the migration method which generates the fewest violations. 
Block 106 further selects optimal migration parameters that correspond to the selected migration method… The migration method and parameters are employed in block 108 to migrate, e.g., a tenant from a first database system to a second database system”; ¶28: “Block 202 builds an analytical model for each migration method to represent the resources consumed by that model”; ¶30- ¶32; ¶30: “Block 204 uses machine learning and historic data to update the analytical models. Each model is then employed in block 206 to predict migration performance for each of the methods based on current data and predicted future workloads. This predicted performance includes the cost of SLA violations as well as migration costs such as resource usage and migration duration”; ¶31: “The machine learning (ML) models are used to predict the overhead cost of each migration method”; ¶32: “If live migration is used, then the ML model may look at the same (or somewhat different) set of characteristics of the system, make a prediction on the migration overhead, e.g., in terms of the duration of the migration and the impact of the migration process on the query latency among all tenants on the current server. Then, based on the predictions, an appropriate (lower cost) migration method is chosen”; ¶33: “ML methods have two stages: a training stage and a testing stage. During the training stage, which is usually done offline, historic data are used to learn a prediction model, such as the linear regression mode. During the testing stage, which is usually done online, the prediction is made based on real-time data and the model trained offline. According to the present principles, the predictive models are constantly updated by incorporating real-time data into historic data and by repeating the training stage of machine learning methods in real time”; Fig 3, Fig 4, ¶35: “method for allocating resources to migration processes M.sub.j is shown. 
Block 402 derives an acceptable performance level based on existing SLAs. Based on this acceptable performance level, block 404 calculates slack resources”; ¶36: “a method for controlling the migration process is shown, allowing for adaptive throttling of the migration processes in response to changing resource availability. Block 502 determines available slack resources that may be employed in migration… Block 504 uses a proportional-integral-derivative (PID) controller to determine a speed of migration based on system performance. The PID controller is used to adjust system resource consumption of the migration process in block 506 by throttling disk input/output (I/O) and network bandwidth. As slack resources become available, additional resources are allocated to migration to speed the process. As tenant workloads increase and fewer resources are available, resources are taken away from the migration process to preserve SLA guarantees”; Fig 6, Fig 7; ¶38: “System monitors 614 tracks the resources being used at each server 602/604, providing PID controller 612 and processor 606 with information regarding processor usage, memory usage, and network usage”; ¶39: “At a high level, a PID controller 612 operates as a continuous feedback loop that, at each timestep, adjusts the output such that a dependent variable (the process variable 716) converges toward a desired value (the set point value 702). At each timestep, the current process variable 716 is compared to the desired setpoint 702”; ¶40: “The PID controller 612 is in charge of determining the proper migration speed, so the output variable is used as the throttling speed adjustment (either speeding up or slowing down the migration) … The process variable 716 may therefore be set to the current average transaction latency and the setpoint 702 may be set to a target latency. 
This setpoint 702 represents an efficient use of available slack while still maintaining acceptable query performance”) Applicant’s disclosure merely states at ¶73: “multipliers, hours and cost numbers may be determined via one or more trained ML model, such as a regression analysis”; ¶93: “one or more neural networks may process historical data to generate, for example, a regression analysis of prior migrations, to determine predicted cost and/or time involved, given one or more inputs”. Examiner interprets the machine learning (ML) models for predicting overhead cost of each migration method as taught by Barker as being the same as applicant’s pricing information. Examiner also interprets the system, processor and proportional-integral-derivative (PID) controller for determining a speed (output/process variable) of migration based on system performance and adjusting system resource consumption of the migration process as taught by Barker as teaching applicant’s formula(s). Although Barker does not describe verbatim the wording of applicant’s claim limitations (content migration project parameters, the resource migration parameters, the services parameters, and the respective signal values), Barker discloses a method/system for controlling and allocating resources for the migration of multitenant database platforms while preserving tenant service level agreements. Barker further discloses building an analytical model for each of a set of migration methods based on database characteristics; predicting performance of the set of migration methods using the respective analytical model with respect to tenant service level agreements (SLAs) and current and predicted tenant workloads; and selecting the best migration method according to the respective migration speeds and SLA violation severities. Barker teaches using/training machine learning models by incorporating real-time data into historic data to predict the overhead cost of each migration method. 
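As an illustrative sketch only (not code from Barker), the PID feedback loop described in ¶39-¶40 can be rendered as follows, with the process variable as average transaction latency, the setpoint as a target latency, and the output as a throttling speed adjustment. All gains and names are hypothetical:

```python
# Hypothetical sketch of the PID throttling loop Barker describes:
# at each timestep the process variable (average transaction latency)
# is compared to the setpoint (target latency), and three component
# paths of the error (P, I, D) combine into a speed adjustment.

class PIDController:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, process_variable):
        # Error: degree to which the process variable differs from the setpoint.
        error = self.setpoint - process_variable
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # Output: throttling speed adjustment (positive speeds the
        # migration up; negative takes resources away from migration).
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDController(kp=0.5, ki=0.1, kd=0.05, setpoint=20.0)  # target latency, ms
adjustment = pid.step(process_variable=25.0)  # latency above target
print(adjustment)  # negative: slow the migration to preserve SLA guarantees
```

As tenant latency rises above the target, the output goes negative and migration is throttled back, matching the resource give-and-take Barker describes in ¶36.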
Barker also teaches that information regarding client SLAs, database characteristics, current workloads, and predicted workloads characterizes the database system as it will appear during migration. Barker additionally discloses system monitors for tracking resources being used and a PID controller for adjusting system resource consumption and determining migration speed based on system performance. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of applicant’s invention to use the machine learning models, migration techniques and analytical models for the migration of multitenant database platforms as taught by Barker for monitoring system performance, tracking resources, and selecting optimal migration parameters based on database characteristics and workloads to predict migration performance as well as migration costs. The known migration techniques/methods of Barker would have predictably resulted in a minimum of downtime, controlled tenant interference, and automatic management of low-cost migration for multitenant database systems (Figs 1-7, ¶21, ¶23, ¶28, ¶30-¶34, ¶38-¶40). Barker discloses all of the above limitations; Barker does not distinctly describe the following limitations, but Bar-Or, however, as shown, discloses, initiate, by operation of a scanner module, a crawl of the tenant computing environment to discover resources and content items associated with sites, webs, lists, pages, users, and mailboxes; and for each type of resource or content item corresponding to a schema, generate and store respective signal values including at least one of usage count, storage amount, permissions, or creation/modification timestamp, as set forth in the schema definitions (Fig 6, ¶96: “A signal schema is a specific type of template used to transform data into signals. Different types of schemas may be used, depending on the nature of the data, the domain, and/or the business environment. 
Initial signal discovery could fall into one or more of a variety of problem classes (e.g., regression classification, clustering, forecasting, optimization, simulation, sparse data inference, anomaly detection, natural language processing, intelligent data design, etc.)”; Fig 5, ¶97: “The data models and schemas are stored along with the code and can be governed and maintained using modern software lifecycle tools. Typically, at the beginning of a Signal Hub project, the Workbench 70 is used by data scientists for profiling and schema discovery of unfamiliar data sources. Signal Hub provides tools that can discover schema (e.g., data types and column names) from a flat file or a database table. It also has built-in profiling tools, which automatically compute various statistics on each column of the data such as missing values, distribution parameters, frequent items, and more. These built-in tools accelerate the initial data load and quality checks”; Fig 9, ¶112: “Signal Hub 304 integrates seamlessly with a variety of front-end systems 322 (e.g., use-case specific apps, business intelligence, customer relationship management (CRM) system, content management system, campaign execution engine, etc.) …Data is transferred in batches, written to a special data landing zone, or accessed on-demand via APIs (application programming interfaces). Signal Hub 304 could also integrate with existing analytic tools, pre-existing code, and models”; ¶122: “The Knowledge Center allows for the intelligence (e.g., signals) to be accessed and explored across use cases and teams throughout the enterprise. 
Whenever a new use case needs to be implemented, the Knowledge Center enables relevant signals to be reused so that their intrinsic value naturally flows toward the making of a new analytic solution that drives business value”; ¶124: “Key components of the metadata in each signal include the business description, which explains what the signal is (e.g., number of times a customer sat in the middle seat on a long-haul flight in the past three years). Another key component of the metadata in each signal is the taxonomy, which shows each signal's classification based on its subject, object, relationship, time window, and business attributes (e.g., subject=customer, object=flight, relationship=count, time window=single period, and business attributes=long haul and middle seat)”; Fig 21F, ¶132: “Users can comment on a signal via Knowledge Center user interface directly to express interest on a signal, propose potential use case for the signal, or validate the signal value. The Signal Hub platform allows users to interact with each other and exchange ideas. FIG. 21F is a screenshot generated by the system which illustrates the charts that could be generated by the system. The charts could be a representation of a signal or multiple signals”) cause the costs, profits and pricing information to be displayed on a display device (¶76: “A multi target system data flow compiler can generate code to deploy on different target data flow engines utilizing different computer environments, languages, and frameworks. For applications with hard return on investment (ROI) metrics (e.g., churn reduction), faster time to value can equate to millions of dollars earned. Additionally, the system could lower development costs as data science project timelines potentially shrink, such as from 1 year to 3 months (e.g., a 75% improvement) … the system could reduce the total costs of ownership (TCO) for big data analytics”; FIG. 
21B, ¶131: “Signal Hub platform 600 can schedule the business report at regular basis (e.g., daily, weekly, monthly, etc.) using a reporting tool 630 to gain recurring insights or export the filtered data to external systems (e.g., CSV file into client's campaign execution engine). The system of the present disclosure can also include a reporting tool implemented in a Hadoop environment. The user can generate a report and query various reports. Further, the user can query a single signal table and view the result in real-time”; ¶132: “The Signal Hub platform can display model description, metadata, input signal, output column, etc. all in one centralized page for each model. FIG. 21E also illustrates a user interface screen generated by the system for commenting signals using the Knowledge Center 600 generated by the system... FIG. 21F is a screenshot generated by the system which illustrates the charts that could be generated by the system. The charts could be a representation of a signal or multiple signals”; ¶138: “FIG. 29A is a screenshot illustrating a solution dependency diagram 750 of the Workbench 700 generated by the system”; Fig 30A, ¶145: “Signal Hub manager 800 facilitates easy viewing and management of signals, signal sets, and models. The management console allows for the creation of custom dashboards and charting, and the ability to drill into real time data and real time charting for a continuous process… the chart area 804 could provide one or more tabs related to performance, invocation history, data result, and configuration. 
The data result tab could include information such as data, data quality, measure, PMML, and graphs”; Fig 46: “Total Signals Possibilities”, “Attribute”, “Total Sales”, “Net Sales”, “Revenue”, “Time Frame”, “Day of Week”, “Time of Day”) Applicant’s disclosure only vaguely describes at ¶70: “The financial calculator may include computations for each project type, based on total hours, and taking into account resource costs, travel costs, and total costs. On this basis the financial calculator may display to the user a total price, a gross profit and a profit margin percentage. The financial calculator may provide profit estimates for billed time and material inclusive or exclusive of T&E, and fixed fee inclusive/exclusive of a risk factor and T&E, in some aspects. In this way, the user can quickly compare which billing strategy may provide the best financial upside, and/or best limit costs”. Bar-Or teaches a method/system for management of shared datasets in a collaborative data processing system including data files, data versioning and code files. Bar-Or further teaches a Signal Hub tool including a Workbench (along with a Knowledge Center) for aggregating an analytic modeling process from data to signals and for the coding and development of data schemes, quality management processes, views, descriptive and predictive signals, model validation and visualization, and maintenance of staging, input, output data models. Bar-Or discloses that models and schemas can be developed within the Workbench or imported from third-party data modeling tools. The Signal Hub manager of Bar-Or facilitates easy viewing and management of signals, signal sets, and models via a reporting tool. The management console allows for the creation of custom dashboards and charting, and the ability to drill into real time data and real time charting for a continuous process. 
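The ¶70 financial calculator described above reduces to straightforward arithmetic. A minimal sketch under assumed inputs follows; the markup factor, rates, figures, and all names are hypothetical and not taken from applicant's disclosure:

```python
# Hypothetical sketch of the ¶70 financial calculator: total cost from
# hours plus resource and travel costs, then total price, gross profit,
# and profit margin percentage.

def financial_summary(total_hours, hourly_rate, resource_costs, travel_costs, markup=1.3):
    total_cost = total_hours * hourly_rate + resource_costs + travel_costs
    total_price = total_cost * markup            # billed price with assumed markup
    gross_profit = total_price - total_cost
    margin_pct = 100.0 * gross_profit / total_price
    return total_price, gross_profit, margin_pct

price, profit, margin = financial_summary(
    total_hours=400, hourly_rate=150.0, resource_costs=5_000.0, travel_costs=2_000.0
)
print(round(price, 2), round(profit, 2), round(margin, 1))
```

Running the same computation with and without a risk factor or T&E, as ¶70 contemplates, would let a user compare billing strategies side by side.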
Barker and Bar-Or are directed to the same field of endeavor since they are related to data modeling and analysis in a computing environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the machine learning models, migration techniques and analytical models for the migration of multitenant database platforms of Barker with the Signal Hub tools and platform as taught by Bar-Or since it allows for schema and signal discovery, gaining recurring insights, shorter development cycles, lowering development costs, as well as generating and querying various business reports, charts and/or signal tables in real time via a user interface (¶76, ¶96, ¶122, ¶124, ¶131, ¶132, ¶138, ¶145, Figs 5, 6, 9, 21B, 21E, 21F, 29A, 30A, 46). With respect to claims 2 and 16, Barker and Bar-Or disclose all of the above limitations, Bar-Or further discloses, process the one or more respective signal values using one or more formulas embedded in the schemas to determine dynamic respective signal values (Fig 5, ¶97: “The data models and schemas are stored along with the code and can be governed and maintained using modern software lifecycle tools. Typically, at the beginning of a Signal Hub project, the Workbench 70 is used by data scientists for profiling and schema discovery of unfamiliar data sources. Signal Hub provides tools that can discover schema (e.g., data types and column names) from a flat file or a database table. It also has built-in profiling tools, which automatically compute various statistics on each column of the data such as missing values, distribution parameters, frequent items, and more. These built-in tools accelerate the initial data load and quality check”; Fig 9, ¶113: “The Workbench 306 could include a workflow to process signals that includes loading 330, data ingestion and preparation 332, descriptive signal generation 336, use case building 338, and sending 340. 
In the loading step 330, source data is loaded into the Workbench 306 in any of a variety of formats (e.g., SFTP, JDBC, Sqoop, Flume, etc.). In the data ingestion and preparation step 332, the Workbench 306 provides the ability to process a variety of big data (e.g., internal, external, structured, unstructured, etc.) in a variety of ways (e.g., delta processing, profiling, visualizations, ETL, DQM, workflow management, etc.). In the descriptive signal generation step 334, a variety of descriptive signals could be generated (e.g., mathematical transformations, time series, distributions, pattern detection, etc.). In the predictive signal generation step 336, a variety of predictive signals could be generated (e.g., linear regression, logistic regression, decision tree, Naïve Bayes, PCA, SVM, deep autoencoder, etc.). In the use case building step 338, uses cases could be created (e.g., reporting, rules engine, workflow creator, visualizations, etc.). In the sending step 340, the Workbench 306 electronically transmits the output to downstream connectors (e.g., APIs, SQL, batch file transfer, etc.)”; Fig 11-17, ¶115: “The Workbench provides direct access to the Signal API, which speeds up development and simplifies (e.g., reduce errors in) signal creation (e.g., descriptive signals). The Signal API provides an ever-growing set of mathematical transformations that will allow for the creation of powerful descriptive signals, along with a syntax that is clear, concise, and expressive”; ¶133: “FIG. 22 is a screenshot illustrating a user interface screen generated by the system for visualizing signal parts of a signal using the Knowledge Center 600 generated by the system. 
Shown is a table showing various signals of a signal set… Signal Hub platform 600 can display the top-level diagram 650, the definition level diagram 652, the predecessors 654, raw data 656, consumers 658, definition 660, schema 62, and metadata 664 and stats… the table could include a column 670 of names of the signals within the signal set (e.g., within signal set “signals. signals_pos_txn_mst.sub.'04_app”), as well as the formula 672, and what the signal is defined in 674”) Barker and Bar-Or are directed to the same field of endeavor since they are related to data modeling and analysis in a computing environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the machine learning models, migration techniques and analytical models for the migration of multitenant database platforms of Barker with the Signal Hub tools and platform as taught by Bar-Or since it allows for profiling, signal creation and schema discovery of unfamiliar data sources via the Workbench and Signal API (¶97, ¶113, ¶133, Figs 5, 9, 11-17). With respect to claims 4, 11 and 18, Barker and Bar-Or disclose all of the above limitations, Bar-Or further discloses, generate a schema corresponding to the discovered sub-sites; and store the signals in the schema (¶55: “FIG. 29A is a screenshot illustrating a solution dependency diagram of the integrated development environment generated by the system”; Fig 4A, ¶89: “Signal Hub components include signal sets, ETL processing, dataflow engine, signal-generating components (e.g., signal-generation processes), APIs, centralized security, model execution, and model monitoring. Signals are hierarchical, such that within Signal Hub 60, a signal array might include simple signals that can be used by themselves to predict behavior (e.g., customer behavior powering a recommendation) and/or can be used as inputs into more sophisticated predictive models. 
These models, in turn, could generate second-order, highly refined signals, which could serve as inputs to business-process decision points”; ¶96: “A signal schema is a specific type of template used to transform data into signals. Different types of schema may be used, depending on the nature of the data, the domain, and/or the business environment. Initial signal discovery could fall into one or more of a variety of problem classes (e.g., regression classification, clustering, forecasting, optimization, simulation, sparse data inference, anomaly detection, natural language processing, intelligent data design, etc.).”; ¶97: “The data models and schemas are stored along with the code and can be governed and maintained using modern software lifecycle tools. Typically, at the beginning of a Signal Hub project, the Workbench 70 is used by data scientists for profiling and schema discovery of unfamiliar data sources. Signal Hub provides tools that can discover schema (e.g., data types and column names) from a flat file or a database table”; ¶100: “FIG. 6 is a diagram 90 illustrating use cases (e.g., outputs, signals, etc.) of the system. There could be multiple signal libraries, each with subcategories for better navigation and signal searching. For example, as shown, the Signal Hub could include a Customer Management signal library 92. Within the Customer Management Signal Library 92 are subcategories for Flight 94, Frequent Flyer Program 96, Partner 98, and Ancillary 99”; ¶112: “Signal Hub 304 integrates seamlessly with a variety of front-end systems 322 (e.g., use-case specific apps, business intelligence, customer relationship management (CRM) system, content management system, campaign execution engine, etc.) …written to a special data landing zone, or accessed on-demand via APIs (application programming interfaces). 
Signal Hub 304 could also integrate with existing analytic tools, pre-existing code, and models”; ¶113: “In the sending step 340, the Workbench 306 electronically transmits the output to downstream connectors (e.g., APIs, SQL, batch file transfer, etc.)”; ¶115: “FIGS. 11-17 are screenshots illustrating use of the Signal Hub platform to create descriptive signals… The Signal API provides an ever-growing set of mathematical transformations that will allow for the creation of powerful descriptive signals, along with a syntax that is clear, concise, and expressive”) Barker and Bar-Or are directed to the same field of endeavor since they are related to data modeling and analysis in a computing environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the machine learning models, migration techniques and analytical models for the migration of multitenant database platforms of Barker with the Signal Hub tools and platform as taught by Bar-Or since it allows for profiling, signal creation and schema discovery of unfamiliar data sources via the Workbench and Signal API (Figs 4A, 4B, 5-9, Fig 29A, ¶55, ¶89, ¶96, ¶97, ¶100, ¶112-¶115, ¶133). With respect to claims 5 and 12, Barker and Bar-Or disclose all of the above limitations, Barker further discloses, scan the tenant environment to identify at least one of a page, a workflow, an info path, a web permission, a site group, a team, a team channel, a team member, a OneDrive installation, a user, a group, a group member, a mail message, an environment, a data verse, a flow, a flow connection, a power application, a power application connection, a capacity, a license or a business intelligence workspace (¶14: “FIG. 
1 is a block/flow diagram for migrating a database in a multitenant system according to the present principles”; ¶21: “By taking query latency into account and monitoring system performance in real-time, system performance can be guaranteed according to tenant service level agreements (SLAs) even during the live migration”; ¶23: “Block 106 chooses a best method by selecting a method that will perform the migration in the shortest time without generating any SLA violations… Block 106 further selects optimal migration parameters that correspond to the selected migration method. The migration method and parameters are employed in block 108 to migrate, e.g., a tenant from a first database system to a second database system”; ¶30: “Block 204 uses machine learning and historic data to update the analytical models. Each model is then employed in block 206 to predict migration performance for each of the methods based on current data and predicted future workloads. This predicted performance includes the cost of SLA violations as well as migration costs such as resource usage and migration duration”; Fig 4, ¶35: “Block 402 derives an acceptable performance level based on existing SLAs… Block 408 monitors the system during migration to track changes in tenant workloads T.sub.i. Block 410 adjusts the slack resources accordingly--as tenant workloads increase, slack resources will decrease and vice versa”; Fig 6, ¶38: “System monitors 614 tracks the resources being used at each server 602/604, providing PID controller 612 and processor 606 with information regarding processor usage, memory usage, and network usage”) With respect to claims 6 and 19, Barker and Bar-Or disclose all of the above limitations, Barker further discloses, train the machine learning model by processing labeled historical migration log files (¶30: “Block 204 uses machine learning and historic data to update the analytical models. 
Each model is then employed in block 206 to predict migration performance for each of the methods based on current data and predicted future workloads. This predicted performance includes the cost of SLA violations as well as migration costs such as resource usage and migration duration”; ¶33: “the predictive models are constantly updated by incorporating real-time data into historic data and by repeating the training stage of machine learning methods in real time. In other words, the ML model is updated (trained again) whenever new data is available”; ¶34: “FIG. 3, a method for live migration is provided that allows maintenance of SLA guarantees. Block 302 designates the database or databases to be migrated and selects a target server for the migration(s). Block 304 starts a hot backup of the databases to be migrated, creating a snapshot of each database. Each snapshot is transferred to its respective target server, and block 306 creates a new database from each snapshot at its respective target server. Block 308 transfers the query log that accumulated during the hot backup to the target server and replays the query log to synchronize the new database with the state of the old database”) With respect to claims 7, 14 and 20, Barker and Bar-Or disclose all of the above limitations, Bar-Or further discloses, generate one or more visualizations corresponding to the respective signal values; and cause the visualizations to be displayed on a display device (¶44: “FIG. 23B is a screenshot illustrating a user interface screen generated by the system for displaying signal values, statistics and visualization of signal value distribution”) Barker and Bar-Or are directed to the same field of endeavor since they are related to data modeling and analysis in a computing environment. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the machine learning models, migration techniques and analytical models for the migration of multitenant database platforms of Barker with the Signal Hub tools and platform as taught by Bar-Or since it allows for providing a user interface screen for displaying signal values and visualization of signal value distribution (Fig 23B, ¶44). With respect to claim 9, Barker and Bar-Or disclose all of the above limitations, Bar-Or further discloses, wherein processing the content migration project parameters, the resource migration parameters, the services parameters, and the respective signal values to determine costs, profits and pricing information corresponding to the migration of the tenant environment includes processing the one or more respective signal values using one or more formulas embedded in the schemas to determine dynamic respective signal values (Fig 5, ¶96: “A signal schema is a specific type of template used to transform data into signals. Different types of schemas may be used, depending on the nature of the data, the domain, and/or the business environment. Initial signal discovery could fall into one or more of a variety of problem classes”; ¶97; Fig 9, ¶113: “The Workbench 306 could include a workflow to process signals that includes loading 330, data ingestion and preparation 332, descriptive signal generation 336, use case building 338, and sending 340. In the loading step 330, source data is loaded into the Workbench 306 in any of a variety of formats (e.g., SFTP, JDBC, Sqoop, Flume, etc.). In the data ingestion and preparation step 332, the Workbench 306 provides the ability to process a variety of big data (e.g., internal, external, structured, unstructured, etc.) in a variety of ways (e.g., delta processing, profiling, visualizations, ETL, DQM, workflow management, etc.). 
In the descriptive signal generation step 334, a variety of descriptive signals could be generated (e.g., mathematical transformations, time series, distributions, pattern detection, etc.)”; Figs 11-17, ¶115: “The Signal API provides an ever-growing set of mathematical transformations that will allow for the creation of powerful descriptive signals, along with a syntax that is clear, concise, and expressive”; Fig 22, ¶133: “FIG. 22 is a screenshot illustrating a user interface screen generated by the system for visualizing signal parts of a signal using the Knowledge Center 600 generated by the system. Shown is a table showing various signals of a signal set... The Signal Hub platform 600 can display the top-level diagram 650, the definition level diagram 652, the predecessors 654, raw data 656, consumers 658, definition 660, schema 62, and metadata 664 and stats… the table could include a column 670 of names of the signals within the signal set (e.g., within signal set “signals. signals_pos_txn_mst.sub.'04_app”), as well as the formula 672, and what the signal is defined in 674”) Barker and Bar-Or are directed to the same field of endeavor since they are related to data modeling and analysis in a computing environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the machine learning models, migration techniques and analytical models for the migration of multitenant database platforms of Barker with the Signal Hub tools and platform as taught by Bar-Or since it allows for generating a variety of descriptive signals (e.g., mathematical transformations, time series, distributions, pattern detection, etc.) along with a syntax that is clear, concise and expressive (Figs 4A, 4B, 5, 9, 11-17, 22, ¶96, ¶97, ¶113, ¶115, ¶133). 
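As a hypothetical illustration of "formulas embedded in the schemas" producing dynamic signal values (the schema fields, weights, and all names below are invented for illustration, not drawn from Bar-Or or applicant's disclosure):

```python
# Hypothetical sketch: a schema pairs declared raw-signal fields with an
# embedded formula, which is evaluated to yield a dynamic signal value.

schema = {
    "fields": ["usage_count", "storage_gb"],
    # Embedded formula: weight storage by usage to score migration effort.
    "formula": lambda signals: signals["usage_count"] * 0.1 + signals["storage_gb"] * 2.0,
}

def dynamic_signal_value(schema, raw_signals):
    # Keep only the fields the schema declares, then apply its formula.
    selected = {f: raw_signals[f] for f in schema["fields"]}
    return schema["formula"](selected)

raw = {"usage_count": 120, "storage_gb": 15.0, "unrelated": 99}
print(dynamic_signal_value(schema, raw))
```

The same pattern extends naturally: scanned signal values flow in, and each schema's formula yields a derived value that downstream cost/profit computations can consume.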
With respect to claim 13, Barker and Bar-Or disclose all of the above limitations. Barker further discloses wherein processing the content migration project parameters, the resource migration parameters, the services parameters, and the respective signal values to determine costs, profits and pricing information corresponding to the migration of the tenant environment includes training the machine learning model by processing labeled historical migration log files (¶30: "Block 204 uses machine learning and historic data to update the analytical models. Each model is then employed in block 206 to predict migration performance for each of the methods based on current data and predicted future workloads. This predicted performance includes the cost of SLA violations as well as migration costs such as resource usage and migration duration"; ¶33: "the predictive models are constantly updated by incorporating real-time data into historic data and by repeating the training stage of machine learning methods in real time. In other words, the ML model is updated (trained again) whenever new data is available"; ¶34: "FIG. 3, a method for live migration is provided that allows maintenance of SLA guarantees. Block 302 designates the database or databases to be migrated and selects a target server for the migration(s). Block 304 starts a hot backup of the databases to be migrated, creating a snapshot of each database. Each snapshot is transferred to its respective target server, and block 306 creates a new database from each snapshot at its respective target server. Block 308 transfers the query log that accumulated during the hot backup to the target server and replays the query log to synchronize the new database with the state of the old database").

Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Barker and Bar-Or et al., and further in view of Gafton et al., WO 2017/059324 A1.
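For illustration only (invented for this summary; not Barker's actual implementation), the live-migration sequence quoted from Barker ¶34 amounts to: take a point-in-time snapshot, build the target database from it, then replay the query log accumulated during the backup so the target catches up to the source. Using in-memory lists as stand-in "databases":

```python
# Hypothetical sketch of hot backup + query-log replay (Barker ¶34).
# The data structures and operation names are invented for illustration.

def migrate_live(source_rows, queries_during_backup):
    """Snapshot the source, create the target from the snapshot,
    then replay the accumulated query log against the target."""
    snapshot = list(source_rows)            # hot backup: point-in-time copy
    target = list(snapshot)                 # new database from the snapshot
    for op, row in queries_during_backup:   # replay the query log
        if op == "insert":
            target.append(row)
        elif op == "delete":
            target.remove(row)
    return target

if __name__ == "__main__":
    rows = ["a", "b"]
    log = [("insert", "c"), ("delete", "a")]
    print(migrate_live(rows, log))
```

The replay step is what preserves consistency here: writes that land while the backup is in flight are not lost, they are reapplied at the target before cutover.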
With respect to claims 3, 10 and 17, Barker and Bar-Or disclose all of the above limitations. The combination of Barker and Bar-Or does not distinctly describe the following limitations, but Gafton discloses, wherein the tenant environment includes a plurality of networked sites, and the memory having stored thereon instructions that, when executed by the one or more processors, cause the system to: crawl a root site or entry point uniform resource locator of a site within the tenant environment to discover one or more associated sub-sites or linked resources; and generate one or more respective signals corresponding to each discovered sub-site or resource (Fig. 1, ¶26: "configuration information by the agent may include identification of one or more of: software packages installed on the computer system, processes running on the computer system, type of server running on the computer system, type of operating system on the computer system, source entities for network communications received at the computer system, destination entities for network communications sent from the computer system, or performance of a process running on the computer system. In some embodiments, configuration information may include network information, performance information, component health information and/or dependency information"; ¶35: "the agents gather discovery information and report it back to the discovery service through the connector. The connector may be configured with long-term credentials to the service, receives discovery data from the agents, and aggregates and pushes data to the discovery service, in embodiments. The aggregation may be performed by the agent, by the connector, or by the service"; Fig. 8, ¶283: "agents are configured to gather information (e.g., about the customer's various resources such as virtual machines from their virtual infrastructure management applications, their networking equipment (i.e. firewalls, routers, etc.), their storage arrays, databases, and more. In at least the illustrated embodiment, the connector(s) may aggregate the discovery data and send the discovery data to the discovery service (block 522)"; ¶286: "A client's discovery data is received (e.g., from a connector, directly from agents, and/or from other tools) (block 612). For example, configuration data for resources operating on behalf of the customer or on the customer's data center may be gathered by a discovery connector and sent to the discovery service where the data is stored in a database, or may be sent directly to the database from the agents"; ¶287: "discovery service 100 may analyze the client's data from the database 120 and determine configurations and dependencies of the customer's resources. In some embodiments, the data may be analyzed to determine, hierarchical structures, grouping or layers (e.g., identify server layers, logging server layers) of components"; ¶300: "when a large company wants to migrate a Microsoft SharePoint application to a cloud-based service provider, they engage with a Solutions Architect. As the first step, the Solutions Architect would setup the discovery service in their datacenter"; ¶301: "the Solutions Architect can observe that SharePoint is dependent on a Microsoft SQL Server and Microsoft IIS Server in their network. In addition to these application dependencies, the SharePoint application depends on infrastructure services like a DHCP server, a DNS server, a Microsoft Active Directory (AD) server, and a Log Server. Armed with this information, the Solutions Architect can create a migration plan for the large company to migrate the SharePoint application").

Applicant's disclosure teaches, ¶18: "the tenant scanner (also referred to herein interchangeably as a crawler, spider or mapper) may discover capabilities of Microsoft applications including SharePoint sites"; ¶25: "the tenant computing device 102a may be one of several tenant computing devices owned/leased by the company, each comprising a hosted Microsoft SharePoint site that services yet further customers."

Gafton discloses a network-based client discovery service that provides customers with a discovery platform for collecting, storing, and analyzing their enterprise assets in a cloud-based and/or on-premises datacenter environment by observing configurations and dependencies, recording the findings of Microsoft applications including SharePoint sites in a database, and keeping the database up-to-date with ongoing changes. Giving applicant's claim limitation its broadest reasonable interpretation in light of the disclosure, the Examiner interprets the network-based client discovery service for analyzing, observing configurations and dependencies, and recording the findings of Microsoft applications, as taught by Gafton, as teaching applicant's crawling step. Barker, Bar-Or and Gafton are directed to the same field of endeavor, since they are related to data modeling and analysis in a computing environment.
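For illustration only (invented for this summary; neither applicant's nor Gafton's actual code), the claimed crawling step can be read as a breadth-first walk from a root-site URL, emitting one signal per discovered site. A toy link graph stands in for the tenant environment; every URL and field name below is hypothetical.

```python
# Hypothetical sketch of crawling a root/entry-point URL to discover
# sub-sites and generate a signal per discovered resource.

from collections import deque

# Toy site map standing in for a tenant environment (invented URLs).
SITE_LINKS = {
    "https://tenant.example/root": ["https://tenant.example/root/hr",
                                    "https://tenant.example/root/eng"],
    "https://tenant.example/root/hr": [],
    "https://tenant.example/root/eng": ["https://tenant.example/root/eng/wiki"],
    "https://tenant.example/root/eng/wiki": [],
}

def crawl(entry_url, links):
    """Breadth-first discovery of sub-sites reachable from the entry point,
    returning one signal record per discovered site."""
    seen, queue, signals = {entry_url}, deque([entry_url]), []
    while queue:
        url = queue.popleft()
        signals.append({"site": url, "sub_site_count": len(links[url])})
        for child in links[url]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return signals

if __name__ == "__main__":
    for sig in crawl("https://tenant.example/root", SITE_LINKS):
        print(sig)
```

Note the contrast at issue in the rejection: this sketch follows links from an entry URL, whereas Gafton's service infers the inventory from agent-reported configuration and dependency data rather than by traversing URLs.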
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the machine learning models, migration techniques and analytical models for the migration of multitenant database platforms of Barker and the Signal Hub tools and platform of Bar-Or with the network-based client discovery service as taught by Gafton, since doing so simplifies the task of migrating workloads by identifying some or all of the resources that power a client's application servers, databases, and file shares, and by tracking configuration and performance changes throughout the migration process (Figs. 1, 5-8; ¶14, ¶26, ¶35, ¶283, ¶286, ¶287, ¶300).

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Luft et al., US Patent Application Publication No. US 2015/0195141, "Apparatus and Method for Data Center Migration," relating to an apparatus and method for data center migration; and Garrett, US Patent Application Publication No. US 2024/0118822 A1, "Migration Management System and Method," relating to migration systems that make predictions concerning the difficulty of a migration.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry of a general nature or relating to the status of this application, or concerning this communication or earlier communications from the Examiner, should be directed to Kimberly L. Evans, whose telephone number is 571.270.3929. The Examiner can normally be reached Monday-Friday, 9:30 am-5:00 pm. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Lynda Jasmin, can be reached at 571.272.6782.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal/pair <http://pair-direct.uspto.gov>. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866.217.9197 (toll-free).

Any response to this action should be mailed to: Commissioner of Patents and Trademarks, P.O. Box 1450, Alexandria, VA 22313-1450, or faxed to 571-273-8300. Hand-delivered responses should be brought to the United States Patent and Trademark Office Customer Service Window: Randolph Building, 401 Dulany Street, Alexandria, VA 22314.
/KIMBERLY L EVANS/
Examiner, Art Unit 3629

/NATHAN C UBER/
Supervisory Patent Examiner, Art Unit 3626

Prosecution Timeline

Nov 08, 2022
Application Filed
May 31, 2025
Non-Final Rejection — §101, §103, §112
Aug 25, 2025
Interview Requested
Sep 23, 2025
Applicant Interview (Telephonic)
Sep 30, 2025
Examiner Interview Summary
Nov 04, 2025
Response Filed
Feb 07, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602661
SYSTEM FOR SEARCHING AND CORRELATING ONLINE ACTIVITY WITH INDIVIDUAL CLASSIFICATION FACTORS
2y 5m to grant Granted Apr 14, 2026
Patent 12277615
DETECTING AND VALIDATING IMPROPER RESIDENCY STATUS THROUGH DATA MINING, NATURAL LANGUAGE PROCESSING, AND MACHINE LEARNING
2y 5m to grant Granted Apr 15, 2025
Patent 12118558
ESTIMATING QUANTILE VALUES FOR REDUCED MEMORY AND/OR STORAGE UTILIZATION AND FASTER PROCESSING TIME IN FRAUD DETECTION SYSTEMS
2y 5m to grant Granted Oct 15, 2024
Patent 12056745
Machine-Learning Driven Data Analysis and Reminders
2y 5m to grant Granted Aug 06, 2024
Patent 11990213
METHODS AND SYSTEMS FOR VISUALIZING PATIENT POPULATION DATA
2y 5m to grant Granted May 21, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
12%
Grant Probability
26%
With Interview (+13.4%)
7y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 362 resolved cases by this examiner. Grant probability derived from career allow rate.
