Prosecution Insights
Last updated: April 19, 2026

Application No. 15/707,135
Method and Apparatus for Cloud Based Predictive Service Scheduling and Evaluation

Status: Non-Final OA (§101, §103, §112)
Filed: Sep 18, 2017
Examiner: SANTOS-DIAZ, MARIA C
Art Unit: 3629
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Tyco Fire & Security GmbH
OA Round: 12 (Non-Final)

Grant Probability: 33% (At Risk)
Projected OA Rounds: 12-13
Projected Time to Grant: 4y 3m
Grant Probability with Interview: 63%

Examiner Intelligence

Career Allow Rate: 33% (97 granted / 291 resolved; -18.7% vs TC avg)
Interview Lift: +30.0% higher allowance on resolved cases with interview (strong)
Avg Prosecution: 4y 3m typical timeline; 35 applications currently pending
Total Applications: 326 across all art units (career history)

Statute-Specific Performance

§101: 26.3% (-13.7% vs TC avg)
§103: 27.8% (-12.2% vs TC avg)
§102: 21.7% (-18.3% vs TC avg)
§112: 22.3% (-17.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 291 resolved cases.
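The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming hypothetical values where the page reports only percentages (the Tech Center average and the with/without-interview split are assumptions chosen to reproduce the reported deltas):

# Sketch of how dashboard figures like these are typically derived.
granted, resolved = 97, 291                            # from the page
career_allow_rate = granted / resolved                 # ~33% "Career Allow Rate"

tc_avg = 0.52                                          # assumed TC 3600 average
print(f"vs TC avg: {career_allow_rate - tc_avg:+.1%}") # ~-18.7%

with_interview, without_interview = 0.63, 0.33         # assumed split
print(f"interview lift: {with_interview - without_interview:+.1%}")  # +30.0%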

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.

Status of the Application

This is a Non-Final Action in response to the Remarks and Amendments as submitted on 12/16/2025. Claims 1, 11 and 21 are amended. No claims are new. Claims 1-5, 7-8, 10-15, 17-18 and 20-31 are examined below.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-5, 7-8, 10-15, 17-18 and 20-31 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Amended claims 1, 11 and 21, together with claims 29 and 31, recite the following new and previously presented limitations:

"generating, by the service data aggregator, validation information…wherein the validation information further includes technician performance metrics comprising changes in service duration trends... for a particular technician over time";

"correlating, for inspection or testing services, control-panel device events received during an inspection of a particular device with contemporaneous local service data for that device to confirm that the service was actually performed";

"correlating, during inspections or testing of a particular device, control-panel device events indicating activation of the device with contemporaneous local service data identifying the inspection result for that device to validate actual service completion";

"at least one of setting a review and auditing flag when the device events and local service data do not correspond within a defined inspection time window";
"wherein the validation information further includes identification of technician-specific performance patterns over time comprising changes in service duration trends and completion rates for a particular technician"; and

"wherein the validation information includes a quality scoring system for services completed by each technician, the quality scores being based on historical completion data and comprising factors such as error rates, repeated service needs, and time to resolution."

Regarding "generating, by the service data aggregator, validation information…wherein the validation information further includes technician performance metrics comprising changes in service duration trends... for a particular technician over time" and "wherein the validation information further includes identification of technician-specific performance patterns over time comprising changes in service duration trends and completion rates for a particular technician," the examiner notes that there is no analysis or determination of service duration trends in the originally filed specification. The originally filed specification generically discloses in paragraph [068] a comparison of the average time for performing a service by a technician against the average time for performing the service for all technicians. Although the specification does generically disclose a technician performance metric (i.e., time for performing a service) compared against the average for all other technicians performing the same service, the specification does not provide the level of detail recited in the limitation. That is, no service duration trend is established, determined or analyzed in order to generate validation information including technician performance metrics comprising changes in service duration trends for a particular technician over time. At most, the specification provides support for generically determining the service time of a technician and an aggregated service time of a technician, but no trend is disclosed.

Regarding the validation information, the originally filed specification discloses:

[0022] The service data aggregator can also evaluate technicians. For example, if the technicians are completing certain tasks too quickly or slowly, additional training or oversight can be recommended. Additionally, if the tasks are being completed too quickly, the service data aggregator can flag the tasks for review to determine whether the service was actually completed and/or if the work needs to be repeated. This service data becomes part of an official record that service customers can use to validate services have actually been performed (for example, in response to an audit).

[0027] In yet another example, the validation information is based on whether average durations of particular types of services performed by a particular technician match average durations of the same types of services performed by all other technicians, and the validation information includes whether further review of the service is indicated and/or additional training or oversight of the particular technician is indicated.

[058] Based on the predicted service intervals, the service data aggregator 107 also generates service alerts.
The validation information is based on whether average durations of particular types of services performed by a particular technician 150 match the average durations of the same types of services performed by other technicians 150, and includes whether further review of services performed is indicated and/or whether additional training or oversight of the particular technician 150 providing the services is indicated.

[0068] In the illustrated example, the service event "service event 5" indicates that the device "D0004", which is a smoke detector in the fire alarm system "Fire1", located in the building 103 "building1", was cleaned by the technician 150 "technician1", who is a contractor working for "company1". The cleaning was performed starting at "time9" and ending at "time10", the difference of which could indicate the duration of the cleaning. In this example, the service data aggregator 107 could compare the duration of the cleaning to the duration of all other cleanings to validate whether the service was actually performed. At a higher level, the service data aggregator 107 could compare the duration of all cleanings by "technician1" to the duration of all cleanings by all other technicians 150 to determine whether "technician1" requires further training and/or oversight. At an even higher level, the service data aggregator 107 could compare the average duration of all cleanings by technicians 150 who are contractors to the average duration of all cleanings by technicians 150 who are employees of the entity providing service to evaluate the effectiveness and/or efficiency of contractors providing service.

[085] Additionally, in step 614, if the average durations for the particular technician 150 are shorter than those for all technicians 150, the service data aggregator 107 sets another flag for a review and/or audit of the services performed by the particular technician 150 whose durations were shorter than the average for all technicians 150.

Regarding "correlating, for inspection or testing services, control-panel device events received during an inspection of a particular device with contemporaneous local service data for that device to confirm that the service was actually performed" and "correlating, during inspections or testing of a particular device, control-panel device events indicating activation of the device with contemporaneous local service data identifying the inspection result for that device to validate actual service completion," the originally filed specification does not appear to disclose the level of detail added to the claims. Regarding the comparison of service durations, the originally filed specification discloses in [068] only the comparisons quoted above: the duration of a cleaning against the duration of all other cleanings, one technician's cleanings against those of all other technicians, and contractors' cleanings against employees' cleanings.
That is, the originally filed specification merely provides support for comparing or correlating the time a technician spends cleaning or servicing a device with the duration of all other cleanings to validate whether the service was actually performed; the system can further compare the duration of all cleanings by one technician to the duration of all cleanings by all other technicians to determine whether that technician requires further training or oversight; and the system is able to compare the average of all cleanings by contractors with the average of all cleanings by employed technicians to determine the effectiveness or efficiency of contractors. However, nowhere in the specification is there a disclosure related to comparing or correlating "control-panel device events received during an inspection of a particular device with contemporaneous local service data for that device to confirm that the service was actually performed," as claimed. Paragraph [063] discloses the tabulated data; however, nothing in it relates to correlating "control-panel device events received during an inspection of a particular device with contemporaneous local service data for that device to confirm that the service was actually performed" as claimed.

Regarding "at least one of setting a review and auditing flag when the device events and local service data do not correspond within a defined inspection time window," the originally filed specification discloses:

[0022] The service data aggregator can also evaluate technicians. For example, if the technicians are completing certain tasks too quickly or slowly, additional training or oversight can be recommended. Additionally, if the tasks are being completed too quickly, the service data aggregator can flag the tasks for review to determine whether the service was actually completed and/or if the work needs to be repeated. This service data becomes part of an official record that service customers can use to validate services have actually been performed (for example, in response to an audit).

[0081] Fig. 6 is a flow diagram illustrating an example of how the service data aggregator 107 generates validation information for services performed by particular technicians 150.

[082] In step 602, the service data aggregator 107 retrieves service events pertaining to a particular technician 150 to be evaluated (for example, by referencing the technician's 150 technician ID).

[083] In step 604, the average service durations of different types of service and/or types of devices for the particular technician 150 are calculated based on the service events. These average durations are compared to average durations of the same types of service and/or devices for all technicians 150 in step 606.

[084] In step 608, if the average durations match, the services performed by the particular technician 150 are flagged as validated in step 610. On the other hand, if they do not match, in step 612, the service data aggregator 107 sets a flag for additional training and/or oversight for the particular technician.
[085] Additionally, in step 614, if the average durations for the particular technician 150 are shorter than those for all technicians 150, the service data aggregator 107 sets another flag for a review and/or audit of the services performed by the particular technician 150 whose durations were shorter than the average for all technicians 150.

The originally filed specification thus provides disclosure related to flagging durations and average durations that do not match, for services provided by a technician compared with that technician's previous services or with other technicians' services. The specification does not provide any disclosure related to setting a review and/or auditing flag when the device events and local service data do not correspond within a defined inspection time window, as claimed.

Regarding "wherein the validation information includes a quality scoring system for services completed by each technician, the quality scores being based on historical completion data and comprising factors such as error rates, repeated service needs, and time to resolution," nowhere in the specification is a quality scoring system for services completed by each technician disclosed. The originally filed specification generically discloses tracking the service time for a technician servicing a device, but does not go into the details as claimed.

As shown, the level of detail claimed is not the same as that of the originally filed specification; for that reason, the claims are rejected as reciting new matter.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-5, 7-8, 10-15, 17-18 and 20-31 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites "a server" in lines 3 and 10. It is unclear whether the claim requires two servers or one to cover the requirements of the claim. Therefore, the claim is indefinite. For examination purposes the claim is interpreted as best understood.

Claim 1 recites the limitation "the particular technician" in line 17. There is insufficient antecedent basis for this limitation in the claim; the claim is indefinite because the term was not properly introduced. For examination purposes the claim is interpreted as best understood.

Claim 1 recites "the particular technician" in line 17, "a particular technician" in line 19, and "a particular technician" in line 22. It is unclear whether the claim requires three, two, or one "particular technician" to cover the requirements of the claim. Therefore, the claim is indefinite. For examination purposes the claim is interpreted as best understood.

Claim 1 recites the limitation "the same types of services" in line 20. There is insufficient antecedent basis for this limitation in the claim.
The claim is indefinite because the term was not properly introduced. For examination purposes the claim is interpreted as best understood.

Claim 1 recites the limitation "that device" in line 28. It is unclear to what "device" the applicant is referring. For examination purposes the claim is interpreted as best understood.

Claim 1 recites the limitation "the service" in line 29. It is unclear whether the applicant is referring to the "service" introduced in the preamble or to the "inspection" recited in line 28; it is unclear what "service" needs to be confirmed to cover the requirements of the claim. For examination purposes the claim is interpreted as best understood.

Claim 1 recites the limitation "the device events" in line 30. However, it is unclear whether the applicant refers to the "device events" introduced in line 4 or the "control-panel device events" introduced in line 22. For examination purposes the claim is interpreted as best understood.

Claim 1 recites "local service data" in line 30. It is unclear whether the applicant intended to introduce new "local service data" or whether the term is the same as that previously introduced in line 5. For examination purposes the claim is interpreted as best understood.

Claim 11 recites the limitation "generating and storing, the service workflow module, service events based local service data" in line 6. The claim appears to contain typographical errors, making it unclear what needs to be generated and stored: it is not clear whether the requirement of the claim is to generate and store the service workflow module or the service events based on local service data. Correction is required. For examination purposes the claim is interpreted as best understood.

Claim 11 recites the limitation "service events based local service data" in line 6. The claim appears to contain typographical errors making the claim indefinite. It is unclear whether the requirement is to generate service events, local service data, or data labeled "service events based local service data". Correction is required. For examination purposes the claim is interpreted as best understood.

Claim 11 recites the limitation "the service events" in line 10. However, it is unclear whether the applicant refers to the "service events based local service data" introduced in line 6, since that term appears to be introduced as data. For examination purposes the claim is interpreted as best understood.

Claim 11 recites the limitation "the particular technician" in line 16. There is insufficient antecedent basis for this limitation in the claim; the claim is indefinite because the term was not properly introduced. For examination purposes the claim is interpreted as best understood.

Claim 11 recites the limitation "the same types of services" in line 18. There is insufficient antecedent basis for this limitation in the claim; the claim is indefinite because the term was not properly introduced. For examination purposes the claim is interpreted as best understood.

Claim 11 recites "the particular technician" in line 18 and "a particular technician" in line 20. It is unclear whether the claim requires two "particular technicians" or one to cover the requirements of the claim. Therefore, the claim is indefinite. For examination purposes the claim is interpreted as best understood.
Claim 11 recites the limitation "the device" in line 27. It is unclear whether the term refers to "a mobile computing device" introduced in line 5 or to "a particular device" introduced in line 26. For examination purposes the claim is interpreted as best understood.

Claim 11 recites the limitation "the inspection result" in line 27. There is insufficient antecedent basis for this limitation in the claim; the claim is indefinite because the term was not properly introduced. For examination purposes the claim is interpreted as best understood.

Claim 11 recites the limitation "that device" in line 28. It is unclear whether the term refers to "a mobile computing device" introduced in line 5, "a particular device" introduced in line 26, or "the device" recited in line 27, and thus whether the claim requires between two and four different devices or just one. For examination purposes the claim is interpreted as best understood.

Claim 11 recites "local service data" in line 29. It is unclear whether the applicant intended to introduce new "local service data" or whether the term is the same as that previously introduced in line 5 or line 27; it is unclear whether the terms are all related or different. For examination purposes the claim is interpreted as best understood.

Claim 21 recites the limitation "the services" in line 9. There is insufficient antecedent basis for this limitation in the claim; the claim is indefinite because the term was not properly introduced. For examination purposes the claim is interpreted as best understood.

Claim 21 recites the limitation "the aggregated service data" in line 9. There is insufficient antecedent basis for this limitation in the claim; the claim is indefinite because the term was not properly introduced. For examination purposes the claim is interpreted as best understood.

Claim 21 recites the limitation "a particular technician" in lines 8 and 10. It is unclear whether the terms are related or whether the claim requires two distinct "particular technicians". For examination purposes the claim is interpreted as best understood.

Claim 21 recites "the devices" in line 14. However, it is unclear whether the term refers to the "devices" introduced in line 4 or the "devices" introduced in line 12, and it is further unclear how many "devices" are required to cover the scope of the claim. For examination purposes the claim is interpreted as best understood.

Claim 21 recites "those devices" in line 22. However, it is unclear whether the term refers to the "devices" introduced in line 4, the "devices" introduced in line 12, "the devices" in line 14, or the "individual devices" introduced in line 21, and it is further unclear how many "devices" are required to cover the scope of the claim. For examination purposes the claim is interpreted as best understood.

Claim 21 recites "the services" in line 22. However, it is unclear whether the term refers to "the services" in line 9, the "particular types of services" introduced in line 10, or the "same types of services" introduced in line 11. For examination purposes the claim is interpreted as best understood.

Claim 21 recites the limitation "the device events" in line 23. There is insufficient antecedent basis for this limitation in the claim; the claim is indefinite because the term was not properly introduced. For examination purposes the claim is interpreted as best understood.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 7-8, 10-15, 17-18 and 20-31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The eligibility analysis in support of these findings is provided below, in accordance with MPEP 2106.

With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the method claims (claims 10-15, 17-18 and 20-31) and the system claims (claims 1-5, 7-8) are directed to at least one potentially eligible category of subject matter (i.e., process and machine, respectively). Thus, Step 1 of the Subject Matter Eligibility test is satisfied.

With respect to Step 2A Prong One, it is next noted that the claims recite an abstract idea that falls under the "Mental Processes" and "Mathematical Concepts" groupings within the enumerated groupings of abstract ideas set forth in MPEP 2106.04, since the claims set forth steps that can be performed in the human mind (e.g., observation, evaluation, judgment, opinion) as well as mathematical concepts, relationships, formulas or equations, or calculations. Claims 1, 11 and 21 recite the abstract idea of collecting and analyzing service data to determine how long certain services take to perform (see originally filed specification at paragraph [020]).

With respect to independent claims 1 and 11, this abstract idea is described by the following claim steps: receiving device events and local service data; generating and storing service events based on local service data and services performed on devices of the building management systems in response to the device events; generating aggregated service data based on the service events; generating prediction information based on the aggregated service data, wherein the prediction information includes a predicted service interval of a particular type of service based on time and date information of service events previously generated from a same type of service; generating validation information including whether additional training of the particular technician providing the services is indicated based on whether the aggregated service data indicates that average durations of particular types of services performed by a particular technician match average durations of the same types of services performed by all other technicians, wherein the validation information further includes technician performance metrics comprising changes in service duration trends and completion rates for a particular technician over time; correlating, for inspection or testing services, control-panel device events received during an inspection of a particular device with contemporaneous local service data for that device to confirm that the service was actually performed; and at least one of setting a review and auditing flag when the device events and local service data do not correspond within a defined inspection time window.
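Before turning to claim 21, a minimal sketch of the validation and correlation steps restated above may help fix ideas. The data model, field names, and tolerance are hypothetical; the specification's Fig. 6 (steps 602-614, quoted in the §112 discussion above) supplies only the duration-comparison logic, and the time-window correlation is the amended claim language at issue, not anything the specification defines:

# Sketch only: hypothetical data model for the claimed validation/correlation.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

@dataclass
class ServiceEvent:
    technician_id: str
    service_type: str
    device_id: str
    start: datetime
    end: datetime

def duration_validation(events, tech_id, tolerance=0.25):
    """Fig. 6 flow: compare one technician's average duration per service type
    to the average for all technicians; set training and review/audit flags."""
    flags = {}
    for stype in {e.service_type for e in events}:
        all_d = [(e.end - e.start).total_seconds()
                 for e in events if e.service_type == stype]
        own_d = [(e.end - e.start).total_seconds()
                 for e in events
                 if e.service_type == stype and e.technician_id == tech_id]
        if not own_d:
            continue
        own, overall = mean(own_d), mean(all_d)
        if abs(own - overall) <= tolerance * overall:
            flags[stype] = "validated"                 # step 610
        else:
            flags[stype] = "training/oversight"        # step 612
            if own < overall:
                flags[stype] += ", review/audit"       # step 614
    return flags

def correlate_inspections(panel_events, local_events,
                          window=timedelta(minutes=15)):
    """Claimed correlation: each control-panel device event must have
    contemporaneous local service data for the same device; otherwise a
    review-and-audit flag is set (the defined inspection time window)."""
    flags = []
    for dev_id, at in panel_events:                    # (device_id, datetime)
        ok = any(e.device_id == dev_id and
                 e.start - window <= at <= e.end + window
                 for e in local_events)
        if not ok:
            flags.append((dev_id, "review-and-audit"))
    return flags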
With respect to independent claim 21, this abstract idea is described by the following claim steps: accumulating information about previously performed service on building management systems, information about devices of the building management systems, information about technicians performing the previously performed service, and information about the one or more buildings to generate accumulated information; and generating validation data including whether additional training of a particular technician providing the services is indicated based on whether the aggregated service data indicates that average durations of particular types of services performed by a particular technician match average durations of same types of services performed by all other technicians, and predicted service intervals for devices requiring periodic service based on the accumulated information, building accessibility information, and optimized service schedules for the building management systems, wherein service intervals are periods of time between services performed on the devices, and wherein generating the validation data further includes correlating control-panel device events received during an inspection of individual devices with contemporaneous local service data for those devices to validate actual completion of the service and to at least one of set review and audit flags when the device events and local service data do not correspond within a defined inspection window.

Therefore, because the limitations above set forth activities falling within the "Mental Processes" and "Mathematical Concepts" abstract idea groupings described in MPEP 2106.04, the additional elements recited in the claims are further evaluated, individually and in combination, under Step 2A Prong Two and Step 2B below. Claims 2-5, 7-8, 10, 12-15, 17-18, 20 and 22-31 recite similar limitations to claims 1, 11 and 21 and are therefore determined to recite the same abstract idea.

With respect to Step 2A Prong Two of MPEP 2106, the judicial exception is not integrated into a practical application. The additional elements are: a service workflow module executing on one or more computer processors of a server system; a mobile computing device; a connected services database; and a service data aggregator. These additional elements have been evaluated but fail to integrate the abstract idea into a practical application because they amount to using generic computing elements or computer-executable instructions (software) to perform the abstract idea, similar to adding the words "apply it" (or an equivalent), and merely serve to link the use of the judicial exception to a particular technological environment. See MPEP 2106.05(f) and 2106.05(h). Even if the "receiving" step is evaluated as an additional element, this step amounts at most to insignificant extra-solution data-gathering activity, which is not indicative of a practical application, as noted in MPEP 2106.05(g). The examiner views these additional elements as results-oriented steps, given that no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result are currently present, such that this is viewed as equivalent to "apply it" for merely implementing the abstract idea using generic computing components (see id.).
In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply or use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception.

With respect to Step 2B of the eligibility inquiry, it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As noted above, the claims as a whole merely describe a method, computer system, and computer program product that generally "apply" the concepts discussed in Prong One above (see MPEP 2106.05(f)(II)). In particular, applicant has recited the computing components at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. As the court stated in TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613 (Fed. Cir. 2016), merely invoking generic computing components or machinery that perform their functions in their ordinary capacity to facilitate the abstract idea amounts to mere instructions to implement the abstract idea within a computing environment and does not add significantly more to the abstract idea. Accordingly, these additional computer components do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea, and as a result the claim is not patent eligible. In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application; their collective functions merely provide generic computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application of the abstract idea or that, as an ordered combination, amount to significantly more than the abstract idea itself.
Dependent claims 2-5, 7-8, 10, 12-15, 17-18, 20 and 22-31 recite the same abstract idea as recited in the independent claims, and when evaluated under Step 2A Prong One are found to merely recite details that serve to narrow the same abstract idea recited in the independent claims, accompanied by the same generic computing elements or software as those addressed above in the discussion of the independent claims, which is not sufficient to amount to a practical application or add significantly more, or other additional elements that fail to amount to a practical application or add significantly more, as noted above.

Dependent claims 2-3, 12-13 and 28 further limit the abstract idea by embellishing it and linking the judicial exception to a particular technological environment by introducing the limitations wherein the building management systems include fire alarm systems, intrusion systems, and/or building automation systems; wherein the aggregated service data is further based on device information, building information, and/or technician information stored in the connected services database; and wherein the fire alarm systems comprise fire detection devices including one or more of smoke detectors, carbon monoxide detectors, flame detectors, temperature sensors, and/or pull stations and alarm notification devices including one or more of speakers, horns, chimes, light emitting diode (LED) reader boards, and/or flashing lights. Further embellishing the invention by specifying the type of data aggregated and analyzed does not integrate the abstract idea into a practical application or add significantly more to the abstract idea. Therefore the claims are also non-statutory subject matter.

Dependent claims 4 and 14 further limit the abstract idea by embellishing it and linking the judicial exception to a particular technological environment by introducing the limitation wherein the service events further include device information, status information, service type information, date information, time information, and/or technician information. Further embellishing the invention by specifying the type of data aggregated and analyzed does not integrate the abstract idea into a practical application or add significantly more to the abstract idea. Therefore the claims are also non-statutory subject matter.

Dependent claims 5, 7, 15 and 17 further limit the abstract idea by embellishing it and linking the judicial exception to a particular field of use by introducing the limitations wherein the prediction information includes a predicted duration of a particular type of service based on time and date information of service events previously generated from the same type of service and wherein the predicted service interval is further based on building type information. Further limiting an abstract process of mathematical steps does not integrate the abstract idea into a practical application or add significantly more to the abstract idea. Therefore the claims are also non-statutory subject matter.
Dependent claims 8, 10, 18 and 20 further limit the abstract idea by embellishing it and linking the judicial exception to a particular technological environment by introducing the limitations wherein service alerts are generated based on the predicted service interval and wherein generating the validation information further comprises generating the validation information based on whether the aggregated service data indicates that further review of the service is required and/or additional training or oversight of the particular technician is required. Further embellishing that the invention is capable of processing information in a generic computing environment does not integrate the abstract idea into a practical application or add significantly more to the abstract idea. Therefore the claims are also non-statutory subject matter.

Dependent claims 22-23 and 25-26 further limit the abstract idea by embellishing it and linking the judicial exception to a particular technological environment by introducing the limitations wherein the predicted service intervals include predicted failure rates of devices and/or battery replacement for devices; wherein the predicted service intervals are for devices that require periodic service, and the service data aggregator retrieves service events pertaining to the periodic service and calculates the predicted service interval for the periodic service based on a frequency of the periodic services of individual devices as indicated by the service events; wherein the service data aggregator generates an alert to inspect and/or test devices at the predicted service interval to determine if the periodic service is needed; and wherein the service data aggregator generating the alert includes scheduling future service visits. Further embellishing that the invention is capable of processing information in a generic computing environment does not integrate the abstract idea into a practical application or add significantly more to the abstract idea. Therefore the claims are also non-statutory subject matter.

Dependent claims 24 and 27 further limit the abstract idea by embellishing it and linking the judicial exception to a particular technological environment by introducing the limitations wherein the periodic service is smoke detector cleaning, and the service data aggregator adjusts the predicted service interval for the periodic service based on a type of smoke detector, and wherein the service data aggregator generating the alert includes displaying alerts via the service workflow module to technicians currently providing service. Further embellishing the invention by specifying the type of data aggregated and analyzed does not integrate the abstract idea into a practical application or add significantly more to the abstract idea. Therefore the claims are also non-statutory subject matter.
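For illustration, a minimal sketch of the frequency-based interval prediction recited in claims 22-23: the predicted interval follows from the observed frequency of past periodic services per device. The mean-gap heuristic, function name, and example dates are assumptions; the claims specify only the frequency basis.

# Sketch: predicted service interval from the frequency of past periodic
# services of one device (claims 22-23). Mean-gap heuristic is an assumption.
from datetime import datetime, timedelta

def predicted_interval(service_dates):
    """Return the mean gap between past services, or None if fewer than two."""
    if len(service_dates) < 2:
        return None
    dates = sorted(service_dates)
    gaps = [b - a for a, b in zip(dates, dates[1:])]
    return sum(gaps, timedelta()) / len(gaps)

# Example: a smoke detector cleaned roughly every six months.
cleanings = [datetime(2024, 1, 10), datetime(2024, 7, 8), datetime(2025, 1, 6)]
interval = predicted_interval(cleanings)
next_due = max(cleanings) + interval   # basis for the claimed inspect/test alert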
Dependent claims 29-31 further limit the abstract idea by embellishing it and linking the judicial exception to a particular technological environment by introducing the limitations wherein the validation information further includes identification of technician-specific performance patterns over time comprising changes in service duration trends and completion rates for a particular technician; wherein the validation information compares technician performance against dynamic performance benchmarks that adjust based on building type, device type, and service environment; and wherein the validation information includes a quality scoring system for services completed by each technician, the quality scores being based on historical completion data and comprising factors such as error rates, repeated service needs, and time to resolution. Further embellishing the invention by specifying the type of data aggregated and analyzed does not integrate the abstract idea into a practical application or add significantly more to the abstract idea. Therefore the claims are also non-statutory subject matter.

The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claims) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and the collective functions merely provide conventional computer implementation. Therefore, whether taken individually or as an ordered combination, the claims are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. For more information, see MPEP 2106.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7-8, 10-15, 17-18, 20-23, 25-26 and 28-31 are rejected under 35 U.S.C. 103 as being unpatentable over Subramanian (US Patent Publication 2017/0011312) in view of Webb (US Patent Publication 2015/0142491).
Regarding claim 1, Subramanian discloses a service management system for facilitating service of building management systems of a building (abstract; [0001]: "This description relates to operation of security systems in particular intrusion detection and fire monitoring systems."), comprising:

a service workflow module, executing on one or more computer processors of a server system, for receiving device events from the building management systems and local service data from a mobile computing device operated by a technician and generating service events based on the local service data and services performed on devices of the building management system in response to the device events (paragraph [039] and Figures 4 and 5 disclose the system generating historical database records (i.e., service events) based on a received job cause (i.e., device events) and related local service data provided by the technician, such as customer number, store number, and region number, among other data (i.e., local data), in order to further aggregate such data and perform predictions for a specific system. [039]: "This historical data is retrieved by the recommendation engine 52 in the work order prediction system 50. Typically, such data is available in multiple tables in a normalized form. The data is preprocessed 50c from certain tables and certain columns to capture the following columns of data. The particulars of what tables and columns are captured will vary according to setups of the third party service program used. In addition, names associated with the data are illustrative and non-limiting in that similar functional data is typically found for all such systems, but may be named differently. In one example the following data are captured.

[0040] Field Name: Field Description
[0041] Customer No: Uniquely identifies a customer
[0042] Site No: Uniquely identifies a customer site/store
[0043] Region No: Sites are categorized into regions; this uniquely identifies a region
[0044] Job No: Uniquely identifies a job
[0045] Date: Date job created
[0046] Job Cause Number: A number that identifies a particular reason why the job was requested.
[0047] Job Cause Desc: A textual description of the job cause, for example, "Faulty part"
[0048] Resolution No: The resolution that was used to fix the problem.
[0049] Resolution Desc: A textual description of the fix.
[0050] Job Comments: A free text field, where the job done is described by the technician."

See further paragraphs [053]-[097].);

a connected services database for storing the service events (Figure 4, "historical database records"; [0035]: "The work order prediction system 50 includes a recommendation engine 52 that receives historical data from various customer jobs"; [0039]: "Referring now to FIG. 5, processing 50a of the work order prediction system 50 is shown.
The work order prediction system 50 retrieves 50b historical data stored in the database 51.");

and a service data aggregator, executing on one or more computer processors of a server system, for generating aggregated service data based on the service events, generating prediction information based on the aggregated service data, wherein the prediction information includes a predicted service interval of a particular type of service based on time and date information of service events previously generated from a same type of service (Fig. 5 and [051] disclose the system aggregating the customer job records retrieved from the historical database records in order to perform a prediction using the data gathered and aggregated. [0051]: "With an understanding of the above data, the work order prediction system 50 using historical job cause numbers (for a given site) generates (via recommendation engine 52) a prediction 50d of the next upcoming historical job cause number based on a set of job cause Nos. that includes a most recent number of job cause id's for that site. The work order prediction system 50 generates 50e an explanation for the user to explain why the recommendation engine 52 is recommending a certain job cause number, and furnishes 50f more information on that job cause number. The results can be stored and forwarded 50g to dispatch personnel systems in the form of a graphical user interface."

[0052]: "Referring now to FIG. 6, the recommendation engine 52 uses the historical job numbers for a given site to predict the next upcoming job cause number for that site. In other words, the recommendation engine attempts to predict what type of service or service will be required at a particular site based on historical data. Recommendation engine 52 includes a feature generator 52′ and a model builder 52″. Jobs are performed at customer premises sequentially and records of these jobs are stored. The recommendation engine 52 executes an algorithm to analyze these job records to form predictions. The algorithm used is the so called Apriori algorithm that is commonly found in "market basket" or affinity analysis. An Apriori algorithm is used for frequent item set data mining and association learning in transactional databases. The historical records in database 51 can be considered as a transactional database containing work order records. The Apriori algorithm proceeds by identifying frequent individual items in a database and extending the frequent individual items to larger and larger item sets as long as the frequent individual items sets appear sufficiently often in the database 51. The frequent item sets determined by Apriori algorithm are used to determine the rules mentioned above, i.e., association rules that highlight general trends in the data found in the database 51. Other techniques could be used including Random Forest and Multi-linear Logistic Regression algorithms."

[0053]: "Recommendation engine 52 processing 52a is shown using the feature generator 52′ and the model builder 52″. The feature generator 52′ in the recommendation engine 52 orders 52a the historical data by site and then by the date of job creation 52b. The feature generator 52′ in the recommendation engine 52 is preconfigured with a defined time window of size "W." This time window is typically defined in terms of a week. However, other window sizes could be used. For illustration, the window is defined in terms of a week, so a value of "W=4" refers to a window size of four weeks.
The feature generator 52′ scans 52c sequentially the job history, and all the job cause ids that are within the window "W." The feature generator 52′ groups 52d all job history and all the job cause nos. that are within the defined window "W", as one transaction record of job cause nos. The feature generator processes the data for all job cause numbers 52e for all sites 52f." See further paragraphs [053]-[097].);

and correlating, for inspection or testing services, control-panel device events received during an inspection of a particular device with contemporaneous local service data for that device to confirm that the service was actually performed, and at least one of setting a review and auditing flag when the device events and local service data do not correspond within a defined inspection time window ([0052]-[0053], quoted above: the recommendation engine's feature generator orders job history by site and by date of job creation, groups the job cause numbers falling within the defined time window "W" into transaction records, and mines those records with an Apriori-style algorithm to derive association rules for prediction).
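A minimal sketch of the feature generation Subramanian's paragraph [0053] describes: per-site job histories are ordered by date, and the job cause numbers falling within a W-week window are grouped into one transaction record. The record fields and the tumbling-window reading are assumptions for illustration.

# Sketch of Subramanian's windowed transaction building (Figs. 5-6).
from collections import defaultdict
from datetime import timedelta

def build_transactions(jobs, weeks=4):
    """jobs: iterable of (site_no, date, job_cause_no) tuples, mirroring the
    quoted fields. Groups each site's job cause numbers into W-week windows."""
    by_site = defaultdict(list)
    for site, date, cause in jobs:
        by_site[site].append((date, cause))
    window = timedelta(weeks=weeks)          # "W=4": four-week window
    transactions = []
    for history in by_site.values():
        history.sort()                       # order by date of job creation
        current, start = [], None
        for date, cause in history:
            if start is None or date - start > window:
                if current:
                    transactions.append(current)
                current, start = [], date    # open a new window
            current.append(cause)
        if current:
            transactions.append(current)
    return transactions

The resulting transaction records are what an Apriori-style frequent-itemset miner would consume to derive the association rules that predict the next job cause number for a site.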
Subramanian discloses the system aggregating data related to the service provided by the technicians; however, Subramanian does not explicitly disclose: generating validation information including whether additional training of the particular technician providing the services is indicated based on whether the aggregated service data indicates that average durations of particular types of services performed by a particular technician match average durations of the same types of services performed by all other technicians, wherein the validation information further includes technician performance metrics comprising changes in service duration trends and completion rates for a particular technician over time.

Webb, which is directed to a system that manages field-based workers, further teaches: generating validation information including whether additional training of the particular technician providing services is indicated based on whether average durations of particular types of services performed by a particular technician match average durations of the same types of services performed by all other technicians, wherein the validation information further includes technician performance metrics comprising changes in service duration trends and completion rates for a particular technician over time (see Figs. 3 and 8, which disclose the generated data for a particular technician over time; see further the Key Performance Indicators (KPIs) tracking the progress of a technician. [0086]: "Section 7 (which is shown in more detail in FIG. 8) shows the Key Performance Indicators (KPIs) which are the building blocks of Performance Management, and what may be seen as the most effective method of analyzing a workforce, when KPIs across different perspectives of performance, are displayed simultaneously, in effective combination. Ultimately this KPI information is an enabler, it allows a business to aggregate and compare performance across different dimensions--both the organizational structure and time in the first instance--to better understand a business. It is only by reviewing these performance perspectives in combination, that a business can identify effective improvement strategies. Established from a configurable mix of underlying metrics, and with the ability to apply configurable weightings to the contribution of these metrics, the KPIs themselves are calculated/presented in a `normalized scale` (0-100), to allow simplified evaluation by the User. The configuration allows the KPIs to better reflect the specific strategies and policies of the individual Customer operation, and to enable a meaningful single `day score` (as shown in section 8 of the score card) to be calculated/presented. By simplifying a complex combination of metrics, across multiple different performance perspectives, the provision of a single `day score` represents an opportunity to enable much improved communication and comparison of performance throughout the operation (and most importantly to the Field Workers themselves). Once normalized, this single score may be used in many different ways (e.g. to generate league tables, influence scheduling rules, etc.).")
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the aggregated data as gathered by Subramanian and apply the teachings of Webb to include generating validation information including whether additional training of the particular technician providing the services is indicated based on whether the aggregated service data indicates that average durations of particular types of services performed by a particular technician match average durations of the same types of services performed by all other technicians, wherein the validation information further includes technician performance metrics comprising changes in service duration trends and completion rates for a particular technician over time, since such an improvement is just a combination of well-known prior art elements that yields predictable results, such as identifying workers who are performing well and identifying workers who may require additional training/guidance. Comparison of multiple score cards for different workers may also enable identification of process improvements (e.g., where all workers have their efficiency impaired by a particular activity or aspect of an activity), as disclosed by Webb ([0099]).

Regarding claims 2 and 12, Subramanian further discloses: wherein the building management systems include fire alarm systems, intrusion systems, and/or building automation systems ([001], [002], [004], [026]-[028]).

Regarding claims 3 and 13, Subramanian further discloses: wherein the aggregated service data is further based on device information, building information, and/or technician information stored in the connected services database ([0051]-[0053], disclosing wherein the aggregated service data is based on device information, customer information, site information and technician input; see further [0053]-[0094]).

Regarding claims 4 and 14, Subramanian further discloses: wherein the service events further include device information, status information, service type information, date information, time information, and/or technician information (see Fig. 5 and [039]-[050], [053]-[094], disclosing the data included within the service events).

Regarding claims 5 and 15, Subramanian further discloses: wherein the prediction information includes a predicted duration of a particular type of service based on time and date information of service events previously generated from the same type of service (see paragraphs [006], [034]-[035], [053], [0119]-[0125], disclosing the information related to the prediction performed by the system, which includes a particular type of service based on historical data of the sites).

Regarding claims 7 and 17, Subramanian further discloses: wherein the predicted service interval is further based on building type information (see paragraphs [006], [034]-[035], [053], [0119]-[0125], disclosing the information related to the prediction performed by the system, which includes a predicted service interval based on historical data of the sites).

Regarding claims 8 and 18, Subramanian further discloses: wherein service alerts are generated based on the predicted service interval ([027], [051]).
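Webb's paragraph [0086], quoted above, describes the KPI mechanics relied on here: configurable metrics, configurable weightings, normalization to a 0-100 scale, and a single "day score". A minimal sketch under those assumptions (metric names, bounds, and weights are hypothetical configuration, not Webb's actual values):

# Sketch of Webb's normalized, weighted KPI "day score" ([0086]): each metric
# is normalized to 0-100, then combined using configurable weights.
def day_score(metrics: dict[str, float],
              bounds: dict[str, tuple[float, float]],
              weights: dict[str, float]) -> float:
    """Normalize each raw metric into 0-100 using configured (lo, hi) bounds,
    then return the weighted average as a single 0-100 day score."""
    total_w = sum(weights.values())
    score = 0.0
    for name, raw in metrics.items():
        lo, hi = bounds[name]
        norm = 100 * min(max((raw - lo) / (hi - lo), 0.0), 1.0)
        score += weights[name] * norm
    return score / total_w

# Example: three KPI perspectives for one technician's shift; for duration,
# the bounds are inverted (90 -> 20 minutes) so shorter is better.
metrics = {"jobs_completed": 7, "avg_duration_min": 42, "first_time_fix": 0.86}
bounds  = {"jobs_completed": (0, 10), "avg_duration_min": (90, 20), "first_time_fix": (0, 1)}
weights = {"jobs_completed": 0.4, "avg_duration_min": 0.3, "first_time_fix": 0.3}
print(f"day score: {day_score(metrics, bounds, weights):.0f}/100")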
Regarding claims 10 and 20, Webb further teaches: wherein generating the validation information further comprises generating the validation information based on whether the aggregated service data indicates that further review of the service is required and/or additional training or oversight of the particular technician is required ([0065] FIG. 3 shows an example of a field-worker's score card which may be presented in the GUI 108 and parts of the score card are shown in more detail in FIGS. 5-9. This score card, which is displayed within the GUI as a single screen, provides objective data about how the worker performs and may be presented to a manager of the field-worker and/or the field-worker themselves and it will be appreciated that the system may be configured to present different information (e.g. different subsets of the available information) to different people depending upon their role or level of authorization. This score card provides detailed information on one shift (or day) of the field-based worker and in addition the score card provides statistics which are based on the current shift (to which the score card relates) and previous shifts. By presenting all the information in a single screen, the field-worker's score card provides a balanced approach and enables the reviewer to see (and value) more than one behavior (e.g. more than just the completion of tasks, which is all that is monitored in known task-based approaches). [0066] In the example score card 300 shown in FIG. 3, section 1 comprises a Control Chart which plots the Overall Score (as shown for the particular shift in section 8) over time (e.g. in the form of a graph) to show how the worker's performance is trending. [0099] A score card such as the one shown in FIGS. 3 and 5-9 provides a tool for management of workers based on objective information collected at a very low (i.e. fine) level of granularity. It enables identification of workers who are performing well and identification of workers who may require additional training/guidance. Comparison of multiple score cards for different workers may also enable identification of process improvements (e.g. where all workers have their efficiency impaired by a particular activity or aspect of an activity). [0100] By use of systems as described above and a score card (such as the one shown in FIG. 3) which is generated by the Activity Processing Engine, it is possible to answer the following questions: A) What is happening ( . . . at the front lines of my business)? B) Was it Good/Bad? [0101] C) What are the opportunities for improvement, and which are the imperatives? D) What action should we take, to improve (whilst avoiding potential unintended consequences)? E) Did our actions have an impact? F) Were the impacts all positive/expected/etc.?).
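The duration-comparison validation recited in the claims and mapped to Webb's score card above can be sketched compactly. This is purely illustrative of the claim language, not of either reference's implementation: the record fields, the helper name, and the 25% tolerance are all assumptions.

    # Minimal sketch of duration-based validation: flag service types where a
    # technician's average duration deviates from the peer average by more than
    # a tolerance, indicating possible additional training. Fields hypothetical.
    from collections import defaultdict
    from statistics import mean

    def training_indicated(service_events, technician, tolerance=0.25):
        """Return service types where the technician's average duration does
        not match the average of all other technicians within `tolerance`."""
        own = defaultdict(list)
        peers = defaultdict(list)
        for ev in service_events:  # each ev: {"tech", "type", "minutes"}
            (own if ev["tech"] == technician else peers)[ev["type"]].append(ev["minutes"])
        flagged = []
        for svc_type, durations in own.items():
            if not peers[svc_type]:
                continue  # no peer baseline for this service type
            peer_avg = mean(peers[svc_type])
            if abs(mean(durations) - peer_avg) > tolerance * peer_avg:
                flagged.append(svc_type)
        return flagged

    events = [
        {"tech": "T1", "type": "detector_clean", "minutes": 12},
        {"tech": "T1", "type": "detector_clean", "minutes": 14},
        {"tech": "T2", "type": "detector_clean", "minutes": 30},
        {"tech": "T3", "type": "detector_clean", "minutes": 28},
    ]
    print(training_indicated(events, "T1"))  # ['detector_clean'] -> review/training

Tracking the same per-type averages across successive periods would yield the claimed duration-trend and completion-rate metrics over time.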
Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to include wherein the service data aggregator generates validation information based on the aggregated service data, the validation information indicating whether further review of the service and/or additional training or oversight of the particular technician is required, since such a modification is merely a combination of well-known prior art elements that yields predictable results, such as allowing the system to access, for each technician, the actual durations of time to complete jobs of the particular type, with the further benefit of validating that a job is being performed appropriately and identifying whether more training/improvement is needed, as disclosed by Webb [0099], [0101].

Regarding claim 11, Subramanian discloses a method for facilitating service of building management systems of a building (abstract; [0001] This description relates to operation of security systems in particular intrusion detection and fire monitoring systems.), comprising: receiving, by a service workflow module executing on one or more computer processors of a server system, device events from the building management systems and local service data from a mobile computing device operated by a technician; generating and storing, by the service workflow module, service events based on local service data and services performed on devices of the building management systems in response to the device events (Paragraph [0039] and Figures 4 and 5 disclose the system generating historical database records (i.e., service events) based on a received job cause (i.e., device events) and related local service data provided by the technician, such as customer number, store number, and region number, among other data (i.e., local data), in order to further aggregate such data and perform predictions for a specific system. [039] “This historical data is retrieved by the recommendation engine 52 in the work order prediction system 50. Typically, such data is available in multiple tables in a normalized form. The data is preprocessed 50c from certain tables and certain columns to capture the following columns of data. The particulars of what tables and columns are captured will vary according to setups of the third party service program used. In addition, names associated with the data are illustrative and non-limiting in that similar functional data is typically found for all such systems, but may be named differently. In one example the following data are captured.
[0040] Field Name: Field Description
[0041] Customer No: Uniquely identifies a customer
[0042] Site No: Uniquely identifies a customer site/store
[0043] Region No: Sites are categorized into regions; this uniquely identifies a region
[0044] Job No: Uniquely identifies a job
[0045] Date: Date job created
[0046] Job Cause Number: A number that identifies a particular reason why the job was requested.
[0047] Job Cause Desc: A textual description of the job cause, for example, “Faulty part”
[0048] Resolution No: The resolution that was used to fix the problem.
[0049] Resolution Desc: A textual description of the fix.
[0050] Job Comments: A free text field, where the job done is described by the technician.” See further paragraphs [053]-[097].); generating, by a service data aggregator executing on one or more computer processors of a server system, aggregated service data based on the service events; generating, by the service data aggregator, prediction information based on the aggregated service data (Fig. 5 and [0051] disclose the system aggregating the customer job records retrieved from the historical database records in order to perform a prediction using the gathered and aggregated data. [0051] “With an understanding of the above data, the work order prediction system 50 using historical job cause numbers (for a given site) generates (via recommendation engine 52) a prediction 50d of the next upcoming historical job cause number based on a set of job cause Nos. that includes a most recent number of job cause id's for that site. The work order prediction system 50 generates 50e an explanation for the user to explain why the recommendation engine 52 is recommending a certain job cause number, and furnishes 50f more information on that job cause number. The results can be stored and forwarded 50g to dispatch personnel systems in the form of a graphical user interface.” [0052] Referring now to FIG. 6, the recommendation engine 52 uses the historical job numbers for a given site to predict the next upcoming job cause number for that site. In other words, the recommendation engine attempts to predict what type of service will be required at a particular site based on historical data. Recommendation engine 52 includes a feature generator 52′ and a model builder 52″. Jobs are performed at customer premises sequentially and records of these jobs are stored. The recommendation engine 52 executes an algorithm to analyze these job records to form predictions. The algorithm used is the so-called Apriori algorithm that is commonly found in “market basket” or affinity analysis. An Apriori algorithm is used for frequent item set data mining and association learning in transactional databases. The historical records in database 51 can be considered as a transactional database containing work order records. The Apriori algorithm proceeds by identifying frequent individual items in a database and extending the frequent individual items to larger and larger item sets as long as the frequent individual item sets appear sufficiently often in the database 51. The frequent item sets determined by the Apriori algorithm are used to determine the rules mentioned above, i.e., association rules that highlight general trends in the data found in the database 51. Other techniques could be used, including Random Forest and Multi-linear Logistic Regression algorithms. [0053] Recommendation engine 52 processing 52a is shown using the feature generator 52′ and the model builder 52″. The feature generator 52′ in the recommendation engine 52 orders 52a the historical data by site and then by the date of job creation 52b. The feature generator 52′ in the recommendation engine 52 is preconfigured with a defined time window of size “W.” This time window is typically defined in terms of a week. However, other window sizes could be used. For illustration, the window is defined in terms of a week, so a value of “W=4” refers to a window size of four weeks.
The feature generator 52′ scans 52c sequentially the job history, and all the job cause ids that are within the window “W.” The feature generator 52′ groups 52d all job history and all the job cause nos. that are within the defined window “W”, as one transaction record of job cause nos. The feature generator processes the data for all job cause numbers 52e for all sites 52f. See further paragraphs [053]-[097].); wherein the prediction information includes a predicted service interval of a particular type of service based on time and date information of service events previously generated from a same type of service (See paragraphs [006], [0034]-[0035], [053], [0119]-[0125], disclosing the information related to the prediction performed by the system, which includes a particular type of service based on historical data of the sites.); correlating, for inspection or testing services, control-panel device events received during an inspection of a particular device with contemporaneous local service data for that device to confirm that the service was actually performed, and at least one of setting a review and auditing flag when the device events and local service data do not correspond within a defined inspection time window (See [0052]-[0053], reproduced above, describing the defined time window “W” and the grouping of job records within that window.).
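The windowed feature generation quoted above lends itself to a compact illustration. The following is a minimal sketch, not Subramanian's actual implementation: it uses a simplified tumbling window of W weeks and hypothetical job records to group each site's job cause numbers into the per-window "transactions" that an Apriori-style association miner would consume.

    # Sketch of windowed transaction building (per Subramanian [0052]-[0053]):
    # order jobs by site and creation date, then group job cause numbers that
    # fall within a window of W weeks into one transaction. Data layout assumed.
    from datetime import date, timedelta

    def window_transactions(jobs, weeks=4):
        """jobs: list of (site_no, created, job_cause_no); returns per-site
        transactions, each a set of job cause numbers seen within one window."""
        window = timedelta(weeks=weeks)
        transactions = []
        for site in sorted({j[0] for j in jobs}):
            site_jobs = sorted((j for j in jobs if j[0] == site), key=lambda j: j[1])
            start, current = None, set()
            for _, created, cause in site_jobs:
                if start is None or created - start > window:
                    if current:
                        transactions.append((site, current))
                    start, current = created, set()
                current.add(cause)
            if current:
                transactions.append((site, current))
        return transactions

    jobs = [(101, date(2017, 1, 2), "C7"), (101, date(2017, 1, 20), "C7"),
            (101, date(2017, 3, 1), "C9"), (102, date(2017, 1, 5), "C3")]
    print(window_transactions(jobs, weeks=4))
    # [(101, {'C7'}), (101, {'C9'}), (102, {'C3'})]

These transactions are then mined for frequent item sets and association rules (support and confidence thresholds), which is the role Subramanian assigns to the Apriori algorithm.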
Subramanian discloses the system aggregating data related to the service provided by the technicians; however, Subramanian does not explicitly disclose: generating validation information including whether additional training of the particular technician providing the services is indicated based on whether the aggregated service data indicates that average durations of particular types of services performed by a particular technician match average durations of the same types of services performed by all other technicians, wherein the validation information further includes technician performance metrics comprising changes in service duration trends and completion rates for a particular technician over time. Webb, which is directed to a system that manages field-based workers, further teaches: generating validation information including whether additional training of the particular technician providing services is indicated based on whether average durations of particular types of services performed by a particular technician match average durations of the same types of services performed by all other technicians, wherein the validation information further includes technician performance metrics comprising changes in service duration trends and completion rates for a particular technician over time (See Fig. 3 and 8 and [0086], reproduced in the discussion of claim 1 above: the score card and KPIs track a particular technician's performance over time.).
Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to use the aggregated data gathered by Subramanian with the teachings presented by Webb to include generating the validation information recited above, including whether additional training of the particular technician is indicated and the recited technician performance metrics, since such an improvement is merely a combination of well-known prior art elements that yields predictable results, such as identifying workers who are performing well and identifying workers who may require additional training/guidance. Comparison of multiple score cards for different workers may also enable identification of process improvements (e.g., where all workers have their efficiency impaired by a particular activity or aspect of an activity), as disclosed by Webb [0099].

Regarding claim 21, Subramanian discloses a method for facilitating service of building management systems of one or more buildings (abstract), comprising accumulating, by a server system, information about previously performed service on building management systems, information about devices of the building management systems, and information about the one or more buildings (Paragraph [0039] and Figures 4 and 5 disclose the system accumulating historical database records (i.e., service events) based on received job causes (i.e., device events) and related local service data provided by the technician, such as customer number, store number, and region number, among other data (i.e., local data), in order to further aggregate such data and perform predictions for a specific system; see [0039]-[0050], reproduced in the discussion of claim 11 above, and further paragraphs [053]-[097].),
and generating, by a service data aggregator executing on one or more computer processors of a server system, predicted service intervals for devices requiring periodic service based on the accumulated information, and optimized service schedules for the building management systems, wherein service intervals are periods of time between service performed on the devices (Fig. 5 and [0051] disclose the system aggregating the customer job records retrieved from the historical database records in order to perform a prediction using the gathered and aggregated data; see [0051]-[0053], reproduced in the discussion of claim 11 above, and further paragraphs [053]-[097].). The Examiner notes that the newly added limitations are directed to new matter not disclosed in the originally filed specification and are therefore given little to no patentable weight: correlating, for inspection or testing services, control-panel device events received during an inspection of a particular device with contemporaneous local service data for that device to confirm that the service was actually performed, and at least one of setting a review and auditing flag when the device events and local service data do not correspond within a defined inspection time window (See [0052]-[0053], reproduced above.);
wherein generating the validation data further includes correlating control-panel device events received during inspection of individual devices with contemporaneous local service data for those devices to validate actual completion of the services, and to at least one of set review and auditing flags when the device events and local service data do not correspond within a defined inspection time window (See [0052]-[0053], reproduced above.).
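The correlation-and-flagging limitation at issue can be pictured concretely. The sketch below is purely illustrative of the claim language, not of either reference: the record shapes, field names, and the 30-minute default window are all assumptions.

    # Minimal sketch of the claimed correlation step: match a control-panel
    # device event to contemporaneous local service data for the same device;
    # set a review/audit flag when no event corroborates the report within the
    # defined inspection time window. All fields are hypothetical.
    from datetime import datetime, timedelta

    def validate_inspection(device_events, local_service_data, window_minutes=30):
        """Return per-record validation results with an 'audit_flag' when no
        panel event corroborates the technician's reported inspection."""
        window = timedelta(minutes=window_minutes)
        results = []
        for record in local_service_data:  # {"device_id", "reported_at", "result"}
            corroborated = any(
                ev["device_id"] == record["device_id"]
                and abs(ev["timestamp"] - record["reported_at"]) <= window
                for ev in device_events  # {"device_id", "timestamp"}
            )
            results.append({**record, "validated": corroborated,
                            "audit_flag": not corroborated})
        return results

    events = [{"device_id": "SD-17", "timestamp": datetime(2026, 4, 1, 9, 5)}]
    reports = [{"device_id": "SD-17", "reported_at": datetime(2026, 4, 1, 9, 10),
                "result": "pass"},
               {"device_id": "SD-22", "reported_at": datetime(2026, 4, 1, 9, 12),
                "result": "pass"}]
    print(validate_inspection(events, reports))  # SD-22 gets audit_flag=True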
Subramanian discloses the system aggregating data related to the service provided by the technicians and generating service schedules based on a plurality of elements. However, Subramanian does not explicitly disclose: generating validation information including whether additional training of the particular technician providing services is indicated based on whether average durations of particular types of services performed by a particular technician match average durations of the same types of services performed by all other technicians, and information about technicians performing the previously performed service. Webb, which is directed to a system that manages field-based workers, further teaches: generating validation information including whether additional training of the particular technician providing services is indicated based on whether average durations of particular types of services performed by a particular technician match average durations of the same types of services performed by all other technicians (See Fig. 3 and 8 and [0086], reproduced in the discussion of claim 1 above: the score card and KPIs track a particular technician's progress over time.).
Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to use the aggregated data gathered by Subramanian with the teachings presented by Webb to include generating the validation information recited above, including whether additional training of the particular technician is indicated, since such an improvement is merely a combination of well-known prior art elements that yields predictable results, such as identifying workers who are performing well and identifying workers who may require additional training/guidance. Comparison of multiple score cards for different workers may also enable identification of process improvements (e.g., where all workers have their efficiency impaired by a particular activity or aspect of an activity), as disclosed by Webb [0099].

Regarding claim 22, Subramanian further discloses: wherein the predicted service intervals include predicted failure rates of devices and/or battery replacement for devices ([054]-[082]).

Regarding claim 23, Subramanian further discloses: wherein the predicted service intervals are for devices that require periodic service, and the service data aggregator retrieves service events pertaining to the periodic service and calculates the predicted service interval for the periodic service based on a frequency of the periodic services of individual devices as indicated by the service events ([0119]-[0125]).

Regarding claim 25, Subramanian further discloses: wherein the service data aggregator generates an alert to inspect and/or test devices at the predicted service interval to determine if the periodic service is needed ([027], [051]).

Regarding claim 26, Subramanian further discloses: wherein the service data aggregator generating the alert includes scheduling future service visits ([027], [051]).

Regarding claim 28, Subramanian further discloses: wherein the fire alarm systems comprise fire detection devices including one or more of smoke detectors, carbon monoxide detectors, flame detectors, temperature sensors, and/or pull stations and alarm notification devices including one or more of speakers, horns, chimes, light emitting diode (LED) reader boards, and/or flashing lights ([0027] Several types of sensor/detectors (terms used interchangeably herein) can be used such as microphones, motion detectors, smart switches and cameras. The detectors 19 may be hard wired or communicate with the intrusion detection panel 18 wirelessly. In general, detectors 19 sense glass breakage, motion, gas leaks, fire, and/or breach of an entry point, and send the sensed information to the intrusion detection panel 18. Based on the information received from the detectors 19, the intrusion detection panel 18 determines whether to trigger alarms, e.g., by triggering one or more sirens (not shown) at the premises 16 and/or sending alarm messages to the monitoring station 20.).
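For the frequency-based interval calculation addressed in claim 23 above, a minimal sketch under assumed inputs (the dates are illustrative and the helper name is hypothetical): predict the next service interval for a device as the mean gap between its past periodic services.

    # Sketch of frequency-based service-interval prediction (per claim 23's
    # recitation): mean gap, in days, between consecutive past services.
    from datetime import date
    from statistics import mean

    def predicted_interval_days(service_dates):
        """service_dates: dates of past periodic services for one device."""
        ordered = sorted(service_dates)
        gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
        return mean(gaps) if gaps else None

    history = [date(2024, 3, 1), date(2024, 9, 3), date(2025, 3, 5)]
    print(predicted_interval_days(history))  # 184.5 -> roughly semiannual service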
Regarding claim 29, Subramanian further discloses: wherein the validation information further includes identification of technician-specific performance patterns over time comprising changes in service duration trends and completion rates for a particular technician (See [0052], reproduced in the discussion of claim 11 above: the recommendation engine analyzes the historical job records to derive association rules that highlight general trends in the data found in the database 51.).

Regarding claim 30, Subramanian further discloses: wherein the validation information compares technician performance against dynamic performance benchmarks that adjust based on building type, device type, and service environment ([0097] This list of transactions is passed as an input and processed 52g in the server using the Apriori algorithm. The Apriori algorithm runs several experiments, with various values for support and confidence. Support and confidence are defined parameters for the Apriori algorithm. Support is a value of the count of the number of occurrences of each Historical job cause number separately in the database over a first scan of the database. Confidence values come from the algorithm. Different threshold values can be supplied to limit the Apriori output. These values for support and confidence serve as thresholds for the Apriori algorithm. [0114] The prediction engine queries the key extraction module with the list of rules that are shortlisted, e.g., rules may be filtered by the support and confidence thresholds mentioned above, or they may also be removed due to redundancies. Given those rules, the key extraction module returns 54i a list of identified (extracted) key phrases. The system generates a graphical user interface (GUI) (discussed below) that displays on a display device those key phrases in the form of a word cloud as shown in FIG. 7A.).
Regarding claim 31, Subramanian further discloses: wherein the validation information includes a quality scoring system for services completed by each technician, the quality scores being based on historical completion data and comprising factors such as error rates, repeated service needs, and time to resolution ([0111] The key phrase extraction module is configured for the grams following a binomial distribution. The standard deviation of a binomial distribution is sqrt(number of trials × p(success) × (1 − p(success))). The key phrase extraction module, based on such a binomial distribution, calculates a score 54h "z" or "z score" by calculating the standard deviation for each of the grams as sqrt(number of times the gram occurs in the background × p(gram in background) × (1 − p(gram in background))). [0112] The key phrase extraction module calculates the "z score" Z as follows: Z = (observed probability − expected probability) / standard deviation. [0113] The calculated "z score" is provided for every gram in the foreground. The system is configured to display those grams with very high z scores to the user.).
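The quoted z-score computation can be made concrete. A minimal sketch with illustrative numbers follows; the function name and counts are hypothetical, and the standard deviation of the observed proportion is obtained from the quoted binomial form sqrt(n·p·(1−p)) by dividing by n.

    # Worked sketch of the z-score from Subramanian [0111]-[0113]: how far a
    # gram's foreground probability departs from its background (expected)
    # probability, in standard deviations. Numbers are illustrative only.
    from math import sqrt

    def gram_z_score(fg_count, fg_total, bg_count, bg_total):
        """Z = (observed probability - expected probability) / sigma."""
        observed = fg_count / fg_total   # p(gram in foreground)
        expected = bg_count / bg_total   # p(gram in background)
        # sigma of the observed proportion: sqrt(n*p*(1-p)) / n
        sigma = sqrt(fg_total * expected * (1 - expected)) / fg_total
        return (observed - expected) / sigma

    # Hypothetical counts: a gram seen 12 times in 200 foreground records
    # versus 50 times in 10,000 background records.
    print(round(gram_z_score(12, 200, 50, 10_000), 2))  # ~11.03, a very high z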
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Subramanian (US Patent Publication 2017/0011312) in view of Webb (US Patent Publication 2015/0142491) and Azevedo (US 2017/0180214).

Regarding claim 24, Subramanian discloses wherein the periodic service is smoke detector cleaning ([0119]-[0125]). The Examiner notes the use of non-functional descriptive material in the claim: the description of the specific type of service does not alter in any manner the step of generating a predicted service interval for devices requiring periodic service. Subramanian does not explicitly disclose: the service data aggregator adjusts the predicted service interval for the periodic service based on the type of smoke detector. However, Azevedo, which is directed to remote predictive monitoring for industrial equipment, further teaches: adjusting the predicted service interval for the periodic service based on a type of detector ([0041] FIG. 2A is a flowchart illustrating the selection of sensors given a set of restrictions. In Step 1, the user informs the system about the Sensor Networks setup restrictions (R), such as information about the equipment to be monitored (type, model etc.), cost restrictions, desired sampling frequency etc. In Step 2, the system uses a sensors database (Step 2a) to suggest a set of sensors (S) based on the user restrictions (R). In Step 3, given a set of sensors (S), the system estimates the sensors cost (C), including, for example, installation and maintenance estimated costs. [0042] FIG. 2B is a flowchart illustrating an exemplary flow from capturing data to generating reports. In Step 1, data is captured from sensors (Sd). In Step 2, a predictive model (PM) is obtained from a predictive models database (Step 2a), based on the set of sensors (S). In Step 3, the sensors data (Sd) is classified according to a database of signal models (Step 3a), generating a time series of classified sensors data (TS) in Step 4. In Step 5, the time series of classified sensors data (TS) is combined with the predictive model (PM) to estimate the predictive model metrics (PMm), such as the failure risk of the monitored equipment, the cost of preventive maintenance, the cost impact of an equipment failure etc. In Step 6, a report is generated summarizing the time series of classified sensors data (TS) and the predictive model metrics (PMm).).

Therefore, it would have been obvious to one of ordinary skill in the art to adjust the predicted service interval for the service based on the type of detector or sensor, since such an improvement in the system of Subramanian is merely a combination of prior art elements previously known in the art that provides the known benefit of a defect-prediction service defined by sensor types, as disclosed by Azevedo at paragraph [08].

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Subramanian (US Patent Publication 2017/0011312) in view of Webb (US Patent Publication 2015/0142491) and Ming (US Patent Publication 2015/0262114).

Regarding claim 27, Subramanian does not explicitly disclose: wherein the service data aggregator generating the alert includes displaying alerts via the service workflow module to technicians currently providing service. However, Ming, which is related to tracking job status information received from work stations, teaches: wherein the service data aggregator generating the alert includes displaying alerts via the service workflow module to technicians currently providing service ([0035] To service an item 108 of a customer 106, the entity may operate one or more service facilities 112. Each service facility may include one or more work stations 114(1) . . . 114(N), wherein N is an integer number such as one, two, three, four, five, eight, ten, twenty, etc. As further discussed herein, a technician may be assigned to an individual work station, e.g., 114(1), to perform a service job on an item 108 of the customer 106. A technician, in the context of this document, may comprise an individual technician or multiple technicians (e.g., two or more technicians) that control an individual work station, e.g., 114(1). For example, the service job may be a job capable of being performed by one person (e.g., an oil change, styling of hair, a back massage, etc.). Or, the service job may be a job that, for any one of a variety reasons, the entity would like more than one technician to perform (e.g., the job requires multiple people, efficiency is improved if multiple people perform the job, etc.). Accordingly, a technician may include, but is not limited to, a mechanic, a repair man or woman, a hair stylist, a masseuse, a make-up artist, an equipment tuner (e.g., skis, bicycles, etc.) and so forth. In some service environments, a work station may also be referred to as a stall, a team or any other identifier or label that can separate one service unit from the next (e.g., work station 114(1) from work station 114(2)). [0037] In the course of a given operation period (e.g., a day, a week, etc.), the technician may continuously receive and process service jobs of a particular type or service jobs of a variety of types. Moreover, work stations 114(1) . . . 114(N) may be assigned one or more work station devices 116 configured to communicate, to the entity device(s) 104, information regarding the status of a job (referred to herein as "job status information"). For example, a technician may use a work station device 116 to communicate, e.g., to an entity device 104, that a service job has been drawn from a list of queued service jobs waiting to be started. The technician may use the work station device 116 to communicate, e.g., to an entity device 104, an indication of an actual start time for the job and/or an indication of an actual complete time for the job.
[0038] In various embodiments, a work station device 116 may be assigned to each of the work stations 114(1) . . . 114(N). In other embodiments, multiple work stations 114(1) . . . 114(N) may share a work station device 116. In some implementations, the technicians may communicate the job status information to the entity device(s) 104 independent of using a work station device 116, such as by physically walking from a first location of the work station 114(1) to a second location of the entity device 104 to verbally communicate the indications. A representative of the service facility 112 other than the technicians may then enter the job status information at an entity device 104. For example, the representative may be a manager, an advisor, a supervisor, a receptionist, a secretary, a scheduler, etc.).

Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to display alerts via the service workflow module to technicians currently providing service, since such a modification in the system of Subramanian is merely a combination of prior art elements that yields predictable results, such as allowing the system to provide an environment wherein, in the course of a given operation period (e.g., a day, a week, etc.), the technician may continuously receive and process service jobs of a particular type or of a variety of types, as disclosed by Ming at [0038].

Response to Arguments

Applicant's arguments (see Applicant Arguments/Remarks Made in an Amendment, filed 12/16/2025) with respect to the rejections of the claims have been fully considered.

In regard to the previously presented rejection under 35 U.S.C. 112, first paragraph: Applicant's arguments have been considered but are found non-persuasive. Applicant argues that the originally filed specification provides support for: "receiving device events from building management systems and contemporaneous local service data from a technician; generating service events; and storing them." The Examiner understands that the originally filed specification discloses at [056] that the service workflow module receives local service data provided by the technician indicating that a particular panel or device has been serviced, also receives device events from the particular panel, and combines the data; however, it does not disclose the claim requirement of "correlating, for inspection or testing services, control-panel device events received during an inspection of a particular device with contemporaneous local service data for that device to confirm that the service was actually performed" as specifically claimed. Please see the rejection under 35 U.S.C. 112, first paragraph, above, which provides additional explanation. Applicant further pointed to paragraphs [018] and [063]-[064]; however, these paragraphs do not provide support for the requirements of the claim. With respect to the "newly recited correlation-based inspection validation mechanism," Applicant refers to paragraph [018], which merely describes a typical fire alarm service performed by technicians (see paragraph [017]), and to paragraphs [063]-[064], which disclose the data stored by the service events table. These paragraphs do not disclose that the data is gathered in real time as argued, nor do they describe management of the data or how the data is used.
The paragraphs simply describe the information that is being stored by the system; no correlation or other association is described beyond what data the service events table stores. Paragraph [022] generically discloses that technicians are flagged when certain tasks are performed too quickly or too slowly; [068] discloses comparing the time duration of a service performed by a technician with a time duration performed by the same technician and/or with the time durations of other technicians to determine whether the service was actually performed; and [085] further provides details of flagging a particular technician if the service time is shorter than that of other technicians performing a similar job. The specification only uses the time duration of a particular technician, compared to previous services by that technician or to other technicians performing a similar job, to determine whether the service was performed and/or to flag a service for review/audit. It does not disclose the claim requirement of "correlating, for inspection or testing services, control-panel device events received during an inspection of a particular device with contemporaneous local service data for that device to confirm that the service was actually performed" as claimed.

In regard to the previously presented rejection under 35 U.S.C. 101, Applicant argues: "The Office characterizes the claims as organizing human activity or collecting/analyzing data. However, this characterization is not supported. The amended claims recite specific operations performed by computing components (e.g., a service workflow module and a service data aggregator), which are not fundamental economic practices or mental processes. These components generate predicted service intervals and validation information based on aggregated service history and technician-specific performance metrics." The Examiner clarifies that the claims are directed to mental processes and mathematical concepts. Although the claims do recite computing components performing the steps of the abstract idea, those computing components are considered in the analysis under Step 2B, wherein it is determined whether the claims recite additional elements that amount to significantly more than the judicial exception.

Applicant argues: "Amended claim 1 recites a specific set of elements, including receive real-time technician service data via mobile devices, generate and store service events, aggregate service event history, compute predicted service intervals, and generate technician-specific validation information including risk indicators and performance metrics. Amended independent claims 11 and 21 recite similar elements. At least these additional elements integrate the alleged judicial exception into a practical application. MPEP § 2106.04(d)." First, the Examiner clarifies that the claims do not require real-time data.
Additionally, limitations that are indicative of integration into a practical application are: improvements to the functioning of a computer, or to any other technology or technical field; applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; applying the judicial exception with, or by use of, a particular machine; effecting a transformation or reduction of a particular article to a different state or thing; and applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See MPEP 2106.05. The claims at hand provide none of the above. Furthermore, "receive real-time technician service data via mobile devices, generate and store service events, aggregate service event history, compute predicted service intervals, and generate technician-specific validation information including risk indicators and performance metrics" are limitations that can be performed manually, perhaps with the aid of pen and paper, and merely use computer elements such as a mobile device and a processor to link the abstract idea to a specific technological environment by requiring that the steps be performed with the recited computer elements.

Applicant argues: "Here, the claimed invention improves the functionality of service management systems by enabling predictive oversight and performance validation for distributed technical field workers." The Examiner points out that the improvement provided by the system is a business solution rather than an improvement to the functioning of a computer or to any other technology or technical field. See MPEP 2106.05(a).

Applicant argues: "Furthermore, even if the claims recited a judicial exception, they do not, the above quoted limitations show that the claims recite 'additional element[s] us[e] a judicial exception in conjunction with[,] a particular machine or manufacture that is integral to the claim.' MPEP § 2106.04(d)(1). For example, the method involves a specific configuration of computing modules, databases, and aggregators, which are physical components interacting in a novel way. In Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016), the court held that claims directed to a specific improvement in how computers store and retrieve data were not abstract. Similarly, in this case, the method improves how building management service systems monitor technician quality using historical completion data, making it more than a mere abstract idea. The recited additional elements are necessarily an '[i]ntegral use of a machine to achieve performance of a method,' thereby 'integrat[ing] the recited judicial exception into a practical application.' MPEP § 2106.05(b)(1)." The Examiner respectfully disagrees. Enfish provided a background on the state of the art at the time of the invention in the technology, namely the management of information in a computer database. This served as reference material to identify the improvement or, more specifically, to establish that the claimed invention of Enfish was deeply rooted in the technology and was seeking to remedy a problem that arose from the technology.
That is to say, Enfish provided a background explanation of the state of the art to establish the flaws that arose from the use of "pivot tables" and demonstrated that the inventive concept of Enfish lay in the improvement of this technology, i.e., self-referential tables. It was established in Enfish that the claimed invention did not contain an abstract idea because it was not directed towards a fundamental economic practice, a method of organizing human activities, an idea "of itself," or mathematical relationships/formulas; the inventive concept was directed towards the improvement of the technology itself. Although the invention was directed towards the organization of information, the invention of Enfish was not simply relying on or applying well-understood, routine, and conventional concepts known in the technical field, or describing the use of generic devices and technologies to perform an abstract idea, but was in fact directed to, and seeking to improve upon, the technology by addressing issues known in the technology. In the case of the instant invention, the Examiner asserts that the specification lacks any disclosure or evidence demonstrating that the invention seeks to improve upon the technology or, more specifically, that the claimed invention is directed towards addressing and improving upon an issue that arose from the technology; instead, the claimed invention is directed towards the abstract idea and merely applies or utilizes generic computing devices performing the abstract process of analyzing the data in order to generate validation information. Applicant argues that the claims recite "significantly more"; however, Applicant failed to provide an articulated reasoning and failed to point out which elements of the claims should be considered significantly more.

Applicant argues: "In McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299 (Fed. Cir. 2016), the court found that the invention was patent-eligible because it used specific rules to automate a previously manual process in animation. Here, the claimed method also automates a previously manual process-tracking technician proficiency and identifying risk-based service triggers. The amended claims, thus, recite 'significantly more' than any alleged abstract idea by providing at least a significant technical advantage to a technical field." The Examiner respectfully disagrees. In McRO, the claimed improvement was allowing computers to produce "accurate and realistic lip synchronization and facial expressions in animated characters" that previously could only be produced by human animators. '576 patent col. 2 ll. 49-50. As the district court correctly recognized, this computer automation is realized by improving the prior art through "the use of rules, rather than artists, to set the morph weights and transitions between phonemes." Patentability Op., 55 F. Supp. 3d at 1227. The rules are limiting in that they define morph weight sets as a function of the timing of phoneme sub-sequences. In the instant case, there are no comparable rules; the claims recite no improvement analogous to McRO's rules enabling a computer to produce output that previously could only be produced by human animators. In the instant case, the system gathers information in order to generate a prediction and validation information, a process that could be performed manually and is merely limited to a computing environment.
Applicant argues that the claims recite “significantly more”; however, Applicant failed to provide an articulated reasoning and failed to point out which elements of the claims should be considered significantly more. Applicant argues: “In McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299 (Fed. Cir. 2016), the court found that the invention was patent-eligible because it used specific rules to automate a previously manual process in animation. Here, the claimed method also automates a previously manual process: tracking technician proficiency and identifying risk-based service triggers. The amended claims thus recite ‘significantly more’ than any alleged abstract idea by providing at least a significant technical advantage to a technical field.”

The Examiner respectfully disagrees. In McRO, the claimed improvement was allowing computers to produce “accurate and realistic lip synchronization and facial expressions in animated characters” that previously could only be produced by human animators. ’576 patent col. 2 ll. 49-50. As the district court correctly recognized, this computer automation was realized by improving the prior art through “the use of rules, rather than artists, to set the morph weights and transitions between phonemes.” Patentability Op., 55 F. Supp. 3d at 1227. The rules are limiting in that they define morph weight sets as a function of the timing of phoneme sub-sequences. In the instant case, there are no comparable rules that allow the computer to produce a result that previously could only be produced by skilled humans. The system is gathering information in order to generate a prediction and validation information, a process that could be performed manually and is merely expedited by the use of a computer. There is no improvement in the claimed electronic elements, but rather an abstract idea that uses computer elements as tools to aid and expedite the processes of the abstract idea.

Applicant argues: “As in Bascom Global Internet Services, Inc. v. AT&T Mobility LLC, 827 F.3d 1341 (Fed. Cir. 2016), where a combination of steps in a particular arrangement (a filtering system) was found patentable, this method’s combination of validating technician service quality through duration and frequency trend analysis adds a non-conventional, technological improvement to service processing systems.” The Examiner respectfully disagrees. The inventive concept described and claimed in the ’606 patent is the installation of a filtering tool at a specific location, remote from the end users, with customizable filtering features specific to each end user. This design gives the filtering tool both the benefits of a filter on a local computer and the benefits of a filter on the ISP server. BASCOM explains that the inventive concept rests on taking advantage of the ability of at least some ISPs to identify individual accounts that communicate with the ISP server, and to associate a request for Internet content with a specific individual account. According to BASCOM, the inventive concept harnesses this technical feature of network technology in a filtering system by associating individual accounts with their own filtering scheme and elements while locating the filtering system on an ISP server. See Research Corp. Techs. v. Microsoft Corp., 627 F.3d 859, 869 (Fed. Cir. 2010) (“[I]nventions with specific applications or improvements to technologies in the marketplace are not likely to be so abstract that they override the statutory language and framework of the Patent Act.”). On this limited record, this specific method of filtering Internet content cannot be said, as a matter of law, to have been conventional or generic. In the instant case, there is no unconventional arrangement of the conventional computer elements; the computer elements are used in a conventional manner, such that the system is able to receive, gather, and transmit information.
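To make the BASCOM contrast concrete, the arrangement the court credited can be reduced to a per-account lookup performed on the ISP side. The sketch below is the editor's illustration of that arrangement; the names are invented, and nothing here comes from the ’606 patent.

```python
# Editor's sketch of the BASCOM-style arrangement: one filtering tool
# hosted at the ISP, with a customizable scheme per individual account.
# Invented names -- not code from the '606 patent.
FILTER_SCHEMES: dict[str, set[str]] = {
    "account-123": {"gambling"},
    "account-456": {"gambling", "social"},
}

def isp_allows(account_id: str, content_category: str) -> bool:
    """Apply the account's own filtering scheme on the ISP server,
    relying on the ISP's ability to tie a request to an account."""
    blocked = FILTER_SCHEMES.get(account_id, set())
    return content_category not in blocked

print(isp_allows("account-123", "social"))  # True
print(isp_allows("account-456", "social"))  # False
```

The eligibility hook in BASCOM was the placement of that lookup (remote from end users, per account, on the ISP server), not the filtering arithmetic itself; the Examiner's position is that the instant claims identify no analogous unconventional placement or arrangement.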
Applicant argues: “To the extent the Examiner contends the additional elements in the amended claims are ‘generic’ or ‘well understood, routine, and conventional,’ Applicant respectfully requests that the Examiner meet the necessary burden of Step 2B in providing support for the rejection according to M.P.E.P. § 2106.07(a)(II) (‘Evidentiary Requirements In Making A § 101 Rejection’). Applicant respectfully submits that the Examiner cannot provide evidence that additional elements in the amended claims were well-known, routine, and conventional. Therefore, the amended claims recite additional elements that favor eligibility. See MPEP § 2106.05(d)(1).”

It is noted that in Step 2B of the analysis the Examiner considered the additional elements as follows: “…the claims as a whole merely describe a method, computer system, and computer program product that generally ‘apply’ the concepts discussed in prong 1 above. (See MPEP 2106.05(f).) In particular, Applicant has recited the computing components at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. As the court stated in TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613 (Fed. Cir. 2016), merely invoking generic computing components or machinery that perform their functions in their ordinary capacity to facilitate the abstract idea amounts to mere instructions to implement the abstract idea within a computing environment and does not add significantly more to the abstract idea.” That is, the Examiner did not consider the additional elements as simply appending well-understood, routine, and conventional activities previously known to the industry, as argued, but rather as “[a]dding the words ‘apply it’ (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely us[ing] a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).”

With regard to the previously presented rejections under 35 U.S.C. § 103, Applicant’s arguments have been considered but are found non-persuasive. It appears that Applicant is arguing features that are not claimed and features that are not disclosed in the originally filed specification (please refer to the 35 U.S.C. § 112, first paragraph, rejection above). The originally filed specification discloses at paragraph [068] that the duration of a cleaning service performed by a technician is compared to the duration of all other cleanings in order to validate whether the service was actually performed. It further discloses that the service data aggregator compares the duration of all cleanings by the technician to the duration of all cleanings by all other technicians to determine whether further training is required, and, finally, that it compares one group of technicians with a second group of technicians to determine effectiveness. Applicant, however, argues functions and limitations that have no support in the originally filed specification, and has failed to provide an articulated reasoning as to how Webb differs from the originally filed specification. Furthermore, it is noted that the claims are examined as best understood, based on the originally filed specification, due to the amount of indefiniteness found in the claims and the new-matter rejection presented above.
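The paragraph [068] comparisons the Examiner recites reduce to elementary statistics over recorded durations. The sketch below is the editor's reading of that description; the function names and the specific thresholds are invented for illustration, and none of this code appears in the application.

```python
# Editor's sketch of the paragraph [068] comparisons as the Examiner
# describes them. Names and thresholds are invented for illustration.
from statistics import mean, stdev

def service_validated(duration: float, all_durations: list[float],
                      z_cutoff: float = 2.0) -> bool:
    """Compare one cleaning's duration against all other cleanings to
    validate that the service was actually performed."""
    mu, sigma = mean(all_durations), stdev(all_durations)
    return abs(duration - mu) <= z_cutoff * sigma

def training_indicated(tech_durations: list[float],
                       others_durations: list[float]) -> bool:
    """Compare a technician's average cleaning duration to everyone
    else's to decide whether further training is required."""
    return mean(tech_durations) > 1.5 * mean(others_durations)

def group_effectiveness(group_a: list[float], group_b: list[float]) -> float:
    """Compare one group of technicians with a second group."""
    return mean(group_a) / mean(group_b)
```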
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA C. SANTOS-DIAZ, whose telephone number is (571) 272-6532. The examiner can normally be reached Monday through Friday, 8:00 AM to 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sarah Monfeldt, can be reached at 571-270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MARIA C SANTOS-DIAZ/
Primary Examiner, Art Unit 3629

Prosecution Timeline

Sep 18, 2017
Application Filed
Feb 01, 2020
Non-Final Rejection — §101, §103, §112
May 05, 2020
Response Filed
Aug 03, 2020
Non-Final Rejection — §101, §103, §112
Nov 06, 2020
Response Filed
Feb 22, 2021
Final Rejection — §101, §103, §112
Jun 10, 2021
Response after Non-Final Action
Jun 23, 2021
Response after Non-Final Action
Jul 15, 2021
Request for Continued Examination
Jul 20, 2021
Response after Non-Final Action
Oct 17, 2021
Non-Final Rejection — §101, §103, §112
Jan 21, 2022
Response Filed
May 16, 2022
Final Rejection — §101, §103, §112
Nov 18, 2022
Request for Continued Examination
Nov 23, 2022
Response after Non-Final Action
Feb 06, 2023
Non-Final Rejection — §101, §103, §112
Jun 05, 2023
Response Filed
Sep 09, 2023
Final Rejection — §101, §103, §112
Dec 12, 2023
Request for Continued Examination
Dec 13, 2023
Response after Non-Final Action
Jan 27, 2024
Non-Final Rejection — §101, §103, §112
May 01, 2024
Response Filed
Aug 27, 2024
Final Rejection — §101, §103, §112
Nov 27, 2024
Request for Continued Examination
Dec 02, 2024
Response after Non-Final Action
Dec 28, 2024
Non-Final Rejection — §101, §103, §112
Jul 01, 2025
Response Filed
Sep 18, 2025
Final Rejection — §101, §103, §112
Dec 16, 2025
Request for Continued Examination
Jan 09, 2026
Response after Non-Final Action
Jan 22, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602633
DATA CENTER GUIDE CREATION AND COST ESTIMATION FOR AUGMENTED REALITY HEADSETS
2y 5m to grant • Granted Apr 14, 2026
Patent 12602632
WORK CHAT ROOM-BASED TASK MANAGEMENT APPARATUS AND METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12602628
EVALUATING ACTION PLANS FOR OPTIMIZING SUSTAINABILITY FACTORS OF AN ENTERPRISE
2y 5m to grant • Granted Apr 14, 2026
Patent 12572882
SYSTEM OF AND METHOD FOR OPTIMIZING SCHEDULE DESIGN VIA COLLABORATIVE AGREEMENT FACILITATION
2y 5m to grant • Granted Mar 10, 2026
Patent 12555082
SMART WASTING STATION FOR MEDICATIONS
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

12-13
Expected OA Rounds
33%
Grant Probability
63%
With Interview (+30.0%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 291 resolved cases by this examiner. Grant probability derived from career allow rate.
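The displayed projections are consistent with a simple additive reading of the page's own figures: the base grant probability equals the examiner's career allow rate (97 granted of 291 resolved, about 33%), and the interview path adds the observed 30.0-point lift. The sketch below is the editor's inference from those numbers, not a documented formula of this tool.

```python
# Editor's sketch of how the displayed projections appear to be derived.
# Inferred from the page's own figures (97 granted of 291 resolved,
# +30.0 percentage point interview lift) -- not a documented formula.
granted, resolved = 97, 291
base = granted / resolved                  # 0.333... -> shown as 33%
interview_lift = 0.30                      # +30.0 percentage points
with_interview = base + interview_lift     # 0.633... -> shown as 63%

print(f"base: {base:.0%}, with interview: {with_interview:.0%}")
# base: 33%, with interview: 63%
```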
