DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Application
This Office action is in response to applicant's filings of 10/16/25.
Claims 1-2, 5-6, 9-10, 14, 15, 19, 22 and 24 are amended.
Claims 7 and 11-13 are cancelled.
No claims are added.
Claims 1-6, 8-10, and 14-24 are pending.
Note:
As was discussed in the interview dated 10/14/25, the amended claims are still recited at a very high level of generality. The claim scope is broad, and so the claims still amount to no more than “apply it”. Additionally, the clustering limitation of the amended claims is still shown by the prior art.
In light of these notes, the amended claims do not overcome the previously presented rejections under 35 U.S.C. 101 and 103, as discussed below. This note is intended as a conversation starter to help applicant understand the examiner’s perspective. Applicant is welcome to call the examiner to discuss this further.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6, 8-10, and 14-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Step One - First, pursuant to Step 1 of the January 2019 Guidance, 84 Fed. Reg. 53, claims 9-10, 14, 21 and 23 are directed to a method, which is a statutory category.
Step One - First, pursuant to Step 1 of the January 2019 Guidance, 84 Fed. Reg. 53, claims 1-6, 8 and 24 are directed to a system, which is a statutory category.
Step One - First, pursuant to Step 1 of the January 2019 Guidance, 84 Fed. Reg. 53, claims 15-20 and 22 are directed to a computer program product, which is a statutory category.
Under the 2019 PEG, Step 2A sets forth a two-prong inquiry, under which a claim is not “directed to” a judicial exception unless the claim recites a judicial exception (Prong One) and fails to integrate that exception into a practical application (Prong Two). Further, the particular groupings of abstract ideas are consistent with judicial precedent and are based on an extraction and synthesis of the key concepts identified by the courts as being abstract.
With respect to Step 2A, Prong One, the claims as drafted, and given their broadest reasonable interpretation, fall within the abstract idea grouping of “certain methods of organizing human activity” (business relations; relationships or interactions between people). For instance, independent claim 9 is directed to an abstract idea, as evidenced by claim limitations “storing, historical interaction information associated with previous service interactions for receiving the service, wherein the historical interaction information is obtained of a provider of the service, and wherein the historical interaction information indicates one or more stages of receiving the service, user information of respective users receiving the service and information corresponding to a time duration of the one or more stages; determining, using a clustering model of an impact analysis model, and based on analyzing the historical interaction information, whether a first group of the users is associated with experiencing the degradation, during a particular stage of the service, based on a duration of time of the particular stage; receiving, a request involving a user receiving the service, wherein the request involving the user receiving the service indicates that the user is associated with, an attribute of the first group of the users; analyzing, the first group of the user and a second group of the users to diagnose a cause of the first group experiencing the degradation, wherein the cause of the degradation during the particular stage; providing, the user to receive the service during the particular stage; transmitting, a request for feedback that solicits the user to indicate whether the second provider system was available to the user and whether the second provider system was useful in facilitating a satisfactory level of service; and modifying, based on a response associated with the feedback and based on an availability.”
These claim limitations belong to the grouping of “certain methods of organizing human activity” because the claims are related to service and schedule management for a service provider to prevent the user experience from being degraded. Managing and analyzing historical user interactions to determine overscheduling of a service provider and to track degraded service experience for one or more human entities (see specification [0011]) involves organizing human activity based on the description of “certain methods of organizing human activity” provided by the courts. The courts have described “certain methods of organizing human activity” as fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
Independent claims 1 and 15 recite substantially similar limitations to independent claim 9 and are rejected under Step 2A for similar reasons to claim 9 above.
With respect to Step 2A, Prong Two, this judicial exception is not integrated into a practical application. In particular, the claim only recites “A method for improving a degradation in timing for users receiving a service, comprising: by a device and using one or more memories of the device, by one or more processors of the device, by the one or more processors of the device, by the one or more processors and using a linear regression analysis performed by the impact analysis model, comprises using a first provider system, by the one or more processors of the device, to a user device of the user, an instruction for, using a second provider system, by the one or more processors of the device and to the user device, by the one or more processors of the device, one or more settings of the clustering model, of the one or more provider systems, via one or more provider systems; performing, an action that reduces the degradation, wherein performing the action comprises: A non-transitory computer-readable medium storing a set of instructions for identifying and improving a degradation in timing for users receiving a service, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: A system for identifying and improving a degradation in a timing for users receiving a service, the system comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories, configured to:”, such that it amounts to no more than: adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
As a result, claims 1, 9 and 15 do not provide any specifics regarding the integration into a practical application when recited in a claim with a judicial exception.
Similarly, dependent claims 2-6, 8, 10, 14, 16-20 and 21-24 are also directed to an abstract idea under Step 2A, Prongs One and Two. In the present application, all of the dependent claims have been evaluated and it was found that they all inherit the deficiencies set forth with respect to the independent claims. For instance, dependent claim 10 recites “wherein the second group of the users did not experience the degradation”. Dependent claim 24 recites “the feedback comprises an indication of whether the one or more provider systems facilitated the satisfactory level of service for the particular stage of the service”. Here, these claims offer further descriptive limitations of elements found in the independent claims, which are similar to the abstract idea noted in the independent claims above.
Dependent claim 5 recites “wherein the one or more processors are further configured to: receive, by a server of the system and from a wireless communication device of the user, an electronic message that requests the service and indicates one or more attributes of the user, wherein the with an attribute one or more attributes of the user are associated with the degradation; and determine, based on the one or more attributes, that the user is associated with the first group.”. In this claim, “by a server of the system and from a wireless communication device of the user, an electronic message that requests the service” is an additional element, but it is still being recited such that it amounts to no more than: adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). As a result, Examiner asserts that dependent claims, such as dependent claims 2-6, 8, 10, 14, 16-20 and 21-24 are also directed to the abstract idea identified above.
With respect to Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. First, the invention lacks improvements to another technology or technical field [see Alice at 2351; 2019 IEG at 55], lacks meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment [Alice at 2360; 2019 IEG at 55], and fails to effect a transformation or reduction of a particular article to a different state or thing [2019 IEG at 55]. For the reasons articulated above, the claims recite an abstract idea that is limited to a particular field of endeavor (MPEP § 2106.05(h)) and recite insignificant extra-solution activity (MPEP § 2106.05(g)). By the factors and rationale provided above with respect to these MPEP sections, the additional elements of the claims that fail to integrate the abstract idea into a practical application also fail to amount to “significantly more” than the abstract idea.
As discussed above with respect to integration of the abstract idea into a practical application, the additional element(s) of “A method for improving a degradation in timing for users receiving a service, comprising: by a device and using one or more memories of the device, by one or more processors of the device, by the one or more processors of the device, by the one or more processors and using a linear regression analysis performed by the impact analysis model, comprises using a first provider system, by the one or more processors of the device, to a user device of the user, an instruction for, using a second provider system, by the one or more processors of the device and to the user device, by the one or more processors of the device, one or more settings of the clustering model, of the one or more provider systems, via one or more provider systems; performing, an action that reduces the degradation, wherein performing the action comprises: A non-transitory computer-readable medium storing a set of instructions for identifying and improving a degradation in timing for users receiving a service, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: A system for identifying and improving a degradation in a timing for users receiving a service, the system comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories, configured to:” are insufficient to amount to significantly more. Applicant's originally submitted specification describes the computer components above at least in [0059]-[0060]. In light of the specification, it should be noted that the components discussed above do not meaningfully limit the abstract idea because they merely link the use of the abstract idea to a particular technological environment (i.e., "implementation via computers").
In light of the specification, it should be noted that the claim limitations discussed above are merely instructions to implement the abstract idea on a computer. See MPEP 2106.05(f) (Mere Instructions to Apply an Exception: “Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” Alice Corp., 134 S. Ct. at 235). Mere instructions to apply an exception using a computer component cannot provide an inventive concept.
The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself.
Independent claims 1 and 15 recite substantially similar limitations to independent claim 9 and are rejected under Step 2B for similar reasons to claim 9 above.
Further, it should be noted that the additional elements of the claimed invention, when considered individually or as an ordered combination along with the other limitations discussed above in claim 1, also do not meaningfully limit the abstract idea because they merely link the use of the abstract idea to a particular technological environment (i.e., "implementation via computers"). In light of the specification, it should be noted that the claim limitations discussed above are merely instructions to implement the abstract idea on a computer. See MPEP 2106.
Similarly, dependent claims 2-6, 8, 10, 14, 16-20 and 21-24 also do not include limitations amounting to significantly more than the abstract idea under Step 2B of the Alice framework. In the present application, all of the dependent claims have been evaluated and it was found that they all inherit the deficiencies set forth with respect to the independent claims. Further, it should be noted that the dependent claims do not include limitations that overcome the stated assertions. Here, the dependent claims recite features/limitations that include the computer components identified above in the Step 2B analysis of independent claims 1, 9 and 15. As a result, Examiner asserts that dependent claims 2-6, 8, 10, 14, 16-20 and 21-24 are also directed to the abstract idea identified above.
For more information on 101 rejections, see MPEP 2106 and the January 2019 Guidance, available at https://www.govinfo.gov/content/pkg/FR-2019-01-07/pdf/2018-28282.pdf.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8-10 and 14-24 are rejected under 35 U.S.C. 103 as being unpatentable over Ridgeway (US 2019/0287039) in view of Boe et al. (US 2016/0292611).
As per claims 1, 9 and 15: Regarding the claim limitations below, Reference Ridgeway shows:
A system for identifying and improving a degradation in a timing for users receiving a service, the system comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories, configured to: (Ridgeway: [0023]: system)
A method for improving a degradation in timing for users receiving a service, (Ridgeway: [0023]: method), comprising:
Regarding the claim limitations below, Reference Ridgeway shows:
one or more memories (Ridgeway: [0023]: processor, memories); and
Regarding the claim limitations below, Reference Ridgeway shows:
one or more processors, communicatively coupled to the one or more memories (Ridgeway: [0023]: processor, memories), configured to:
A non-transitory computer-readable medium storing a set of instructions for identifying and improving a degradation in timing for users receiving a service, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to (Ridgeway: [0023]: computer readable medium):
one or more instructions that, when executed by one or more processors of a device, cause the device to (Ridgeway: [0023]: processor, memories):
Regarding the claim limitations below, Reference Ridgeway shows:
storing, by a device and using one or more memories of the device, historical interaction information associated with previous service interactions for receiving the service
Ridgeway shows [0002]: the present invention relates generally to methods and systems for the statistical analysis of retrospective service provider data for evaluating the effects of the performance of specific providers among a collection of service providers, and more specifically to the analysis of medical data for evaluating the performance of physicians, clinics, hospitals, and other medical providers. [0006]: The benchmark includes creating a propensity scoring model, which weights the data for each patient treated by other service providers to collectively resemble the hospital for which the benchmark is being constructed, a regression model providing an estimate of the effects of the service provider on outcomes, a doubly robust estimate that measures the effect of the service provider on the identified effect. [0007]: the system and methods provide mechanisms for dynamically updating the benchmark as new patient records join the data systems, and as providers offer new treatments…Service providers may be queued for updating their benchmarks, or for analyzing whether an update is necessary, based on a refresh priority score that measures how important is to check whether a given provider's benchmark should be updated. In some embodiments, the quality of the benchmark may be measured to determine if the benchmark is within a specified or user-defined threshold or tolerance. If the quality of the benchmark is within the threshold and therefore the benchmark is sufficient, the original benchmark is used and the propensity score model is not updated. If the quality of the benchmark has deteriorated beyond the threshold, the benchmark is insufficient and the propensity score model is recomputed. Here, the benchmarking, priority scoring, specified or user-defined threshold or tolerance, using a regression model read on the claim limitation above. Ridgeway further shows maintaining records of user or customer history (see [0081]). 
Ridgeway further shows [0030]: The benchmark comparison component 120 provides an interface for determining an outlier probability for each service provider for each outcome to identify underperforming and overperforming service providers. The outlier probability determines whether this service provider should be expected to have an elevated (or reduced) outcome relative to other service providers, based on the mix of cases for that particular service provider. [0044] In one embodiment, the propensity scores may be created as follows. Patient records associated with multiple medical providers are populated in a database or table, for example, by the propensity scoring software component. Initial “seeding” propensity scores for each provider's patients and residuals are assigned. In one embodiment, initial seeding scores are calculated by dividing the number of patients for a given provider (e.g., Provider 1001) by the total number of patient records across all providers. The residual for each patient record is calculated by subtracting the initial seeding scores from a provider indicator identifying whether the patient is associated with the given provider or not. The provider indicator can be assigned a value of 1 if the patient is one of the given provider's patients, and assigned a value of 0 if the patient is not one of the given provider's patients. The provider indicator may be populated as a column in the database or table. The records of user or customer history (see [0081]) read on “historical” in the claim. Regarding the claimed memory, device and processor, Ridgeway shows in [0023]: The systems and methods herein include a benchmark comparison using a propensity scoring, a weighted regression model, and an outlier probability. FIG. 1 illustrates a schematic block diagram of a system for comparing service providers based on dynamically updated observational data according to one embodiment. As shown in FIG. 1, the system includes one or more operator terminals 102 for providing system access to operators over a data communications network 104 to an analysis machine 106 and a service provider data machine 108. The service provider data machine 108 may provide access for an operator to a service provider record database 110 and an admin database 112.
The analysis machine 106 may include a processor, a non-volatile memory device operably coupled to the processor storing programming instructions and other data, the processor operable to execute program instructions, and a network connection to permit the analysis machine 106 to receive input from the service provider data machine 108 and other sources and to output results to the operator terminal 102 and other destinations. [0026], [0033]-[0037], [0052]);
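For clarity of the record only, the initial seeding-score arithmetic that Ridgeway describes in [0044], as quoted above, may be sketched as follows. This is a hypothetical illustration prepared by the examiner under the quoted description; the function, names, and data below are not part of Ridgeway or of the claims.

```python
# Hypothetical sketch of the seeding-score computation quoted from Ridgeway [0044].
# Names and data are illustrative only; just the arithmetic follows the quoted text.

def seed_scores_and_residuals(records, provider_id):
    """records: (patient_id, provider_id) pairs across all providers.

    Returns the initial "seeding" propensity score for the given provider
    (provider patient count divided by total patient count) and, per record,
    the provider indicator (1 or 0) and residual (indicator minus seed score).
    """
    n_total = len(records)
    n_provider = sum(1 for _, p in records if p == provider_id)
    seed = n_provider / n_total  # initial seeding score
    rows = []
    for patient_id, p in records:
        indicator = 1 if p == provider_id else 0  # provider indicator column
        rows.append((patient_id, indicator, indicator - seed))  # residual
    return seed, rows
```

For example, with four patient records of which two belong to Provider 1001, the seeding score is 2/4 = 0.5, and each record's residual is its provider indicator minus 0.5.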
Regarding the claim limitations below, Reference Ridgeway in view of Reference Boe shows:
wherein the historical interaction information is obtained via one or more provider systems of a provider of the service, and
wherein the historical interaction information indicates one or more stages of receiving the service, user information of respective users receiving the service and information corresponding to a time duration of the one or more stages;
(Ridgeway: [0021]: Underperforming and/or overperforming service providers may be compared relative to each other, based on their performance relative to their individual benchmarks. This enables a determination for whether observed differences in outcomes between service providers is due to systematic differences in the service providers themselves, or whether observed differences in outcomes is due to a service provider having a different mix of cases than other service providers. [0023] The systems and methods herein include a benchmark comparison using a propensity scoring, a weighted regression model, and an outlier probability. FIG. 1 illustrates a schematic block diagram of a system for comparing service providers based on dynamically updated observational data according to one embodiment. As shown in FIG. 1, the system includes one or more operator terminals 102 for providing system access to operators over a data communications network 104 to an analysis machine 106 and a service provider data machine 108.
Ridgeway shows in [0067]: “(3) the number of new patient records, or the percentage increase of records, that have entered the database for the service provider having a particular feature of interest. For example, it may be determined that a particular feature has a relatively large impact on outcomes (such as being prescribed a certain medication), and may warrant a higher priority.” However, Ridgeway does not explicitly show “feedback” as is recited in the claim.
Boe shows the above limitations at least in [0734]-[0742]: impact analysis model and impact score. Boe further shows in [0873]-[0880]: feedback and impact.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Boe, particularly the teachings on satisfying the threshold ([0053], [0769], [1345]), in the disclosure of Reference Ridgeway, particularly in the threshold tracking in [0107], in order to provide for a system that not only tracks the threshold but also makes sure the threshold is satisfied as taught by Reference Boe (see at least in [0663]: the value can be produced by a search query using the search of 2902 and can be, for example, the value of threshold field 2904 associated with an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on values for the specified threshold field of 2904 associated with the one or more events satisfying the search criteria of the search query when the search query is executed, or a count of events satisfying the search criteria of the search query that include a constraint for the threshold field of 2904, etc.), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143 A));
Regarding the claim limitations below, Reference Ridgeway in view of Reference Boe shows:
determining, by one or more processors of the device, using a clustering model of an impact analysis model, and based on analyzing the historical interaction information, whether a first group of the users is associated with experiencing the degradation, during a particular stage of the service, based on a duration of time of the particular stage
(Ridgeway: [0021]: Underperforming and/or overperforming service providers may be compared relative to each other, based on their performance relative to their individual benchmarks. This enables a determination for whether observed differences in outcomes between service providers is due to systematic differences in the service providers themselves, or whether observed differences in outcomes is due to a service provider having a different mix of cases than other service providers. [0023] The systems and methods herein include a benchmark comparison using a propensity scoring, a weighted regression model, and an outlier probability. FIG. 1 illustrates a schematic block diagram of a system for comparing service providers based on dynamically updated observational data according to one embodiment. As shown in FIG. 1, the system includes one or more operator terminals 102 for providing system access to operators over a data communications network 104 to an analysis machine 106 and a service provider data machine 108.
Ridgeway shows in [0067]: “(3) the number of new patient records, or the percentage increase of records, that have entered the database for the service provider having a particular feature of interest. For example, it may be determined that a particular feature has a relatively large impact on outcomes (such as being prescribed a certain medication), and may warrant a higher priority.” However, Ridgeway does not explicitly show “feedback” as is recited in the claim. Ridgeway also does not explicitly show “clustering.”
Boe shows “feedback” at least in [0734]-[0742]: impact analysis model and impact score. Boe further shows in [0873]-[0880]: feedback and impact. Boe shows “clustering” in [0812], [1505], [1507].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Boe, particularly the teachings on satisfying the threshold ([0053], [0769], [1345]), in the disclosure of Reference Ridgeway, particularly in the threshold tracking in [0107], in order to provide for a system that not only tracks the threshold but also makes sure the threshold is satisfied as taught by Reference Boe (see at least in [0663]: the value can be produced by a search query using the search of 2902 and can be, for example, the value of threshold field 2904 associated with an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on values for the specified threshold field of 2904 associated with the one or more events satisfying the search criteria of the search query when the search query is executed, or a count of events satisfying the search criteria of the search query that include a constraint for the threshold field of 2904, etc.), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143 A));
Regarding the claim limitations below, Reference Ridgeway in view of Reference Boe shows:
receiving, by the one or more processors of the device, a request involving a user receiving the service, wherein the request involving the user receiving the service indicates that the user is associated with an attribute of the first group of the users
(Ridgeway: [0002]: the present invention relates generally to methods and systems for the statistical analysis of retrospective service provider data for evaluating the effects of the performance of specific providers among a collection of service providers, and more specifically to the analysis of medical data for evaluating the performance of physicians, clinics, hospitals, and other medical providers. [0006]: The benchmark includes creating a propensity scoring model, which weights the data for each patient treated by other service providers to collectively resemble the hospital for which the benchmark is being constructed, a regression model providing an estimate of the effects of the service provider on outcomes, and a doubly robust estimate that measures the effect of the service provider on the identified effect. [0007]: the system and methods provide mechanisms for dynamically updating the benchmark as new patient records join the data systems, and as providers offer new treatments… Service providers may be queued for updating their benchmarks, or for analyzing whether an update is necessary, based on a refresh priority score that measures how important it is to check whether a given provider's benchmark should be updated. In some embodiments, the quality of the benchmark may be measured to determine if the benchmark is within a specified or user-defined threshold or tolerance. If the quality of the benchmark is within the threshold and therefore the benchmark is sufficient, the original benchmark is used and the propensity score model is not updated. If the quality of the benchmark has deteriorated beyond the threshold, the benchmark is insufficient and the propensity score model is recomputed. [0021]: Underperforming and/or overperforming service providers may be compared relative to each other, based on their performance relative to their individual benchmarks.
This enables a determination of whether observed differences in outcomes between service providers are due to systematic differences in the service providers themselves, or whether observed differences in outcomes are due to a service provider having a different mix of cases than other service providers. [0023]: The systems and methods herein include a benchmark comparison using a propensity scoring, a weighted regression model, and an outlier probability. FIG. 1 illustrates a schematic block diagram of a system for comparing service providers based on dynamically updated observational data according to one embodiment. As shown in FIG. 1, the system includes one or more operator terminals 102 for providing system access to operators over a data communications network 104 to an analysis machine 106 and a service provider data machine 108. Here, the benchmarking, priority scoring, specified or user-defined threshold or tolerance, and use of a regression model read on the claim limitation above.);
Regarding the claim limitations below, Reference Ridgeway in view of Reference Boe shows:
analyzing, by the one or more processors and using a linear regression analysis performed by the impact analysis model, the first group of the users and a second group of the users to diagnose a cause of the first group experiencing the degradation,
wherein the cause of the degradation comprises using a first provider system during the particular stage
(Ridgeway: [0002]: the present invention relates generally to methods and systems for the statistical analysis of retrospective service provider data for evaluating the effects of the performance of specific providers among a collection of service providers, and more specifically to the analysis of medical data for evaluating the performance of physicians, clinics, hospitals, and other medical providers. [0006]: The benchmark includes creating a propensity scoring model, which weights the data for each patient treated by other service providers to collectively resemble the hospital for which the benchmark is being constructed, a regression model providing an estimate of the effects of the service provider on outcomes, and a doubly robust estimate that measures the effect of the service provider on the identified effect. [0007]: the system and methods provide mechanisms for dynamically updating the benchmark as new patient records join the data systems, and as providers offer new treatments… Service providers may be queued for updating their benchmarks, or for analyzing whether an update is necessary, based on a refresh priority score that measures how important it is to check whether a given provider's benchmark should be updated. In some embodiments, the quality of the benchmark may be measured to determine if the benchmark is within a specified or user-defined threshold or tolerance. If the quality of the benchmark is within the threshold and therefore the benchmark is sufficient, the original benchmark is used and the propensity score model is not updated. If the quality of the benchmark has deteriorated beyond the threshold, the benchmark is insufficient and the propensity score model is recomputed. [0021]: Underperforming and/or overperforming service providers may be compared relative to each other, based on their performance relative to their individual benchmarks.
This enables a determination of whether observed differences in outcomes between service providers are due to systematic differences in the service providers themselves, or whether observed differences in outcomes are due to a service provider having a different mix of cases than other service providers. [0023]: The systems and methods herein include a benchmark comparison using a propensity scoring, a weighted regression model, and an outlier probability. FIG. 1 illustrates a schematic block diagram of a system for comparing service providers based on dynamically updated observational data according to one embodiment. As shown in FIG. 1, the system includes one or more operator terminals 102 for providing system access to operators over a data communications network 104 to an analysis machine 106 and a service provider data machine 108. Here, the benchmarking, priority scoring, specified or user-defined threshold or tolerance, and use of a regression model read on the claim limitation above. Ridgeway: [0107]: In accordance with the dynamic updating model described above, the data for the 26 hospitals in this study may be continually or intermittently updated with additional hospital data as new records are created. The service provider record database may be updated with additional data to provide the latest and most up-to-date information in the benchmarks. By updating the record database and benchmarks with new data, users can track trends over time and see the effects of new or revised treatment plans. For example, hospitals that were previously identified as outliers (e.g., as compared to their benchmarks) may be tracked to see if their performance changes over time.
This may inform hospital administrators, other hospitals, and policy makers of whether new initiatives in high performing hospitals should be applied broadly to other hospitals, and whether remedial actions are warranted to improve the performance of underperforming hospitals, such as additional funding, revised policies, or changes in management. Ridgeway also shows, in claim 4: “The system of claim 3, wherein the threshold is set as 1% of the largest difference across all values in the distributions.” Ridgeway: [0026]: The analysis machine 106 may provide access for an operator to several components including a data selection component 114, a propensity scoring component 116, a regression modeling component 118, a benchmark comparison component 120, a dynamic updating component 122, and a data output component 124. These components may take the form of computer instructions stored in computer memory and executed by a computer processor. [0028]: The propensity scoring component 116 determines and assigns a propensity score to each service provider. In the example of benchmarking for medical services, the propensity score represents the likelihood of a patient in the database being treated by a given medical provider being benchmarked. The propensity scores are used to create a distribution of the features for patients of the other medical providers to match a distribution of the features for the patients of the given medical provider. The propensity scoring component 116 applies the propensity score to the patient data of the other medical providers to weight the data of the patient records such that the weighted data for the patients of the other medical providers (excluding the given medical provider) closely resembles the non-weighted data for the group of patients of the given medical provider. [0029]: The regression modeling component 118 provides an interface for the operator to estimate the effects of the service provider on observed outcomes.
In the example of benchmarking for medical service providers, the regression modeling component 118 may estimate the relative likelihood that a patient of the given medical provider would experience an identified outcome (such as expected patient readmission rate within 30 days) as compared to if the patient had been treated by the other medical providers in the record database 110. The regression modeling component 118 receives weighted data weighted by the propensity scoring component 116. Here, the groups can be the patients treated by each service provider out of the many service providers being benchmarked, which read on the first group and the second group in the claim).
Ridgeway shows “linear regression” at least in [0023]: The systems and methods herein include a benchmark comparison using a propensity scoring, a weighted regression model, and an outlier probability. Also, see [0026]-[0029], [0038]-[0049], and [0050]: To compute a doubly robust estimate of the effect of the provider, and simultaneously adjust for remaining confounding, the system estimates a propensity score weighted generalized linear model. Depending on the type of outcome, the regression model will be an ordinary least squares model (for continuous outcomes), a logistic regression model (for 0/1 outcomes), a Poisson regression model (for count outcomes), or other standard statistical models appropriate for the type of outcome. [0084]: An estimate for the propensity score p(x) was calculated from the patient data in the service provider record database. Generalized boosted modeling was used to estimate the propensity score. This modeling strategy is similar to logistic regression except that, rather than using the individual xs as covariates, a linear combination of basis functions is used. The following equation was used for generalized boosted modeling. Specifically, the functions h.sub.j(x) are all piecewise constant functions of x and their interactions involving up to three patient features. This allows the estimate of the propensity score p(x) to be flexible, including non-linear relationships, threshold and saturation effects, and higher-order interactions. As a result, matching patient features on their entire distribution (not just their averages) is possible, as well as a match on combinations of patient features.
Reference Boe also discloses linear regression, degradation and threshold: linear regression: [1253] FIG. 62B illustrates an example of a GUI 6220 for editing a graphical visualization of KPI values along a time-based graph lane in a visual interface, in accordance with one or more implementations of the present disclosure. In one implementation, in response to the selection of the “Edit Lane” option in drop down menu 5618, the system presents GUI 6220 in order to edit the graph rendering options for the corresponding graphical visualization. In one implementation, the graph rendering options include the vertical axis scale 6222 and the vertical axis boundary 6224 for the corresponding lane. Options for the vertical axis scale 6222 include linear and logarithmic. Depending on the selection, the vertical axis of the corresponding lane will be displayed with either a linear or a logarithmic scale. Options for the vertical axis boundary 6224 include data extent, zero extent, and static. When data extent is selected, the range of values shown on the vertical axis of the corresponding lane will be set to include the full range of KPI values during the selected time period (i.e., the vertical axis will range from the maximum to the minimum KPI value). When zero extent is selected, the range of values shown on the vertical axis of the corresponding lane will be set to range from the maximum KPI value to zero (or to a negative value, if such a value exists in the data). When static is selected, the user can enter a custom range of values which will be shown on the vertical axis of the corresponding lane. Threshold: [0053]: when the KPI is determined using data for Entity-3, the value for the KPI for Avg CPU Load may be at 80%. If the threshold is applied to the values of the aggregate of the entities (two at 50% and one at 80%), the aggregate value of the entities is 60%, and the KPI would not exceed the 80% threshold. 
If the threshold is applied using an entity basis for the thresholds (applied to the individual KPI values as calculated pertaining to each entity), the computing machine can determine that the KPI pertaining to one of the entities (e.g., Entity-3) satisfies the threshold by being equal to 80%. [0769]: One or more thresholds can be applied to the value associated with the threshold field. In particular, the value can be produced by the KPI search query and can be, for example, the value of the threshold field in an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on one or more values of the threshold field in one or more events satisfying the search criteria of the search query when the search query is executed, a count of events satisfying the search criteria of the search query that include a constraint for the threshold field, etc. [1345]: The tolerance range may affect how precisely the aggregate triggering conditions are evaluated, for example, a tolerance range of 10% may consider values within 10% of one or more thresholds in the KPI criterion to satisfy the KPI criterion. Degradation: [1320] Implementations of the present disclosure may include a mechanism to generate correlation searches based on information displayed in one or more graph lanes. The graph lanes may be selected by a user and may be customized to cover a desired time period. The graph lanes may allow a user to detect, diagnose or solve a problem (e.g., system malfunction, performance degradation) or identify a performance pattern of interest (e.g., increased usage of one or more services by end users). The graph lanes may allow the user to visually inspect a diverse set of information and may enhance the user's ability to identify patterns amongst the graph lanes. Once a user has identified the graph lanes that relate to a problem or a pattern of interest, the user may submit a request to create a new correlation search. 
The system may then analyze the information represented by the graph lanes to create a definition for a new correlation search. The new correlation search provided by the created definition may then be run to detect a re-occurrence of the problem or the pattern of interest, and to cause an action (e.g., an alert or a notification of the user) to be performed.
Reference Ridgeway and Reference Boe are analogous prior art to the claimed invention because the references generally relate to the field of performance monitoring. Further, said references are part of the same classification, i.e., G06Q10/06393. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Reference Boe, particularly the satisfying of the threshold ([0053], [0769], [1345]), into the disclosure of Reference Ridgeway, particularly the threshold tracking in [0107], in order to provide a system that not only tracks the threshold but also ensures that the threshold is satisfied, as taught by Reference Boe (see at least [0663]: the value can be produced by a search query using the search of 2902 and can be, for example, the value of threshold field 2904 associated with an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on values for the specified threshold field of 2904 associated with the one or more events satisfying the search criteria of the search query when the search query is executed, or a count of events satisfying the search criteria of the search query that include a constraint for the threshold field of 2904, etc.), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143(I)(A)).
Regarding the claim limitations below, Reference Ridgeway in view of Reference Boe shows:
performing, by the one or more processors of the device, an action that reduces the degradation, wherein performing the action comprises:
(Ridgeway: [0023]: processor, memories. [0002]: the present invention relates generally to methods and systems for the statistical analysis of retrospective service provider data for evaluating the effects of the performance of specific providers among a collection of service providers, and more specifically to the analysis of medical data for evaluating the performance of physicians, clinics, hospitals, and other medical providers. [0006]: The benchmark includes creating a propensity scoring model, which weights the data for each patient treated by other service providers to collectively resemble the hospital for which the benchmark is being constructed, a regression model providing an estimate of the effects of the service provider on outcomes, and a doubly robust estimate that measures the effect of the service provider on the identified effect. [0007]: the system and methods provide mechanisms for dynamically updating the benchmark as new patient records join the data systems, and as providers offer new treatments… Service providers may be queued for updating their benchmarks, or for analyzing whether an update is necessary, based on a refresh priority score that measures how important it is to check whether a given provider's benchmark should be updated. In some embodiments, the quality of the benchmark may be measured to determine if the benchmark is within a specified or user-defined threshold or tolerance. If the quality of the benchmark is within the threshold and therefore the benchmark is sufficient, the original benchmark is used and the propensity score model is not updated. If the quality of the benchmark has deteriorated beyond the threshold, the benchmark is insufficient and the propensity score model is recomputed. [0021]: Underperforming and/or overperforming service providers may be compared relative to each other, based on their performance relative to their individual benchmarks.
This enables a determination of whether observed differences in outcomes between service providers are due to systematic differences in the service providers themselves, or whether observed differences in outcomes are due to a service provider having a different mix of cases than other service providers. [0023]: The systems and methods herein include a benchmark comparison using a propensity scoring, a weighted regression model, and an outlier probability. FIG. 1 illustrates a schematic block diagram of a system for comparing service providers based on dynamically updated observational data according to one embodiment. As shown in FIG. 1, the system includes one or more operator terminals 102 for providing system access to operators over a data communications network 104 to an analysis machine 106 and a service provider data machine 108. Here, the benchmarking, priority scoring, specified or user-defined threshold or tolerance, and use of a regression model read on the claim limitation above. Claim 1: receive, from the one or more electronic devices, first record data for a plurality of service providers and transmit the first record data to a database, wherein the first record data is transmitted over the electronic communication channel. Ridgeway: [0107]: In accordance with the dynamic updating model described above, the data for the 26 hospitals in this study may be continually or intermittently updated with additional hospital data as new records are created. The service provider record database may be updated with additional data to provide the latest and most up-to-date information in the benchmarks. By updating the record database and benchmarks with new data, users can track trends over time and see the effects of new or revised treatment plans. For example, hospitals that were previously identified as outliers (e.g., as compared to their benchmarks) may be tracked to see if their performance changes over time.
This may inform hospital administrators, other hospitals, and policy makers of whether new initiatives in high performing hospitals should be applied broadly to other hospitals, and whether remedial actions are warranted to improve the performance of underperforming hospitals, such as additional funding, revised policies, or changes in management. Ridgeway also shows, in claim 4: “The system of claim 3, wherein the threshold is set as 1% of the largest difference across all values in the distributions.”);
Regarding the claim limitations below, Reference Ridgeway in view of Reference Boe shows:
providing, to a user device of the user, an instruction for the user to receive the service using a second provider system during the particular stage
Ridgeway: [0107]: In accordance with the dynamic updating model described above, the data for the 26 hospitals in this study may be continually or intermittently updated with additional hospital data as new records are created. The service provider record database may be updated with additional data to provide the latest and most up-to-date information in the benchmarks. By updating the record database and benchmarks with new data, users can track trends over time and see the effects of new or revised treatment plans. For example, hospitals that were previously identified as outliers (e.g., as compared to their benchmarks) may be tracked to see if their performance changes over time. This may inform hospital administrators, other hospitals, and policy makers of whether new initiatives in high performing hospitals should be applied broadly to other hospitals, and whether remedial actions are warranted to improve the performance of underperforming hospitals, such as additional funding, revised policies, or changes in management. Ridgeway also shows, in claim 4: “The system of claim 3, wherein the threshold is set as 1% of the largest difference across all values in the distributions.” However, Ridgeway does not explicitly show “using the particular stage.”
Reference Boe discloses “using the particular stage” [0053]: when the KPI is determined using data for Entity-3, the value for the KPI for Avg CPU Load may be at 80%. If the threshold is applied to the values of the aggregate of the entities (two at 50% and one at 80%), the aggregate value of the entities is 60%, and the KPI would not exceed the 80% threshold. If the threshold is applied using an entity basis for the thresholds (applied to the individual KPI values as calculated pertaining to each entity), the computing machine can determine that the KPI pertaining to one of the entities (e.g., Entity-3) satisfies the threshold by being equal to 80%. [0769]: One or more thresholds can be applied to the value associated with the threshold field. In particular, the value can be produced by the KPI search query and can be, for example, the value of the threshold field in an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on one or more values of the threshold field in one or more events satisfying the search criteria of the search query when the search query is executed, a count of events satisfying the search criteria of the search query that include a constraint for the threshold field, etc. [1345]: The tolerance range may affect how precisely the aggregate triggering conditions are evaluated, for example, a tolerance range of 10% may consider values within 10% of one or more thresholds in the KPI criterion to satisfy the KPI criterion.
Reference Ridgeway and Reference Boe are analogous prior art to the claimed invention because the references generally relate to the field of performance monitoring. Further, said references are part of the same classification, i.e., G06Q10/06393. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Reference Boe, particularly the satisfying of the threshold ([0053], [0769], [1345]), into the disclosure of Reference Ridgeway, particularly the threshold tracking in [0107], in order to provide a system that not only tracks the threshold but also ensures that the threshold is satisfied, as taught by Reference Boe (see at least [0663]: the value can be produced by a search query using the search of 2902 and can be, for example, the value of threshold field 2904 associated with an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on values for the specified threshold field of 2904 associated with the one or more events satisfying the search criteria of the search query when the search query is executed, or a count of events satisfying the search criteria of the search query that include a constraint for the threshold field of 2904, etc.), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143(I)(A)).
Regarding the claim limitations below, Reference Ridgeway in view of Reference Boe shows:
transmitting, by the one or more processors of the device and to the user device, a request for feedback that solicits the user to indicate whether the second provider system was available to the user and whether the second provider system was useful in facilitating a satisfactory level of service; and
modifying, by the one or more processors of the device, one or more settings of the clustering model based on a response associated with the feedback and based on an availability of the one or more provider systems.
Although Ridgeway shows in [0067]: (3) the number of new patient records, or the percentage increase of records, that have entered the database for the service provider having a particular feature of interest (for example, it may be determined that a particular feature has a relatively large impact on outcomes, such as being prescribed a certain medication, and may warrant a higher priority), Ridgeway does not explicitly show “feedback” as recited in the claim. Ridgeway also does not explicitly show “clustering.”
Boe shows “feedback” at least in [0734]-[0742]: impact analysis model and impact score. Boe further shows feedback and impact in [0873]-[0880]. Boe shows “clustering” in [0812], [1505], [1507]. [0873]: In providing the referenced sensitivity setting control 34695, the described technologies can enable a user to adjust the sensitivity setting (thereby setting a higher or lower error threshold with respect to which error values are or are not identified as anomalies) and to be presented with real-time feedback (via search preview window 34698) reflecting the error values (and their underlying KPI values), as described below. [0876]: Conversely, as the user drags the slider (that is, sensitivity setting control 34695) towards the right, thereby raising the sensitivity setting (that is, the error threshold by which error values are to be determined to be anomalies with respect to their deviation from historical error values for the KPI), relatively fewer anomalies are likely to be identified. In doing so, the user can actively adjust the sensitivity setting via sensitivity setting control 34695 and be presented with immediate visual feedback regarding anomalies that are identified based on the provided sensitivity setting. [0878]: It should also be noted that, in certain implementations, the referenced anomaly information 34702 dialog box (and/or one or more elements of GUI 34699) can enable a user to provide various types of feedback with respect to various anomalies that have been identified and/or presented (as well as information associated with such anomalies) … in certain implementations, the referenced feedback may originate from a multitude of sources (similar to the different sources of training data described herein).
For example, labeled examples of anomalies and non-anomalies can be gathered from similar but distinct systems or from communal databases. [0879] It should be further noted that while in certain implementations (such as those described herein) the referenced feedback can be solicited and/or received after an initial attempt has been made with respect to identifying anomalies, in other implementations the described technologies can be configured such that a training phase can first be initiated, such as where a user is presented with some simulated or hypothetical anomalies with respect to which the user can provide the various types of feedback referenced above. Such feedback can then be analyzed/processed to gauge the user's sensitivity and/or to identify what types of anomalies are (or aren't) of interest to them. Then, upon completing the referenced training phase, a detection phase can be initiated (e.g., by applying the referenced techniques to actual KPI values, etc.). Moreover, in certain implementations the described technologies can be configured to switch between training and detection modes/phases (e.g., periodically, following some conditional trigger such as a string of negative user feedback, etc.). [0895]: The distribution shift for each KPI can be determined, and each KPI can be categorized accordingly. When the KPIs for a service are categorized, the categorized KPIs can be compared to criteria for triggering a notable event. If the criteria are satisfied, a notable event can be triggered. [0912] In one implementation, when there are multiple trigger criteria pertaining to a particular KPI, the KPI correlation search processes the multiple trigger criteria pertaining to the particular KPI disjunctively (i.e., their results are logically OR'ed). For example, the KPI correlation search can include trigger criterion 3485A and trigger criterion 3485B pertaining to KPI1 3480A. 
If either trigger criterion 3485A or trigger criterion 3485B is satisfied, the KPI correlation search positively indicates the satisfaction of trigger criteria for KPI1 3480A. In another example, the KPI correlation search can include trigger criterion 3485C, trigger criterion 3485D, and trigger criterion 3485E pertaining to KPI2 3480B. If any one or more of trigger criterion 3485C, trigger criterion 3485D, and trigger criterion 3485E is satisfied, the KPI correlation search positively indicates the satisfaction of trigger criteria for KPI2 3496B. [0970] As described above, in one implementation, when there are multiple trigger criteria that pertain to a particular KPI, the trigger criteria are processed disjunctively. For example, if one of the two triggers that have been specified for KPI 34181A are satisfied, then the trigger criteria for KPI 34181A are considered satisfied. If any one of the three triggers that have been specified for KPI 34181B are satisfied, then the trigger criteria for KPI 34181B are considered satisfied. [1020]: During creation of a KPI correlation search, a name and/or title of the KPI correlation search may be defined such that if the data produced by the search query satisfies the triggering condition, the resulting notable event will be associated with that name. When the notable event is stored, one piece of associated information is the name of the correlation search from which the notable event is generated. Multiple notable events that are generated as a result of the same correlation search may then be given the same name, although they may have different timestamps to allow for differentiation.
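For reference only, the disjunctive (logically OR'ed) processing of trigger criteria described in Boe [0912] and [0970] can be sketched as follows; this is the examiner's illustration, not code from the reference, and the criterion definitions and values are hypothetical.

```python
def kpi_criteria_satisfied(kpi_value, trigger_criteria):
    """Trigger criteria pertaining to a particular KPI are processed
    disjunctively: the KPI's trigger criteria are considered satisfied
    if ANY one criterion is satisfied (results are logically OR'ed)."""
    return any(criterion(kpi_value) for criterion in trigger_criteria)

# Hypothetical criteria standing in for trigger criteria such as 3485A/3485B:
criterion_a = lambda v: v > 90   # e.g., KPI value exceeds 90
criterion_b = lambda v: v < 10   # e.g., KPI value falls below 10

print(kpi_criteria_satisfied(95, [criterion_a, criterion_b]))  # True
print(kpi_criteria_satisfied(50, [criterion_a, criterion_b]))  # False
```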
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Boe, particularly the satisfying of the threshold ([0053], [0769], [1345]), in the disclosure of Reference Ridgeway, particularly the threshold tracking in [0107], in order to provide for a system that not only tracks the threshold but also makes sure the threshold is satisfied, as taught by Reference Boe (see at least [0663]: the value can be produced by a search query using the search of 2902 and can be, for example, the value of threshold field 2904 associated with an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on values for the specified threshold field of 2904 associated with the one or more events satisfying the search criteria of the search query when the search query is executed, or a count of events satisfying the search criteria of the search query that include a constraint for the threshold field of 2904, etc.), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143 A).
Limitations recited in independent claims 1 and 15 that are not recited in claim 9:
Regarding the claim limitations below, Reference Ridgeway in view of Boe shows:
receiving, by the device, a request involving a user receiving the service, wherein the request involving the user receiving the service indicates that the user is associated with an attribute of the first group of the users (Ridgeway: [0002]: the present invention relates generally to methods and systems for the statistical analysis of retrospective service provider data for evaluating the effects of the performance of specific providers among a collection of service providers, and more specifically to the analysis of medical data for evaluating the performance of physicians, clinics, hospitals, and other medical providers. [0006]: The benchmark includes creating a propensity scoring model, which weights the data for each patient treated by other service providers to collectively resemble the hospital for which the benchmark is being constructed, a regression model providing an estimate of the effects of the service provider on outcomes, a doubly robust estimate that measures the effect of the service provider on the identified effect. [0007]: the system and methods provide mechanisms for dynamically updating the benchmark as new patient records join the data systems, and as providers offer new treatments…Service providers may be queued for updating their benchmarks, or for analyzing whether an update is necessary, based on a refresh priority score that measures how important is to check whether a given provider's benchmark should be updated. In some embodiments, the quality of the benchmark may be measured to determine if the benchmark is within a specified or user-defined threshold or tolerance. If the quality of the benchmark is within the threshold and therefore the benchmark is sufficient, the original benchmark is used and the propensity score model is not updated. If the quality of the benchmark has deteriorated beyond the threshold, the benchmark is insufficient and the propensity score model is recomputed. 
[0021]: Underperforming and/or overperforming service providers may be compared relative to each other, based on their performance relative to their individual benchmarks. This enables a determination for whether observed differences in outcomes between service providers is due to systematic differences in the service providers themselves, or whether observed differences in outcomes is due to a service provider having a different mix of cases than other service providers. [0023] The systems and methods herein include a benchmark comparison using a propensity scoring, a weighted regression model, and an outlier probability. FIG. 1 illustrates a schematic block diagram of a system for comparing service providers based on dynamically updated observational data according to one embodiment. As shown in FIG. 1, the system includes one or more operator terminals 102 for providing system access to operators over a data communications network 104 to an analysis machine 106 and a service provider data machine 108. Here, the benchmarking, the priority scoring, the specified or user-defined threshold or tolerance, and the use of a regression model read on the claim limitation above.
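The benchmark refresh logic of Ridgeway [0007] (keep the original benchmark when its quality is within the user-defined threshold or tolerance; recompute the propensity score model when it has deteriorated beyond it) can be sketched as follows. This is the examiner's illustration only; the function name and numeric values are assumptions, not taken from the reference.

```python
def refresh_decision(quality_deterioration, tolerance):
    """If benchmark quality has deteriorated beyond the user-defined
    tolerance, the propensity score model is recomputed; otherwise the
    original benchmark is retained."""
    return "recompute" if quality_deterioration > tolerance else "keep"

print(refresh_decision(0.02, 0.05))  # keep
print(refresh_decision(0.10, 0.05))  # recompute
```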
Regarding the claim limitations below, Reference Ridgeway in view of Reference Boe shows:
“select, based on determining whether the first group of users is associated with experiencing the degradation and using a service management model, a service experience associated with receiving the service using a particular system during the particular stage; and”
Ridgeway: [0002]: the present invention relates generally to methods and systems for the statistical analysis of retrospective service provider data for evaluating the effects of the performance of specific providers among a collection of service providers, and more specifically to the analysis of medical data for evaluating the performance of physicians, clinics, hospitals, and other medical providers. [0006]: The benchmark includes creating a propensity scoring model, which weights the data for each patient treated by other service providers to collectively resemble the hospital for which the benchmark is being constructed, a regression model providing an estimate of the effects of the service provider on outcomes, a doubly robust estimate that measures the effect of the service provider on the identified effect. [0007]: the system and methods provide mechanisms for dynamically updating the benchmark as new patient records join the data systems, and as providers offer new treatments…Service providers may be queued for updating their benchmarks, or for analyzing whether an update is necessary, based on a refresh priority score that measures how important is to check whether a given provider's benchmark should be updated. In some embodiments, the quality of the benchmark may be measured to determine if the benchmark is within a specified or user-defined threshold or tolerance. If the quality of the benchmark is within the threshold and therefore the benchmark is sufficient, the original benchmark is used and the propensity score model is not updated. If the quality of the benchmark has deteriorated beyond the threshold, the benchmark is insufficient and the propensity score model is recomputed. [0021]: Underperforming and/or overperforming service providers may be compared relative to each other, based on their performance relative to their individual benchmarks. 
This enables a determination for whether observed differences in outcomes between service providers is due to systematic differences in the service providers themselves, or whether observed differences in outcomes is due to a service provider having a different mix of cases than other service providers. [0023] The systems and methods herein include a benchmark comparison using a propensity scoring, a weighted regression model, and an outlier probability. FIG. 1 illustrates a schematic block diagram of a system for comparing service providers based on dynamically updated observational data according to one embodiment. As shown in FIG. 1, the system includes one or more operator terminals 102 for providing system access to operators over a data communications network 104 to an analysis machine 106 and a service provider data machine 108.
Ridgeway: [0107] In accordance with the dynamic updating model described above, the data for the 26 hospitals in this study may be continually or intermittently updated with additional hospital data as new records are created. The service provider record database may be updated with additional data to provide the latest and most up to date information in the benchmarks. By updating the record database and benchmarks with new data, users can track trends over time and see the effects of new or revised treatment plans. For example, hospitals that were previously identified as outliers (e.g., as compared to their benchmarks) may be tracked to see if their performance changes over time. This may inform hospital administrators, other hospitals, and policy makers of whether new initiatives in high performing hospitals should be applied broadly to other hospitals, and whether remedial actions are warranted to improve the performance for underperforming hospitals, such as additional funding, revised policies, or changes in management. Ridgeway also shows, in its claim 4: “The system of claim 3, wherein the threshold is set as 1% of the largest difference across all values in the distributions.” However, Ridgeway does not explicitly show “experiencing the degradation”.
Reference Boe discloses “experiencing the degradation” [0053]: when the KPI is determined using data for Entity-3, the value for the KPI for Avg CPU Load may be at 80%. If the threshold is applied to the values of the aggregate of the entities (two at 50% and one at 80%), the aggregate value of the entities is 60%, and the KPI would not exceed the 80% threshold. If the threshold is applied using an entity basis for the thresholds (applied to the individual KPI values as calculated pertaining to each entity), the computing machine can determine that the KPI pertaining to one of the entities (e.g., Entity-3) satisfies the threshold by being equal to 80%. [0769]: One or more thresholds can be applied to the value associated with the threshold field. In particular, the value can be produced by the KPI search query and can be, for example, the value of the threshold field in an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on one or more values of the threshold field in one or more events satisfying the search criteria of the search query when the search query is executed, a count of events satisfying the search criteria of the search query that include a constraint for the threshold field, etc. [1345]: The tolerance range may affect how precisely the aggregate triggering conditions are evaluated, for example, a tolerance range of 10% may consider values within 10% of one or more thresholds in the KPI criterion to satisfy the KPI criterion.
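The entity-basis versus aggregate-basis thresholding described in Boe [0053] can be sketched as follows, using the values stated in the cited paragraph (two entities at 50%, one at 80%, threshold of 80%). The code itself is the examiner's illustration, not taken from the reference.

```python
# Avg CPU Load KPI values per entity, from Boe [0053]:
entity_cpu_load = {"Entity-1": 50, "Entity-2": 50, "Entity-3": 80}
threshold = 80  # percent

# Aggregate basis: threshold applied to the aggregate (average) of the entities.
aggregate_value = sum(entity_cpu_load.values()) / len(entity_cpu_load)
aggregate_satisfied = aggregate_value >= threshold

# Entity basis: threshold applied to each entity's individually calculated KPI value.
entity_satisfied = any(v >= threshold for v in entity_cpu_load.values())

print(aggregate_value)      # 60.0 — does not reach the 80% threshold
print(aggregate_satisfied)  # False
print(entity_satisfied)     # True — Entity-3 satisfies the threshold at 80%
```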
Reference Ridgeway and Reference Boe are analogous prior art to the claimed invention because the references generally relate to the field of performance monitoring. Further, said references are part of the same classification, i.e., G06Q10/06393. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Boe, particularly the satisfying of the threshold ([0053], [0769], [1345]), in the disclosure of Reference Ridgeway, particularly the threshold tracking in [0107], in order to provide for a system that not only tracks the threshold but also makes sure the threshold is satisfied, as taught by Reference Boe (see at least [0663]: the value can be produced by a search query using the search of 2902 and can be, for example, the value of threshold field 2904 associated with an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on values for the specified threshold field of 2904 associated with the one or more events satisfying the search criteria of the search query when the search query is executed, or a count of events satisfying the search criteria of the search query that include a constraint for the threshold field of 2904, etc.), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143 A).
Regarding the claim limitations below, Reference Ridgeway in view of Reference Boe shows:
cause a provider system to be configured in association with the service experience to provide the service for the user
Ridgeway: [0026]: The analysis machine 106 may provide access for an operator to several components including a data selection component 114, a propensity scoring component 116, a regression modeling component 118, a benchmark comparison component 120, a dynamic updating component 122, and a data output component 124. These components may take the form of computer instructions stored in computer memory and executed by a computer processor. [0028] The propensity scoring component 116 determines and assigns a propensity score to each service provider. In the example of benchmarking for medical services, the propensity score represents the likelihood of a patient in the database being treated by a given medical provider being benchmarked. The propensity scores are used to create a distribution of the features for patients of the other medical providers to match a distribution of the features for the patients of the given medical provider. The propensity scoring component 116 applies the propensity score to the patient data of the other medical providers to weight the data of the patient records such that the weighted data for the patients of the other medical providers (excluding the given medical provider) closely resembles the non-weighted data for the group of patients of the given medical provider. [0029] The regression modeling component 118 provides an interface for the operator to estimate the effects of the service provider on observed outcomes. In the example of benchmarking for medical service providers, the regression modeling component 118 may estimate the relative likelihood that a patient of the given medical provider would experience an identified outcome (such as expected patient readmission rate within 30 days) as compared to if the patent had been treated by the other medical providers in the record database 110. The regression modeling component 118 receives weighted data weighted by the propensity scoring component 116. 
Here, the group can be the patients treated by each service provider, out of the many service providers being benchmarked, which reads on the first and second groups in the claim.
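For illustration only, the propensity-score weighting described in Ridgeway [0028] (weighting the other providers' patient data so that it resembles the benchmarked provider's patient group) can be sketched with odds-form weights p/(1 − p). The weighting form and the propensity values below are the examiner's assumptions for purposes of illustration and are not taken from the reference.

```python
def propensity_weights(propensities):
    """Weight each comparison patient by p / (1 - p), where p is the
    estimated propensity of being treated by the benchmarked provider,
    so the weighted comparison group resembles that provider's patients."""
    return [p / (1.0 - p) for p in propensities]

# Hypothetical propensities for three patients of the other providers:
weights = propensity_weights([0.5, 0.2, 0.8])
print([round(w, 3) for w in weights])  # [1.0, 0.25, 4.0]
```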
As per claim 2: Regarding the claim limitations below:
wherein a higher percentage of the second group utilizes for the second provider system for receiving the service than the first group.
Since Ridgeway does not explicitly show the satisfying-the-threshold step in claim 1, Ridgeway does not explicitly show the above limitations. Boe shows the above limitations at least in [0774]: the user may specify thresholds for the first time frame (e.g., working hours), and then the computing machine may automatically predict, based on prior history, how KPI values during the second time frame (e.g., non-working hours) would differ from KPI values during the first time frame, and suggest thresholds for the second time frame based on the predicted difference. In one example, if average KPI values during the first time frame are 80 percent higher than average KPI values during the second time frame, the computing machine may suggest KPI thresholds for the second time frame that are 80 percent lower than the KPI thresholds specified for the first time frame. The user may then either accept suggested KPI thresholds or modify them as needed. In another example, a suggestion of a KPI threshold for the second time frame may be based on the KPI values within the second time frame without relying on the values within other time frames. In this example, the computing machine may suggest a KPI threshold at a particular percentile of the values in the second time frame (e.g., 75th percentile). In either example, the suggestion may be based on a statistical method such as percentile, average, median, standard deviation, or other statistical technique.
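The first example in Boe [0774], read literally, can be sketched as follows; the function and its inputs are illustrative assumptions, not code from the reference (note that the literal reading yields non-positive suggested thresholds once the averages differ by 100 percent or more).

```python
def suggest_second_frame_threshold(first_threshold, avg_first, avg_second):
    """Literal reading of Boe [0774]: if first-frame averages are X percent
    higher than second-frame averages, suggest a second-frame threshold
    that is X percent lower than the first-frame threshold."""
    pct_higher = (avg_first - avg_second) / avg_second
    return first_threshold * (1.0 - pct_higher)

# Working-hours threshold of 90; averages 180 vs 100 (80 percent higher),
# so the suggested non-working-hours threshold is 80 percent lower (~18.0):
suggested = suggest_second_frame_threshold(90, 180, 100)
```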
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Boe, particularly the satisfying of the threshold ([0053], [0769], [1345]), in the disclosure of Reference Ridgeway, particularly the threshold tracking in [0107], in order to provide for a system that not only tracks the threshold but also makes sure the threshold is satisfied, as taught by Reference Boe (see at least [0663]: the value can be produced by a search query using the search of 2902 and can be, for example, the value of threshold field 2904 associated with an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on values for the specified threshold field of 2904 associated with the one or more events satisfying the search criteria of the search query when the search query is executed, or a count of events satisfying the search criteria of the search query that include a constraint for the threshold field of 2904, etc.), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143 A).
As per claim 3: Regarding the claim limitations below:
Wherein:
a first attribute of the first group is associated with the degradation
and the second group is associated with a second attribute,
a range of values of the first attribute is mutually exclusive from a range of values of the second attribute.
Since Ridgeway does not explicitly show the satisfying-the-threshold step in claim 1, Ridgeway does not explicitly show the above limitations. Boe shows the above limitations at least in [0774]: the user may specify thresholds for the first time frame (e.g., working hours), and then the computing machine may automatically predict, based on prior history, how KPI values during the second time frame (e.g., non-working hours) would differ from KPI values during the first time frame, and suggest thresholds for the second time frame based on the predicted difference. In one example, if average KPI values during the first time frame are 80 percent higher than average KPI values during the second time frame, the computing machine may suggest KPI thresholds for the second time frame that are 80 percent lower than the KPI thresholds specified for the first time frame. The user may then either accept suggested KPI thresholds or modify them as needed. In another example, a suggestion of a KPI threshold for the second time frame may be based on the KPI values within the second time frame without relying on the values within other time frames. In this example, the computing machine may suggest a KPI threshold at a particular percentile of the values in the second time frame (e.g., 75th percentile). In either example, the suggestion may be based on a statistical method such as percentile, average, median, standard deviation, or other statistical technique.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Boe, particularly the satisfying of the threshold ([0053], [0769], [1345]), in the disclosure of Reference Ridgeway, particularly the threshold tracking in [0107], in order to provide for a system that not only tracks the threshold but also makes sure the threshold is satisfied, as taught by Reference Boe (see at least [0663]: the value can be produced by a search query using the search of 2902 and can be, for example, the value of threshold field 2904 associated with an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on values for the specified threshold field of 2904 associated with the one or more events satisfying the search criteria of the search query when the search query is executed, or a count of events satisfying the search criteria of the search query that include a constraint for the threshold field of 2904, etc.), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143 A).
As per claim 4: Ridgeway shows:
wherein the service involves multiple stages ([0021]: The systems and methods described herein provide a benchmark for analyzing service providers against one another. For each service provider, a benchmark is created whereby a first service provider's cases, services, patients, and/or clients can be compared to a dataset containing a collection of cases, services, patients, and/or clients having similar characteristics as the first service provider, but were treated by other service providers. Accounting for the characteristics of the first service provider's cases, services, patients, or clients assures that the benchmark contains cases, services, patients, or clients with a similar set of characteristics. The process can be repeated for each service provider such that multiple benchmarks are established, one per service provider. Each benchmark will have characteristics that are targeted to the service provider under test for that benchmark. For each service provider, a comparison may be created for various observed outcomes for the service provider's cases, services, patients, or clients relative to the benchmark for that service provider. The benchmark comparison may be used to simultaneously compare many different service providers, while adjusting for differences between the cases, services, patients, or clients seen by the various providers. Underperforming and/or overperforming service providers may be compared relative to each other, based on their performance relative to their individual benchmarks. This enables a determination for whether observed differences in outcomes between service providers is due to systematic differences in the service providers themselves, or whether observed differences in outcomes is due to a service provider having a different mix of cases than other service providers.)
As per claim 5: Ridgeway shows:
wherein the one or more processors are further configured to:
receive, by a server of the system and from a wireless communications device of the user, an electronic message that requests the service and indicates one or more attributes of the user, wherein the one or more of attributes of the user are associated with the degradation ([0023], [0029]: input or receive request); and
determine, based on the one or more attributes, that the user is associated with the first group ([0023], [0029], [0066]).
As per claim 6: Regarding the claim limitations below:
wherein the one or more processors, to perform the action, are configured to:
provide, to an agent device, a notification that indicates that members associated with the attribute associated with the degradation are to receive the service in association with the second provider system, wherein one or more of the first group and the second group comprise the members.
Ridgeway shows “wherein one or more of the first group and the second group comprise the members” in [0107], where the “users” read on the above claim limitation. [0107]: the data for the 26 hospitals in this study may be continually or intermittently updated with additional hospital data as new records are created. The service provider record database may be updated with additional data to provide the latest and most up to date information in the benchmarks. By updating the record database and benchmarks with new data, users can track trends over time and see the effects of new or revised treatment plans. Ridgeway does not explicitly show a system for notification or messaging. Reference Boe shows notification or messaging at least in [0262]-[0266].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Boe, particularly the satisfying of the threshold ([0053], [0769], [1345]), in the disclosure of Reference Ridgeway, particularly the threshold tracking in [0107], in order to provide for a system that not only tracks the threshold but also makes sure the threshold is satisfied, as taught by Reference Boe (see at least [0663]: the value can be produced by a search query using the search of 2902 and can be, for example, the value of threshold field 2904 associated with an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on values for the specified threshold field of 2904 associated with the one or more events satisfying the search criteria of the search query when the search query is executed, or a count of events satisfying the search criteria of the search query that include a constraint for the threshold field of 2904, etc.), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143 A).
As per claim 8: Ridgeway shows:
wherein a value of a difference threshold, associated with a difference between a parameter for the first group and the second group is based on a type of the parameter ([0047]: propensity score based on parameters).
As per claim 10: Regarding the claim limitations below:
wherein the second group of the users did not experience the degradation.
Since Ridgeway does not explicitly show the satisfying-the-threshold step in claim 1, Ridgeway does not explicitly show the above limitations. Boe shows the above limitations at least in [0774]: the user may specify thresholds for the first time frame (e.g., working hours), and then the computing machine may automatically predict, based on prior history, how KPI values during the second time frame (e.g., non-working hours) would differ from KPI values during the first time frame, and suggest thresholds for the second time frame based on the predicted difference. In one example, if average KPI values during the first time frame are 80 percent higher than average KPI values during the second time frame, the computing machine may suggest KPI thresholds for the second time frame that are 80 percent lower than the KPI thresholds specified for the first time frame. The user may then either accept suggested KPI thresholds or modify them as needed. In another example, a suggestion of a KPI threshold for the second time frame may be based on the KPI values within the second time frame without relying on the values within other time frames. In this example, the computing machine may suggest a KPI threshold at a particular percentile of the values in the second time frame (e.g., 75th percentile). In either example, the suggestion may be based on a statistical method such as percentile, average, median, standard deviation, or other statistical technique.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Boe, particularly the satisfying of the threshold ([0053], [0769], [1345]), in the disclosure of Reference Ridgeway, particularly the threshold tracking in [0107], in order to provide for a system that not only tracks the threshold but also makes sure the threshold is satisfied, as taught by Reference Boe (see at least [0663]: the value can be produced by a search query using the search of 2902 and can be, for example, the value of threshold field 2904 associated with an event satisfying search criteria of the search query when the search query is executed, a statistic calculated based on values for the specified threshold field of 2904 associated with the one or more events satisfying the search criteria of the search query when the search query is executed, or a count of events satisfying the search criteria of the search query that include a constraint for the threshold field of 2904, etc.), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143 A).
As per claim 14: Ridgeway shows:
wherein the linear regression analysis performed by the impact analysis model comprises a linear regression analysis of attributes of the users ([0026]-[0038]: regression model).
As per claim 16: Regarding the claim limitations below:
wherein an impact score output by the impact analysis model is indicative of whether the attribute is associated with corresponding interactions of the previous service interactions wherein the corresponding interactions involve the degradation.
Ridgeway does not explicitly show the step of satisfying the threshold in claim 1, nor the above limitations. Boe shows the above limitations at least in [0774], as quoted in full above.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Boe, particularly the teaching of satisfying the threshold ([0053], [0769], [1345]), in the disclosure of Reference Ridgeway, particularly the threshold tracking in [0107], for the same reasons set forth above: to provide a system that not only tracks the threshold but also ensures the threshold is satisfied (see at least Boe [0663]), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143 A).
As per claim 17: Ridgeway shows:
wherein the attribute comprises at least one of:
an age range of members of the first group;
a health status of members of the first group; or
a location associated with members of the first group.
([0021]: The systems and methods described herein provide a benchmark for analyzing service providers against one another. For each service provider, a benchmark is created whereby a first service provider's cases, services, patients, and/or clients can be compared to a dataset containing a collection of cases, services, patients, and/or clients having similar characteristics as the first service provider, but were treated by other service providers. Accounting for the characteristics of the first service provider's cases, services, patients, or clients assures that the benchmark contains cases, services, patients, or clients with a similar set of characteristics. The process can be repeated for each service provider such that multiple benchmarks are established, one per service provider. Each benchmark will have characteristics that are targeted to the service provider under test for that benchmark. For each service provider, a comparison may be created for various observed outcomes for the service provider's cases, services, patients, or clients relative to the benchmark for that service provider. The benchmark comparison may be used to simultaneously compare many different service providers, while adjusting for differences between the cases, services, patients, or clients seen by the various providers. Underperforming and/or overperforming service providers may be compared relative to each other, based on their performance relative to their individual benchmarks. This enables a determination for whether observed differences in outcomes between service providers is due to systematic differences in the service providers themselves, or whether observed differences in outcomes is due to a service provider having a different mix of cases than other service providers.)
As per claims 18 and 23: Ridgeway shows:
wherein the service involves multiple stages ([0021], as reproduced above with respect to claim 17).
As per claim 19: Regarding the claim limitations below:
wherein the second group received the service without experiencing the degradation.
([0021], as reproduced above with respect to claim 17).
As per claim 20: Ridgeway shows:
wherein the attribute is a first attribute and the second group is associated with a second attribute, wherein a range of values of the first attribute is mutually exclusive from a range of values of the second attribute.
([0021], as reproduced above with respect to claim 17).
As per claim 21: Ridgeway shows:
further comprising:
providing, to an agent device, a notification that indicates that members associated with the attribute are to receive the service in association with the second provider system.
Reference Ridgeway shows the benchmark comparison framework of [0021], as reproduced above with respect to claim 17. However, Ridgeway does not explicitly show “a notification”.
Reference Boe shows “a notification” at least in [0262]: A defined action (e.g., creating an alarm, sending a notification, displaying information in an interface, etc.) can be taken on conditions specified by the KPI correlation search. Implementations of the present disclosure provide users (e.g., business analysts) a graphical user interface (GUI) for defining a KPI correlation search. Implementations of the present disclosure provide visualizations of current KPI state performance that can be used for specifying search information and information for a trigger determination for a KPI correlation search. [0266]: After creating the new correlation's search definition, the system may run the correlation search to monitor the services and when the correlation search identifies a re-occurrence of the problem, the correlation search may generate a notable event or alarm to notify the user who created the correlation search or some other users. [0881] Alert management control 34703 can be, for example, a selectable element or interface item that, upon selection (e.g., by a user), enables a user to further manage various aspects of alerts, notifications, etc. (e.g., email alerts, notable events, etc., as are described herein) that are to be generated and/or provided, e.g., upon identification of various anomalies.
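As a hedged illustration of the cited notification behavior (a defined action taken when a correlation search's trigger condition is met), the following sketch uses hypothetical names throughout; it is not code from the reference.

```python
# Hypothetical sketch of the notification behavior described in Boe
# [0262] and [0266]: when monitored KPI events satisfy a trigger
# condition, a defined action (here, building a notification) is taken.
def run_correlation_search(kpi_events, trigger, notify):
    """Apply a trigger predicate to KPI events; notify on each match."""
    return [notify(event) for event in kpi_events if trigger(event)]
```

For example, a trigger of `lambda e: e["kpi"] > 90` with a notify callback would produce one alert for an event whose KPI value exceeds 90 and none for the rest, mirroring the alarm/notable-event flow in the cited paragraphs.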
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Boe, particularly the teaching of satisfying the threshold ([0053], [0769], [1345]), in the disclosure of Reference Ridgeway, particularly the threshold tracking in [0107], for the same reasons set forth above: to provide a system that not only tracks the threshold but also ensures the threshold is satisfied (see at least Boe [0663]), so that the process of managing performance monitoring can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar performance monitoring field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Ridgeway in view of Reference Boe, the results of the combination were predictable (MPEP 2143 A).
As per claim 24: Ridgeway shows:
wherein the one or more processors, to update the impact analysis model, are further configured to:
wherein the feedback comprises an indication of whether the one or more provider systems facilitated the satisfactory level of service for the particular stage of the service.
Ridgeway shows in [0023]: The systems and methods herein include a benchmark comparison using propensity scoring, a weighted regression model, and an outlier probability. FIG. 1 illustrates a schematic block diagram of a system for comparing service providers based on dynamically updated observational data according to one embodiment. [0029]: The regression modeling component 118 provides an interface for the operator to estimate the effects of the service provider on observed outcomes. [0058]: For each service provider, a report card is created listing its observed patient outcomes, its benchmark outcomes, and the outlier probability, as shown in the table below. The Provider X column is computed as the percentage or mean of the features of patients treated by Provider X. The Benchmark column is computed as the mean of the weighted regression model predictions of what would have happened to the Provider X patients had they been treated elsewhere. … a one-time process of fitting the propensity score model, outcome regression model, and reporting. As described above, the service provider record database may be continually or intermittently updated with additional service provider data over time, for example, as existing hospitals in the database treat additional patients, as the hospitals have updated records for existing patients, as new hospitals are to be included in the database, or as new treatments are provided. When new records appear in the database and there is an existing benchmark for that service provider, the observed patient outcomes for that service provider may be updated to include the new data in a computationally efficient manner, and may be compared to the existing benchmark results. That is, the outcome regression model, outlier probabilities, and reports may be updated in an efficient manner with new records so long as the original propensity score model is used and is not updated.
[0075]: if the quality of the benchmark is within a specified or user-defined threshold or tolerance (i.e., the benchmark is sufficient), then the outcome regression model is recomputed, outlier probabilities are updated, and new reports are generated. That is, if the quality of the benchmark is sufficient, the original benchmark is used and the computationally expensive propensity score model is not updated. In some embodiments, the specified threshold may be 1%, such that the propensity score model is not updated if the percentage point difference between the aggregate updated data for a given service provider is within 1% of the aggregate weighted data for patients of the other service providers.
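The update decision in the quoted [0075] passage can be sketched as follows. This is an illustrative reading of the cited example (including the 1% figure) with hypothetical names, not an implementation from Ridgeway.

```python
# Hypothetical sketch of the benchmark-update decision in Ridgeway
# [0075]: the computationally expensive propensity score model is refit
# only when the percentage-point difference between the provider's
# aggregate updated data and the weighted benchmark aggregate exceeds a
# tolerance (1% in the cited example).
def propensity_model_needs_refit(provider_aggregate, benchmark_aggregate,
                                 tolerance=0.01):
    """True when the existing benchmark is no longer sufficient."""
    return abs(provider_aggregate - benchmark_aggregate) > tolerance


def update_step(provider_aggregate, benchmark_aggregate, tolerance=0.01):
    if propensity_model_needs_refit(provider_aggregate,
                                    benchmark_aggregate, tolerance):
        return "refit propensity score model"
    # Benchmark still sufficient: run only the cheap downstream steps.
    return "recompute outcome regression and outlier probabilities"
```

This mirrors the passage's cost-saving logic: while the aggregates stay within tolerance, only the regression, outlier probabilities, and reports are recomputed.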
Response to Arguments
Applicants’ arguments are moot in view of the new grounds of rejection necessitated by the amendments made to previously presented claims.
Applicant’s Argument #1
Applicants argue on pages 13-16 of applicants’ remarks that “as recited in amended claim 1, do not constitute a certain method of organizing human activity, and more specifically, do not constitute a method of managing personal behavior or relationships, or interactions between people. For example, at least using a clustering model of an impact analysis model to make a determination does not constitute a method of managing personal behavior or relationships, or interactions between people …. Regarding Prong Two of Step 2A, Section 2106.04(d)(I) of the MPEP states that "if the claim as a whole integrates the recited judicial exception into a practical application the claim is eligible." Even if the claims could be construed as reciting an abstract idea-which Applicant does not concede-Applicant respectfully asserts that the claims integrate the alleged abstract idea into the practical application of diagnosing a cause of a performance degradation and improving one or more systems for providing a service.” (See applicants’ remarks for more details.)
Response to Argument #1
Applicants' arguments have been fully considered; however, the examiner respectfully disagrees.
As is discussed in the 101 rejection above, specifically in Step 2A, Prong One, the amended claims relate to service and schedule management for a service provider to prevent the user experience from being degraded. Managing and analyzing historical user interactions to determine overscheduling of a service provider and to track a degraded service experience for one or more human entities (see specification [0011]) involves organizing human activity, based on the description of “certain methods of organizing human activity” provided by the courts. The courts have described “certain methods of organizing human activity” as fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
Additionally, the limitations applicants discuss above, particularly “as recited in amended claim 1, does not constitute a certain method of organizing human activity, and more specifically, does not constitute a method of managing personal behavior or relationships, or interactions between people. For example, using one or more processors to provide instructions to a user device, transmit a request for feedback to the user device, and update a model does not constitute a method of managing personal behavior or relationships, or interactions between people”, recite additional elements that amount to no more than adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
As was discussed in the interview dated 10/14/25, the amended claims are still recited at a very high level of generality.
Additionally, applicants assert that “the claims integrate the alleged abstract idea into the practical application of enhancing predictability with respect to providing a service.”
It should be noted that applicants’ originally filed specification states: “[0011] A scheduling system typically involves allocating a time slot of a calendar for multiple individuals to receive services from a service provider (e.g., a healthcare organization, a financial institution, food service organization, a vehicle services center, amongst other examples). For example, a service provider may maintain and/or manage the scheduling system by indicating availability to provide a service, and a user may use the system to reserve a time slot to receive the service from the service provider. Timing for providing a service may be somewhat unpredictable. For example, various factors or events may prevent the service provider from staying on schedule. Such unpredictability causes a degraded user experience, degraded production, and/or degraded efficiency. Some systems or service providers may attempt to track timing for providing a service in order to retroactively make adjustments to more accurately predict timing for providing the service, however, this may result in further degraded performance of the service provider (e.g., because changes to past performance cannot be made and the changes may not solve the problems involved in providing the service). Moreover, services that involve multiple stages (e.g., separate periods of time for receiving the service or multiple sub-services) add further complexity with respect to issues involved in providing a service and/or predicting timing associated with providing the service. [0012] Some implementations described herein provide a service management system that is configured to receive or collect historical interaction information associated with a service provider providing a service to various groups of users.
The historical interaction information may include information associated with timing of users receiving a service along with corresponding attributes of the users and parameters involved in receiving the service. The attribute may include individual characteristics of the users, such as age, location, gender, health condition, among other physical or health-related characteristics. The parameters involved in receiving the service may involve or be associated with using certain systems, devices, processes, technologies, or representatives that provided the service to the users at individual stages of providing the service. …. [0014] In this way, the service management system may proactively identify a degradation of service involving a particular group and diagnose or identify a factor that improves performance of providing the service for individuals associated with the group (e.g., individuals that share the attribute). Accordingly, the service management system may receive, maintain, and analyze various factors involved in receiving a service, including multiple stages of receiving the service, to maintain and/or improve performance with respect to providing a service, thereby leading to improved predictability for providing the service. Therefore, the system, as described herein may reduce downtime or other inefficiencies, improve a user experience, reduce waste with respect to scheduling (or allocating) resources for providing a service whether the resources are not necessary or needed.”
The amended claims are directed to solving the problem of enhancing predictability with respect to providing a service. That problem is old and well known in the pre-Internet world and is not rooted in the realm of computing. As such, while the solution in the present application is implemented with computer technology and applied to a technical environment, the origin of the problem is not technological.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
NPL Reference:
G. S. Kumar, R. Priyadarshini, N. H. Parmenas, H. Tannady, F. Rabbi and A. Andiyan, "Design of Optimal Service Scheduling based Task Allocation for Improving CRM in Cloud Computing," 2022 Sixth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Dharan, Nepal, 2022, pp. 438-445, doi: 10.1109/I-SMAC55078.2022.9987392.
Cloud computing is a service level computing that provide various service to the customers in order to establish an effective customer Relationship management (CRM). This offers various services like virtual machine, self-service provisioning, elasticity computing and storage (pay-as-you-go). Cloud computing provides shared services by enhancing resource management scalability, interoperability and prediction resources as a key to realize the resource utilization with high-performance management metrics. However, when the number of users increase, the process of task scheduling and service allocation using a traditional computing environment will degrade the CRM. In order to address this challenge, this research study proposes an Optimal Service Level Scheduling (OSLS) based task allocation design for improving CRM in cloud computing environment. The Adaptive Service Level Scheduling Algorithm (ASLSA) and the Support Level Load Balancer (SLLB) will reduce the workload in cloud computing environment in order to improve the Quality of Service (QoS) of CRM. This process will optimize the resource utilization in cloud platform based on the service requirement. It provides optimal scheduling features to CRM in order to improve the service optimality based on task and enhance the computational processes such as service load management, heterogeneous service delivery, pricing, resource pools and elasticity. The proposed system leverages high performance when compared to the existing models.
Foreign Reference:
(WO 2018075945 A1) The system performs statistical analysis of retrospective service provider data for evaluating effects of performance of providers among a collection of service providers. The system creates a propensity scoring model, which weights the data for each patient treated by service providers to collectively resemble a hospital for which the benchmark is constructed. The system provides mechanisms for dynamically updating the benchmark as new patient records join the data systems. The system ensures that the benchmark is recomputed only when needed, to save the computational cost of continually updating the benchmark for each service provider for each record that is added.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY PRASAD whose telephone number is (571)270-3265. The examiner can normally be reached M-F: 8:00 AM - 4:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson can be reached on (571)270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.N.P/Examiner, Art Unit 3624 /PATRICIA H MUNSON/Supervisory Patent Examiner, Art Unit 3624