Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
In the amendments dated 19 September 2025, the following has occurred: Claims 1, 10 and 19 have been amended.
Claims 1-2, 4-11 and 13-21 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4-11 and 13-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1, 10 and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a non-transitory computer readable medium (CRM), a method, and a system for receiving and analyzing data to generate a report of identified potential fraud/waste. The limitations of
Claim 1, which is representative of independent claims 10 and 19
[… obtaining …] a first set of data identifying entities and performance information for analysis, wherein the entities of the first set of data include providers of health services and the performance information includes health care information including charges for treatment and treatments provided to patients, and wherein the first set of data includes at least hundreds of thousands of data points; [… obtaining …] a second set of data identifying entities and performance information associated with known or suspected past fraud or abuse, wherein the entities of the first set of data include providers of health services and the performance information includes health care information including charges for treatment and treatments provided to patients, and wherein the second set of data includes at least hundreds of thousands of data points; [… obtaining …] metric and lens selections; performing metric and lens functions, based on the metric and lens selections, on a combination of the first and second sets of data to map performance information from the first and second sets to a reference space; generating a cover dividing the reference space into open sets and clustering mapped performance information using the open sets in the reference space defining nodes in a graph, at least a subset of the open sets each defining a node and membership of that node being
defined based on contents of that open set, each node including one or more entities as members, at least a subset of nodes including two or more entities as members, each node being connected to another node if they share at least one common entity as members of each of those nodes; identifying nodes that include at least one entity from the second set of data as a member; identifying entities from the first set of data that share membership of at least one identified node with at least one entity from the second set of data; for at least a subset of identified entities from the first set of data, for each identified entity, determining a number of entities from the second set of data that share node membership with the identified entity; ranking the at least a subset of identified entities from the first set of data based on shared node membership with the entities from the second set of data, wherein the ranking is based, at least in part, on the determined number of entities from the second set of data that share node membership with the identified entity; and generating a first report showing the ranking of the at least a subset of entities and listing the identified entities that are members of the identified nodes that are from the first set of data as possibly involved in fraud or abuse.
, as drafted, is a system that, under the broadest reasonable interpretation, covers a method of organizing human activity (i.e., managing personal behavior including following rules or instructions) via human interaction with generic computer components. That is, via human interaction with a non-transitory CRM and a processor (claim 1), or a processor and memory (claim 19), the claimed invention amounts to managing personal behavior or interactions between people; the Examiner notes that, as stated in MPEP 2106.04(a)(2), “certain activity between a person and a computer… may fall within the ‘certain methods of organizing human activity’ grouping”. For example, but for the non-transitory CRM and a processor (claim 1), or a processor and memory (claim 19), the claim encompasses the collection of known and unknown data with respect to fraud and abuse, allowing a user to select functions to organize the data into a data structure and create rankings for the creation of a report for a user. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Further, the claims additionally recite the financial concept of mitigating financial risk (fraud), which is also an abstract idea. The claims also recite the collection of data, mapping, and the use of mathematical functions, similar to mathematical relationships under the Mathematical Concepts grouping of abstract ideas (see MPEP 2106.04(a)(2)(A)(iv)).
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a non-transitory CRM and a processor (claim 1), and a processor and memory (claim 19), which implement the identified abstract idea. The non-transitory CRM and processor (claim 1), and the processor and memory (claim 19), are recited at a high level of generality (i.e., as general purpose computers with processors and memory performing/implementing generic computer functions; see Applicant’s specification: Figure 18, paragraphs [0263]-[0264]) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim recites the additional elements of “receiving… receiving… receiving…”. The “receiving… receiving… receiving…” steps are recited at a high level of generality (i.e., as a general means of receiving/transmitting data) and amount to the mere transmission and/or receipt of data, which is a form of extra-solution activity. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a non-transitory CRM and a processor (claim 1), and a processor and memory (claim 19), used to perform the noted steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (“significantly more”).
Also as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “receiving… receiving… receiving…” were considered extra-solution activity. The “receiving… receiving… receiving…” steps have been re-evaluated under the “significantly more” analysis and determined to be well-understood, routine, and conventional elements/functions. As described in MPEP 2106.05(d)(II)(i), “Receiving or transmitting data over a network” is well-understood, routine, and conventional. Well-understood, routine, and conventional elements/functions cannot provide “significantly more.” As such, the claim is not patent eligible.
Claims 2, 4-9, 11, 13-18 and 20-21 are similarly rejected because they either further define the abstract idea and/or do not further limit the claims to a practical application or provide an inventive concept such that the claims are subject matter eligible.
Claims 2, 4-6, 11 and 13-15 further define the analysis of the data in generating the report; however, they do not recite any additional elements sufficient to provide a practical application/significantly more.
Claims 7-9 and 16-18 further recite receiving data; however, receiving has already been determined to be extra-solution activity and well-understood, routine, and conventional activity, and is therefore not sufficient to provide a practical application/significantly more.
Claims 20 and 21 recite the additional element of displaying a graphical user interface; however, the graphical user interface is recited at a high level of generality (i.e., as a generic display interface for presentation of information to a user) and amounts to generally linking the abstract idea to a particular technological environment. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claims are directed to an abstract idea.
Also, as discussed above with respect to integration of the abstract idea into a practical application, displaying a graphical user interface was considered to generally link the abstract idea to a particular technological environment. This element has been re-evaluated under the “significantly more” analysis and determined to be a well-understood, routine, and conventional element/function. As described in Agarwal (10,628,834), Figure 5 and column 20, and Lum (2014/0297642), Figure 9, paragraph [0147], displaying data on a graphical user interface is well-understood, routine, and conventional. Well-understood, routine, and conventional elements/functions cannot provide “significantly more.” As such, the claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 4-11 and 13-21 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 10,628,834 (hereafter “Agarwal”), in view of U.S. Patent Pub. No. 2014/0297642 (hereafter “Lum”).
Regarding (Currently Amended) claim 1, Agarwal teaches a non-transitory computer readable medium including executable instructions, the instructions being executable by a processor to perform a method (Agarwal: Column 3, lines 1-5, “one or more non-transitory machine-readable media storing instructions which, when executed by one or more processors, cause automatically detecting an instance of suspected misuse by an entity associated with a claim”), the method comprising:
--receiving a first set of data identifying entities and performance information for analysis (Agarwal: Figures 2-3, Column 8, lines 15-20, “A plurality of data 202 including, but not limited to, medical claims data, pharmacy claims data… are fed into a fraud lead generation module 204”, Column 14, lines 15-25, “the data import component 308 generating provider objects 312 that describe different health care providers. Data for the provider objects may be obtained, for example, from claims submissions of providers to insurers, who then provide the data to a computer system that implements the techniques herein. A health care provider may be any entity that provides health care services”, Column 15, lines 3-15, “the data import component 308 generating health care event objects 318 that describe one or more of: health care claims, prescriptions, medical procedures, or diagnoses. For example, an event object may be generated for each log entry in one or more logs from providers, insurers, and/or pharmacies, or based on claims submissions to insurers”. Also see, Column 5, lines 45-60, Column 9, lines 40-Column 10, line 5. The Examiner notes provider objects (Figure 3, element 312) and Health care event objects (Figure 3, element 318), which are correlated into a set of data (see Column 15, lines 25-40), under the broadest reasonable interpretation read on entities and performance information),
--wherein the entities of the first set of data include providers of health services and the performance information includes health care information including charges for treatment and treatments provided to patients, and wherein the first set of data includes at least hundreds of thousands of data points (Agarwal: Column 5, lines 40-55, “medical claims (collectively referred to as medical claims or healthcare claims) may number in the millions or billions per year”, Column 8, lines 10-30, “A plurality of data 202 including, but not limited to, medical claims data, pharmacy claims data… identification of a potential fraud-related entity, such as a medical service or product provider, pharmacist, or health care plan member (e.g., patient), or a medical claim that involves such an entity or group of entities”, Column 18, lines 25-40, “large number of identified leads, which may number in the hundreds or thousands”. Also see, Column 26, lines 25-40);
--receiving a second set of data identifying entities and performance information associated with known or suspected past fraud or abuse (Agarwal: Figures 2-3, 4C-D, Column 6, lines 5-20, “trained using known outcomes of analyses of previously suspected entities. The known outcomes may include, for example, a fraud analysts' conclusion as to whether one or more of the previously suspected entities were actually involved in fraud”, Column 8, lines 15-20, “A plurality of data 202 including, but not limited to, medical claims data, pharmacy claims data, fraud tips (e.g., from news publications, blogs, consumer or provider reports, criminal investigations, etc.), previous positive leads, previous investigated leads, example positive leads, and the like are fed into a fraud lead generation module 204”, Column 22, lines 9-25, “obtaining training data… Training data (also referred to as training data set, example leads, or example data) may comprise, for instance, data indicating previous leads and final dispositions towards those leads (e.g., known to be fraudulent”, Column 25, lines 38-40, “obtaining fraud-related information from machine and/or human sources”. Also see, Column 9, lines 40-Column 10, line 5. The Examiner notes provider objects (Figure 3, element 312) and Health care event objects (Figure 3, element 318), which are correlated into a set of data (see Column 15, lines 25-40), under the broadest reasonable interpretation read on entities and performance information; additionally, this information includes known fraudulent information and is interpreted to be a second set of data),
--wherein the entities of the first set of data include providers of health services and the performance information includes health care information including charges for treatment and treatments provided to patients, and wherein the second set of data includes at least hundreds of thousands of data points (Agarwal: Column 5, lines 40-55, “medical claims (collectively referred to as medical claims or healthcare claims) may number in the millions or billions per year”, Column 8, lines 10-30, “A plurality of data 202 including, but not limited to, medical claims data, pharmacy claims data… identification of a potential fraud-related entity, such as a medical service or product provider, pharmacist, or health care plan member (e.g., patient), or a medical claim that involves such an entity or group of entities”, Column 18, lines 25-40, “large number of identified leads, which may number in the hundreds or thousands”. Also see, Column 26, lines 25-40);
--receiving metric […] selections (Agarwal: Figure 4A, element 412, Column 23, lines 5-15, “users may optionally add new signals to the model to reflect newly available metrics, properties, or other data. In an embodiment, metrics are added”. Also see, Column 16, lines 5-45);
--performing metric […] functions, based on the metric […] selections, on a combination of the first and second sets of data to map performance information from the first and second sets to a reference space (Agarwal: Figure 4, Column 16, lines 5-15, “the metric generation component 342 computing values of metrics associated with the provider objects, the patient objects, and the pharmacy objects… one or more of the particular metrics for which values are calculated may be variables within the particular fraud detection model(s)”, Column 19, lines 40-50, “calculations based on comparing properties and/or metrics associated with the identified lead and properties and/or metrics associated with previously identified leads”, Column 20, lines 15-20, “utilize any of a variety of data visualization techniques, such as maps, node-based graphs, and so forth, for presenting the lead objects” Column 23, line 50-Column 24, line 20, “training data translation component 346 deriving one or more metrics from the training data… the model refinement component 348 modifying or updating the existing nearest neighbor model using the metric(s) derived… The derived metric(s) define a metric space (also referred to as a feature space) in which known fraudulent leads and yet-undetected fraudulent leads are clustered together… the lead identification component 344 applying the modified/updated existing model to database objects to identify the set of unusual metric values”. Also see, column 16, lines 15-65, Column 24, line 35-column 26, line 15. 
The Examiner notes metric functions are applied to the training information (i.e., the second set of data) to determine metrics, and create a feature space (i.e., a reference space), the values of metrics are calculated for the provider objects (i.e., the first set of data) and identified unusual metrics are identified from the objects, which under the broadest reasonable interpretation reads on application of the metric functions on a combination of the first and second set of data);
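For illustration only (hypothetical metric functions, entity names, and values; these are assumptions of the undersigned for clarity and are not drawn from Agarwal or the record), the mapping of entities into a shared metric/feature space and the nearest-neighbor style comparison described above can be sketched as:

```python
# Illustrative sketch only: hypothetical metric functions map each entity's
# performance information to a point in a shared reference (feature) space,
# where distances to known-fraud entities can then be computed.
from math import sqrt

# Hypothetical user-selected metric functions (assumptions, not from Agarwal).
METRICS = {
    "avg_charge": lambda e: e["total_charges"] / max(e["claim_count"], 1),
    "claims_per_patient": lambda e: e["claim_count"] / max(e["patient_count"], 1),
}

def to_feature_space(entities, metrics):
    """Map each entity to a point in the reference (feature) space."""
    return {e["id"]: tuple(fn(e) for fn in metrics.values()) for e in entities}

def euclidean(p, q):
    """Distance between two points in the reference space."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# First set (entities under analysis) and second set (known fraud), both hypothetical.
first = [{"id": "P1", "total_charges": 50000, "claim_count": 100, "patient_count": 20}]
second = [{"id": "F1", "total_charges": 52000, "claim_count": 100, "patient_count": 21}]

points = to_feature_space(first + second, METRICS)
# Nearest-neighbor style comparison: how close an analyzed entity sits to a
# known-fraud entity in the shared reference space.
distance = euclidean(points["P1"], points["F1"])
```

Both data sets are mapped through the same selected metrics, so distances in the resulting space are directly comparable, which is the premise of the nearest-neighbor model cited above.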
--[…] clustering mapped performance information […] in the reference space; [… to identify …] nodes in a graph, […], each node including one or more entities as members, at least a subset of nodes including two or more entities as members, each node being connected to another node if they share at least one common entity as members of each of those nodes (Agarwal: Figure 4, Column 8, lines 10-30, “a medical claim that involves such an entity or group of entities”, Column 14, lines 15-55, “A health care provider may be any entity that provides health care services. Health care providers may include organizational entities, also referred to as facilities or institutions, such as hospitals and clinics. Health care providers may also or instead include individual practitioners, also referred to as health care workers, such as doctors and dentists. In some cases, such as in the case of solo practitioners, an individual practitioner may also function as an organizational entity”, Column 23, line 64-Column 24, line 10, “The derived metric(s) define a metric space (also referred to as a feature space) in which known fraudulent leads and yet undetected fraudulent leads are clustered together.
The features of the positive and/or investigated leads, which are defined in the corresponding derived metrics, provide a starting point from which to search for other leads having similar features (e.g., the nearest neighbors) and may also define a permissible maximum distance from the starting point for a lead to be considered a nearest neighbor”, Column 24, lines 35-45, “The derived metrics may also define what network relationship(s) to look for between pairs of entities (or a cluster of entities) and/or the suspected fraudulent features to look for between pairs of entities”, Column 25, lines 4-15, “network model starts with a known "bad" provider (e.g., previously identified positive and/or investigated provider lead), determines the "bad" provider's network(s)… A provider-provider graph is conceptually constructed where each node of the graph represents a provider and edges of the graph represent jaccard distances of patients shared between providers to detect the one or more additional "bad" providers”, Column 29, lines 5-15, “use a clustering technique”. The Examiner notes a clustering technique is used on the derived metrics to construct a provider-provider graph, in which providers can be individual entities (i.e., a doctor) or organizational entities (i.e., a clinic or a plurality of doctors), are nodes and edges are patients that are shared between the various providers, and teaches what is required under the broadest reasonable interpretation);
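For illustration only (hypothetical patient sets and provider names; the edge threshold is an assumption, not a value taught by Agarwal), the provider-provider graph described above at Column 25, with edges based on Jaccard distances of patients shared between providers, can be sketched as:

```python
# Illustrative sketch only: a provider-provider graph in which each node is a
# provider and edges reflect Jaccard distances of patients shared between
# providers; providers close to a known "bad" provider are surfaced for review.
def jaccard_distance(a, b):
    """1 - |A∩B|/|A∪B| over two patient sets; 0.0 means identical sets."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

# Hypothetical patient sets per provider (assumptions, not from the record).
patients = {
    "bad_provider": {"m1", "m2", "m3"},  # previously flagged lead
    "provider_x": {"m2", "m3", "m4"},    # shares two patients
    "provider_y": {"m9"},                # shares no patients
}

THRESHOLD = 0.8  # assumed cutoff: an edge exists when distance is below this
providers = list(patients)
edges = {
    (p, q): jaccard_distance(patients[p], patients[q])
    for i, p in enumerate(providers)
    for q in providers[i + 1:]
    if jaccard_distance(patients[p], patients[q]) < THRESHOLD
}

# Providers connected by an edge to the known "bad" provider.
connected = {q for p, q in edges if p == "bad_provider"} | \
            {p for p, q in edges if q == "bad_provider"}
```

A smaller Jaccard distance means more shared patients, so thresholding the distance yields edges only between providers with substantial patient overlap.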
--identifying nodes that include at least one entity from the second set of data as a member (Agarwal: Figure 4, Table 1, “Identify non-flagged providers who are strongly connected to previously flagged providers using, for example, a weighted data structure (e.g., a weighted provider-provider bidirectional graph in which edges are weighted by members shared between providers”, Column 19, lines 40-50, “calculations based on comparing properties and/or metrics associated with the identified lead and properties and/or metrics associated with previously identified leads”, Column 23, line 64-Column 24, line 10, “The features of the positive and/or investigated leads, which are defined in the corresponding derived metrics, provide a starting point from which to search for other leads having similar features (e.g., the nearest neighbors) and may also define a permissible maximum distance from the starting point for a lead to be considered a nearest neighbor”, Column 25, lines 4-15, “network model starts with a known "bad" provider (e.g., previously identified positive and/or investigated provider lead), determines the "bad" provider's network(s)… A provider-provider graph is conceptually constructed where each node of the graph represents a provider and edges of the graph represent jaccard distances of patients shared between providers to detect the one or more additional "bad" providers”. Also see, Column 8, lines 20-30, Column 24, lines 35-45);
--identifying entities from the first set of data that share membership of at least one identified node with at least one entity from the second set of data (Agarwal: Figure 4, Table 1, “Identify providers who have a certain proportion of members who are associated with flagged providers or non-flagged providers strongly connected to previously flagged providers using, for example, an unweighted data structure (e.g., an unweighted provider-member graph”, Column 2, lines 15-20, “identifying one or more similar known instances of misuse from among the one or more known instances of misuse based on the degree of similarity calculated between the detected instance and each of the one or more known instances of misuse”, Column 11, lines 40-60, “determines how some or all of the identified leads provided by the lead identification component 344 relates to certain previous leads”, Column 19, lines 5-30, “if a sufficient number of other leads were previously labelled as fraud by analysts… and a nearest neighbor analysis reveals that a given identified lead is sufficiently similar to one or more of these fraud-labelled other leads… identify these previously fraud-labelled leads”. Also see, Columns 23-25. The claim is taught by the identification of providers with previously labeled leads with a proportion of non-labeled providers, and teaches what is required under the broadest reasonable interpretation);
--for at least a subset of identified entities from the first set of data, for each identified entity, determining a number of entities from the second set of data that share node membership with the identified entity (Agarwal: Column 17, lines 40-50, “how many providers issued the same prescription to a single member. If the number of providers exceeds a threshold value”, Column 24, lines 60-Column 25, line 15, “provider-provider relationships form networks that are highly informative and can be used to uncover fraudulent entities”, Column 34, lines 30-end, “Various example metrics for automatically identifying, prioritizing, and/or investigating leads are described… metrics may be calculated and displayed in various visualization interfaces associated with search results… Metrics related to member objects may include, without limitation, one or more of… a count of distinct providers”. Also see, Table 1. The Examiner notes a count of number of entities associated with a node in a provider-provider graph teaches what is required of the claim under the broadest reasonable interpretation);
--ranking the at least a subset of identified entities from the first set of data based on shared node membership with the entities from the second set of data, wherein the ranking is based, at least in part, on the determined number of entities from the second set of data that share node membership with the identified entity (Agarwal: Column 8, lines 25-30, “The identification of fraud leads may also include ranking the leads from most to least suspected of fraudulent activity”, Column 13, lines 5-50, “leads may be ranked by functions that are specific to the fraud detection model by which they were identified”, Column 16, line 1-Column 17, line 40, “count the number of associated events… count the number of the practitioner's patients who have a certain quality such as a history of drug abuse… calculate score(s) that quantify how likely it is that an object is associated with fraudulent activity… a statistical quantity (e.g., a score) and may be computed on a count of a number of procedures between a member and a provider”, Column 20, lines 5-25, “The set of leads, in some embodiments, may be a ranked list based on one or more ranking criteria, such as highest to lowest fraud probability (e.g., based on scores from the fraud detection models and/or other ranking functions)”, Column 34, lines 30-end, “Various example metrics for automatically identifying, prioritizing, and/or investigating leads are described… metrics may be calculated and displayed in various visualization interfaces associated with search results… Metrics related to member objects may include, without limitation, one or more of… a count of distinct providers”. Also see, Column 17 for discussion of threshold. The Examiner notes the score can be based on a count of shared entities (i.e., providers) associated with known fraud/abuse in a provider-provider graph, and this score is used to rank the entities and teaches what is required of the claim under the broadest reasonable interpretation); and
--generating a first report showing the ranking of the at least a subset of entities and listing the identified entities that are members of the identified nodes that are from the first set of data as possibly involved in fraud or abuse (Agarwal: Abstract, “generating and presenting, for a set of suspected entities, natural language explanatory information explaining how and/or why each of the respective suspected entities is considered to be suspected of fraudulent, wasteful, and/or abusive activity”, Column 6, lines 22-25, “a natural language explanation accompanying a report of one or more suspected entities”, Column 8, lines 20-40, “identify one or more fraud leads from among the medical claims data. Each of the fraud leads comprises identification of a potential fraud-related entity… fraud lead explanation generation module 206 generates graphical and/or textual information to accompany each of the identified fraud leads. The graphical and/or textual information provides a natural language explanation or context for the respective fraud lead, such as explaining how the lead is similar to a previous lead deemed to be a positive lead”, Column 20, lines 5-30, “the user interface component 370 presenting a set of the identified lead objects… The set of leads, in some embodiments, may be a ranked list based on one or more ranking criteria, such as highest to lowest fraud probability (e.g., based on scores from the fraud detection models and/or other ranking functions)”. Also see, Table 1).
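For illustration only (hypothetical node memberships and entity names; assumptions for clarity, not part of the cited record), the identifying, counting, and ranking steps mapped above can be sketched as:

```python
# Illustrative sketch only: rank first-set entities by the number of distinct
# second-set (known fraud/abuse) entities that share node membership with them.

# Hypothetical graph nodes and memberships (assumptions, not from the record).
nodes = {
    "n1": {"P1", "P2", "F1"},  # "F…" entities come from the second set
    "n2": {"P1", "F1", "F2"},
    "n3": {"P3"},              # no flagged member: not an "identified node"
}
flagged = {"F1", "F2"}

# For each identified first-set entity, collect its distinct flagged co-members.
shared = {}
for members in nodes.values():
    flagged_here = members & flagged
    if not flagged_here:
        continue  # only nodes containing at least one second-set entity count
    for entity in members - flagged:
        shared.setdefault(entity, set()).update(flagged_here)

# Rank by the determined number of shared flagged entities, descending.
ranking = sorted(shared, key=lambda e: len(shared[e]), reverse=True)
# A report would then list `ranking` with each entity's flagged co-members.
```

Entities sharing nodes with more known-fraud entities rank higher, which is the relationship the claim ties the ranking to.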
Agarwal may not explicitly teach the following limitations:
--receiving metric and lens selections;
--performing metric and lens functions, based on the metric and lens selections, on a combination of the first and second sets of data to map performance information from the first and second sets to a reference space;
--generating a cover dividing the reference space into open sets and clustering mapped performance information using the open sets in the reference space;
--defining nodes in a graph, at least a subset of the open sets each defining a node and membership of that node being defined based on contents of that open set, each node including one or more entities as members, at least a subset of nodes including two or more entities as members, each node being connected to another node if they share at least one common entity as members of each of those nodes;
Lum teaches receiving metric and lens selections (Lum: Figure 3, elements 310, 312, Figure 6, paragraphs [0095]-[0097], “generates an interface window allowing the user of the user device 102a options for a variety of different metrics and filter preferences. The interface window may be a drop down menu identifying a variety of distance metrics to be used in the analysis. Metric options may include, but are not limited to, Euclidean, DB Metric, variance normalized Euclidean, and total normalized Euclidean… the user selects and provides filter identifier(s) to the filter module 216… The filters, for example, may be user defined, geometric, or based on data which has been pre-processed”. Also see, paragraphs [0098]-[0104], [0127]. The Examiner notes a filter is a lens);
--performing metric and lens functions, based on the metric and lens selections, on a combination of the first and second sets of data to map performance information from the first and second sets to a reference space (Lum: Figure 3, element 316, Figure 8, paragraph [0106], “analysis module 220 processes data of selected fields based on the metric, filter(s), and resolution(s) to generate the visualization” paragraphs [0129]-[0130], “a reference of map from S is to a reference metric space R. R may be Euclidean space of some dimension, but it may also be the circle, torus, a tree, or other metric space. The map can be described by one or more filters (i.e., real valued functions on S). These filters can be defined by geometric invariants, such as the output of a density estimator, a notion of data depth, or functions specified by the origin of S as arising from a data set”);
generating a cover dividing the reference space into open sets and clustering mapped performance information using the open sets in the reference space (Lum: Figures 7-8, 12, paragraph [0131], “the resolution module 218 generates a cover of R based on the resolution received from the user (e.g., filter(s), intervals, and overlap--see FIG. 7). The cover of R may be a finite collection of open sets (in the metric of R) such that every point in R lies in at least one of these sets”, paragraph [0137], “In step 810, the analysis module 220 clusters each S(d) based on the metric, filter, and the space S. In some embodiments, a dynamic single-linkage clustering algorithm may be used to partition S(d). Those skilled in the art will appreciate that any number of clustering algorithms may be used with embodiments discussed herein”, paragraphs [0140]-[0142], “the visualization engine 222 identifies nodes which are associated with a subset of the partition elements of all of the S(d) for generating an interactive visualization”);
defining nodes in a graph, at least a subset of the open sets each defining a node and membership of that node being defined based on contents of that open set, each node including one or more entities as members, at least a subset of nodes including two or more entities as members, each node being connected to another node if they share at least one common entity as members of each of those nodes (Lum: Figure 8, 12, paragraph [0063], “nodes comprising data that has been clustered”, paragraphs [0140]-[0142], “the visualization engine 222 identifies nodes which are associated with a subset of the partition elements of all of the S(d) for generating an interactive visualization… Once the nodes are constructed, the intersections (e.g., edges) may be computed "all at once," by computing, for each point, the set of node sets”, paragraphs [0147]-[0150]. “The interactive visualization comprises of two types of objects: nodes (e.g., nodes 902 and 906) (the colored balls) and the edges (e.g., edge 904) (the black lines). The edges connect pairs of nodes (e.g., edge 904 connects node 902 with node 906). As discussed herein, each node may represent a collection of data points (rows in the database identified by the user). In one example, connected nodes tend to include data points which are "similar to" (e.g., clustered with) each other”, paragraph [0215], “nodes representing clusters of patient members and edges between nodes representing common patient members”, paragraphs [0218]-[0222], “The analysis server may join clusters to identify edges (e.g., connecting lines between nodes). Clusters joined by edges (i.e., interconnections) share one or more member patients…. Each node (i.e., ball or grouping displayed in the map visualization 1400) contains a subset of patients with similar genetic profiles”);
One of ordinary skill in the art before the effective filing date would have found it obvious to include using both metric and lens selection to generate a cover of a reference space and defining of nodes in a graph that are connected by shared patients as edges, as taught by Lum, within the use of metrics to generate a reference space and provider-provider graph as taught by Agarwal, with the motivation of improving the accuracy of the visualization (Lum: paragraphs [0195] and [0241]).
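For illustration only (not part of the grounds of rejection), the Mapper-style pipeline attributed to Lum above (a lens mapping data to a reference space, a cover of overlapping open sets over that space, clustering within each open set to define nodes, and edges between nodes sharing a common member) can be sketched as follows. All names and the one-dimensional simplification are hypothetical assumptions, not quotations from Agarwal or Lum.

```python
# Minimal, illustrative Mapper-style sketch (hypothetical names; a real
# implementation would cluster within each open set rather than treating
# each open set as a single node).

def mapper_nodes(points, lens, n_intervals=4, overlap=0.25):
    """Return (nodes, edges): nodes are frozensets of point indices."""
    values = [lens(p) for p in points]            # map to reference space
    lo, hi = min(values), max(values)
    length = (hi - lo) / n_intervals
    nodes = []
    for i in range(n_intervals):
        # Each overlapping interval plays the role of one open set of the cover.
        start = lo + i * length - overlap * length
        end = lo + (i + 1) * length + overlap * length
        members = [j for j, v in enumerate(values) if start <= v <= end]
        if members:
            nodes.append(frozenset(members))
    # Two nodes are connected by an edge if they share at least one member.
    edges = [(a, b)
             for a in range(len(nodes))
             for b in range(a + 1, len(nodes))
             if nodes[a] & nodes[b]]
    return nodes, edges
```

Because the intervals overlap, a single entity can fall in two open sets, which is precisely what produces the edges between nodes.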
Regarding (Previously Presented) claim 2, Agarwal and Lum teach the limitations of claim 1, and Agarwal further teaches wherein the method further comprises determining other entities that are members of other nodes that are connected to the identified nodes by an edge, the other entities being from the first set of data, and adding one or more of the other entities to the first report (Agarwal: Figure 4, Table 1, “Identify non-flagged providers who are strongly connected to previously flagged providers using, for example, a weighted data structure (e.g., a weighted provider-provider bidirectional graph in which edges are weighted by members shared between providers”, Column 25, lines 4-15, “network model starts with a known "bad" provider (e.g., previously identified positive and/or investigated provider lead), determines the "bad" provider's network(s)… A provider-provider graph is conceptually constructed where each node of the graph represents a provider and edges of the graph represent jaccard distances of patients shared between providers to detect the one or more additional "bad" providers”, Column 23, line 64-Column 24, line 10, “The features of the positive and/or investigated leads, which are defined in the corresponding derived metrics, provide a starting point from which to search for other leads having similar features (e.g., the nearest neighbors) and may also define a permissible maximum distance from the starting point for a lead to be considered a nearest neighbor.”. Also see, Column 8, lines 20-30, Column 24, lines 35-45. The Examiner interprets the distance from a “bad” provider as the criterion used in Agarwal; any node (entity) within that distance is identified as a lead and added to the report, including nodes not directly touching a previously determined fraudulent node).
The motivation to combine is the same as in claim 1, incorporated herein.
Regarding (Previously Presented) claim 4, Agarwal and Lum teach the limitations of claim 1, and Agarwal further teaches wherein the entities of the first and second sets of data further include consumers of services and the performance information of the first and second sets of data further includes access information and resource utilization of networks and network resources (Agarwal: Column 8, lines 10-30, “A plurality of data 202 including, but not limited to, medical claims data, pharmacy claims data… identification of a potential fraud-related entity, such as a medical service or product provider, pharmacist, or health care plan member (e.g., patient), or a medical claim that involves such an entity or group of entities”, Column 32, line 50-Column 33, line 2, “claims objects for prescriptions, claim objects for laboratory tests, claim objects for medical procedures, and claim objects for other types of services”. The Examiner interprets the data in the claims for the patients as reading on resource utilization in the network).
The motivation to combine is the same as in claim 1, incorporated herein.
Regarding (Original) claim 5, Agarwal and Lum teach the limitations of claim 1, and Agarwal further teaches wherein the method further comprises applying one or more functions on at least some performance data of the first set of data and including those determined entities that are members of the identified nodes that are from the first set of data in the first report based, in part, on at least one value calculated as a result of the one or more functions (Agarwal: Column 8, lines 20-40, “The identification of fraud leads may also include ranking the leads from most to least suspected of fraudulent activity.”, Column 11, lines 45-50, “the lead-relatedness calculation component 360 may determine that an identified lead is, based on various calculations and/or functions, similar in characteristics to, or identified for similar reasons as, one or more previous leads that were determined to actually correspond to fraudulent activity, or one or more previous leads that led to follow-up investigations”, Column 13, lines 20-33, “two primary metrics for ranking leads are configured to quantify likeliness of fraud, and impact of fraud if fraud has in fact occurred… leads may be ranked by functions that are specific to the fraud detection model by which they were identified, and/or by functions that consider the leads independently of the fraud detection model(s) by which they were identified”).
The motivation to combine is the same as in claim 1, incorporated herein.
Regarding (Original) claim 6, Agarwal and Lum teach the limitations of claim 1, and Lum further teaches wherein the one or more functions include an L1 function or L infinity function (Lum: paragraphs [0097]-[0100], “A variety of geometric filters may be available for the user to choose. Geometric filters may include… L1 Eccentricity… L-infinity Eccentricity”).
The motivation to combine is the same as in claim 1, incorporated herein.
Regarding (Previously Presented) claim 7, Agarwal and Lum teach the limitations of claim 1, and Agarwal further teaches wherein the method further comprises: receiving a new entity with new performance information associated with that new entity (Agarwal: Figures 2-3, Column 8, lines 15-20, “A plurality of data 202 including, but not limited to, medical claims data, pharmacy claims data… are fed into a fraud lead generation module 204”, Column 14, lines 15-25, “the data import component 308 generating provider objects 312 that describe different health care providers. Data for the provider objects may be obtained, for example, from claims submissions of providers to insurers, who then provide the data to a computer system that implements the techniques herein. A health care provider may be any entity that provides health care services”, Column 15, lines 3-15, “the data import component 308 generating health care event objects 318 that describe one or more of: health care claims, prescriptions, medical procedures, or diagnoses. For example, an event object may be generated for each log entry in one or more logs from providers, insurers, and/or pharmacies, or based on claims submissions to insurers”. Also see, Column 5, lines 45-60, Column 9, lines 40-Column 10, line 5. The Examiner notes provider information can be continuously fed into the trained system, in particular after the model has been updated with feedback from the first/second data);
--determining distances between new performance information of new entity and performance information of entities of first and second sets of data; comparing distances between new performance information for new entity and the distances between entities of each node; determining a location of new entity in the graph based on the comparison; and generating a second report if the new entity is determined to be in a node that has at least one member from the second set of data (Agarwal: Column 19, lines 40-50, “calculations based on comparing properties and/or metrics associated with the identified lead and properties and/or metrics associated with previously identified leads”, Column 24, lines 10-20, “applying the modified/updated existing model to database objects to identify the set of unusual metric values, similar to the description above with respect to block 456a. The updated nearest neighbor model uses or implements the metric space to find new leads that are closest in cosine distance to the previously known fraudulent leads. The new leads that are identified using this model are outputted in ranked order relative to each other”. Also see, column 23, lines 4-25, column 24, lines 4-15. The Examiner interprets new data can be fed into the model as the model is updated based on the feedback of the first/second data, to produce a new report of leads).
The motivation to combine is the same as in claim 1, incorporated herein.
Regarding (Previously Presented) claim 8, Agarwal and Lum teach the limitations of claim 1, and Agarwal further teaches wherein the method further comprises: receiving a new entity with new performance information associated with that new entity (Agarwal: Figures 2-3, Column 8, lines 15-20, “A plurality of data 202 including, but not limited to, medical claims data, pharmacy claims data… are fed into a fraud lead generation module 204”, Column 14, lines 15-25, “the data import component 308 generating provider objects 312 that describe different health care providers. Data for the provider objects may be obtained, for example, from claims submissions of providers to insurers, who then provide the data to a computer system that implements the techniques herein. A health care provider may be any entity that provides health care services”, Column 15, lines 3-15, “the data import component 308 generating health care event objects 318 that describe one or more of: health care claims, prescriptions, medical procedures, or diagnoses. For example, an event object may be generated for each log entry in one or more logs from providers, insurers, and/or pharmacies, or based on claims submissions to insurers”. Also see, Column 5, lines 45-60, Column 9, lines 40-Column 10, line 5. The Examiner notes provider information can be continuously fed into the trained system, in particular after the model has been updated with feedback from the first/second data);
--determining distances between new performance information of new entity and performance information of entities of first and second sets of data; comparing distances between new performance information for new entity and the distances between entities of each node; determining a location of new entity in the graph based on the comparison; and generating a second report if the new entity is determined to be in a node that is linked by an edge to a node that has at least one member from the second set of data (Agarwal: Table 1, “Identify non-flagged providers who are strongly connected to previously flagged providers using, for example, a weighted data structure (e.g., a weighted provider-provider bidirectional graph in which edges are weighted by members shared between providers”, Column 19, lines 40-50, “calculations based on comparing properties and/or metrics associated with the identified lead and properties and/or metrics associated with previously identified leads”, Column 24, lines 10-20, “applying the modified/updated existing model to database objects to identify the set of unusual metric values, similar to the description above with respect to block 456a. The updated nearest neighbor model uses or implements the metric space to find new leads that are closest in cosine distance to the previously known fraudulent leads. The new leads that are identified using this model are outputted in ranked order relative to each other”. Also see, column 23, lines 4-25, column 24, lines 4-15. The Examiner interprets new data can be fed into the model as the model is updated based on the feedback of the first/second data, to produce a new report of leads).
The motivation to combine is the same as in claim 1, incorporated herein.
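For illustration only (not part of the grounds of rejection), the placement of a new entity discussed in claims 7 and 8 above (locating the new entity in the graph by distance comparison, then reporting if its node, or a node linked to it by an edge, contains a member associated with known or suspected past fraud) can be sketched as follows; all names and the scalar distance are hypothetical assumptions, not quotations from the references.

```python
# Illustrative sketch (hypothetical names): place a new entity at the nearest
# node by distance to that node's members, then flag it if the node itself
# (claim 7) or any node connected to it by an edge (claim 8) contains a
# member from the known/suspected-fraud set.

def flag_new_entity(new_point, points, nodes, edges, known_fraud):
    """nodes: list of sets of indices into points; known_fraud: set of indices."""
    def dist_to_node(node):
        # Distance from the new entity to a node = minimum member distance.
        return min(abs(points[j] - new_point) for j in node)

    # Determine the location of the new entity: the nearest node.
    loc = min(range(len(nodes)), key=lambda i: dist_to_node(nodes[i]))
    # Nodes linked to that location by an edge.
    neighbors = {b if a == loc else a for a, b in edges if loc in (a, b)}
    # Report if the node or any linked node has a known/suspected-fraud member.
    suspicious = any(nodes[i] & known_fraud for i in {loc} | neighbors)
    return loc, suspicious
```

Claim 7's condition corresponds to the `loc` node itself intersecting `known_fraud`; claim 8's condition extends the check across one edge.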
Regarding (Original) claim 9, Agarwal and Lum teach the limitations of claim 1, and Agarwal further teaches wherein the method further comprises: removing performance information from the first set of data if the performance information is older than a predetermined date leaving remaining performance information (Agarwal: Column 16, lines 35-45, “metrics may be time-sensitive. For example, some metrics may pertain to events of a recent time period such as the last month or year, while others may pertain to designated time periods”. The Examiner interprets this as removing data older than a specific date);
--performing the metric and lens functions on first and second set of data to map remaining performance information from the first set of data and the performance information from the second set of data to the reference space (Agarwal: Column 13, lines 45-50, Column 20, lines 15-20, Column 25, lines 4-15, Column 23, line 64-Column 24, line 10; Lum: Figure 3, elements 310, 312, Figure 6, paragraphs [0095]-[0097], “generates an interface window allowing the user of the user device 102a options for a variety of different metrics and filter preferences. The interface window may be a drop down menu identifying a variety of distance metrics to be used in the analysis. Metric options may include, but are not limited to, Euclidean, DB Metric, variance normalized Euclidean, and total normalized Euclidean… the user selects and provides filter identifier(s) to the filter module 216… The filters, for example, may be user defined, geometric, or based on data which has been pre-processed”); and
--generating cover of the reference space and cluster mapped performance information to identify nodes in the graph (Agarwal: Figure 4, Column 25, lines 4-15, “network model starts with a known "bad" provider (e.g., previously identified positive and/or investigated provider lead), determines the "bad" provider's network(s)… A provider-provider graph is conceptually constructed where each node of the graph represents a provider and edges of the graph represent jaccard distances of patients shared between providers to detect the one or more additional "bad" providers”, Column 23, line 64-Column 24, line 10, “The derived metric(s) define a metric space (also referred to as a feature space) in which known fraudulent leads and yet undetected fraudulent leads are clustered together. The features of the positive and/or investigated leads, which are defined in the corresponding derived metrics, provide a starting point from which to search for other leads having similar features (e.g., the nearest neighbors) and may also define a permissible maximum distance from the starting point for a lead to be considered a nearest neighbor.”, Column 24, lines 35-45, “The derived metrics may also define what network relationship(s) to look for between pairs of entities (or a cluster of entities) and/or the suspected fraudulent features to look for between pairs of entities”, Also see, Figure 8, 12, paragraphs [0131], [0137], [0140]-[0142]).
The motivation to combine is the same as in claim 1, incorporated herein.
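For illustration only (not part of the grounds of rejection), the first step of claim 9 above, removing performance information older than a predetermined date before re-running the metric and lens mapping, can be sketched as follows; the record layout and names are hypothetical assumptions.

```python
from datetime import date

# Illustrative sketch (hypothetical names): keep only performance records
# dated on or after a predetermined cutoff; the remaining records would then
# be fed back through the metric/lens mapping and cover generation.

def remaining_records(records, cutoff):
    """records: (entity, record_date, value) tuples; keep those on/after cutoff."""
    return [r for r in records if r[1] >= cutoff]
```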
REGARDING CLAIM(S) 10 AND 19
Claim(s) 10 and 19 are analogous to Claim(s) 1, thus Claim(s) 10 and 19 are similarly analyzed and rejected in a manner consistent with the rejection of Claim(s) 1.
REGARDING CLAIM(S) 11 and 13-18
Claim(s) 11 and 13-18 are analogous to Claim(s) 2 and 4-9, thus Claim(s) 11 and 13-18 are similarly analyzed and rejected in a manner consistent with the rejection of Claim(s) 2 and 4-9.
Regarding (Previously Presented) claim 20, Agarwal and Lum teach the limitations of claim 1, and further teach wherein the method further comprises:
--displaying a graphical user interface depicting at least a part of a topological data analysis (TDA) network, the TDA network depicting at least a subset of the nodes and edges, the graphical user interface allowing a user to interact with information associated with the nodes (Agarwal: Figure 4, Column 20, lines 5-30, “the user interface component 370 presenting a set of the identified lead objects… utilize any of a variety of data visualization techniques, such as maps, node-based graphs, and so forth, for presenting the lead objects”; Lum: Figures 9-11);
--indicating one or more nodes in the at least a part of the TDA network, the one or more nodes being indicated including at least one entity associated with known or suspected past fraud or abuse as a member (Agarwal: Figure 4, Column 20, lines 5-30, “the user interface component 370 presenting a set of the identified lead objects… The set of leads, in some embodiments, may be a ranked list based on one or more ranking criteria, such as highest to lowest fraud probability (e.g., based on scores from the fraud detection models and/or other ranking functions)”; Lum: Figures 9-11. The identified nodes are displayed and indicated via the display/list);
--receiving, by the graphical user interface, a selection of at least one node, the at least one node indicating that the at least one node has the at least one entity associated with known or suspected past fraud or abuse as a member (Agarwal: Column 8, line 60-Column 9, line 5, “Feedback 210 may be captured by machines via interactions on fraud analyst workspaces 208… which leads were selected”, Column 20, line 60-Column 21, line 25, “As analyst(s) review the identified lead objects and associated explanations, analyst(s) may label or flag certain of the identified lead objects… such feedback type of data is received by the user interface component 370 in block 426”; Lum: Figures 9-11. The Examiner notes a lead may be selected to provide feedback, which teaches selection of a node); and
--displaying identifying information that identifies entities that are members of the selection (Agarwal: Figure 5, Columns 29-31, “FIG. 5 illustrates a user interface 500 illustrating an example lead summary report for a particular identified lead… The element 506 can identify the particular fraud detection model(s) or scheme(s) 10 upon which the particular identified lead was deemed to be potentially fraudulent”; Lum: Figures 9-11, paragraph [0158], “when the user selects a node or edge, node information or edge information may be displayed”).
The motivation to combine is the same as in claim 1, incorporated herein.
Regarding (Previously Presented) claim 21, Agarwal and Lum teach the limitations of claim 20, and further teach wherein indicating in the at least a part of the TDA network the one or more nodes that includes the at least one entity associated with known or suspected past fraud or abuse as a member comprises distinguishing the one or more nodes from each other based on number of entities associated with known or suspected past fraud or abuse as members, wherein those nodes that have a greater number of members who are entities associated with known or suspected past fraud or abuse are visually distinctive from those nodes that have a fewer number of members who are entities associated with known or suspected past fraud or abuse (Agarwal: Column 8, lines 25-30, “The identification of fraud leads may also include ranking the leads from most to least suspected of fraudulent activity”, Column 13, lines 5-50, “Lead prioritization may comprise, for instance, filtering the set of leads based on one or more of… which leads constitute the most obvious cases of fraud, which leads are easiest to investigate, or which leads are closely clustered… leads may be ranked by functions that are specific to the fraud detection model by which they were identified”; Lum: Figures 9-11, paragraph [0160], “Color option 914 allows the user to display different information based on color of the objects. Color option 914 in FIG. 9 is set to filter Density, however, other filters may be chosen and the objects re-colored based on the selection”. The Examiner notes, displaying the ranking is a visual distinction on the display, and teaches what is required under the broadest reasonable interpretation).
The motivation to combine is the same as in claim 1, incorporated herein.
Response to Arguments
Applicant's arguments filed 19 September 2025 have been fully considered but they are not persuasive. Applicant’s arguments will be addressed herein below in the order in which they appear in the response filed on 19 September 2025.
Rejections under 35 U.S.C. § 101
Regarding the rejection of claims 1-2, 4-11 and 13-21, the Examiner has considered the Applicant’s arguments but does not find them persuasive. The Examiner has attempted to address all of the arguments presented by the Applicant; however, any arguments inadvertently not addressed are not persuasive for at least the following reasons:
Applicant argues:
However, Applicant respectfully submits that this characterization is incorrect. The claims are not directed to managing personal behavior or interactions between people, but rather to a specific technological solution for analyzing large-scale healthcare data using topological data analysis techniques… Applicant respectfully submits that the current claim is not a "rare circumstance" in which this grouping should be expanded… This limitation describes a specific computational technique rooted in topological data analysis, which is a mathematical framework for analyzing the shape of data. The process of generating covers, dividing reference spaces into open sets, and clustering mapped performance information represents a particular technological approach to data analysis that goes far beyond abstract data organization… The claims address a specific technological problem in the field of large-scale data analysis… This is not merely organizing human activity, but rather solving a technical challenge inherent in processing and analyzing massive datasets… this approach overcomes problems with the prior art where "certain qualitative properties" may not be readily represented within prior art… Addressing how to discover interrelationships within the data using an analytical approach to reveal those relationships in a queryless manner is clearly technical (and an improvement over the prior art). 
As described in paragraph [0081]… The claims recite specific technological implementations including the use of metric and lens functions to map performance information to a reference space, the generation of covers dividing the reference space into open sets, and the clustering of mapped performance information using these open sets… Applicant submits that the claims require specific computational processes that transform the computer into a specialized analytical tool… The claims also recite processing "at least hundreds of thousands of data points," which presents significant computational challenges that require more than conventional computer processing… The receiving of metric and lens selections is integral to the claimed analytical process, as these selections directly control how the performance information is mapped to the reference space and how the subsequent clustering and node identification occurs. This is not peripheral data transmission, but rather a core component of the claimed technological solution.
The Examiner respectfully disagrees.
It is respectfully submitted that the claims are directed toward a human user interacting with and organizing data, using various human user selections to organize the data and provide an output of the organized data to a human, via various generic off-the-shelf computer components. As stated in MPEP 2106.04(a)(2), “certain activity between a person and a computer… may fall within the “certain methods of organizing human activity” grouping”. The claims are directed toward using human activity to organize data for a user, and thus fall within the certain methods of organizing human activity grouping of abstract ideas.
With respect to practical application, only the additional elements may provide a technical solution to a technical problem recited in Applicant's specification and/or an improvement in the functionality of the computer. In particular, the selection and use of various functions to organize data, and the organization of the data itself, are not additional elements; they are the abstract idea, applied by various generic computer components to a particular technological environment. Turning to Applicant's argued specification paragraphs [0081]-[0082], these paragraphs do not recite any technical problems rooted in computer hardware technology; they describe problems of “identifying entities… where there is no legal entitlement due to fraud, abuse, and/or waste”, which are, at best, human-activity problems of detecting fraud. While the claims may improve upon the abstract idea, an improved abstract idea is still an abstract idea. Finally, with respect to the amount of data being analyzed, at best this is merely application of the abstract idea on generic computer components, which are not particular but rather generic off-the-shelf hardware (see Applicant's Specification: Figure 18, paragraphs [0263]-[0264]); as stated in MPEP 2106.05(f)(2), “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Per Intellectual Ventures I LLC v. Capital One Bank (USA), merely analyzing “at least hundreds of thousands of data points” is not a technical improvement to the functionality of a computer. As the claims do not recite a technical solution to a technical problem recited in Applicant's specification, nor improve the performance of the computer, the claims are not subject matter eligible, and the argument is unpersuasive.
Rejections under 35 U.S.C. § 103
Regarding the rejection of claims 1-2, 4-11 and 13-21, the Examiner has considered the Applicant's arguments; however, the arguments are not persuasive as addressed herein. Any arguments inadvertently not addressed are unpersuasive for at least the following reasons:
Applicant argues:
Applicant respectfully disagrees, but to accelerate prosecution, without waiving any arguments, Applicant is amending the independent claims to refer explicitly to "defining" (i.e., in claims 1 and 9) and "define" (i.e., in claim 19) the nodes… At no point, however, does Agarwal include using claims are within a data set that is mapped to a reference space from which open sets are created. Further, these claims and organizational entities are not part of such sets that nodes are defined… Further, when each node is merely a provider, the nodes of Agarwal cannot be redefined as anything else. As such, they cannot be created as required by the claim, nor can the claim elements that rely on the specific creation of the nodes as defined by the claim have any relevance to Agarwal… As discussed previously, Agarwal explicitly discloses an incompatible network structure to Lum and is therefore not combinable… Applicant submits that Agarwal discloses creating data objects as well as a provider-provider graphs. A data object is not a graph "node." The nodes in the provider-provider graph are not nodes as defined and created in the claim… As discussed below, Agarwal's objects, such as provider objects, are objects that store data received from various data sources. They are not defined by data analytical lenses (e.g., functions) applied to open sets of data as discussed by Lum. These are fundamentally different. No amount of creating open sets of underlying data to identify nodes using lenses will have any impact on the creation of data objects defined by Agarwal. As such, any steps of functions using such nodes discussed in Lum will have no impact on the system of Agarwal… An object is not a node. Agarwal does not disclose that a node can be a plurality of providers...
These objects are not created by the process described in the Applicant's claims that create nodes AND objects that share the same doctor are not connected by an edge (also required by the claim)… Agarwal does not disclose creating provider objects by (1) performing metric and lens functions to map performance information to a reference space or (2) generating a cover dividing the reference space into sets and clustering mapped performance information using the sets in the reference space to identify objects… The provider-provider graph does not depict or show nodes as defined by the claim… A "provider" is not a "provider object" as earlier defined in Col. 14. The graph literally represents a single provider with edges to patients they provide services to…. The provider-provider graph literally defined in Agarwal shows connections to patients. They do not show connections between providers based on duplications of providers… Rather, the provider-provider graph shows a line between each provider and a member which can help seeing which providers are connected to a same member. These are not nodes that are connected because the nodes themselves share the same data that is used to define the nodes themselves… Moreover, Agarwal does not disclose that providers are connected to other providers, much less that providers are connected to other providers based on the mapping of the providers to the reference space and finding the same provider in multiple sets as required by the claim… Agarwal does not disclose using a metric space and clustering to create/identify nodes. Agarwal only discloses a metric space to perform analysis on claim data… In other words, Agarwal does not create nodes based on analysis of leads. 
The analysis in Agarwal only classifies leads… Here, while leads may be clustered to "provide a starting point from which to search for other leads having similar features," the leads themselves are not grouped into nodes based on dividing a reference space to identify nodes. Agarwal only discloses the process in FIG. 4 for lead classification… References Cannot Be Combined Where a Reference Teaches Away from Their Combination… Creating new nodes from the claims data is not discussed in Agarwal and, further, it would fundamentally change the invention of Agarwal. There does not appear to be anything missing from the Agarwal approach… Agarwal teaches away from Lum because any combination with the process of Lum would render Agarwal unworkable (i.e., applying algebraic topological analysis to generate and identify nodes that contain similar sensor data for interactive graph construction is NOT classification of fraudulent leads, and the result would obscure or overwrite the relationships that Agarwal sought to unveil).
The Examiner respectfully disagrees.
It is respectfully submitted that, in view of the amendments to the claims, the rejections' citations have been updated accordingly. Lum is now relied upon to teach many of the argued features, which were argued solely as not being taught by Agarwal; as such, it is the combination of Lum with Agarwal that teaches the argued limitations. In particular, Agarwal explicitly teaches deriving metrics using a selected metric function (see above, but at least Column 16, lines 5-15 and Column 19, lines 40-50), and further teaches clustering using these derived metrics for the creation of a provider-provider graph in which providers, which can be individual entities (i.e., a doctor) or organizational entities (i.e., a clinic or a plurality of doctors), are nodes and edges are patients shared between the various providers (see above, but at least Column 24, lines 35-45 and Column 25, lines 4-15). Under the broadest reasonable interpretation, these teachings satisfy the requirements for a node that contains one or more members and for clustering of data on which metric functions have been performed, as required by the claim. However, Agarwal may not explicitly recite selection and use of lens functions, in addition to the selection and use of metric functions, to generate a cover and to define nodes. Nevertheless, Lum explicitly teaches selection and use of both metric and lens functions (paragraphs [0095]-[0097] and paragraph [0106]), and further recites generation of a cover (see above, but at least paragraph [0131]), clustering of performance information using open sets (paragraph [0137]), and defining of nodes in which the nodes are defined by open sets and membership is defined based on the open sets (paragraphs [0140]-[0142] and paragraphs [0147]-[0150]). It would have been prima facie obvious to include these teachings within the teachings of Agarwal with the motivation of improving the accuracy of the visualization (Lum: paragraphs [0195] and [0241]).
With respect to inoperability, MPEP § 2143.01(V) states that an assessment of whether a combination would render the device inoperable must not "ignore the modifications that one skilled in the art would make." The Examiner notes that Agarwal is directed toward analysis of healthcare data to generate and present visualizations of identified data to a provider using a graph structure (see at least Abstract), while Lum is directed toward mapping and visualization of data using metric and lens selections to perform analysis using a graph structure (see at least Abstract). One of ordinary skill in the art would have no difficulty modifying Agarwal's selection and use of metric functions to generate a graph structure to include the selection and use of both metric and lens functions for creation of a graph data structure as taught by Lum, as both references are directed at analysis of healthcare data to construct a graph data structure for visualization to a human user. One of ordinary skill would understand that the resulting system is not inoperable and would find the combination prima facie obvious with the motivation of improving the accuracy of the visualization (Lum: paragraphs [0195] and [0241]). Applicant's argument is therefore unpersuasive.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew E Lee whose telephone number is (571)272-8323. The examiner can normally be reached Monday through Thursday, 9:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached on 571-270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.E.L./Examiner, Art Unit 3684
/Shahid Merchant/Supervisory Patent Examiner, Art Unit 3684