Prosecution Insights
Last updated: April 19, 2026
Application No. 18/206,927

SYSTEMS AND METHODS OF USING CONFIGURABLE FUNCTIONS TO HARMONIZE DATA FROM DISPARATE SOURCES

Status: Final Rejection (§101, §103, §112)
Filed: Jun 07, 2023
Examiner: LE, UYEN T
Art Unit: 2156
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Walgreen Co.
OA Round: 4 (Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 5-6
Expected Time to Grant: 2y 11m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 84% (above average; 669 granted / 797 resolved; +28.9% vs TC avg)
Interview Lift: +9.7% among resolved cases with interview (moderate, roughly +10%)
Avg Prosecution: 2y 11m (typical timeline)
Currently Pending: 24
Total Applications: 821 (career history, across all art units)

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 27.6% (-12.4% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 797 resolved cases.
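The headline figures above are simple ratios over the examiner's career counts. As a sanity check, a minimal sketch of how such dashboard numbers are typically derived (the formulas, including the implied Tech Center average, are assumptions about the tool's arithmetic, not its disclosed methodology):

```python
# Sanity-check sketch: deriving the headline examiner statistics from the
# raw career counts shown above. The formulas (simple ratio; TC average
# implied by subtracting the reported delta) are assumptions.

granted = 669
resolved = 797

allow_rate = granted / resolved             # career allowance rate
implied_tc_avg = allow_rate - 0.289         # dashboard reports +28.9% vs TC avg

print(f"allow rate: {allow_rate:.1%}")      # allow rate: 83.9%, shown as 84%
print(f"implied TC average: {implied_tc_avg:.1%}")
```

The ratio 669/797 rounds to the displayed 84%, which is consistent with the "+28.9% vs TC avg" delta implying a Tech Center average near 55%.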

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant's summary of the telephone interview on 16 October 2025 is accurate. Claims 1-3, 6-10, 13-17, and 20-23 are pending. Applicant's amendment is insufficient to overcome the previous rejection of all pending claims under 35 U.S.C. 101 and introduces new issues under 35 U.S.C. 112, discussed below.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-23 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The specification as originally filed does not support the now-claimed "identifying, by the one or more processors, the one or more functions to be applied to the third dataset by applying a trained curation framework machine learning model to one or more of the first dataset, the second dataset or the third dataset". Specification paragraph 0035 merely describes: "For instance, the trained curation framework machine learning model 116 may be used to identify a schema or structure for a dataset, to identify fields of the dataset based on their values, to determine appropriate formatting for particular values, to identify functions or transformations to be applied to a dataset, to stitch the dataset with another dataset, and/or to make a prediction or recommendation for an individual associated with a data record in the stitched dataset."

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The specification as originally filed does not support the now-claimed "identifying, by the one or more processors, the one or more functions to be applied to the third dataset by applying a trained curation framework machine learning model to one or more of the first dataset, the second dataset or the third dataset". Specification paragraph 0035 merely describes: "For instance, the trained curation framework machine learning model 116 may be used to identify a schema or structure for a dataset, to identify fields of the dataset based on their values, to determine appropriate formatting for particular values, to identify functions or transformations to be applied to a dataset, to stitch the dataset with another dataset, and/or to make a prediction or recommendation for an individual associated with a data record in the stitched dataset." It is not clear how the one or more processors identify functions to be applied to the third dataset by applying a trained curation framework machine learning model to the dataset. For examination purposes, the limitation is interpreted as "identifying, by the one or more processors, the one or more functions to be applied to the third dataset".

Response to Arguments

Applicant's arguments filed 3 November 2025 have been fully considered but they are not persuasive.

Applicant argues at page 11, first two paragraphs of the response: "However, while Mohamad generally discloses a "function library," Mohamad fails to disclose identifying a particular function from the function library to be applied to a particular database, let alone doing so by applying a trained machine learning model to the particular database and/or other databases.
Consequently, Mohamad fails to disclose "identifying, by the one or more processors, the one or more functions to be applied to the third dataset by applying a trained curation framework machine learning model to one or more of the first dataset, the second dataset, or the third dataset," and "applying, by the one or more processors, the identified one or more functions to the third plurality of data records of the third dataset to produce an output dataset," as recited by claim 1. Claims 8 and 15 recite similar features and are amended similarly. The other cited references do not remedy the deficiencies of Mohamad with respect to these features, nor are they cited for this purpose."

In response, the examiner points out that applicant argues the claims as amended, and that Mohamad was merely cited for the teaching of a function library. Note that, as written, the first and second datasets merely include respective data records having a first and second set of fields. The third set of fields merely comprises fields included in both the first and second sets of fields; thus the third set merely includes fields common to both the first and second datasets. Which function to apply to the common fields is clearly suggested by Burghoffer Fig. 6.

Applicant further argues at page 11 of the response: "Additionally, for similar reasons, the cited references, even in combination, would fail to disclose "training, by the one or more processors, the curation framework machine learning model using training data including one or more of: historical datasets, structures of the historical datasets, schemas of the historical datasets, fields of the historical datasets, values of the historical datasets, formatting associated with the values of the historical datasets, functions that were applied to the historical datasets, ways in which the historical datasets were stitched to other historical datasets, or data associated with individuals in the historical datasets," as recited by newly added claim 21.
Claims 22 and 23 recite similar features. Accordingly, Applicant respectfully requests that the rejection of claims 1, 8, 15, 21, 22, and 23, and their respective dependent claims, under 35 U.S.C. § 103 be withdrawn."

In response, the examiner points out that applicant's arguments regarding the new claims are moot in view of the new grounds of rejection presented in this final Office action.

Regarding the rejection under 35 U.S.C. 101, applicant argues at page 12 of the response: "Here, the claims do not fall under the "mental processes" grouping of abstract ideas because the claims recite limitations that cannot practically be performed in the human mind, including claim limitations that encompass AI in a way that cannot be practically performed in the human mind. In particular, amended claim 1 recites, inter alia, "identifying, by the one or more processors, one or more functions from the library of pre-built functions to be applied to the third dataset by applying a trained curation framework machine learning model to one or more of the first dataset, the second dataset, or the third dataset." Claims 8 and 15 are amended similarly. Applicant respectfully submits that these claims include claim limitations that encompass AI in a way that cannot be practically performed in the human mind, and thus should not fall into the "mental process" grouping of abstract ideas in view of the USPTO Memo."

In response, the examiner points out that, as written, claim 1 merely includes a limitation previously recited in claim 5 and adds "by applying a trained curation framework machine learning model to one or more of the first dataset, the second dataset, or the third dataset". However, the machine learning model, recited at a high level of generality, is considered insignificant extra-solution activity.
Furthermore, accessing a library of pre-built functions and identifying the ones to be applied are mere data gathering and evaluation processes that, under the broadest reasonable interpretation, cover performance of the limitations by a human user.

Applicant further argues at page 13, first paragraph: "Claims 22 and 23 recite similar features. Applicant respectfully submits that these claims also include claim limitations that encompass AI in a way that cannot be practically performed in the human mind, and thus should not fall into the "mental process" grouping of abstract ideas in view of the USPTO Memo."

In response, the examiner points out again that the newly added claims merely recite "train the curation framework machine learning model using training data including one or more of…", considered insignificant extra-solution activity because the limitations, recited at a high level of generality, do not impose any meaningful limits on practicing the abstract idea of their parent claims. Applicant presents no further arguments. For all the reasons discussed above, the rejection of all pending claims under 35 U.S.C. 101 is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 6-10, 13-17, and 20-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Subject matter eligibility analysis of claim 1:

Step 1: Claim 1 recites a method and thus is directed to a process, which is one of the four statutory categories of invention.

Step 2A, Prong 1: The claim recites "retrieving a first dataset…", "retrieving a second dataset…", "analyzing the sets of fields…",
"identifying data records…", and "stitching data records…". These operations are data gathering and record-identification processes that, under the broadest reasonable interpretation, cover performance of the limitations by a human with the aid of pen and paper. That is, other than reciting "by one or more processors", nothing in the claim elements precludes the operations from practically being performed by a human mind with the aid of pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, and opinion). The mere nominal recitation of "one or more processors" does not take the claim limitation out of the mental processes grouping. Thus, the claim recites a mental process.

The added "wherein the first set of fields…" and "wherein the second set of fields…" limitations merely further describe the association of each set of fields and what each set of fields includes. Mere non-functional descriptive material does not make an abstract idea less abstract. Furthermore, the "one or more" recitation merely names the fields and is thus considered insignificant extra-solution activity, not imposing any meaningful limit on the claimed method.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application. The claim recites the additional elements "stitching identified records…", "applying one or more functions to produce an output dataset", "converting…", "comparing…", "accessing…", and
"identifying… by applying a trained curation framework machine learning model to one or more of the first dataset, the second dataset, or the third dataset", all recited at a high level of generality. These are considered mere insignificant extra-solution activity because they do not impose any meaningful limits on practicing the abstract idea, do not improve any technology or technical field, do not apply the judicial exception with or by use of a particular machine, do not add specific limitations other than what is well-understood, routine, conventional activity in the field, do not add unconventional steps that confine the claim to a particular useful application, and do not include other meaningful limitations beyond linking the use of the judicial exception to a particular technological environment.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation of "displaying the output dataset via a user interface" amounts to a mere generic presentation of collected and analyzed data, which is considered insignificant extra-solution activity (see MPEP 2106.05(g)).

Claim 2 further recites "analyze… in order to…", which amounts to mere observation, evaluation, judgment, and opinion. Claim 3 merely further describes the analyzing step, considered mere observation, evaluation, judgment, and opinion. Claim 6 further recites "identifying…", which is mere observation and evaluation. Claim 7 describes the data records and the output dataset, thus merely adding insignificant extra-solution activity to the judicial exception (see MPEP 2106.04(d)(I); MPEP 2106.05(g)). Claim 21 merely further describes "train the curation framework machine learning model using training data including one or more of…", considered insignificant extra-solution activity because it does not impose any meaningful limits on practicing the abstract idea.
Claims 8-10, 13-14, and 22 essentially recite limitations similar to claims 1-3, 6-7, and 21, respectively, in the form of systems, and thus are non-statutory for the same reasons discussed for claims 1-3, 6-7, and 21 above. Claims 15-17, 20, and 23 essentially recite limitations similar to claims 1-3, 6, and 21, respectively, in the form of a non-transitory computer readable storage medium, and thus are mere instructions to apply the judicial exception of claims 1-3, 6, and 21 discussed above. For all the reasons discussed above, no claim is patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6-10, 13-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Burghoffer et al (US 20190057100 A1) of record, in view of St. Clair et al (US 20170270257 A1) of record, further in view of Mohamad et al (US 10909120 B1) of record.
Regarding claim 1, Burghoffer substantially discloses a computer-implemented method for using configurable functions to harmonize data from disparate sources, comprising: retrieving, by one or more processors, a first dataset from a first external data source, the first dataset including a first plurality of data records having values for each of a first set of fields (see at least Fig. 6 block 610).

The difference is that Burghoffer does not specifically show "wherein the first set of fields is associated with one or more patients, and includes one or more of a patient name field, a diagnosis field, an insurance field, a patient address field, a patient phone number field, or a doctor field". However, the method of Burghoffer clearly operates in the context of data shared by several users or all users of a given organization that is a tenant of a multitenant system (see at least [0043]: While each user's data can be stored separately from other users' data regardless of the employers of each user, some data can be organization-wide data shared or accessible by several users or all of the users for a given organization that is a tenant. Thus, there can be some data structures managed by the system 16 that are allocated at the tenant level while other data structures can be managed at the user level. Because an MTS can support multiple tenants including possible competitors, the MTS can have security protocols that keep data, applications, and application use separate. Also, because many tenants may opt for access to an MTS rather than maintain their own system, redundancy, up-time, and backup are additional functions that can be implemented in the MTS. In addition to user-specific data and tenant-specific data, the system 16 also can maintain system level data usable by multiple tenants or other data. Such system level data can include industry reports, news, postings, and the like that are sharable among tenants).
Furthermore, it is customary in the art, as shown by St. Clair, that health record sources include physicians, patients, hospitals and other providers (see at least [0006]: Broadly speaking, there are three sources of health care information about patients: the patients themselves (or their care givers); the patients' physicians, hospitals and other providers; and the patients' health plan or other payer). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include associating the first set of fields with one or more patients, the set including one or more of a patient name field, a diagnosis field, an insurance field, a patient address field, a patient phone number field, or a doctor field, while implementing the method of Burghoffer, in order to allow different users of a health care system to benefit from the multitenant database taught by Burghoffer.

Burghoffer/St. Clair further teaches: retrieving, by the one or more processors, a second dataset from a second external data source, distinct from the first external data source, the second dataset including a second plurality of data records having values for each of a second set of fields (see at least Burghoffer Fig. 6 block 630); wherein the second set of fields is associated with one or more customers, and includes one or more of: a customer name field, a loyalty identification number field, a customer address field, a customer phone number field, or a purchases field (Burghoffer [0045]: … For example, a CRM database can include a table that describes a customer with fields for basic contact information such as name, address, phone number, fax number, etc. Another table can describe a purchase order, including fields for information such as customer, product, sale price, date, etc. In some MTS implementations, standard entity tables can be provided for use by all tenants.
For CRM database applications, such standard entities can include tables for case, account, contact, lead, and opportunity data objects, each containing pre-defined fields. As used herein, the term "entity" also may be used interchangeably with "object" and "table."). Note the limitations are recited in the alternative.

Burghoffer/St. Clair further teaches: analyzing, by the one or more processors, the first set of fields and the second set of fields to identify a third set of fields, the third set of fields being fields included in both the first set of fields and the second set of fields (see at least Burghoffer Fig. 6 block 650); and converting, by the one or more processors, one or more values for a field of the third set of fields in the first dataset from a first format associated with the first dataset to a second format associated with the second dataset (see at least Burghoffer [0078]: The capability fields 312 may characterize prospective treatment (e.g., how to jointly process) data in underlying fields. Accordingly, in the depicted example, the "from" email field may be classified as a "sender" capability (e.g., capable of sending emails) and the "to," "cc," and "bcc" fields may be classified as a "recipient" capability (e.g., capable of receiving emails). Furthermore, the "body" may be classified as "body" capability (e.g., contains body text) and the "date" field may be classified with "date" capability (e.g., contains a date formatted in some way). Another capability may be a location, such as latitude and longitude, which may be associated with social media or the location a picture was taken, or the like. Still another capability may be a "summary," with reference to a subject line in an email or a calendar invitation. Additional or different capabilities may be employed, as discussed herein; [0109]: Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory.
These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.). Note the claimed "converting data" is met by the fact that data from disparate sources are jointly processed in the method of Burghoffer; note also that, as written, the claimed converting does not require any particular algorithm.

Burghoffer further teaches: comparing, by the one or more processors, the converted one or more values for the field of the third set of fields in the first dataset to one or more values for the field of the third set of fields in the second dataset (see at least Burghoffer [0078]: The capability fields 312 may characterize prospective treatment (e.g., how to jointly process) data in underlying fields. Accordingly, in the depicted example, the "from" email field may be classified as a "sender" capability (e.g., capable of sending emails) and the "to," "cc," and "bcc" fields may be classified as a "recipient" capability (e.g., capable of receiving emails). Furthermore, the "body" may be classified as "body" capability (e.g., contains body text) and the "date" field may be classified with "date" capability (e.g., contains a date formatted in some way). Another capability may be a location, such as latitude and longitude, which may be associated with social media or the location a picture was taken, or the like.
Still another capability may be a "summary," with reference to a subject line in an email or a calendar invitation. Additional or different capabilities may be employed, as discussed herein; [0110]: It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "determining," "identifying," "adding," "selecting," or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.). Note that, as written, the comparing merely results in identifying matching field values and does not require any specific algorithm; it thus reads on the fact that data from disparate sources is jointly processed in the method of Burghoffer.

Burghoffer further teaches: identifying, by the one or more processors, based on the comparing, one or more data records of the first plurality of data records, and one or more respective data records of the second plurality of data records, having matching values for fields of the third set of fields (see at least Burghoffer Fig. 6 block 650); and stitching, by the one or more processors, each identified data record of the first plurality of data records with each respective identified data record of the second plurality of data records in order to generate a third dataset including a third plurality of data records having values for each of the first set of fields and for each of the second set of fields (see at least Burghoffer Fig. 6 block 660).
Burghoffer/St. Clair does not specifically show accessing, by the one or more processors, a library of pre-built functions, and identifying, by the one or more processors, the one or more functions to be applied to the third dataset (by applying a trained curation framework machine learning model to one or more of the first dataset, the second dataset or the third dataset). However, it is customary in the art to use pre-built functions from a library, as shown by Mohamad (see at least col. 15 lines 5-22). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teachings of Mohamad while implementing the method of Burghoffer/St. Clair because the library of pre-built functions of Mohamad would provide readily available functions usable by programmers/developers in processing data for Burghoffer/St. Clair (see at least Burghoffer [0095]-[0096]). Note the claim language "to be applied to the third dataset" seems merely intentional and does not actually require performing any operation.

Burghoffer/St. Clair/Mohamad further teaches: applying, by the one or more processors, one or more identified functions to the third plurality of data records of the third dataset to produce an output dataset (see at least Burghoffer Fig. 6 block 660); and displaying, by the one or more processors, the output dataset via a user interface (see at least Burghoffer Fig. 6 block 660).
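For orientation only, the claim 1 steps walked through above (retrieve datasets, identify common fields, convert formats, compare, stitch, apply an identified function, output) can be sketched as ordinary record-linkage code. All field names, sample values, and the phone-number normalization and masking functions are hypothetical illustrations, not taken from the specification or the cited prior art:

```python
# Hypothetical sketch of the claim 1 harmonization steps. Field names,
# sample values, and the chosen functions are illustrative assumptions.

patients = [   # first dataset (first set of fields)
    {"patient_name": "A. Smith", "patient_phone": "(555) 010-2000",
     "diagnosis": "D1"},
]
customers = [  # second dataset (second set of fields)
    {"customer_name": "A. Smith", "customer_phone": "5550102000",
     "purchases": ["rx refill"]},
]

def normalize_phone(value: str) -> str:
    """'Converting' step: unify phone formats by keeping digits only."""
    return "".join(ch for ch in value if ch.isdigit())

def stitch(first, second):
    """'Comparing' converted values and 'stitching' matching records."""
    out = []
    for a in first:
        for b in second:
            if normalize_phone(a["patient_phone"]) == \
               normalize_phone(b["customer_phone"]):
                out.append({**a, **b})  # merged (stitched) record
    return out

third_dataset = stitch(patients, customers)

# "Applying one or more identified functions" to produce the output
# dataset; here a hypothetical masking function stands in:
mask = lambda rec: {**rec, "patient_phone": "***"}
output_dataset = [mask(r) for r in third_dataset]
print(len(output_dataset))  # 1 stitched record
```

The sketch makes the dispute concrete: everything above is a fixed pipeline, whereas the amended claim has a trained model choose which function to apply, which is the limitation the examiner interprets away for examination purposes.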
Regarding claim 2, Burghoffer/St. Clair/Mohamad teaches the computer-implemented method of claim 1, further comprising: analyzing, by the one or more processors, the first dataset in order to identify the first set of fields; and analyzing, by the one or more processors, the second dataset in order to identify the second set of fields (see at least Burghoffer [0020]: An extractor may be employed to extract textual information (e.g., via natural language processing or other code that analyzes text) of any field in a data record classified with a "body" capability, or a body-like capability, such as a date or future date, which can also be expressed with text, as will be discussed in more detail. Different extractors may contain or be programmed with different natural processing algorithms in order to extract predetermined type of information within the text of the field data being processed. The extractors may also include regular expressions, pattern matching, or machine-learned models; [0021] A handle may describe how an underlying field relates to external data, e.g., to records located in a networked cloud or to data stored in a different source within the data pipelines than the database of the subject records. The handles may facilitate the data pipeline (or coupled system) in performing a lookup of an external or separate database to find second data that is correlated to the first data of the field in the database record, as will be further explained. [0022] In one implementation, the disclosed system may access, with relation to a data pipeline, first data of a first data type, and identify a first field within the first data that is classified with a first capability. The disclosed system may also access, with relation to the data pipeline, second data of a second data type, and identify a second field within the second data that is also classified with the first capability.
The disclosed system may then execute, using a processing device, processing logic on a combination of the first data within the first field and the second data within the second field in a way consistent with the first capability, and thus be able to jointly process data from disparate sources that are classified with the same capability (e.g., "body" capability). The disclosed system may then generate a data file as an output from execution of the processing logic, the data file being independent from the first data and the second data).

Regarding claim 3, Burghoffer/St. Clair/Mohamad teaches the computer-implemented method of claim 2, wherein analyzing the first dataset in order to identify the first set of fields includes analyzing the respective values of each field of the first set of fields in order to identify the first set of fields, and wherein analyzing the second dataset in order to identify the second set of fields includes analyzing the respective values of each field of the second set of fields in order to identify the second set of fields (see at least Burghoffer Fig. 7).

Regarding claim 6, Burghoffer/St. Clair/Mohamad teaches the computer-implemented method of claim 1, wherein identifying the one or more functions to be applied to the third dataset is based on the identified fields of the third set of fields (see at least Burghoffer Fig. 6).

Regarding claim 7, Burghoffer/St. Clair/Mohamad teaches the computer-implemented method of claim 1, wherein each data record of the third dataset is associated with an individual (see at least Burghoffer [0070]), and wherein the output dataset includes recommendations or predictions for the individuals associated with the data records of the third dataset (see at least Burghoffer [0067]).

Claims 8-10 and 13-14 correspond to a system performing the method of claims 1-3 and 6-7, and thus are rejected for the same reasons discussed for claims 1-3 and 6-7 above.
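The extractor passages cited for claims 2-3 (identifying a set of fields by analyzing the fields' values via regular expressions or pattern matching) amount to value-based type inference. A minimal illustrative sketch, with the specific patterns and labels chosen here as assumptions rather than anything disclosed by Burghoffer:

```python
import re

# Illustrative sketch of identifying fields from their values, in the
# spirit of the cited extractor / pattern-matching passages. The
# patterns and labels below are assumptions for illustration only.

PATTERNS = {
    "phone": re.compile(r"^\(?\d{3}\)?[ -]?\d{3}-?\d{4}$"),
    "date":  re.compile(r"^\d{4}-\d{2}-\d{2}$"),
}

def classify_field(values):
    """Label a field when every sample value matches one pattern."""
    for label, pattern in PATTERNS.items():
        if all(pattern.match(v) for v in values):
            return label
    return "text"

print(classify_field(["555-010-2000", "(555) 010-2000"]))  # phone
print(classify_field(["2023-06-07"]))                      # date
print(classify_field(["A. Smith"]))                        # text
```

Analyzing "the respective values of each field" in this way is what lets two datasets with differently named columns be recognized as sharing a common (third) set of fields.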
Claims 15-17 and 20 correspond to a non-transitory computer program product storing instructions for performing the method of claims 1-3 and 6 and thus are rejected for the same reasons discussed in claims 1-3 and 6 above.

Claims 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Burghoffer et al (US 20190057100 A1) of record, in view of St. Clair et al (US 20170270257 A1) of record, in view of Mohamad et al (US 10909120 B1) of record, and further in view of Shi et al (US 20200394592 A1).

Regarding claim 21, Burghoffer/St. Clair/Mohamad does not specifically show the computer-implemented method of claim 1, further comprising: training, by the one or more processors, the curation framework machine learning model using training data including one or more of: historical datasets, structures of the historical datasets, schemas of the historical datasets, fields of the historical datasets, values of the historical datasets, formatting associated with the values of the historical datasets, functions that were applied to the historical datasets, ways in which the historical datasets were stitched to other historical datasets, or data associated with individuals in the historical datasets. However, it is customary in the art to train models using historical attribute values associated with users, as shown by Shi ([0021] Machine-learned model 142 is automatically trained using one or more machine learning techniques. Machine learning is the study and construction of algorithms that can learn from, and make predictions on, data. Such algorithms operate by building a model from inputs in order to make data-driven predictions or decisions. Thus, a machine learning technique is used to generate a statistical model that is trained based on a history of attribute values associated with users. The statistical model is trained based on multiple attributes.
In machine learning parlance, such attributes are referred to as "features." To generate and train a statistical prediction model, a set of features is specified and a set of training data is identified). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include such features while implementing the method of Burghoffer/St. Clair/Mohamad in order to generate a statistical model using machine learning techniques.

Claims 22 and 23 recite limitations similar to claim 21 in the form of a system and a non-transitory computer program product storing instructions for performing the method of claim 21 and thus are rejected for the same reasons discussed in claim 21 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

BANSAL et al (US 20190188313 A1) teach a method for linking records from different datasets based on record similarities. The method includes ingesting a first dataset, including a first set of records with a first set of fields, wherein the first dataset is associated with a first vendor and a first type of data, and a second dataset, including a second set of records with a second set of fields, wherein the second dataset is associated with a second vendor and a second type of data; determining that a first record from the first set of records is similar to a second record from the second set of records based on similarities between fields in the first and second sets of fields; and linking the first and second records in response to determining the similarity, wherein the first and second vendors are different and/or the first and second types of data are different.

Knuesel et al (US 20220405651 A1) teach embodiments directed to federated machine learning, including inference and training. Inference may be done by applying multiple machine learnable models to a mapped record.
The mapped record may be obtained by applying a mapping rule to a local record. The mapping rule may generalize or extend data features in the local record. A machine learning engine may be HANA APL, which is an Application Function Library (AFL) configured with functions supporting model training and model inference. Other libraries may be used instead of HANA APL; for example, one may use Google TensorFlow for the machine learning engine.

Yang et al (US 20210406598 A1) teach techniques for detecting label shift and adjusting training data of predictive models in response. In an embodiment, a first machine-learned model is used to generate a predicted label for each of multiple scoring instances. The first machine-learned model is trained using one or more machine learning techniques based on a plurality of training instances, each of which includes an observed label. In response to detecting a shift in observed labels, for each segment of one or more segments in multiple segments, a portion of training data that corresponds to the segment is identified. For each training instance in a subset of the portion of training data, the training instance is adjusted. The adjusted training instance is added to a final set of training data. The machine learning technique(s) are used to train a second machine-learned model based on the final set of training data. A machine learning technique is used to generate a statistical model that is trained based on a history of attribute values associated with one or more objects. The statistical model is trained based on multiple attributes described herein. In machine learning parlance, such attributes are referred to as "features." To generate and train a statistical model, a set of features is specified and a set of training data is identified.
Sarferaz (US 20210209501 A1) teaches systems and methods for receiving a request for data associated with a particular functionality of an application, identifying a first attribute for which data is to be generated to fulfill the request, and determining that the first attribute corresponds to data to be generated by a first machine learning model. The systems and methods further provide for executing a view or procedure to generate data for input to the first machine learning model, inputting the generated data into the first machine learning model, and receiving output from the first machine learning model. The output is provided in response to the request for data associated with the particular functionality of the application. The predictive analysis library (PAL) and automated predictive library (APL) application function libraries offer statistical and data mining machine learning algorithms.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UYEN T LE, whose telephone number is (571) 272-4021. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ajay M Bhatia, can be reached at (571) 272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/UYEN T LE/
Primary Examiner, Art Unit 2156
4 February 2026

Prosecution Timeline

Jun 07, 2023
Application Filed
Sep 14, 2024
Non-Final Rejection — §101, §103, §112
Nov 21, 2024
Interview Requested
Dec 05, 2024
Applicant Interview (Telephonic)
Dec 05, 2024
Examiner Interview Summary
Dec 16, 2024
Response Filed
Mar 11, 2025
Final Rejection — §101, §103, §112
May 05, 2025
Interview Requested
May 13, 2025
Applicant Interview (Telephonic)
May 13, 2025
Examiner Interview Summary
May 16, 2025
Response after Non-Final Action
Jun 16, 2025
Request for Continued Examination
Jun 18, 2025
Response after Non-Final Action
Aug 04, 2025
Non-Final Rejection — §101, §103, §112
Oct 06, 2025
Interview Requested
Oct 16, 2025
Examiner Interview Summary
Oct 16, 2025
Applicant Interview (Telephonic)
Nov 03, 2025
Response Filed
Feb 04, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591550
SHARE REPLICATION BETWEEN REMOTE DEPLOYMENTS
2y 5m to grant Granted Mar 31, 2026
Patent 12591540
DATA MIGRATION IN A DISTRIBUTIVE FILE SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12581301
MEDIA AGNOSTIC CONTENT ACCESS MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12579189
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR GENERATING OBJECT IDENTIFIER
2y 5m to grant Granted Mar 17, 2026
Patent 12561371
GRAPH OPERATIONS ENGINE FOR TENANT MANAGEMENT IN A MULTI-TENANT SYSTEM
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
84%
Grant Probability
94%
With Interview (+9.7%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 797 resolved cases by this examiner. Grant probability derived from career allow rate.
