DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f): (FP 7.30.03)
(f) ELEMENT IN CLAIM FOR A COMBINATION-An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. (FP 7.30.05)
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a selection component, a transferability component, and a training component in claim 18, and a database in claim 20.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. (FP 7.30.06)
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B). If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself. Applicant is advised to consult the 2019 PEG for more details of the analysis.
Step 1
According to the first part of the analysis, in the instant case, claims 1-10 are directed to a method, claims 11-17 are directed to a method, and claims 18 and 20 are directed to an apparatus for transfer learning. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Step 2A, Prong 1
Following the determination of whether or not the claims fall within one of the four categories (Step 1), it must be determined if the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity) (Step 2A, Prong 1). In this case, the claims are determined to recite a judicial exception, as explained below.
Regarding claims 1, 11, and 18, these claims recite the following.
Claim 1 and claim 18 recite obtaining a target dataset, a source dataset, and a machine learning model trained on the source dataset; selecting a hard subset of the target dataset by computing a hardness score for a sample of the target dataset based on a similarity between the sample of the target dataset and samples of the source dataset, binning the target dataset based on the hardness score to obtain a plurality of bins, and selecting the hard subset of the target dataset based on the plurality of bins; selecting the target dataset as a training set for the machine learning model based on the hard subset of the target dataset by computing a transferability metric for the target dataset based on the hard subset of the target dataset and selecting the target dataset based on the transferability metric; and training the machine learning model using the target dataset based on the selection of the target dataset as the training set.
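For illustration purposes only, the data-selection pipeline recited in claims 1 and 18 can be sketched as follows. All function names, the cosine-similarity measure, the quantile binning, and the mean-score transferability metric are the examiner's illustrative assumptions, not the applicant's disclosed implementation:

```python
import numpy as np

def hardness_scores(target, source):
    """Hardness score per target sample: 1 minus its best cosine
    similarity to any source sample (illustrative choice of measure)."""
    t = target / np.linalg.norm(target, axis=1, keepdims=True)
    s = source / np.linalg.norm(source, axis=1, keepdims=True)
    sim = t @ s.T                    # pairwise cosine similarities
    return 1.0 - sim.max(axis=1)     # least similar to source => hardest

def hard_subset(target, scores, n_bins=4):
    """Bin the target dataset by hardness score and keep the hardest bin."""
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.digitize(scores, edges[1:-1])  # bin index in 0..n_bins-1
    return target[bins == n_bins - 1]

def transferability(model_score_fn, subset):
    """Transferability metric: mean model score on the hard subset
    (a stand-in for whatever metric the claims intend)."""
    return float(np.mean([model_score_fn(x) for x in subset]))
```

Under this sketch, the target dataset would be selected as the training set when its transferability metric satisfies some criterion, and the machine learning model would then be trained on it.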
Claim 11 recites obtaining input data in a target domain; identifying a machine learning model for the target domain, wherein the machine learning model is trained on a source dataset of a source domain and fine-tuned on a target dataset of the target domain, and wherein the machine learning model is fine-tuned by computing a hardness score for a sample of a target dataset based on a similarity between the sample of the target dataset and samples of the source dataset, binning the target dataset based on the hardness score to obtain a plurality of bins, selecting a hard subset of the target dataset based on the plurality of bins, computing a transferability metric for the target dataset based on the hard subset of the target dataset, and selecting the target dataset based on the transferability metric; and generating a label for the input data using the machine learning model.
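Similarly, for illustration purposes only, claim 11's inference path (a model trained on the source domain, fine-tuned on a selected hard subset of the target dataset, then used to label target-domain input) can be sketched with a toy model. The nearest-centroid model and its fine-tuning rule below are the examiner's illustrative assumptions, not the applicant's disclosure:

```python
import numpy as np

class NearestCentroidModel:
    """Toy stand-in for the claimed machine learning model: fitting on the
    source dataset computes per-class centroids; fine-tuning shifts those
    centroids toward the selected hard subset of the target dataset."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def fine_tune(self, X_hard, y_hard, lr=0.5):
        # Move each class centroid part of the way toward the hard samples.
        for i, c in enumerate(self.classes_):
            mask = y_hard == c
            if mask.any():
                self.centroids_[i] += lr * (X_hard[mask].mean(axis=0) - self.centroids_[i])
        return self

    def predict(self, x):
        # Generate a label for the input data: the nearest centroid wins.
        return self.classes_[np.argmin(np.linalg.norm(self.centroids_ - x, axis=1))]
```

The final "generating a label" step of claim 11 then corresponds to a single `predict` call on input data from the target domain.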
The claims recite a mental process. As set forth in MPEP 2106.04(a)(2)(III)(C), “Claims can recite a mental process even if they are claimed as being performed on a computer.” These steps are recited at a high level of generality and are disclosed as functions a human user could perform, simply using a computer as a tool (see the specification at [0036]-[0047], Fig. 1, etc.). Thus, the claims recite an abstract idea.
Step 2A, Prong 2
Following the determination that the claims recite a judicial exception, it must be determined if the claims recite additional elements that integrate the exception into a practical application of the exception (Step 2A, Prong 2). In this case, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include additional elements that integrate the exception into a practical application of the exception, as explained below.
In Prong Two, a claim is evaluated as a whole to determine whether the recited judicial exception is integrated into a practical application of that exception. A claim is not “directed to” a judicial exception, and thus is patent eligible, if the claim as a whole integrates the recited judicial exception into a practical application of that exception. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. MPEP 2106.04(d). Here, the claims recite an abstract idea, and the claims as a whole do not integrate the recited judicial exception into a practical application of the exception.
Regarding claims 1, 11, and 18, these claims recite using one or more neural networks as a tool to perform an abstract idea, which is not indicative of integration into a practical application. MPEP 2106.05(f).
These limitations are understood to be generic computer equipment and mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)).
MPEP § 2106.05(f): Mere Instructions to Apply an Exception. Do the additional element(s) amount to merely the words “apply it” (or an equivalent), or are they mere instructions to implement an abstract idea or other exception on a computer? (Yes)
Step 2B
Based on the determination in Step 2A of the analysis that the claims are directed to a judicial exception, it must be determined if the claims contain any element or combination of elements sufficient to ensure that the claim amounts to significantly more than the judicial exception (Step 2B). In this case, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons given above in the Step 2A, Prong 2 analysis. Furthermore, each additional element identified above as being insignificant extra-solution activity is also well-understood, routine, and conventional, as described below.
Claims 1, 11, and 18: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components and a field of use/technological environment, which do not amount to significantly more than the abstract idea. The underlying concept merely receives information, analyzes it, and stores the results of the analysis; this concept is not meaningfully different from concepts found by the courts to be abstract (see Electric Power Group, collecting information, analyzing it, and displaying certain results of the collection and analysis; see Cybersource, obtaining and comparing intangible data; see Digitech, organizing information through mathematical correlations; see Grams, diagnosing an abnormal condition by performing clinical tests and thinking about the results; see Cyberfone, using categories to organize, store, and transmit information; see Smartgene, comparing new and stored information and using rules to identify options). The additional elements, when considered both individually and as a combination, do not amount to significantly more than the abstract idea. For example, claim 1 further recites the steps of “obtaining…”, “selecting…”, “selecting…”, and “training…”, and claim 11 further recites “obtaining…”, “identifying…”, and “generating…”. These elements are recited at a high level of generality and are well-understood, routine, and conventional activities in the computer art. Generic computers performing generic computer functions, without an inventive concept, do not amount to significantly more than the abstract idea.
Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims do not amount to significantly more than the abstract idea itself.
Step 2A/2B Prong 2 Dependent Claims
Regarding claims 2-3 and 12-13
Claims 2-3 and 12-13 merely recite additional elements that compute embeddings using the ML model. These elements perform generic functions, and looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 4 and 14
Claims 4 and 14 merely recite additional elements that compute a hardness score for the dataset. These elements perform generic functions, and looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 5 and 15
Claims 5 and 15 merely recite additional elements that compute a matrix between the datasets. These elements perform generic functions, and looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 6 and 16
Claims 6 and 16 merely recite additional elements that sort the datasets. These elements perform generic functions, and looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 7 and 17
Claims 7 and 17 merely recite additional elements that determine the transferability metric. These elements perform generic functions, and looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 8-9
Claims 8 and 9 merely recite additional elements that select a subset of data based on similarity between the datasets, compute a transferability metric, and refrain from training the model based on the transferability metric. These elements perform generic functions, and looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claim 10
Claim 10 merely recites additional elements that define the dataset. These elements perform generic functions, and looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, this claim also does not amount to significantly more than the abstract idea itself. This claim is not patent eligible.
Regarding claim 20
Claim 20 merely recites additional elements that define the components of the system. These elements perform generic functions, and looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, this claim also does not amount to significantly more than the abstract idea itself. This claim is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 8-9, 11-13, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Tomlinson et al. (Tomlinson), US 2023/0128832, in view of Lurie et al. (Lurie), US 2022/0180762, and Dohrn et al. (Dohrn), US 2024/0144168.
In regard to claim 1, Tomlinson discloses A method for transfer learning, ([0003]-[0005] transferring recommendation) comprising:
obtaining a target dataset, a source dataset, and a machine learning model trained on the source dataset; ([0021]-[0023] [0039]-[0045] obtain data from the organizations for source and data from the organizations for target, training a ML model on the source cluster using the data obtained from source)
selecting a hard subset of the target dataset based on a similarity between the sample of the target dataset and samples of the source dataset; ([0021]-[0023][0039]-[0045] identify data from a subset of organizations based on homogeneity between the subset data and the data from source organizations) and
selecting the target dataset as a training set for the machine learning model based on the hard subset of the target dataset by computing a transferability metric for the target dataset based on the hard subset of the target dataset; ([0003]-[0005] [0021]-[0023][0026]-[0030][0039]-[0045] [0051]-[0052] [0078] [0087] calculate a transferability metric for the target organizations based on the subset of data from the organizations and select the target dataset as the best training data for the ML model based on the subset of the target dataset identified) and
selecting the target dataset based on the transferability metric; ([0003]-[0005] [0021]-[0023][0026]-[0030] [0039]-[0045] [0051]-[0052] [0078] [0087] the dataset is selected based on the transferability metric. Note: please further define the relationships between the hardness score, the similarity, and the transferability metric to help move prosecution forward; “based on” is very broad)
training the machine learning model using the target dataset based on the selection of the target dataset as the training set. ([0003]-[0005] [0021]-[0023][0026]-[0030] [0039]-[0045] [0051]-[0052] [0078] [0087] training the ML model using the subset of the data based on the selection of the subset of the organizations with the best training data)
But Tomlinson fails to explicitly disclose “selecting the hard subset of the target dataset by computing a hardness score for a sample of the target dataset based on the similarity between the sample of the target dataset and samples of the source dataset,
classifying the target dataset based on the hardness score to obtain a plurality of groups;”
Lurie discloses selecting the hard subset of the target dataset by computing a hardness score for a sample of the target dataset based on the similarity between the sample of the target dataset and samples of the source dataset, ([0005]-[0007][0025]-[0036] [0039]-[0040] selecting the subset of the dataset by computing a learnability score which represents learning difficulty based on similarity of the characteristics between the source and target samples)
classifying the target dataset based on the hardness score to obtain a plurality of groups; ([0005]-[0007][0025]-[0033] [0039]-[0040] classifying the target dataset based on the learnability score to obtain groups, here it discloses action conditions)
selecting the hard subset of the target dataset based on the plurality of groups; ([0005]-[0007][0025]-[0033] [0039]-[0040] selecting the target dataset based on the classified groups)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Lurie's computer-assisted training in machine learning into Tomlinson's invention, as they are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Lurie's selection of a subset of data based on its learnability would provide an additional training-data identification method in Tomlinson's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing an additional training-data identification method based on learnability would help to improve the training efficiency of the ML model.
But Tomlinson and Lurie fail to explicitly disclose “binning the target dataset to obtain a plurality of bins; selecting the hard subset of the target dataset based on the plurality of bins;”
Dohrn discloses binning the target dataset to obtain a plurality of bins; ([0022]-[0025][0035] output categories of bins of data)
selecting the hard subset of the target dataset based on the plurality of bins; ([0022]-[0028][0036] selecting the data based on the categories of bins of data. Note: please further define how the hard subset of the target dataset is selected based on the plurality of bins to help move prosecution forward; “based on” is very broad)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Dohrn's data classification in ML training into Lurie and Tomlinson's invention, as they are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Dohrn's classification of data into bins for training the ML model would provide an additional training-data classification method in Lurie and Tomlinson's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing an additional training-data classification method would help to improve the training efficiency of the ML model.
In regard to claim 8, Tomlinson, Lurie and Dohrn disclose The method of claim 1,
Tomlinson discloses further comprising:
obtaining an alternative source dataset and an alternative machine learning model trained on the alternative source dataset; ([0021]-[0023] [0039]-[0045] obtain data from various cluster of organizations form the source and training a ML model on the source cluster)
selecting an alternative hard subset of the target dataset based on a similarity between the hard subset and the alternative source dataset; ([0021]-[0023][0039]-[0045] identify data from a subset of organizations based on homogeneity between the subset data and the data from source cluster organizations)
computing an alternative transferability metric for the target dataset based on the alternative hard subset of the target dataset; ([0003]-[0005] [0021]-[0023][0026]-[0030] [0051]-[0052] [0078] [0087] calculate a transferability metric for the target organizations based on the subset of data from the organizations) and
refraining from training the alternative machine learning model using the target dataset based on the alternative transferability metric. ([0003]-[0005][0021]-[0023] [0039]-[0045] [0051]-[0052] [0078] [0087] only the organizations (e.g., domains) including data that result in the most robust models are used (selected) for training which is based on the transferability metric (the highest performance metric) and the rest are discarded)
In regard to claim 9, Tomlinson, Lurie and Dohrn disclose The method of claim 1,
Tomlinson discloses further comprising:
obtaining an alternative target dataset; ([0021]-[0023] [0039]-[0045] obtain data from various cluster of organizations for target)
selecting an alternative hard subset of the alternative target dataset based on a similarity between the alternative hard subset and the source dataset; ([0021]-[0023][0039]-[0045] identify data from a subset of organizations for the target based on homogeneity between the subset data and the data from source organizations)
computing an alternative transferability metric for the alternative target dataset based on the alternative hard subset of the alternative target dataset; ([0003]-[0005] [0021]-[0023][0026]-[0030] [0051]-[0052] [0078] [0087] calculate a transferability metric for the target organizations based on the subset of data from the organizations) and
refraining from training the machine learning model using the alternative target dataset based on the alternative transferability metric. ([0003]-[0005][0021]-[0023] [0039]-[0045] [0051]-[0052] [0078] [0087] only the organizations (e.g., domains) including data that result in the most robust models are used (selected) for training which is based on the transferability metric (the highest performance metric) and the rest are discarded)
In regard to claim 11, Tomlinson discloses A method for transfer learning, ([0003]-[0005] transferring recommendation) comprising:
obtaining input data in a target domain; ([0021]-[0023] [0039]-[0045] obtain data from the target organizations (domain))
identifying a machine learning model for the target domain, ([0021]-[0023] [0039]-[0045] select a ML model for the target organizations (domain))
wherein the machine learning model is trained on a source dataset of a source domain ([0021]-[0023] [0039]-[0045] training a ML model on the source cluster using the data obtained from cluster of source organizations)
computing a transferability metric for the target dataset based on the hard subset of the target dataset; ([0003]-[0005] [0021]-[0023][0026]-[0030] [0051]-[0052] [0078] [0087] calculate a transferability metric for the target organizations based on the subset of data from the organizations) and
selecting the target dataset based on the transferability metric; ([0003]-[0005] [0021]-[0023][0026]-[0030] [0039]-[0045] [0051]-[0052] [0078] [0087] the dataset is selected based on the transferability metric)
But Tomlinson fails to explicitly disclose “by computing a hardness score for a sample of the target dataset based on the similarity between the sample of the target dataset and samples of the source dataset, classifying the target dataset based on the hardness score to obtain a plurality of groups;”
Lurie discloses computing a hardness score for a sample of the target dataset based on the similarity between the sample of the target dataset and samples of the source dataset, ([0005]-[0007][0025]-[0036] [0039]-[0040] computing a learnability score which represents learning difficulty based on similarity of the characteristics between the source and target samples)
classifying the target dataset based on the hardness score to obtain a plurality of groups; ([0005]-[0007][0025]-[0033] [0039]-[0040] classifying the target dataset based on the learnability score to obtain groups, here it discloses action conditions)
selecting the hard subset of the target dataset based on the plurality of groups; ([0005]-[0007][0025]-[0033] [0039]-[0040] selecting the target dataset based on the classified groups)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Lurie's computer-assisted training in machine learning into Tomlinson's invention, as they are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Lurie's selection of a subset of data based on its learnability would provide an additional training-data identification method in Tomlinson's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing an additional training-data identification method based on learnability would help to improve the training efficiency of the ML model.
But Tomlinson and Lurie fail to explicitly disclose “and the machine learning model is fine-tuned based on a target dataset of the target domain, and wherein the machine learning model is fine-tuned by computing the parameter, binning the target dataset to obtain a plurality of bins; selecting the hard subset of the target dataset based on the plurality of bins; and generating a label for the input data using the machine learning model.”
Dohrn discloses that the machine learning model is fine-tuned based on a target dataset of the target domain ([0047]-[0052]: the ML model is tuned based on the target data of the target domain), and wherein the machine learning model is fine-tuned by computing the parameter ([0047]-[0052]: the ML model is tuned based on the calculated parameters);
binning the target dataset to obtain a plurality of bins ([0022]-[0025], [0035]: output categories of bins of data);
selecting the hard subset of the target dataset based on the plurality of bins ([0022]-[0028], [0036]: selecting the data based on the categories of bins of data);
and generating a label for the input data using the machine learning model ([0022]-[0027], [0035]: output categories of bins of data and generating a label for the input data using the ML model).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Dohrn's data classification in ML training into Tomlinson and Lurie's invention, as the references are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Dohrn's classification of data into bins for training the ML model would provide an additional training-data classification method for Tomlinson and Lurie's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing an additional training-data classification method would help improve the training efficiency of the ML model.
In regard to claims 12-13, claims 12-13 are method claims corresponding to method claims 2-3 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 2-3.
In regard to claim 18, Tomlinson discloses an apparatus for transfer learning ([0003]-[0005]: transferring recommendation), comprising:
at least one processor; a memory storing instructions executable by the at least one processor ([0026]-[0030]: a processor, a memory, and instructions);
a selection component configured to select a hard subset of a target dataset based on a similarity between a sample of the target dataset and samples of a source dataset used to train a machine learning model ([0021]-[0023], [0026]-[0030], [0039]-[0045]: identifying data, by the processor, from a subset of organizations for the target based on homogeneity between the subset data and the data from source organizations); and
a transferability component configured to select the target dataset as a training set for the machine learning model based on the hard subset of the target dataset by computing a transferability metric for the target dataset and the machine learning model based on the hard subset of the target dataset ([0003]-[0005], [0021]-[0023], [0026]-[0030], [0039]-[0045], [0051]-[0052], [0078], [0087]: calculating a transferability metric for the target organizations based on the subset of data from the organizations and selecting the target dataset as the best training data for the ML model based on the identified subset of the target dataset) and
selecting the target dataset based on the transferability metric ([0003]-[0005], [0021]-[0023], [0026]-[0030], [0039]-[0045], [0051]-[0052], [0078], [0087]: the dataset is selected based on the transferability metric); and
a training component configured to train the machine learning model using the target dataset based on the selection of the target dataset as the training set ([0003]-[0005], [0021]-[0023], [0026]-[0030], [0039]-[0045], [0051]-[0052], [0078], [0087]: training the ML model using the subset of the data based on the selection of the subset of the organizations with the best training data).
But Tomlinson fails to explicitly disclose “select the hard subset of the target dataset by computing a hardness score for a sample of the target dataset based on the similarity between the sample of the target dataset and samples of the source dataset,
classifying the target dataset based on the hardness score to obtain a plurality of groups;”
Lurie discloses selecting the hard subset of the target dataset by computing a hardness score for a sample of the target dataset based on the similarity between the sample of the target dataset and samples of the source dataset ([0005]-[0007], [0025]-[0036], [0039]-[0040]: selecting the subset of the dataset by computing a learnability score, which represents learning difficulty, based on the similarity of characteristics between the source and target samples);
classifying the target dataset based on the hardness score to obtain a plurality of groups ([0005]-[0007], [0025]-[0033], [0039]-[0040]: classifying the target dataset based on the learnability score to obtain groups; here the reference discloses action conditions); and selecting the hard subset of the target dataset based on the plurality of groups ([0005]-[0007], [0025]-[0033], [0039]-[0040]: selecting the target dataset based on the classified groups).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Lurie's computer-assisted ML training into Tomlinson's invention, as the references are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Lurie's selection of a subset of data based on its learnability would provide an additional training-data identification method for Tomlinson's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing an additional training-data identification method based on learnability would help improve the training efficiency of the ML model.
But Tomlinson and Lurie fail to explicitly disclose “binning the target dataset to obtain a plurality of bins; and selecting the hard subset of the target dataset based on the plurality of bins;”
Dohrn discloses binning the target dataset to obtain a plurality of bins ([0022]-[0025], [0035]: output categories of bins of data); and
selecting the hard subset of the target dataset based on the plurality of bins ([0022]-[0028], [0036]: selecting the data based on the categories of bins of data).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Dohrn's data classification in ML training into Tomlinson and Lurie's invention, as the references are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Dohrn's classification of data into bins for training the ML model would provide an additional training-data classification method for Tomlinson and Lurie's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing an additional training-data classification method would help improve the training efficiency of the ML model.
In regard to claim 20, Tomlinson, Lurie and Dohrn disclose The apparatus of claim 18,
Tomlinson discloses further comprising:
a database configured to store the target dataset and the source dataset ([0005], [0061], [0074], [0103]: a database stores various source and target data).
Claims 2-3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Tomlinson et al. (Tomlinson) US 2023/0128832, Lurie et al. (Lurie) US 2022/0180762, and Dohrn et al. (Dohrn) US 2024/0144168 as applied to claim 1, and further in view of Vu et al. (Vu) US 2024/0020546.
In regard to claim 2, Tomlinson, Lurie and Dohrn disclose The method of claim 1,
But Tomlinson, Lurie and Dohrn fail to explicitly disclose “further comprising: computing an intermediate target embedding for the sample of the target dataset using the machine learning model; computing an intermediate source embedding for a sample of the source dataset using the machine learning model; and computing a similarity score based on the intermediate target embedding and the intermediate source embedding, wherein the hard subset is selected based on the similarity score.”
Vu discloses further comprising: computing an intermediate target embedding for a sample of the target dataset using the machine learning model ([0005]-[0013], [0029]-[0033], [0038]-[0047]: generating a target embedding for a target dataset using the ML model);
computing an intermediate source embedding for the sample of the source dataset using the machine learning model ([0005]-[0013], [0029]-[0033], [0038]-[0047]: generating a source embedding for a source dataset using the ML model); and
computing a similarity score based on the intermediate target embedding and the intermediate source embedding, wherein the hard subset is selected based on the similarity score ([0012]-[0013], [0029]-[0033], [0038]-[0047], [0114], [0126]: generating a similarity score based on the similarity between the target embedding and the source embedding and determining the target data based on the similarity score).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Vu's model adaptation into Tomlinson, Lurie and Dohrn's invention, as the references are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Vu's model adaptation for identifying the similarity between the source and target data would provide a similarity identification method for Tomlinson, Lurie and Dohrn's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing similarity identification between the source and target data would help improve the training efficiency of the ML model.
In regard to claim 3, Tomlinson, Lurie, Dohrn and Vu disclose The method of claim 2,
But Tomlinson, Lurie and Dohrn fail to explicitly disclose “further comprising: computing a plurality of intermediate target embeddings at a plurality of layers of the machine learning model, wherein the similarity score is based on the plurality of intermediate target embeddings.”
Vu discloses further comprising: computing a plurality of intermediate target embeddings at a plurality of layers of the machine learning model, wherein the similarity score is based on the plurality of intermediate target embeddings ([0005]-[0013], [0029]-[0033], [0038]-[0047], [0075], [0102]-[0117], [0126], [0149]: generating a plurality of embeddings at layers of the ML model, where the similarity score is based on the plurality of embeddings).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Vu's model adaptation into Tomlinson, Lurie and Dohrn's invention, as the references are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Vu's model adaptation for identifying the similarity between the source and target data would provide a similarity identification method for Tomlinson, Lurie and Dohrn's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing similarity identification between the source and target data would help improve the training efficiency of the ML model.
In regard to claim 10, Tomlinson, Lurie and Dohrn disclose The method of claim 1,
But Tomlinson, Lurie and Dohrn fail to explicitly disclose “wherein: the target dataset represents a different domain than the source dataset.”
Vu discloses wherein: the target dataset represents a different domain than the source dataset ([0272]-[0274]: the source and target datasets represent different domains).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Vu's model adaptation into Tomlinson, Lurie and Dohrn's invention, as the references are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Vu's model adaptation using different source and target domain data would provide a method for handling data from different domains in Tomlinson, Lurie and Dohrn's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing data from different domains would help improve the training accuracy of the ML model.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Tomlinson et al. (Tomlinson) US 2023/0128832, Lurie et al. (Lurie) US 2022/0180762, and Dohrn et al. (Dohrn) US 2024/0144168 as applied to claim 1, and further in view of Zhang et al. (Zhang) US 2023/0098656.
In regard to claim 4, Tomlinson, Lurie and Dohrn disclose The method of claim 1,
But Tomlinson, Lurie and Dohrn fail to explicitly disclose “further comprising: computing a hardness score for each sample of the target dataset.”
Zhang discloses further comprising: computing a hardness score for each sample of the target dataset ([0033]-[0035], [0042]-[0056], [0064]-[0072]: calculating a hardness score for each sample of the data and subsampling the data based on the hardness score).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Zhang's data subsampling for recommendation into Tomlinson, Lurie and Dohrn's invention, as the references are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Zhang's data subsampling based on the hardness score would provide an additional data sampling method for Tomlinson, Lurie and Dohrn's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing data sampling based on the hardness score would help facilitate data selection according to the user's needs.
In regard to claim 14, claim 14 is a method claim corresponding to method claim 4 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 4.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Tomlinson et al. (Tomlinson) US 2023/0128832, Lurie et al. (Lurie) US 2022/0180762, and Dohrn et al. (Dohrn) US 2024/0144168 as applied to claim 1, and further in view of Zhang et al. (Zhang) US 2023/0098656 and Vu et al. (Vu) US 2024/0020546.
In regard to claim 5, Tomlinson, Lurie and Dohrn disclose The method of claim 1,
But Tomlinson, Lurie and Dohrn fail to explicitly disclose “wherein the hardness score is based on the pairwise activation similarity matrix.”
Zhang discloses wherein the hardness score is based on the pairwise activation similarity matrix ([0023], [0033]-[0041], [0042]-[0056], [0064]-[0072], [0081]: the hardness score is based on the pair matrix).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Zhang's data subsampling for recommendation into Tomlinson, Lurie and Dohrn's invention, as the references are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Zhang's data subsampling based on the hardness score would provide an additional data sampling method for Tomlinson, Lurie and Dohrn's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing data sampling based on the hardness score would help facilitate data selection according to the user's needs.
But Tomlinson, Lurie, Dohrn and Zhang fail to explicitly disclose “further comprising: calculating a pairwise activation similarity matrix between a plurality of target samples of the target dataset and a plurality of source samples of the source dataset,”
Vu discloses further comprising: calculating a pairwise activation similarity matrix between a plurality of target samples of the target dataset and a plurality of source samples of the source dataset ([0241]-[0252]: multiple metrics are calculated and can be used to determine the similarity between the first and second datasets).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Vu's model adaptation into Tomlinson, Lurie, Dohrn and Zhang's invention, as the references are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Vu's model adaptation for identifying the similarity between the source and target data would provide a similarity identification method for Tomlinson, Lurie, Dohrn and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing similarity identification between the source and target data would help improve the training efficiency of the ML model.
In regard to claim 15, claim 15 is a method claim corresponding to method claim 5 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 5.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Tomlinson et al. (Tomlinson) US 2023/0128832, Lurie et al. (Lurie) US 2022/0180762, and Dohrn et al. (Dohrn) US 2024/0144168 as applied to claim 1, and further in view of Maxwell et al. (Maxwell) US 2019/0370254.
In regard to claim 6, Tomlinson, Lurie and Dohrn disclose The method of claim 1,
But Tomlinson, Lurie and Dohrn fail to explicitly disclose “further comprising: the plurality of bins are non-overlapping.”
Maxwell discloses further comprising: the plurality of bins are non-overlapping ([0115], [0188]: the data is split into non-overlapping bins, the data is sorted, and a subset of the data is selected based on the non-overlapping bins).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Maxwell's data organization method into Tomlinson, Lurie and Dohrn's invention, as the references are related to the same field of endeavor of data manipulation. The motivation to combine these references, as proposed above, is at least that Maxwell's data organization method using non-overlapping bins would provide an additional data organizing method for Tomlinson, Lurie and Dohrn's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing an additional data organizing method would help facilitate data selection according to the user's needs.
In regard to claim 16, claim 16 is a method claim corresponding to method claim 6 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 6.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Tomlinson et al. (Tomlinson) US 2023/0128832, Lurie et al. (Lurie) US 2022/0180762, and Dohrn et al. (Dohrn) US 2024/0144168 as applied to claim 1, and further in view of Komatsuda et al. (Komatsuda) US 2021/0157707.
In regard to claim 7, Tomlinson, Lurie and Dohrn disclose The method of claim 1,
Tomlinson discloses wherein the machine learning model is trained using the target dataset based on the determination ([0003]-[0005], [0039]-[0045], [0051]-[0052], [0078], [0087]: training the ML model using the application data from the organizations based on the transferability metric).
But Tomlinson, Lurie and Dohrn fail to explicitly disclose “further comprising: determining that the transferability metric is greater than a transferability threshold,”
Komatsuda discloses further comprising: determining that the transferability metric is greater than a transferability threshold, wherein the machine learning model is trained using the target dataset based on the determination ([0010], [0056], [0086]-[0087], [0100]-[0182]: determining that the transferability is larger than a threshold and training the model based on the determination).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Komatsuda's transferability determination in ML training into Tomlinson, Lurie and Dohrn's invention, as the references are related to the same field of endeavor of machine learning models. The motivation to combine these references, as proposed above, is at least that Komatsuda's transferability determination based on a threshold value would provide an additional transferability evaluation method for Tomlinson, Lurie and Dohrn's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing a transferability determination based on a threshold value would help facilitate the identification of data transferability.
In regard to claim 17, claim 17 is a method claim corresponding to method claim 7 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 7.
Response to Arguments
Applicant’s arguments with respect to claims 1-20 filed on 2/23/2026 have been considered but are moot because the arguments do not apply to the current rejection.
With respect to the arguments related to the 35 U.S.C. 101 rejection regarding claims 1-18 and 20, please see the rejection above. Applicant's argument is not persuasive.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.
U.S. Patent Documents (patent number; date; inventor(s); title):
US 2022/0300641 A1; 2022-09-22; BENNATI et al.; METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR QUANTIFYING THE LINKABILITY OF TRAJECTORY DATA
BENNATI et al. disclose measuring and quantifying the linkability of trajectory data based on similarities to candidate trajectory data. Methods may include: receiving a set of probe data points defining a target trajectory from a probe apparatus; characterizing the trajectory based on features of the target trajectory; identifying a plurality of candidate trajectories sharing at least some features with the target trajectory; calculating, for each of the plurality of candidate trajectories, a similarity score with respect to the target trajectory; calculating a privacy score representing a likelihood of identifying the probe apparatus from the target trajectory based on a number of trajectories in the plurality of candidate trajectories and their respective similarity score; and providing information associated with the target trajectory for location-based services in response to the privacy score satisfying a predetermined value… see abstract.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XUYANG XIA whose telephone number is (571)270-3045. The examiner can normally be reached Monday-Friday 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
XUYANG XIA
Primary Examiner
Art Unit 2143
/XUYANG XIA/Primary Examiner, Art Unit 2143