DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
This action is in response to the amendment filed on 7/21/2025. Claims 1, 8, and 15 have been amended. Claims 21 and 22 are new. Claims 1-22 are pending and have been examined.
Drawings
The drawings were received on 7/21/2025. These drawings are acceptable. The objection to drawings for missing reference signs is WITHDRAWN in view of Applicant’s submitted drawings.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Step 1:
The claim recites a method of determining item similarity, which is a process, one of the four statutory categories of patentable subject matter.
Step 2A prong 1:
The claim recites an abstract idea. Specifically, the limitation “generating… a similarity score between the source item and each of the one or more candidate items” amounts to a mental process, as it can be practically performed in the human mind.
The claim recites an additional abstract idea. Specifically, the limitation “joint optimization of masked language model loss and a metric-based triplet loss” is a mathematical concept.
Step 2A prong 2:
The additional element of inputting a source item and one or more candidate items does not integrate the abstract idea into a practical application because inputting data into a model is insignificant extra-solution activity of “mere data gathering.” See MPEP 2106.05(g). Therefore, the claim is directed to an abstract idea.
The additional element of a triplet-trained machine learning model trained using… merely generally links the abstract idea to a particular technological environment and therefore does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
The additional element of a triplet-trained machine learning model in the generating step merely generally links the abstract idea to a particular technological environment and therefore does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
Step 2B:
The additional element of inputting a source item and one or more candidate items does not amount to significantly more because it is insignificant extra-solution activity and, further, is well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
The additional element of a triplet-trained machine learning model trained using… merely generally links the abstract idea to a particular technological environment and therefore does not amount to significantly more. See MPEP 2106.05(h).
The additional element of a triplet-trained machine learning model in the generating step merely generally links the abstract idea to a particular technological environment and therefore does not amount to significantly more. See MPEP 2106.05(h).
Therefore, the claim is ineligible.
Regarding Claim 2:
Claim 2, which incorporates the rejection of Claim 1, recites a further abstract idea, “reducing a combination of a masked language model loss function and a metric-based loss function,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “during training,” which amounts to mere instructions to apply the judicial exception. See MPEP 2106.05(f). The claim is ineligible.
Regarding Claim 3:
Claim 3, which incorporates the rejection of Claim 1, recites a further abstract idea, “trained based on a metric-based loss function,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “during training,” which amounts to mere instructions to apply the judicial exception. See MPEP 2106.05(f). The claim is ineligible.
Regarding Claim 4:
Claim 4, which incorporates the rejection of Claim 3, recites only a further abstract idea, “loss function is based on an angular distance between an embedded anchor element vector and an embedded positive element vector,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 5:
Claim 5, which incorporates the rejection of Claim 3, recites only a further abstract idea, “loss function is based on an angular distance between an embedded anchor element vector and an embedded negative element vector,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 6:
Claim 6, which incorporates the rejection of Claim 3, recites only a further abstract idea, “loss function is based on a first angular distance between an embedded anchor element vector and an embedded negative element vector subtracted from a second angular distance between the embedded anchor element vector and an embedded positive element vector,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 7:
Claim 7 incorporates the rejection of Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “training the triplet-trained machine learning model using the training data that includes triplets,” which amounts to mere instructions to apply the judicial exception. See MPEP 2106.05(f). The claim is ineligible.
Regarding Claim 8:
Step 1:
The claim recites a comparison engine for determining item similarity, which is a machine, one of the four statutory categories of patentable subject matter.
Step 2A prong 1:
The claim recites an abstract idea. Specifically, the limitation “generate… a similarity score between the source item and each of the one or more candidate items” amounts to a mental process, as it can be practically performed in the human mind.
The claim recites an additional abstract idea. Specifically, the limitation “joint optimization of masked language model loss and a metric-based triplet loss” is a mathematical concept.
Step 2A prong 2:
The additional element of input a source item and one or more candidate items does not integrate the abstract idea into a practical application because inputting data into a model is insignificant extra-solution activity of “mere data gathering.” See MPEP 2106.05(g). Therefore, the claim is directed to an abstract idea.
The additional element of a triplet-trained machine learning model trained using… merely generally links the abstract idea to a particular technological environment and therefore does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
The additional element of a triplet-trained machine learning model in the generator step merely generally links the abstract idea to a particular technological environment and therefore does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
The additional element of an input interface is a generic computer component used to implement the abstract idea and therefore does not integrate the abstract idea into a practical application. See MPEP 2106.05(f).
The additional element of a similarity score generator is a generic computer component used to implement the abstract idea and therefore does not integrate the abstract idea into a practical application. See MPEP 2106.05(f).
The additional element of one or more hardware processors is a generic computer component used to implement the abstract idea and therefore does not integrate the abstract idea into a practical application. See MPEP 2106.05(f).
Step 2B:
The additional element of input a source item and one or more candidate items does not amount to significantly more because it is insignificant extra-solution activity and, further, is well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
The additional element of a triplet-trained machine learning model trained using… merely generally links the abstract idea to a particular technological environment and therefore does not amount to significantly more. See MPEP 2106.05(h).
The additional element of a triplet-trained machine learning model in the generator step merely generally links the abstract idea to a particular technological environment and therefore does not amount to significantly more. See MPEP 2106.05(h).
The additional element of an input interface is a generic computer component used to implement the abstract idea and therefore does not amount to significantly more. See MPEP 2106.05(f).
The additional element of a similarity score generator is a generic computer component used to implement the abstract idea and therefore does not amount to significantly more. See MPEP 2106.05(f).
The additional element of one or more hardware processors is a generic computer component used to implement the abstract idea and therefore does not amount to significantly more. See MPEP 2106.05(f).
Therefore, the claim is ineligible.
Regarding Claim 9:
Claim 9, which incorporates the rejection of Claim 8, recites a further abstract idea, “trained based on a masked language model loss function,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “during training,” which amounts to mere instructions to apply the judicial exception. See MPEP 2106.05(f). The claim is ineligible.
Regarding Claim 10:
Claim 10, which incorporates the rejection of Claim 8, recites a further abstract idea, “trained based on a combination of a masked language model loss function and a metric-based loss function,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “during training,” which amounts to mere instructions to apply the judicial exception. See MPEP 2106.05(f). The claim is ineligible.
Regarding Claim 11:
Claim 11, which incorporates the rejection of Claim 8, recites a further abstract idea, “trained based on a metric-based loss function,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “during training,” which amounts to mere instructions to apply the judicial exception. See MPEP 2106.05(f). The claim is ineligible.
Regarding Claim 12:
Claim 12, which incorporates the rejection of Claim 11, recites only a further abstract idea, “loss function is based on an angular distance between an embedded anchor element vector and an embedded positive element vector,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 13:
Claim 13, which incorporates the rejection of Claim 11, recites only a further abstract idea, “loss function is based on an angular distance between an embedded anchor element vector and an embedded negative element vector,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 14:
Claim 14, which incorporates the rejection of Claim 11, recites only a further abstract idea, “loss function is based on an angular distance between an embedded anchor element vector and an embedded negative element vector subtracted from an angular distance between the embedded anchor element vector and an embedded positive element vector,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 15:
Step 1:
The claim recites one or more tangible processor-readable storage media of a tangible article of manufacture encoding processor-executable instructions for executing a computing process for determining item similarity, which is an article of manufacture, one of the four statutory categories of patentable subject matter.
Step 2A prong 1:
The claim recites an abstract idea. Specifically, the limitation “generating… a similarity score between the source item and each of the one or more candidate items” amounts to a mental process, as it can be practically performed in the human mind.
The claim recites an additional abstract idea. Specifically, the limitation “joint optimization of masked language model loss and a metric-based triplet loss” is a mathematical concept.
Step 2A prong 2:
The additional element of inputting a source item and one or more candidate items does not integrate the abstract idea into a practical application because inputting data into a model is insignificant extra-solution activity of “mere data gathering.” See MPEP 2106.05(g). Therefore, the claim is directed to an abstract idea.
The additional element of a triplet-trained machine learning model trained using… merely generally links the abstract idea to a particular technological environment and therefore does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
The additional element of a triplet-trained machine learning model in the generating step merely generally links the abstract idea to a particular technological environment and therefore does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
Step 2B:
The additional element of inputting a source item and one or more candidate items does not amount to significantly more because it is insignificant extra-solution activity and, further, is well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
The additional element of a triplet-trained machine learning model trained using… merely generally links the abstract idea to a particular technological environment and therefore does not amount to significantly more. See MPEP 2106.05(h).
The additional element of a triplet-trained machine learning model in the generating step merely generally links the abstract idea to a particular technological environment and therefore does not amount to significantly more. See MPEP 2106.05(h).
Therefore, the claim is ineligible.
Regarding Claim 16:
Claim 16, which incorporates the rejection of Claim 15, recites a further abstract idea, “trained based on a combination of a masked language model loss function and a metric-based loss function,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “during training,” which amounts to mere instructions to apply the judicial exception. See MPEP 2106.05(f). The claim is ineligible.
Regarding Claim 17:
Claim 17, which incorporates the rejection of Claim 15, recites a further abstract idea, “trained based on a metric-based loss function,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “during training,” which amounts to mere instructions to apply the judicial exception. See MPEP 2106.05(f). The claim is ineligible.
Regarding Claim 18:
Claim 18, which incorporates the rejection of Claim 17, recites only a further abstract idea, “loss function is based on an angular distance between an embedded anchor element vector and an embedded positive element vector,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 19:
Claim 19, which incorporates the rejection of Claim 17, recites only a further abstract idea, “loss function is based on an angular distance between an embedded anchor element vector and an embedded negative element vector,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 20:
Claim 20, which incorporates the rejection of Claim 17, recites only a further abstract idea, “loss function is based on an angular distance between an embedded anchor element vector and an embedded negative element vector subtracted from an angular distance between the embedded anchor element vector and an embedded positive element vector,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 21:
Claim 21, which incorporates the rejection of Claim 1, recites further abstract ideas: “masked language model loss is computed by reconstructing masked tokens in the source item and one or more candidate items,” which is a mental process, as it can be practically performed in the human mind, and “metric-based triplet loss is computed based on an angular distance metric between the anchor, positive, and negative elements,” which is a mathematical concept. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. The claim is ineligible.
Regarding Claim 22:
Claim 22 incorporates the rejection of Claim 1. The claim does not recite any additional elements that integrate the abstract idea into a practical application or amount to significantly more. Specifically, the claim recites the further additional element “classifying the one or more candidate items based on the similarity score computed between the source item and a candidate item,” which amounts to mere instructions to apply the judicial exception. See MPEP 2106.05(f). The claim is ineligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shah et al. (U.S. Patent Application Publication No. US 20220391590 A1), hereinafter “Shah”.
Regarding Claim 1, Shah teaches:
A method of determining item similarity, the method comprising:
inputting a source item and one or more candidate items into a triplet-trained machine learning model (Figure 6, Object Title 604, Supplementary Object Text Record 600, Supplementary Object Text Record 602, Figure 3, Training Data 302 inputted into model, ¶57, “the named entity tagger can receive object data” and “The object data can include, for example, a plurality of object titles and a plurality of supplementary object text records”) trained using training data including triplets of anchor elements, positive elements, and negative elements, wherein each triplet corresponds to an item included in the training data, the anchor elements and the positive elements are included in the corresponding item, and the negative element is included in a different item in the training data (¶21, “The triplet loss can be, for example, a function with three inputs: an anchor, a true input, and a false input”, Claim 12, “applying a triplet loss function using the first item as an anchor, the first item description as a true input, and the second item description as a false input.”), wherein the triplet-trained machine learning model training includes a joint optimization (¶38, “seek to optimize a combination of the losses determined by the triplet loss component 310 and the named entity recognition component 312”) of masked language model loss (Figure 4, step 406, ¶6, “determining a named entity recognition loss”, ¶37, “a named entity recognition loss by determining a difference between predicted classifications of the words and actual classifications of the words”, the named entity recognition loss is a natural language processing loss and therefore a language model loss function, Figure 1, Entity tagger 100 selects one or more words of text, i.e., masking them) and a metric-based triplet loss (Figure 4, step 404, ¶6, “determining a triplet loss”, Figure 5, Determine Triplet Loss 404, the triplet loss is a metric-based loss function); and
generating from the triplet-trained machine learning model a similarity score between the source item and each of the one or more candidate items (¶7, “calculating a first cosine similarity between an embedding associated with the item title and an embedding associated with an item description of the plurality of item descriptions”).
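For illustration only, the multi-task objective mapped above (Shah ¶38: optimizing a combination of the triplet loss and the named entity recognition loss) can be sketched as follows. This is an examiner-provided sketch of the general technique, not Shah’s actual implementation; the function names, the margin value, and the weighting parameter alpha are hypothetical.

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def triplet_loss(anchor, positive, negative, margin=0.4):
    # Metric-based triplet loss: penalizes the model when the anchor
    # embedding is closer (by cosine similarity) to the negative
    # element than to the positive element.
    pos_sim = cosine_similarity(anchor, positive)
    neg_sim = cosine_similarity(anchor, negative)
    return max(0.0, neg_sim - pos_sim + margin)

def joint_loss(language_model_loss, metric_loss, alpha=0.5):
    # Joint optimization: a weighted combination of the language model
    # loss and the metric-based triplet loss (a multi-task objective).
    return alpha * language_model_loss + (1.0 - alpha) * metric_loss
```

In this sketch, minimizing `joint_loss` during training reduces both loss terms together, corresponding to the claimed joint optimization.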
Regarding Claim 2, Shah teaches the method of claim 1 as referenced above. Shah further teaches:
wherein the triplet-trained machine learning model is trained based on reducing a combination of a masked language model loss function (Figure 4, step 406, ¶6, “determining a named entity recognition loss”, ¶37, “a named entity recognition loss by determining a difference between predicted classifications of the words and actual classifications of the words”, the named entity recognition loss is a natural language processing loss and therefore a language model loss function, Figure 1, Entity tagger 100 selects one or more words of text, i.e., masking them) and a metric-based loss function (Figure 4, step 404, ¶6, “determining a triplet loss”, Figure 5, Determine Triplet Loss 404, the triplet loss is a metric-based loss function) during training using the training data (Figure 4, step 408, ¶6, “Fine-tuning the neural network to perform named entity recognition includes… optimizing a multi-task objective function comprising the triplet loss and the named entity recognition loss.”).
Regarding Claim 3, Shah teaches the method of claim 1 as referenced above. Shah further teaches:
wherein the triplet-trained machine learning model is trained based on a metric-based loss function during training using the training data (Figure 4 step 404, Figure 5, ¶7, “applying a triplet loss function to each of the plurality of item titles to obtain a triplet loss.”).
Regarding Claim 4, Shah teaches the method of claim 3 as referenced above. Shah further teaches:
wherein the metric-based loss function is based on an angular distance between an embedded anchor element vector and an embedded positive element vector (Figure 6, Loss 618, Positive Embedding 606, Title Embedding 608, and First Cosine 612, Figure 5, step 504, ¶7, “triplet loss includes, for each item title: calculating a first cosine similarity between an embedding associated with the item title and an embedding associated with an item description of the plurality of item descriptions that is associated with the item title”).
Regarding Claim 5, Shah teaches the method of claim 3 as referenced above. Shah further teaches:
wherein the metric-based loss function is based on an angular distance between an embedded anchor element vector and an embedded negative element vector (Figure 6, Loss 618, Title Embedding 608, Negative Embedding 610, and Second Cosine 614, Figure 5, step 506, ¶7, “calculating a second cosine similarity between the embedding associated with the item title and an embedding associated with an item description of the plurality of item descriptions that is not associated with the item title”).
Regarding Claim 6, Shah teaches the method of claim 3 as referenced above. Shah further teaches:
wherein the metric-based loss function is based on a first angular distance between an embedded anchor element vector and an embedded negative element vector subtracted from a second angular distance between the embedded anchor element vector and an embedded positive element vector (Figure 6, Loss 618, Positive Embedding 606, Title Embedding 608, Negative Embedding 610, First Cosine 612, Second Cosine 614, and Difference 616, Figure 5, steps 504, 506, 508, ¶7, “triplet loss includes… determining a difference between the first cosine similarity and the second cosine similarity by subtracting the second cosine similarity from the first cosine similarity”).
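For illustration only, the relationship between the cited cosine-similarity computations (Shah Figure 5, steps 504-508) and the claimed “angular distance” language can be sketched as follows: the angular distance between two vectors is the arccosine of their cosine similarity, a monotone transform. This is an examiner-provided sketch; the function names are hypothetical and not drawn from Shah.

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def angular_distance(u, v):
    # Angular distance is the arccosine of cosine similarity, here
    # normalized to [0, 1]; because it is a monotone transform of
    # cosine similarity, the cited cosine computations map onto the
    # claimed angular-distance limitation.
    cos = max(-1.0, min(1.0, cosine_similarity(u, v)))
    return math.acos(cos) / math.pi

def triplet_difference(anchor, positive, negative):
    # Anchor-negative distance subtracted from anchor-positive
    # distance, mirroring the difference of the first and second
    # cosine similarities in Figure 5, steps 504-508.
    return angular_distance(anchor, positive) - angular_distance(anchor, negative)
```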
Regarding Claim 7, Shah teaches the method of claim 1 as referenced above. Shah further teaches:
training the triplet-trained machine learning model trained using the training data that includes triplets of anchor elements, positive elements, and negative elements (Figure 3, Training Data 302, Triplet Loss Component 310, Figure 4, steps 404 and 410, ¶21, “The triplet loss can be, for example, a function with three inputs: an anchor, a true input, and a false input”).
Regarding Claim 8, Shah teaches:
A comparison engine for determining item similarity, the comparison engine comprising:
one or more hardware processors (Figure 9, ¶64, “One or more aspects of the computing system 900 can be used to implement one or more aspects of the present disclosure”, ¶65, “computing system 900 includes one or more processors”);
a triplet-trained machine learning model executable by the one or more hardware processors and trained using training data including triplets of anchor elements, positive elements, and negative elements, wherein each triplet corresponds to an item included in the training data, the anchor elements and the positive elements are included in the corresponding item, and the negative element is included in a different item in the training data (¶21, “The triplet loss can be, for example, a function with three inputs: an anchor, a true input, and a false input”, Claim 12, “applying a triplet loss function using the first item as an anchor, the first item description as a true input, and the second item description as a false input.”), wherein the triplet-trained machine learning model training includes a joint optimization (¶38, “seek to optimize a combination of the losses determined by the triplet loss component 310 and the named entity recognition component 312”) of masked language model loss (Figure 4, step 406, ¶6, “determining a named entity recognition loss”, ¶37, “a named entity recognition loss by determining a difference between predicted classifications of the words and actual classifications of the words”, the named entity recognition loss is a natural language processing loss and therefore a language model loss function, Figure 1, Entity tagger 100 selects one or more words of text, i.e., masking them) and a metric-based triplet loss (Figure 4, step 404, ¶6, “determining a triplet loss”, Figure 5, Determine Triplet Loss 404, the triplet loss is a metric-based loss function);
an input interface executable by the one or more hardware processors (Figure 9, ¶68, “The computing system 900 also includes an input/output controller 906 for receiving and processing input from a number of other devices, including a touch user interface display screen, or another type of input device”, ¶69, “The mass storage device 914 and/or the RAM 910 also store software instructions that, when executed by the one or more processors 902, cause one or more of the systems, devices, or components described herein to provide functionality described”) and configured to input a source item and one or more candidate items into the triplet-trained machine learning model (Figure 3, Training Data 302 inputted into model); and
a similarity score generator executable by the one or more hardware processors (Figure 9, ¶69, “The mass storage device 914 and/or the RAM 910 also store software instructions that, when executed by the one or more processors 902, cause one or more of the systems, devices, or components described herein to provide functionality described”) and configured to generate from the triplet-trained machine learning model a similarity score between the source item and each of the one or more candidate items (¶7, “calculating a first cosine similarity between an embedding associated with the item title and an embedding associated with an item description of the plurality of item descriptions”).
Regarding Claim 9, Shah teaches the comparison engine of claim 8 as referenced above. Shah further teaches:
wherein the triplet-trained machine learning model is trained based on a masked language model loss function during training using the training data (Figure 4, step 406, ¶41, “the named entity tagger 100 can determine a named entity recognition loss”, the named entity recognition loss is a natural language processing loss and therefore a language model loss function, Figure 1, Entity tagger 100 selects one or more words of text, i.e., masking them).
Regarding Claim 10, the rejection of Claim 8 is incorporated and further, the claim is rejected for the same reasons as set forth in Claim 2.
Regarding Claim 11, the rejection of Claim 8 is incorporated and further, the claim is rejected for the same reasons as set forth in Claim 3.
Regarding Claim 12, the rejection of Claim 11 is incorporated and further, the claim is rejected for the same reasons as set forth in Claim 4.
Regarding Claim 13, the rejection of Claim 11 is incorporated and further, the claim is rejected for the same reasons as set forth in Claim 5.
Regarding Claim 14, the rejection of Claim 11 is incorporated and further, the claim is rejected for the same reasons as set forth in Claim 6.
Regarding Claim 15, Shah teaches:
One or more tangible processor-readable storage media of a tangible article of manufacture (¶66, The mass storage device 914 and its associated computer-readable data storage media provide non-volatile, non-transitory storage for the computing system 900) encoding processor-executable instructions for executing a computing process for determining item similarity, the computing process comprising:
inputting a source item and one or more candidate items into a triplet-trained machine learning model (Figure 6, Object Title 604, Supplementary Object Text Record 600, Supplementary Object Text Record 602, Figure 3, Training Data 302 inputted into model, ¶57, “the named entity tagger can receive object data” and “The object data can include, for example, a plurality of object titles and a plurality of supplementary object text records”) trained using training data including triplets of anchor elements, positive elements, and negative elements, wherein each triplet corresponds to an item included in the training data, the anchor elements and the positive elements are included in the corresponding item, and the negative element is included in a different item in the training data (¶21, “The triplet loss can be, for example, a function with three inputs: an anchor, a true input, and a false input”, Claim 12, “applying a triplet loss function using the first item as an anchor, the first item description as a true input, and the second item description as a false input.”), wherein the triplet-trained machine learning model training includes a joint optimization (¶38, “seek to optimize a combination of the losses determined by the triplet loss component 310 and the named entity recognition component 312”) of masked language model loss (Figure 4, step 406, ¶6, “determining a named entity recognition loss”, ¶37, “a named entity recognition loss by determining a difference between predicted classifications of the words and actual classifications of the words”, the named entity recognition loss is a natural language processing loss and therefore a language model loss function, Figure 1, Entity tagger 100 selects one or more words of text, i.e., masking them) and a metric-based triplet loss (Figure 4, step 404, ¶6, “determining a triplet loss”, Figure 5, Determine Triplet Loss 404, the triplet loss is a metric-based loss function); and
generating from the triplet-trained machine learning model a similarity score between the source item and each of the one or more candidate items (¶7, “calculating a first cosine similarity between an embedding associated with the item title and an embedding associated with an item description of the plurality of item descriptions”).
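For context only, the conventional form of a triplet loss over an anchor $a$, positive element $p$, and negative element $n$ (a formulation supplied by the examiner for illustration, not recited verbatim in the claims or in Shah) is:

```latex
\mathcal{L}_{\text{triplet}} = \max\bigl(0,\; d(a, p) - d(a, n) + \alpha\bigr)
```

where $d$ is a distance over embeddings and $\alpha$ is a margin hyperparameter; minimizing this loss pulls the positive element toward the anchor and pushes the negative element away, consistent with Shah's anchor/true-input/false-input formulation (¶21).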
Regarding Claim 16, the rejection of Claim 15 is incorporated and further, the claim is rejected for the same reasons as set forth in Claim 2.
Regarding Claim 17, the rejection of Claim 15 is incorporated and further, the claim is rejected for the same reasons as set forth in Claim 3.
Regarding Claim 18, the rejection of Claim 17 is incorporated and further, the claim is rejected for the same reasons as set forth in Claim 4.
Regarding Claim 19, the rejection of Claim 17 is incorporated and further, the claim is rejected for the same reasons as set forth in Claim 5.
Regarding Claim 20, the rejection of Claim 17 is incorporated and further, the claim is rejected for the same reasons as set forth in Claim 6.
Regarding Claim 21, Shah teaches the method of claim 1 as referenced above. Shah further teaches:
wherein the masked language model loss is computed by reconstructing masked tokens in the source item and the one or more candidate items (Figure 4, step 406, ¶6, “determining a named entity recognition loss”, ¶37, “a named entity recognition loss by determining a difference between predicted classifications of the words and actual classifications of the words”, named entity recognition loss is natural language processing and therefore a language model loss function, Fig 1, Entity tagger 100 selects one or more words of text, i.e. masking them, ¶25, “the named entity tagger 100 can tag the word as being, for example, the first word, a middle word, or the last word in the named entity”, shows reconstructing the masked tokens), and the metric-based triplet loss is computed based on an angular distance metric between the anchor, positive, and negative elements (Figure 5, Steps 504 and 506, ¶7, “triplet loss includes, for each item title: calculating a first cosine similarity between an embedding associated with the item title and an embedding associated with an item description of the plurality of item descriptions that is associated with the item title; calculating a second cosine similarity between the embedding associated with the item title and an embedding associated with an item description of the plurality of item descriptions that is not associated with the item title”).
Regarding Claim 22, Shah teaches the method of claim 1 as referenced above. Shah further teaches:
further comprising: classifying the one or more candidate items based on the similarity score computed between the source item and a candidate item (¶7, “calculating a first cosine similarity between an embedding associated with the item title and an embedding associated with an item description of the plurality of item descriptions”, words are classified with model that was trained based on similarity score computed between source item and candidate item, ¶ 5, “classifying, using the trained neural network, one or more words of the plurality of words”).
Response to Arguments
101
Argument 1: the claims do not recite an abstract idea, specifically the methodology of:
A triplet-trained model using anchor, positive, and negative elements
A joint optimization of two distinct loss functions: masked language model (MLM) loss and metric-based triplet loss
A similarity score generated from the trained model
is not a mental process.
With respect to Applicant's assertion that the claims, when read as a whole, are not a mental process, the only limitation identified as a mental process in the 101 rejection of Claim 1 is generating from the triplet-trained machine learning model a similarity score between the source item and each of the one or more candidate items. At the level of detail recited, generating a similarity score between two items is an abstract idea that can be performed in the human mind. Regarding Applicant's assertion that a triplet-trained model using anchor, positive, and negative elements is not a mental process, this is addressed in the Claim 1 rejection, where the limitation a triplet-trained machine learning model trained using… is treated as generally linked to the abstract idea rather than as an abstract idea itself. Regarding Applicant's assertion that a joint optimization of loss functions is not an abstract idea, joint optimization is a mathematical concept, as shown in the 101 rejection of Claim 1.
With respect to Applicant's citation of USPTO Example 43 (2024 Update), Example 43 is from the 2019 update, concerns treating kidney disease, and does not involve any machine learning or training. If Applicant intended to refer to Example 47, Example 47 does not establish that specific machine learning training techniques are non-abstract. As shown in both ineligible claim 2 and eligible claim 3 of Example 47, the training steps in those claims are mathematical concepts. Claim 3, which trains the ANN, is deemed eligible because steps d-f show an improvement in the technical field of network intrusion detection. The present claims do not show an improvement to the functioning of a computer or to technology.
Regarding McRO, Inc. v. Bandai Namco Games America Inc., 837 F.3d 1299 (Fed. Cir. 2016), the claims there were directed to an improvement in computer animation, which is an improvement to technology. Applicant's assertion that the performance of machine learning models is improved for item similarity scoring does not show an improvement to technology, because generating a similarity score is an abstract idea, not a technology. The claims must show an improvement to the functioning of a computer or to technology, not to the abstract idea itself.
Argument 2: The specific training architecture integrates into practical application.
With respect to Applicant's assertion that the joint optimization of MLM and triplet loss enables the model to learn richer semantic representations and avoid collapse, the claims recite only the use of a trained model and its output, and do not recite a specific manner in which an improvement to the functioning of a computer or to technology is achieved.
With respect to Applicant's assertion that item-to-item similarity has practical application in recommendation systems, search engines, and classification systems, the claims do not recite any of these improvements. The claims are directed only to model use and output and do not recite any improvement to recommendation systems, search engines, or classification systems. For integration into a practical application, the claims must show an improvement to the functioning of a computer or to technology.
Regarding Applicant's assertion that claim 21 demonstrates real-world technical benefits, claim 21 recites only details of implementing the masked language model loss and the metric-based triplet loss and does not show any improvement to the functioning of a computer or to technology. Regarding the use of angular distance rather than cosine similarity, any improvement shown is directed to the abstract idea of generating a similarity score, not to a function of a computer or to technology.
Regarding Applicant's assertion that claim 22 integrates the claims into a practical application, the claim recites only classifying the one or more candidate items based on the similarity score computed between the source item and a candidate item, which amounts to mere instructions to apply the abstract idea, as shown in the 101 rejection of claim 22. The claim does not show any improvement to the functioning of a computer or to technology.
Argument 3: The claims amount to significantly more.
Regarding Applicant's assertion that the joint optimization of MLM loss and triplet loss is not well-understood, routine, or conventional, the only limitation asserted in the 101 rejection to be well-understood, routine, and conventional is the inputting of the source and candidate items. The joint optimization of MLM loss and triplet loss is a mathematical concept, as shown in the 101 rejection of claim 1.
Regarding Applicant's assertion that the joint optimization of MLM and triplet loss provides a solution to the problem of learning accurate item similarity in a high-dimensional embedding space, joint optimization is a mathematical concept and generating a similarity score is an abstract idea, so any improvement is directed to the abstract idea rather than to a computer or technology. DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245 (Fed. Cir. 2014) found an improvement to the functioning of a computer through modification of conventional Internet hyperlink protocol to dynamically produce a dual-source hybrid webpage. The present claims do not recite any specifics showing such an improvement to a computer or to technology.
102
Argument 1: The NER loss of Shah is different from MLM loss.
Regarding Applicant's assertion that MLM loss is different from NER loss, while the two may be technically different, the claim does not explicitly recite any of those differences. Specifically, the claim does not require that the loss be a self-supervised objective trained without labeled data; the claim language provides no further detail beyond “masked language model loss.” As set forth in the rejection of Claim 1, the named entity recognition loss of Shah is applied in natural language processing and is therefore a language model loss, and Shah's entity tagger selects one or more words of text, i.e., masks them.
Argument 2: Shah does not disclose use of MLM/triplet loss as part of a multi-task objective.
With respect to Applicant's assertion that Shah does not disclose the use of MLM/triplet loss as part of a multi-task objective, Shah teaches this as set forth in the rejection of Claim 2: “optimizing a multi-task objective function comprising the triplet loss and the named entity recognition loss.” Optimizing a multi-task objective comprising the triplet loss and the named entity recognition loss encompasses using MLM/triplet loss as part of a multi-task objective, and the NER loss is the same as an MLM loss for the reasons shown above.
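As a general illustration of a multi-task objective (notation supplied by the examiner for explanatory purposes, not drawn from Shah or the claims), jointly optimizing two losses is conventionally expressed as minimizing a weighted sum:

```latex
\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{triplet}} + \lambda \, \mathcal{L}_{\text{NER}}
```

where $\lambda$ is a weighting hyperparameter. Minimizing $\mathcal{L}_{\text{total}}$ jointly optimizes both component losses, which is what optimizing a multi-task objective function comprising the triplet loss and the named entity recognition loss encompasses.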
New Claims
Argument 1: Shah fails to teach a metric-based triplet loss based on an angular distance metric.
Regarding Applicant's assertion that angular distance is a metric distinct from cosine similarity, the claim does not explicitly recite any such distinction; the claim language provides no further detail beyond “angular distance.” Cosine similarity measures the cosine of the angle between two items and thereby measures their relationship and similarity, which is a type of angular distance.
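Purely as a numerical illustration of this relationship (the code and example vectors are the examiner's own, not taken from Shah or the claims), angular distance is the normalized arccosine of cosine similarity, so the two measures rank item pairs identically:

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def angular_distance(u, v):
    # Normalized angle recovered directly from the cosine similarity;
    # clamping guards against floating-point drift outside [-1, 1].
    c = max(-1.0, min(1.0, cosine_similarity(u, v)))
    return math.acos(c) / math.pi

anchor = [1.0, 0.0]
near = [0.9, 0.1]  # high cosine similarity with the anchor
far = [0.1, 0.9]   # low cosine similarity with the anchor

# Higher cosine similarity corresponds to smaller angular distance.
assert cosine_similarity(anchor, near) > cosine_similarity(anchor, far)
assert angular_distance(anchor, near) < angular_distance(anchor, far)
```

Because the arccosine is strictly decreasing on [-1, 1], ranking candidate items by decreasing cosine similarity is equivalent to ranking them by increasing angular distance.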
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSE CHEN COULSON whose telephone number is (571)272-4716. The examiner can normally be reached Monday-Friday 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JESSE C COULSON/
Examiner, Art Unit 2122
/BRIAN M SMITH/Primary Examiner, Art Unit 2122