DETAILED ACTION
Status of Claims
Claims 1-20, submitted on 10/21/2025, are pending and have been examined. Claims 1-5, 7-11, 14-17, 19, and 20 have been amended.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
No foreign priority or domestic benefit was claimed by the applicant, and the application has been examined with respect to its filing date of 01/31/2024.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claim(s) do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Step 1
Claims 1-10 are directed to a machine and claims 11-20 are directed to a process (see MPEP 2106.03).
Step 2A, Prong 1
Claim 1, taken as representative, recites at least the following limitations that recite an abstract idea:
based on pairing information that is included in data, filtered based on filtering criteria, and corresponds to a query item pair; and
to generate an output sequence that includes missing tokens that are used to modify metadata for future search retrieval associated with one or more products in a product catalog.
The above limitation, under its broadest reasonable interpretation, falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas enumerated in MPEP 2106.04(a)(2)(II), in that it recites a commercial interaction (see ¶0002 of the instant specification). Claim 11 recites similar limitations as claim 1.
Thus, under Prong 1 of Step 2A, claims 1 and 11 recite an abstract idea.
Step 2A, Prong 2
Claim 1 includes the following additional elements that are bolded:
a system comprising:
one or more processors; and
one or more non-transitory computer-readable media storing computing instructions that, when run on the one or more processors, cause the one or more processors to perform operations comprising:
training a machine learning model based on pairing information that is included in data, filtered based on filtering criteria, and corresponds to a query item pair; and
operating the machine learning model to generate an output sequence that includes missing tokens that are used to modify metadata for future search retrieval associated with one or more products in a product catalog.
Claim 11 includes the same additional elements as claim 1.
The additional elements recited in claims 1 and 11 merely invoke such elements as a tool to perform the abstract idea and generally link the use of the abstract idea to a particular technological environment (see MPEP 2106.05(f) and MPEP 2106.05(h)). These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration (see Figs. 2 and 7, and ¶¶0022-0023 and 0057).
As such, under Prong 2 of Step 2A, when considered both individually and as a whole, the additional elements do not integrate the judicial exception into a practical application and, thus, claims 1 and 11 are directed to an abstract idea.
Step 2B
As noted above, while the recitation of the additional elements in independent claims 1 and 11 is acknowledged, claims 1 and 11 merely invoke such additional elements as a tool to perform the abstract idea and generally link the use of the abstract idea to a particular technological environment (see MPEP 2106.05(f) and MPEP 2106.05(h)).
Even when considered as an ordered combination, the additional elements of claims 1 and 11 do not add anything that is not already present when they are considered individually. Therefore, under Step 2B, there are no meaningful limitations in claims 1 and 11 that transform the judicial exception into a patent-eligible application such that the claims amount to significantly more than the judicial exception itself (see MPEP 2106.05).
As such, independent claims 1 and 11 are ineligible.
Dependent claims 2-5, 8, 9, 12-15, 18, and 19, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because they do not add “significantly more” to the abstract idea. More specifically, these claims merely further define the abstract limitations of claims 1 and 11 or provide further embellishments of the limitations recited in independent claims 1 and 11, and they do not introduce any further additional elements. Thus, dependent claims 2-5, 8, 9, 12-15, 18, and 19 are ineligible.
Furthermore, it is noted that certain dependent claims recite additional elements supplemental to those recited in independent claims 1 and 11: a Seq2Seq machine learning model (claims 6 and 16) and inputting the input sequence into an encoder of the machine learning model, the encoder configured to generate a context vector (claims 7, 10, 17, and 20). However, these elements do not integrate the abstract idea into a practical application because they merely amount to using a computer to apply the abstract idea and generally link its use to a particular technological environment or field of use. For the same reasons, these additional elements do not amount to significantly more than the judicial exception.
Thus, dependent claims 6, 7, 10, 16, 17, and 20 are ineligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 5-13, and 15-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fabbrizio et al. (US 10,839,033 B1 [previously cited]).
Regarding Claim 1, Fabbrizio et al. (hereinafter “Fabbrizio”) discloses a system comprising:
one or more processors (Fig. 3; Col. 1, line 64 to Col. 2, line 17[Non-transitory computer program products (i.e., physically embodied computer program products) are also described that store instructions, which when executed by one or more data processors of one or more computing systems, causes at least one data processor to perform operations herein. Similarly, computer systems are also described that may include one or more data processors and memory coupled to the one or more data processors.]); and
one or more non-transitory computer-readable media storing computing instructions that, when run on the one or more processors, cause the one or more processors to perform operations comprising (Fig. 3; Col. 1, line 64 to Col. 2, line 17):
training a machine learning model based on pairing information that is included in data, filtered based on filtering criteria, and corresponds to a query item pair (Col. 4, lines 17-30[At 20, a model can be trained using the received data. The model can map item names to referring expressions. For example, the training can include using the item name selected by the user (e.g., “Simpson Strong-tie 12-Gauge Angle”) as an input to the model and the search query (e.g., “angle bracket”) as a supervisory signal. By utilizing the search query (from the data characterizing the historical user interaction) as the supervisory signal and the selected item name as input, the model will be trained to map item names to referring expressions.]; Examiner notes that the received data is comparable to pairing information that is included in data, in view of Fig. 1; Col. 3, line 54 to Col. 4, line 20[At 10, data can be received characterizing historical user interaction with a search engine associated with a plurality of items. The historical user interaction can include a search query input by a user and the name of an item selected by a user in response to receiving results conducted according to the search query.]; Examiner notes that according to the instant specification ¶0050, “filtering the pairing information based on the filtering criteria includes analyzing pairing information to identify a search query and an associated item,” which is comparable to receiving historical user interaction data including a search query input by a user and the name of an item selected by a user as disclosed in Fabbrizio); and
operating the machine learning model to generate an output sequence that includes missing tokens that are used to modify metadata for future search retrieval associated with one or more products in a product catalog (Fig. 3; Col. 4, lines 17-30[At 20, a model can be trained using the received data. The model can map item names to referring expressions. For example, the training can include using the item name selected by the user (e.g., “Simpson Strong-tie 12-Gauge Angle”) as an input to the model and the search query (e.g., “angle bracket”) as a supervisory signal. By utilizing the search query (from the data characterizing the historical user interaction) as the supervisory signal and the selected item name as input, the model will be trained to map item names to referring expressions.]; Examiner notes that the newly mapped “referring expressions” to item names are comparable to “missing tokens” according to instant specification ¶0052, in view of Col. 5, lines 28-43[The set of semantically equivalent expression can be added to a lexicon associated with a tenant. The tenant can be, for example, a business owner deploying a conversational system via a web site for use in processing search queries or customer service requests. In some implementations, the conversational system can also include a natural language agent ensemble consisting of natural language processing resources, such as a dialog manager module. The lexicon can be used by the dialog manager to generate dialog outputs which can be associated with the item.]; Examiner notes that adding expressions to a lexicon associated with the business is comparable to modifying metadata for products in a product catalog).
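For context, the cited training scheme (selected item name used as the model input and the search query used as the supervisory signal) can be sketched as follows. This is an illustrative sketch only; the helper names and toy data below are hypothetical and form no part of the Fabbrizio disclosure or of the claims under examination:

```python
# Illustrative sketch: constructing (input, target) training pairs from
# historical search-interaction records, per the scheme quoted from
# Fabbrizio (item name -> model input, search query -> supervisory signal).
# All function names, keys, and data here are hypothetical.

def build_training_pairs(interactions):
    """Map each (query, selected item name) interaction record to a
    training pair in which the item name is the model input and the
    search query is the supervisory signal (target)."""
    pairs = []
    for record in interactions:
        query = record["search_query"]
        item_name = record["selected_item_name"]
        pairs.append((item_name, query))  # (input, supervisory signal)
    return pairs

historical = [
    {"search_query": "angle bracket",
     "selected_item_name": "Simpson Strong-tie 12-Gauge Angle"},
]
print(build_training_pairs(historical))
```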
Regarding Claim 2, Fabbrizio discloses the system of claim 1, Fabbrizio further discloses wherein the data includes search queries, impression information, and add-to-cart information (Fig. 1; Col. 3, line 54 to Col. 4, line 16[At 10, data can be received characterizing historical user interaction with a search engine associated with a plurality of items. The historical user interaction can include a search query input by a user and the name of an item selected by a user in response to receiving results conducted according to the search query… The historical user interaction data can be commonly found in one or more readily available sources. For example, the historical user interaction data can include clickstream data, click path data, and/or web log data associated with a web site where the item can be provided, such as an e-commerce website. In some implementations, the data can include item descriptions, item reviews, as well as purchasing, billing, and/or search query data associated with the item.]).
Regarding Claim 3, Fabbrizio discloses the system of claim 2, Fabbrizio further discloses wherein: the query item pair corresponds to a search query and an associated item that is linked to the search query based on the data (Fig. 1; Col. 3, line 54 to Col. 4, line 3[At 10, data can be received characterizing historical user interaction with a search engine associated with a plurality of items… The search engine can have previously performed a search and returned a list of results to the user. The user can have previously selected the most relevant result. For example, where the search query is “angle bracket”, the name selected by the user can be “Simpson Strong-tie 12-Gauge Angle.” Thus in this example, the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]).
Regarding Claim 5, Fabbrizio discloses the system of claim 1, and Fabbrizio further discloses wherein training the machine learning model comprises:
analyzing the pairing information to identify a search query and an associated item (Fig. 1; Col. 3, line 54 to Col. 4, line 3[At 10, data can be received characterizing historical user interaction with a search engine associated with a plurality of items… The search engine can have previously performed a search and returned a list of results to the user. The user can have previously selected the most relevant result. For example, where the search query is “angle bracket”, the name selected by the user can be “Simpson Strong-tie 12-Gauge Angle.” Thus in this example, the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]);
generating a keyword item pair for the search query and the associated item (Fig. 1; Col. 3, line 54 to Col. 4, line 30[At 10, data can be received characterizing historical user interaction with a search engine associated with a plurality of items… The user can have previously selected the most relevant result. For example, where the search query is “angle bracket”, the name selected by the user can be “Simpson Strong-tie 12-Gauge Angle.” Thus in this example, the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name… At 20, a model can be trained using the received data.]); and
generating an input sequence based on the keyword item pair (Figs. 1 and 2; Col. 4, lines 17-30[For example, the training can include using the item name selected by the user (e.g., “Simpson Strong-tie 12-Gauge Angle”) as an input to the model and the search query (e.g., “angle bracket”) as a supervisory signal. By utilizing the search query (from the data characterizing the historical user interaction) as the supervisory signal and the selected item name as input, the model will be trained to map item names to referring expressions.]).
Regarding Claim 6, Fabbrizio discloses the system of claim 5, Fabbrizio further discloses wherein the machine learning model is a Seq2Seq machine learning model (Fig. 2; Col. 6, lines 16-21[FIG. 2 illustrates an example encoder-decoder architecture 200 for an example model 210 provided according to the example method 100 of FIG. 1. The model 210 can be configured as a sequence-to-sequence model using a RNN architecture which can include an encoder 220 and a decoder 240.]).
Regarding Claim 7, Fabbrizio discloses the system of claim 5, Fabbrizio further discloses wherein the operations further comprise:
inputting the input sequence into an encoder of the machine learning model, the encoder being configured to generate a context vector (Fig. 2; Col. 6, line 48 to Col. 7, line 8[As illustrated in FIG. 2, x vector (x.sub.1, x.sub.2, x.sub.3, x.sub.4) is the encoder 220 input (where X_i represent the word embeddings vector for the word I in the input sequence), which corresponds to the item name (e.g., “Simpson Strong-Tie 12-Gauge Angle”), h is the encoder 220 output (h_t is the RNN hidden state function at timestep t which can be computed based on the previous input the configuration of the RNN cell (either LSTM or GRU))… the encoder 220 input (the x vector) can include 300 element dense vectors precomputed through a very large model, which can be trained on general English or specific domain documents such as product descriptions and reviews. An LSTM and/or GRU 260 can also be included, which can interface with encoder 220.] in view of Col. 5, lines 60-65[The vectors generated by the encoder and the decoder can be hidden layer outputs generate by the model respectively over time.]); and
transmitting the context vector to a decoder of the machine learning model, the decoder being configured to generate an output sequence based on the context vector (Fig. 2[showing the transmitting]; Col. 6, line 54 to Col. 7, line 3[As illustrated in FIG. 2…The encoder 220 network produces a representation of the input sequence which is pushed forward to the decoder 240 network where at each step, an output y_i is generated based on the conditional probability p that maximizes the likelihood of generating y_i based on the previous input X and y_i−1, y_i−2, etc.]).
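For context, the encoder/decoder data flow quoted above (input sequence encoded to a context vector, which the decoder consumes to generate the output sequence) can be sketched with a toy model. The averaging encoder and dot-product decoder below are deliberate simplifications, not the RNN/LSTM/GRU architecture of Fabbrizio; the embedding table and vocabulary are hypothetical:

```python
# Toy illustration of the quoted encoder-decoder flow: the encoder reduces
# the input token sequence to a fixed-size context vector, which is then
# passed to the decoder to generate an output sequence. The embeddings,
# vocabulary, and greedy decoding rule are hypothetical simplifications.

EMBED = {  # toy word-embedding table (hypothetical)
    "angle": [1.0, 0.0],
    "bracket": [0.0, 1.0],
}

def encode(tokens):
    """Toy encoder: average the token embeddings into one context vector."""
    vecs = [EMBED[t] for t in tokens]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def decode(context, steps=2):
    """Toy greedy decoder: at each step, emit the vocabulary word whose
    embedding has the highest dot product with the context vector."""
    def score(word):
        return sum(a * b for a, b in zip(EMBED[word], context))
    return [max(EMBED, key=score) for _ in range(steps)]

context = encode(["angle", "bracket"])   # encoder output (context vector)
output_sequence = decode(context)        # decoder consumes the context vector
print(context, output_sequence)
```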
Regarding Claim 8, Fabbrizio discloses the system of claim 7, Fabbrizio further discloses wherein:
the output sequence comprises the missing tokens (Fig. 2; Col. 5, lines 28-42[In some implementations, the model can be used to determine semantically equivalent referring expressions for items names. The set of semantically equivalent expression can be added to a lexicon associated with a tenant. The tenant can be, for example, a business owner deploying a conversational system via a web site for use in processing search queries or customer service requests.]; Examiner notes that semantically equivalent expressions are comparable to the missing tokens); and
the missing tokens comprise one or more keywords that are missing from the metadata (Fig. 2; Col. 5, lines 28-42[In some implementations, the model can be used to determine semantically equivalent referring expressions for items names. The set of semantically equivalent expression can be added to a lexicon associated with a tenant. The tenant can be, for example, a business owner deploying a conversational system via a web site for use in processing search queries or customer service requests]; Examiner notes that semantically equivalent referring expressions for item names are comparable to keywords for the associated item).
Regarding Claim 9, Fabbrizio discloses the system of claim 8, Fabbrizio further discloses wherein modifying the metadata comprises modifying the metadata to include the one or more keywords that are missing from the metadata (Fig. 2; Col. 5, lines 28-42[In some implementations, the model can be used to determine semantically equivalent referring expressions for items names. The set of semantically equivalent expression can be added to a lexicon associated with a tenant. The tenant can be, for example, a business owner deploying a conversational system via a web site for use in processing search queries or customer service requests]; Examiner notes that adding to a lexicon associated with a business is comparable to adding missing metadata for the associated item).
Regarding Claim 10, Fabbrizio discloses the system of claim 7, Fabbrizio further discloses wherein the operations further comprise:
generating the input sequence based on the keyword item pair (Figs. 1 and 2; Col. 4, lines 17-30[For example, the training can include using the item name selected by the user (e.g., “Simpson Strong-tie 12-Gauge Angle”) as an input to the model and the search query (e.g., “angle bracket”) as a supervisory signal. By utilizing the search query (from the data characterizing the historical user interaction) as the supervisory signal and the selected item name as input, the model will be trained to map item names to referring expressions.]);
generating a desired output sequence based on the keyword item pair (Fig. 2[Examiner notes that h, the encoder output, is comparable to a desired output sequence]; Col. 6, line 48 to Col. 7, line 8[As illustrated in FIG. 2, x vector (x.sub.1, x.sub.2, x.sub.3, x.sub.4) is the encoder 220 input (where X_i represent the word embeddings vector for the word I in the input sequence), which corresponds to the item name (e.g., “Simpson Strong-Tie 12-Gauge Angle”), h is the encoder 220 output (h_t is the RNN hidden state function at timestep t which can be computed based on the previous input the configuration of the RNN cell (either LSTM or GRU))]);
inputting the input sequence into the encoder of the machine learning model to generate a context vector (Fig. 2; Col. 6, line 48 to Col. 7, line 8[As illustrated in FIG. 2, x vector (x.sub.1, x.sub.2, x.sub.3, x.sub.4) is the encoder 220 input (where X_i represent the word embeddings vector for the word I in the input sequence), which corresponds to the item name (e.g., “Simpson Strong-Tie 12-Gauge Angle”), h is the encoder 220 output (h_t is the RNN hidden state function at timestep t which can be computed based on the previous input the configuration of the RNN cell (either LSTM or GRU))… the encoder 220 input (the x vector) can include 300 element dense vectors precomputed through a very large model, which can be trained on general English or specific domain documents such as product descriptions and reviews. An LSTM and/or GRU 260 can also be included, which can interface with encoder 220.] in view of Col. 5, lines 60-65[The vectors generated by the encoder and the decoder can be hidden layer outputs generate by the model respectively over time.]); and
transmitting the context vector and the desired output sequence to the decoder of the machine learning model, the decoder being configured to generate an output sequence based on the context vector and the desired output sequence (Fig. 2[showing the transmitting]; Col. 6, line 54 to Col. 7, line 3[As illustrated in FIG. 2…The encoder 220 network produces a representation of the input sequence which is pushed forward to the decoder 240 network where at each step, an output y_i is generated based on the conditional probability p that maximizes the likelihood of generating y_i based on the previous input X and y_i−1, y_i−2, etc.]).
Regarding Claim 11, Fabbrizio discloses a method implemented via execution of computing instructions configured to run at one or more processors and stored at one or more non-transitory computer-readable media, the method comprising (Fig. 3; Col. 1, line 64 to Col. 2, line 17[Non-transitory computer program products (i.e., physically embodied computer program products) are also described that store instructions, which when executed by one or more data processors of one or more computing systems, causes at least one data processor to perform operations herein. Similarly, computer systems are also described that may include one or more data processors and memory coupled to the one or more data processors.]):
training a machine learning model based on pairing information that is included in user engagement information filtered based on filtering criteria (Col. 4, lines 17-30[At 20, a model can be trained using the received data. The model can map item names to referring expressions. For example, the training can include using the item name selected by the user (e.g., “Simpson Strong-tie 12-Gauge Angle”) as an input to the model and the search query (e.g., “angle bracket”) as a supervisory signal. By utilizing the search query (from the data characterizing the historical user interaction) as the supervisory signal and the selected item name as input, the model will be trained to map item names to referring expressions.] in view of Fig. 1; Col. 3, line 54 to Col. 4, line 20[At 10, data can be received characterizing historical user interaction with a search engine associated with a plurality of items. The historical user interaction can include a search query input by a user and the name of an item selected by a user in response to receiving results conducted according to the search query.]; Examiner notes that according to the instant specification ¶0050, “filtering the pairing information based on the filtering criteria includes analyzing pairing information to identify a search query and an associated item,” which is comparable to receiving historical user interaction data including a search query input by a user and the name of an item selected by a user as disclosed in Fabbrizio); and
operating the machine learning model to generate an output sequence that includes missing tokens that are used to modify metadata for future search retrieval (Fig. 3; Col. 4, lines 17-30[At 20, a model can be trained using the received data. The model can map item names to referring expressions. For example, the training can include using the item name selected by the user (e.g., “Simpson Strong-tie 12-Gauge Angle”) as an input to the model and the search query (e.g., “angle bracket”) as a supervisory signal. By utilizing the search query (from the data characterizing the historical user interaction) as the supervisory signal and the selected item name as input, the model will be trained to map item names to referring expressions.]; Examiner notes that the newly mapped “referring expressions” to item names are comparable to “missing tokens” according to instant specification ¶0052, in view of Col. 5, lines 28-43[The set of semantically equivalent expression can be added to a lexicon associated with a tenant. The tenant can be, for example, a business owner deploying a conversational system via a web site for use in processing search queries or customer service requests. In some implementations, the conversational system can also include a natural language agent ensemble consisting of natural language processing resources, such as a dialog manager module. The lexicon can be used by the dialog manager to generate dialog outputs which can be associated with the item.]; Examiner notes that adding expressions to a lexicon associated with the business is comparable to modifying metadata for products in a product catalog).
Regarding Claim 12, Fabbrizio discloses the method of claim 11, Fabbrizio further discloses wherein the user engagement information includes search queries, impression information, add-to-cart information, and order information (Fig. 1; Col. 3, line 54 to Col. 4, line 16[At 10, data can be received characterizing historical user interaction with a search engine associated with a plurality of items. The historical user interaction can include a search query input by a user and the name of an item selected by a user in response to receiving results conducted according to the search query… The historical user interaction data can be commonly found in one or more readily available sources. For example, the historical user interaction data can include clickstream data, click path data, and/or web log data associated with a web site where the item can be provided, such as an e-commerce website. In some implementations, the data can include item descriptions, item reviews, as well as purchasing, billing, and/or search query data associated with the item.]).
Regarding Claim 13, Fabbrizio discloses the method of claim 12, Fabbrizio further discloses wherein:
the pairing information corresponds to a query item pair (Fig. 1; Col. 3, line 54 to Col. 4, line 3[the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]; Examiner notes that the search query and the item name are comparable to pairing information); and
the query item pair corresponds to a search query and an associated item that is linked to the search query based on the user engagement information (Fig. 1; Col. 3, line 54 to Col. 4, line 3[At 10, data can be received characterizing historical user interaction with a search engine associated with a plurality of items… The search engine can have previously performed a search and returned a list of results to the user. The user can have previously selected the most relevant result. For example, where the search query is “angle bracket”, the name selected by the user can be “Simpson Strong-tie 12-Gauge Angle.” Thus in this example, the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]).
Regarding Claim 15, Fabbrizio discloses the method of claim 11, Fabbrizio further discloses wherein training the machine learning model comprises:
analyzing the pairing information to identify a search query and an associated item (Fig. 1; Col. 3, line 54 to Col. 4, line 3[At 10, data can be received characterizing historical user interaction with a search engine associated with a plurality of items… The search engine can have previously performed a search and returned a list of results to the user. The user can have previously selected the most relevant result. For example, where the search query is “angle bracket”, the name selected by the user can be “Simpson Strong-tie 12-Gauge Angle.” Thus in this example, the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]);
analyzing metadata for the associated item and keywords corresponding to the search query (Fig. 1; Col. 3, line 54 to Col. 4, line 16[The historical user interaction data can be commonly found in one or more readily available sources. For example, the historical user interaction data can include clickstream data, click path data, and/or web log data associated with a web site where the item can be provided, such as an e-commerce website. In some implementations, the data can include item descriptions, item reviews, as well as purchasing, billing, and/or search query data associated with the item.]);
generating a keyword item pair for the search query and the associated item (Fig. 1; Col. 3, line 54 to Col. 4, line 30[At 10, data can be received characterizing historical user interaction with a search engine associated with a plurality of items… The user can have previously selected the most relevant result. For example, where the search query is “angle bracket”, the name selected by the user can be “Simpson Strong-tie 12-Gauge Angle.” Thus in this example, the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name… At 20, a model can be trained using the received data.]); and
generating an input sequence based on the keyword item pair (Figs. 1 and 2; Col. 4, lines 17-30[For example, the training can include using the item name selected by the user (e.g., “Simpson Strong-tie 12-Gauge Angle”) as an input to the model and the search query (e.g., “angle bracket”) as a supervisory signal. By utilizing the search query (from the data characterizing the historical user interaction) as the supervisory signal and the selected item name as input, the model will be trained to map item names to referring expressions.]).
Regarding Claim 16, Fabbrizio discloses the method of claim 11, Fabbrizio further discloses wherein the machine learning model is a Seq2Seq machine learning model (Fig. 2; Col. 6, lines 16-21[FIG. 2 illustrates an example encoder-decoder architecture 200 for an example model 210 provided according to the example method 100 of FIG. 1. The model 210 can be configured as a sequence-to-sequence model using a RNN architecture which can include an encoder 220 and a decoder 240.]).
Regarding Claim 17, Fabbrizio discloses the method of claim 15. Fabbrizio further discloses wherein the operations further comprise:
inputting the input sequence into an encoder of the machine learning model, the encoder being configured to generate a context vector (Fig. 2; Col. 6, line 48 to Col. 7, line 8[As illustrated in FIG. 2, x vector (x.sub.1, x.sub.2, x.sub.3, x.sub.4) is the encoder 220 input (where X_i represent the word embeddings vector for the word I in the input sequence), which corresponds to the item name (e.g., “Simpson Strong-Tie 12-Gauge Angle”), h is the encoder 220 output (h_t is the RNN hidden state function at timestep t which can be computed based on the previous input the configuration of the RNN cell (either LSTM or GRU))… the encoder 220 input (the x vector) can include 300 element dense vectors precomputed through a very large model, which can be trained on general English or specific domain documents such as product descriptions and reviews. An LSTM and/or GRU 260 can also be included, which can interface with encoder 220.] in view of Col. 5, lines 60-65[The vectors generated by the encoder and the decoder can be hidden layer outputs generate by the model respectively over time.]); and
transmitting the context vector to a decoder of the machine learning model, the decoder being configured to generate an output sequence based on the context vector (Fig. 2[showing the transmitting]; Col. 6, line 54 to Col. 7, line 3[As illustrated in FIG. 2…The encoder 220 network produces a representation of the input sequence which is pushed forward to the decoder 240 network where at each step, an output y_i is generated based on the conditional probability p that maximizes the likelihood of generating y_i based on the previous input X and y_i−1, y_i−2, etc.]).
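As an illustrative aid, the encoder-decoder data flow cited above (an encoder reduces the input sequence to a single context vector, and a decoder generates an output sequence from that vector) can be sketched as follows. This is a simplified sketch by the examiner, not part of the Fabbrizio reference; the toy embeddings, the averaging rule, and the nearest-token decoding rule are hypothetical stand-ins for the trained LSTM/GRU components described in the citation.

```python
from typing import Dict, List

# Toy word embeddings (hypothetical values standing in for learned vectors).
EMBEDDINGS: Dict[str, List[float]] = {
    "simpson": [1.0, 0.0],
    "strong-tie": [0.0, 1.0],
    "12-gauge": [1.0, 1.0],
    "angle": [0.5, 0.5],
}

def encode(tokens: List[str]) -> List[float]:
    """Reduce the input sequence to one fixed-size context vector
    (here, a simple element-wise average of the token embeddings)."""
    vectors = [EMBEDDINGS[t] for t in tokens]
    dims = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dims)]

def decode(context: List[float]) -> List[str]:
    """Generate an output sequence from the context vector alone; here,
    emit the single token whose embedding is nearest to the context."""
    def dist(vec: List[float]) -> float:
        return sum((a - b) ** 2 for a, b in zip(vec, context))
    best = min(EMBEDDINGS, key=lambda tok: dist(EMBEDDINGS[tok]))
    return [best]

# The context vector is the only information passed encoder -> decoder.
context = encode(["simpson", "strong-tie", "12-gauge", "angle"])
output = decode(context)
```

The point of the sketch is structural: the decoder never sees the input tokens, only the fixed-size context vector produced by the encoder, which mirrors the "pushed forward" representation described at Col. 6, line 54 to Col. 7, line 3.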
Regarding Claim 18, Fabbrizio discloses the method of claim 17. Fabbrizio further discloses wherein:
the output sequence comprises the missing tokens (Fig. 2; Col. 5, lines 28-42[In some implementations, the model can be used to determine semantically equivalent referring expressions for items names. The set of semantically equivalent expression can be added to a lexicon associated with a tenant. The tenant can be, for example, a business owner deploying a conversational system via a web site for use in processing search queries or customer service requests.]; Examiner notes that semantically equivalent expressions are comparable to the missing tokens); and
the missing tokens comprise one or more keywords that are missing from the metadata for the associated item (Fig. 2; Col. 5, lines 28-42[In some implementations, the model can be used to determine semantically equivalent referring expressions for items names. The set of semantically equivalent expression can be added to a lexicon associated with a tenant. The tenant can be, for example, a business owner deploying a conversational system via a web site for use in processing search queries or customer service requests]; Examiner notes that semantically equivalent referring expressions for item names are comparable to keywords for the associated item).
Regarding Claim 19, Fabbrizio discloses the method of claim 18. Fabbrizio further discloses wherein modifying the metadata comprises modifying the metadata for the associated item to include the one or more keywords that are missing from the metadata for the associated item (Fig. 2; Col. 5, lines 28-42[In some implementations, the model can be used to determine semantically equivalent referring expressions for items names. The set of semantically equivalent expression can be added to a lexicon associated with a tenant. The tenant can be, for example, a business owner deploying a conversational system via a web site for use in processing search queries or customer service requests]; Examiner notes that adding to a lexicon associated with a business is comparable to adding missing metadata for the associated item).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Fabbrizio in view of Preetham et al. (US 8,891,858 B1 [previously cited]).
Regarding Claim 4, Fabbrizio discloses the system of claim 1. Fabbrizio further discloses wherein the operations further comprise:
analyzing the pairing information to identify a search query and an associated item (Fig. 1; Col. 3, line 54 to Col. 4, line 3[the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]; Examiner notes that the search query and the item name are comparable to pairing information);
determining a relevance metric for the search query and the associated item (Fig. 1; Col. 3, line 54 to Col. 4, line 3[The user can have previously selected the most relevant result. For example, where the search query is “angle bracket”, the name selected by the user can be “Simpson Strong-tie 12-Gauge Angle.” Thus in this example, the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]); and
removing the search query and the associated item from further processing if the relevance metric is below a threshold (Fig. 1; Col. 3, line 54 to Col. 4, line 3[The user can have previously selected the most relevant result. For example, where the search query is “angle bracket”, the name selected by the user can be “Simpson Strong-tie 12-Gauge Angle.” Thus in this example, the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]).
Although Fabbrizio discloses a search query and an associated item, Fabbrizio does not explicitly disclose removing the search query and the item if a relevance metric is below a threshold.
However, Preetham et al., hereinafter, Preetham, teaches removing query items that are below a relevance threshold (Col. 1, line 48 to Col. 2, line 7[removing, from the set of training images, at least some of the second images of the second portion of the set training images identified as outlier images, the outlier images being training images for which the image relevance score received from the first re-trained image relevance model is below a threshold score]).
The system of Preetham is applicable to the system of Fabbrizio as they share characteristics and capabilities, namely, they are both targeted to users performing online queries. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the item search queries as disclosed by Fabbrizio to include removing queries that are below a relevance threshold as taught by Preetham. One of ordinary skill in the art would have been motivated to expand the system of Fabbrizio in order to better correlate text queries with images (Col. 2, lines 30-32).
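The relevance-threshold filtering taught by Preetham, as applied in the combination to query item pairs, can be sketched as follows. This is an illustrative simplification by the examiner, not Preetham's image-scoring implementation; the function name, the metric values, and the threshold are hypothetical.

```python
from typing import List, Tuple

def filter_pairs(pairs: List[Tuple[str, str, float]],
                 threshold: float) -> List[Tuple[str, str]]:
    """Keep only (query, item) pairs whose relevance metric meets the
    threshold; pairs below it are removed from further processing."""
    return [(query, item) for query, item, relevance in pairs
            if relevance >= threshold]

# Hypothetical query/item pairs with relevance metrics.
pairs = [
    ("angle bracket", "Simpson Strong-tie 12-Gauge Angle", 0.92),
    ("angle bracket", "Decorative Shelf Pin", 0.10),
]
kept = filter_pairs(pairs, threshold=0.5)
```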
Regarding Claim 14, Fabbrizio discloses the method of claim 11. Fabbrizio further discloses the method further comprising:
analyzing pairing information to identify a search query and an associated item (Fig. 1; Col. 3, line 54 to Col. 4, line 3[the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]; Examiner notes that the search query and the item name are comparable to pairing information);
determining a relevance metric for the search query and the associated item (Fig. 1; Col. 3, line 54 to Col. 4, line 3[The user can have previously selected the most relevant result. For example, where the search query is “angle bracket”, the name selected by the user can be “Simpson Strong-tie 12-Gauge Angle.” Thus in this example, the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]); and
removing the search query and the associated item from further processing if the relevance metric is below a threshold (Fig. 1; Col. 3, line 54 to Col. 4, line 3[The user can have previously selected the most relevant result. For example, where the search query is “angle bracket”, the name selected by the user can be “Simpson Strong-tie 12-Gauge Angle.” Thus in this example, the data characterizing the historical user interaction can include the text string “angle bracket” as the search query and the text string “Simpson Strong-tie 12-Gauge Angle” as the item name.]).
Although Fabbrizio discloses a search query and an associated item, Fabbrizio does not explicitly disclose removing the search query and the item if a relevance metric is below a threshold.
However, Preetham, teaches removing query items that are below a relevance threshold (Col. 1, line 48 to Col. 2, line 7[removing, from the set of training images, at least some of the second images of the second portion of the set training images identified as outlier images, the outlier images being training images for which the image relevance score received from the first re-trained image relevance model is below a threshold score]).
The method of Preetham is applicable to the method of Fabbrizio as they share characteristics and capabilities, namely, they are both targeted to users performing online queries. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the item search queries as disclosed by Fabbrizio to include removing queries that are below a relevance threshold as taught by Preetham. One of ordinary skill in the art would have been motivated to expand the method of Fabbrizio in order to better correlate text queries with images (Col. 2, lines 30-32).
Regarding Claim 20, Fabbrizio discloses the method of claim 17. Fabbrizio further discloses the method further comprising:
operating the machine learning model in a training stage by:
generating the input sequence based on the keyword item pair (Figs. 1 and 2; Col. 4, lines 17-30[For example, the training can include using the item name selected by the user (e.g., “Simpson Strong-tie 12-Gauge Angle”) as an input to the model and the search query (e.g., “angle bracket”) as a supervisory signal. By utilizing the search query (from the data characterizing the historical user interaction) as the supervisory signal and the selected item name as input, the model will be trained to map item names to referring expressions.]);
generating a desired output sequence based on the keyword item pair (Fig. 2[Examiner notes that h, the encoder output, is comparable to a desired output sequence]; Col. 6, line 48 to Col. 7, line 8[As illustrated in FIG. 2, x vector (x.sub.1, x.sub.2, x.sub.3, x.sub.4) is the encoder 220 input (where X_i represent the word embeddings vector for the word I in the input sequence), which corresponds to the item name (e.g., “Simpson Strong-Tie 12-Gauge Angle”), h is the encoder 220 output (h_t is the RNN hidden state function at timestep t which can be computed based on the previous input the configuration of the RNN cell (either LSTM or GRU))]);
inputting the input sequence into the encoder of the machine learning model to generate a context vector (Fig. 2; Col. 6, line 48 to Col. 7, line 8[As illustrated in FIG. 2, x vector (x.sub.1, x.sub.2, x.sub.3, x.sub.4) is the encoder 220 input (where X_i represent the word embeddings vector for the word I in the input sequence), which corresponds to the item name (e.g., “Simpson Strong-Tie 12-Gauge Angle”), h is the encoder 220 output (h_t is the RNN hidden state function at timestep t which can be computed based on the previous input the configuration of the RNN cell (either LSTM or GRU))… the encoder 220 input (the x vector) can include 300 element dense vectors precomputed through a very large model, which can be trained on general English or specific domain documents such as product descriptions and reviews. An LSTM and/or GRU 260 can also be included, which can interface with encoder 220.] in view of Col. 5, lines 60-65[The vectors generated by the encoder and the decoder can be hidden layer outputs generate by the model respectively over time.]); and
transmitting the context vector and the desired output sequence to the decoder of the machine learning model, the decoder configured to generate an output sequence based on the context vector and the desired output sequence (Fig. 2[showing the transmitting]; Col. 6, line 54 to Col. 7, line 3[As illustrated in FIG. 2…The encoder 220 network produces a representation of the input sequence which is pushed forward to the decoder 240 network where at each step, an output y_i is generated based on the conditional probability p that maximizes the likelihood of generating y_i based on the previous input X and y_i−1, y_i−2, etc.]).
Although Fabbrizio discloses training a machine learning model, Fabbrizio does not explicitly disclose a re-training stage for the machine learning model.
However, Preetham teaches re-training a model (Col. 1, line 48 to Col. 2, line 7[the query being a unique set of one or more query terms received by a search system as a query input, and re-training the image relevance model, the re-training comprising generating a first re-trained image relevance model based on content feature values of first images of a first portion of training images in the set of training images, receiving, from the first re-trained image relevance model, image relevance scores for second images of a second portion of the set of training images, removing, from the set of training images, at least some of the second images of the second portion of the set training images identified as outlier images, the outlier images being training images for which the image relevance score received from the first re-trained image relevance model is below a threshold score, and generating a second re-trained image relevance model based on content feature values of the first images of the first portion and the second images of the second portion that remain following removal of the at least some of the second images of the second portion of the set of training images.]).
The method of Preetham is applicable to the method of Fabbrizio as they share characteristics and capabilities, namely, they are both targeted to users performing online queries. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the machine learning model as disclosed by Fabbrizio to include a re-training stage as taught by Preetham. One of ordinary skill in the art would have been motivated to expand the method of Fabbrizio in order to better correlate text queries with images (Col. 2, lines 30-32).
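The training arrangement cited from Fabbrizio (the selected item name as the model input and the user's search query as the supervisory signal, so that the model learns to map item names to referring expressions) can be sketched as follows. This is the examiner's simplified illustration: the dictionary lookup merely stands in for the trained sequence-to-sequence network, and the function names are hypothetical.

```python
from typing import Dict, List, Tuple

def train(pairs: List[Tuple[str, str]]) -> Dict[str, str]:
    """Each training pair is (input item name, supervisory search query).
    The toy 'model' simply memorizes the supervised target for each input."""
    model: Dict[str, str] = {}
    for item_name, search_query in pairs:
        model[item_name] = search_query  # search query as supervisory signal
    return model

def predict(model: Dict[str, str], item_name: str) -> str:
    """Map an item name to a referring expression learned during training."""
    return model.get(item_name, "<unknown>")

# Hypothetical historical user interaction data (Fabbrizio's example pair).
history = [("Simpson Strong-tie 12-Gauge Angle", "angle bracket")]
model = train(history)
```

A re-training stage, as taught by Preetham, would repeat `train` after removing low-relevance pairs from `history`.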
Response to Arguments
Applicant’s arguments on pages 9-10 of the remarks filed 10/21/2025, with respect to the previous 35 USC § 101 rejections, have been fully considered but are not persuasive.
Applicant argues on pages 9-10 of the remarks that the amended claims do not recite any of the enumerated judicial exceptions and cites USPTO Example 39 for support. Examiner respectfully disagrees. The claims fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas enumerated in MPEP 2106.04(a)(2)(II) in that they recite a commercial interaction. According to MPEP 2106.04(a)(2)(II), the methods of organizing human activity relate to concepts of fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). Operating a model based on pairing information that is included in data, filtered based on filtering criteria, and corresponds to a query item pair, to generate an output sequence that includes missing tokens that are used to modify metadata for future search retrieval associated with one or more products in a product catalog, as recited in claim 1, constitutes a commercial interaction. Accordingly, Examiner maintains that under Step 2A, Prong 1, the claims recite an abstract idea.
Furthermore, unlike Example 39, the presently claimed invention does recite a judicial exception enumerated in MPEP 2106.04(a)(2)(II). The claims recite a commercial interaction and fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas (see MPEP 2106.04(a)).
Accordingly, Examiner maintains that the invention is directed to a judicial exception without significantly more. The claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the 35 USC § 101 rejections are maintained.
Applicant’s arguments on page 11 of the remarks filed 10/21/2025, with respect to the previous 35 USC § 102/103 rejections, have been fully considered but are not persuasive. As no remarks were directed to the specific nature of how the prior art failed to teach the previously presented claims, the rejections over Fabbrizio and Preetham have been maintained. As per MPEP 2111, the pending claims must be given their broadest reasonable interpretation consistent with the specification.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHOORA LADONI whose email is Ahoora.Ladoni@uspto.gov and telephone number is (703) 756-5617. The examiner can normally be reached M-F 0900–1700 ET.
Examiner interviews are available via telephone, in-person and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AHOORA LADONI/Examiner, Art Unit 3689
/MARISSA THEIN/Supervisory Patent Examiner, Art Unit 3689