DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application is a continuation of Application No. 17/666,531 filed 7 February 2022.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-4, 11-14 and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 5, 7, 9, 10, 13 and 15 of U.S. Patent No. 12,259,894 in view of US PGPub 2010/0293234 to Schmidt (hereafter Schmidt).
The independent claims of the Application fail to explicitly claim storing the query as a query feature vector and inputting the query feature vector into the neural network. It would have been obvious to one of ordinary skill in the art for the claims of the Application to include these steps since, in order to receive an output from a neural network, there first needs to be an input, and storing the input merely allows it to be reused.
The independent claims fail to recite the limitation of storing a taxonomy of items that are connected through a hierarchy based on attributes of the items. Schmidt teaches the use of a taxonomy associated with a catalog of products for filtering products, including the further limitation of storing a taxonomy of items that are connected through a hierarchy based on attributes of the items (see [0011], lines 8-13; [0028]; [0032]; and Fig. 2, which is an example taxonomy for Desktop PCs). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to store the taxonomy information of Chaidaroon in a taxonomy in the manner taught by Schmidt. One would have been motivated to do so because items in a catalog are often described by a taxonomy, which describes the set of products with a set of attributes that assume values (Schmidt: see [0004]).
The limitations in the patent of determining a value of a category associated with each item of the set of items from the item database, and generating a whitelist of values for the category based on the values of the category associated with each item of the set, equate to the "based on the taxonomy of items" limitation of the current application.
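As an illustrative aid only (not part of the record), the limitation of "storing a taxonomy of items that are connected through a hierarchy based on attributes of the items" can be sketched as a simple tree of attribute-bearing nodes, loosely modeled on Schmidt's Desktop PC example (Fig. 2). All class, attribute, and category names below are invented:

```python
# Sketch of a stored taxonomy: items/categories connected through a
# hierarchy based on their attributes. Names are illustrative only.

class TaxonomyNode:
    def __init__(self, name, attributes=None):
        self.name = name                    # e.g., a category such as "Desktop PCs"
        self.attributes = attributes or {}  # attribute values defining this node
        self.children = []                  # sub-categories connected in the hierarchy

    def add_child(self, child):
        self.children.append(child)
        return child

    def find(self, **wanted):
        """Return all nodes in this subtree whose attributes match the query."""
        matches = []
        if all(self.attributes.get(k) == v for k, v in wanted.items()):
            matches.append(self)
        for child in self.children:
            matches.extend(child.find(**wanted))
        return matches

# Build a small hierarchy loosely modeled on Schmidt's Desktop PC example.
root = TaxonomyNode("Computers")
desktops = root.add_child(TaxonomyNode("Desktop PCs", {"form_factor": "desktop"}))
desktops.add_child(TaxonomyNode("Gaming PCs", {"form_factor": "desktop", "use": "gaming"}))
desktops.add_child(TaxonomyNode("Office PCs", {"form_factor": "desktop", "use": "office"}))

print([n.name for n in root.find(use="gaming")])  # ['Gaming PCs']
```

Filtering by an attribute value (here `use="gaming"`) walks the hierarchy and returns only the matching categories, which is the filtering role the taxonomy plays in Schmidt.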
Application No. 19/078,265
U.S. Patent No. 12,259,894
1. A computer-implemented method, comprising:
storing a taxonomy of items that are connected through a hierarchy based on attributes of the items;
receiving, at an online system, a query directed at a search interface;
applying a first neural network, by the online system, to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network, wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network;
applying a second neural network to extract item embeddings for each of a plurality of items maintained in an item database by the online system, each item embedding corresponding to an item offered by the online system and representing the item in the latent space of the second neural network, wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network;
comparing the embedding for the query to the item embeddings based on the taxonomy of items to identify a set of items; and
generating, as a response to the query directed at the search interface, a query result comprising the set of items.
1. A method for improving a search interface using embeddings extracted from a neural network, the method comprising:
receiving, at an online system, a query directed at the search interface;
applying the neural network, by the online system, to extract an embedding for the query, the embedding for the query representing the query in a latent space of the neural network, wherein applying the neural network to extract the embedding for the query comprises: storing the query as a query feature vector, inputting the query feature vector into the neural network, and extracting a query latent vector in a first hidden layer of the neural network;
applying the neural network to extract item embeddings for each of a plurality of items maintained in an item database by the online system, each item embedding corresponding to an item offered by the online system and representing the item in the latent space of the neural network, wherein applying the neural network to extract an item embedding for an item comprises: storing the item as an item feature vector, inputting the item feature vector into the neural network, and extracting an item latent vector in a second hidden layer of the neural network;
comparing, in the latent space of the neural network, the embedding for the query to the item embeddings to select a set of items corresponding to item embeddings that are selected in the latent space;
determining a value of a category associated with each item of the set of items from the item database; generating a whitelist of values for the category based on the values of the category associated with each item of the set; and
generating, as a response to the query directed at the search interface that relies on the embeddings from the neural network, a query result comprising a plurality of items, wherein generating the plurality of items comprises removing one or more items having values for the category that are not included in the whitelist of values for the category.
Application claim 2 – Patent claim 2
Application claim 3 – Patent claim 5
Application claim 4 – Patent claim 7
Application claim 11 – Patent claim 9
Application claim 12 – Patent claim 10
Application claim 13 – Patent claim 13
Application claim 14 – Patent claim 7
Application claim 20 – Patent claim 15
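As an illustrative aid only (not part of the record), the flow recited in patent claim 1 above — storing queries and items as feature vectors, extracting latent vectors from a hidden layer, comparing embeddings in the latent space, and removing items whose category values fall outside the generated whitelist — can be approximated in a short sketch. The neural network is stubbed with a fixed random matrix, and every name below is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))  # stand-in for a trained hidden layer

def embed(feature_vector):
    """Stub for 'extracting a latent vector in a hidden layer': one affine map + tanh."""
    return np.tanh(W.T @ feature_vector)

# Item database: a feature vector plus a category value for each item.
items = {
    "laptop-a": (rng.normal(size=8), "electronics"),
    "laptop-b": (rng.normal(size=8), "electronics"),
    "novel-c":  (rng.normal(size=8), "books"),
}

def search(query_features, top_k=2):
    q = embed(query_features)  # query embedding in the latent space
    # Rank items by dot-product similarity between embeddings in the latent space.
    ranked = sorted(items, key=lambda name: -float(q @ embed(items[name][0])))
    selected = ranked[:top_k]
    # Determine the category value of each selected item and build the whitelist...
    whitelist = {items[name][1] for name in selected}
    # ...then remove items whose category value is not in the whitelist.
    return [name for name in ranked if items[name][1] in whitelist]

results = search(rng.normal(size=8))
print(results)
```

The `whitelist` set corresponds to the claimed "whitelist of values for the category"; any candidate whose category value is not whitelisted is removed from the query result.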
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Determining whether claims are statutory under 35 U.S.C. 101 involves a two-step analysis. Step 1 requires a determination of whether the claims are directed to the statutory categories of invention. Step 2 requires a determination of whether the claims are directed to a judicial exception without significantly more. Step 2 is divided into Step 2A and Step 2B, with Step 2A having two prongs (part 1 and part 2). See MPEP 2106.
Claim 1 recites a computer-implemented method, comprising: storing a taxonomy of items that are connected through a hierarchy based on attributes of the items; receiving, at an online system, a query directed at a search interface; applying a first neural network, by the online system, to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network, wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network; applying a second neural network to extract item embeddings for each of a plurality of items maintained in an item database by the online system, each item embedding corresponding to an item offered by the online system and representing the item in the latent space of the second neural network, wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network; comparing the embedding for the query to the item embeddings based on the taxonomy of items to identify a set of items; and generating, as a response to the query directed at the search interface, a query result comprising the set of items.
Pursuant to Step 2A, part 1, claims are analyzed to determine whether they are directed to an abstract idea. Pursuant to MPEP 2106, claims are deemed to be directed to an abstract idea if, under their broadest reasonable interpretation, they fall within one of the enumerated categories of (a) mathematical concepts, (b) certain methods of organizing human activity, and (c) mental processes. Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. See MPEP 2111.
The limitations of storing a taxonomy of items that are connected through a hierarchy based on attributes of the items; extracting an embedding for the query; extracting item embeddings for each of a plurality of items maintained in an item database, each item embedding corresponding to an item offered by the online system; comparing the embedding for the query to the item embeddings based on the taxonomy of items to identify a set of items; and generating, as a response to the query, a query result comprising the set of items, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind (including an observation, evaluation, judgment, or opinion) except for the recitation of generic computer components. For example, these limitations depict creating vectors for a query and for items in a catalog, then comparing the query vector to the item vectors and to the taxonomy to generate a list of items that meet the query. If a limitation, under its broadest reasonable interpretation, covers the performance of the limitation in the mind except for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Pursuant to Step 2A, part 2, claims are analyzed to determine whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). One way to determine integration into a practical application is when the claimed invention improves the functioning of a computer or improves another technology or technical field. To evaluate an improvement to a computer or technical field, the specification must set forth an improvement in technology and the claim itself must reflect the disclosed improvement. See MPEP 2106.04(d)(1).
This judicial exception is not integrated into a practical application. The claim recites the additional elements of an online system, a search interface, a first neural network and a second neural network. The elements are recited at a high level of generality (i.e., a generic computer performing the generic computer functions of receiving a query and extracting vectors) such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). The additional elements of applying a first neural network to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network, wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network, and applying a second neural network to extract item embeddings for each of a plurality of items, representing the item in the latent space of the second neural network, wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network, are generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). These are inherent attributes and functions of a deep learning neural network. The claim also recites the additional element of receiving, at an online system, a query directed at a search interface. This element adds insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)) since the element is receiving data. The term "extra-solution activity" can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. Extra-solution activity includes both pre-solution and post-solution activity.
An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent. An example of post-solution activity is an element that is not integrated into the claim as a whole, e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent. MPEP 2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Pursuant to Step 2B, claims are analyzed to determine whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim recites the additional elements of an online system, a search interface, a first neural network and a second neural network. The elements are recited at a high level of generality (i.e., a generic computer performing the generic computer functions of receiving a query and extracting vectors) such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). The additional elements of applying a first neural network to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network, wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network, and applying a second neural network to extract item embeddings for each of a plurality of items, representing the item in the latent space of the second neural network, wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network, are generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). These are inherent attributes and functions of a deep learning neural network. The claim also recites the additional element of receiving, at an online system, a query directed at a search interface. This element adds insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)) since the element is receiving data. The term "extra-solution activity" can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim.
Extra-solution activity includes both pre-solution and post-solution activity. An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent. An example of post-solution activity is an element that is not integrated into the claim as a whole, e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent. MPEP 2106.05(g). At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). The limitation is directed to the insignificant extra-solution activity (IESA) of receiving data, e.g., using the Internet to gather data, performing repetitive calculations, electronic recordkeeping, storing and retrieving information in memory, electronically scanning or extracting data from a physical document, a web browser's back and forward button functionality, recording a customer's order, shuffling and dealing a standard deck of cards, restricting public access to media by requiring a consumer to view an advertisement, presenting offers and gathering statistics, determining an estimated outcome and setting a price, arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price, which is well understood, routine, and conventional. See MPEP 2106.05(d), subsection II and the Berkheimer Memo.
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer, insignificant extra-solution activity and generally linking the use of the judicial exception to a particular technological environment or field of use, which do not provide an inventive concept. The claim is not patent eligible.
Claim 11 recites a non-transitory computer-readable medium configured to store code comprising instructions, wherein the instructions, when executed by one or more processors, cause the one or more processors to: store a taxonomy of items that are connected through a hierarchy based on attributes of the items; receive, at an online system, a query directed at a search interface; apply a first neural network, by the online system, to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network, wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network; apply a second neural network to extract item embeddings for each of a plurality of items maintained in an item database by the online system, each item embedding corresponding to an item offered by the online system and representing the item in the latent space of the second neural network, wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network; compare the embedding for the query to the item embeddings based on the taxonomy of items to identify a set of items; and generate, as a response to the query directed at the search interface, a query result comprising the set of items.
Pursuant to Step 2A, part 1, claims are analyzed to determine whether they are directed to an abstract idea. Pursuant to MPEP 2106, claims are deemed to be directed to an abstract idea if, under their broadest reasonable interpretation, they fall within one of the enumerated categories of (a) mathematical concepts, (b) certain methods of organizing human activity, and (c) mental processes. Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. See MPEP 2111.
The limitations of storing a taxonomy of items that are connected through a hierarchy based on attributes of the items; extracting an embedding for the query; extracting item embeddings for each of a plurality of items maintained in an item database, each item embedding corresponding to an item offered by the online system; comparing the embedding for the query to the item embeddings based on the taxonomy of items to identify a set of items; and generating, as a response to the query, a query result comprising the set of items, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind (including an observation, evaluation, judgment, or opinion) except for the recitation of generic computer components. For example, these limitations depict creating vectors for a query and for items in a catalog, then comparing the query vector to the item vectors and to the taxonomy to generate a list of items that meet the query. If a limitation, under its broadest reasonable interpretation, covers the performance of the limitation in the mind except for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Pursuant to Step 2A, part 2, claims are analyzed to determine whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). One way to determine integration into a practical application is when the claimed invention improves the functioning of a computer or improves another technology or technical field. To evaluate an improvement to a computer or technical field, the specification must set forth an improvement in technology and the claim itself must reflect the disclosed improvement. See MPEP 2106.04(d)(1).
This judicial exception is not integrated into a practical application. The claim recites the additional elements of a medium, code, processors, an online system, a search interface, a first neural network and a second neural network. The elements are recited at a high level of generality (i.e., a generic computer performing the generic computer functions of storing, receiving, applying, comparing, and generating) such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). The additional elements of applying a first neural network to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network, wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network, and applying a second neural network to extract item embeddings for each of a plurality of items, representing the item in the latent space of the second neural network, wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network, are generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). These are inherent attributes and functions of a deep learning neural network. The claim also recites the additional element of receiving, at an online system, a query directed at a search interface. This element adds insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)) since the element is receiving data. The term "extra-solution activity" can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim.
Extra-solution activity includes both pre-solution and post-solution activity. An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent. An example of post-solution activity is an element that is not integrated into the claim as a whole, e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent. MPEP 2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Pursuant to Step 2B, claims are analyzed to determine whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim recites the additional elements of a medium, code, processors, an online system, a search interface, a first neural network and a second neural network. The elements are recited at a high level of generality (i.e., a generic computer performing the generic computer functions of storing, receiving, applying, comparing, and generating) such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). The additional elements of applying a first neural network to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network, wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network, and applying a second neural network to extract item embeddings for each of a plurality of items, representing the item in the latent space of the second neural network, wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network, are generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). These are inherent attributes and functions of a deep learning neural network. The claim also recites the additional element of receiving, at an online system, a query directed at a search interface. This element adds insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)) since the element is receiving data. The term "extra-solution activity" can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim.
Extra-solution activity includes both pre-solution and post-solution activity. An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent. An example of post-solution activity is an element that is not integrated into the claim as a whole, e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent. MPEP 2106.05(g). At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). The limitation is directed to the insignificant extra-solution activity (IESA) of receiving data, e.g., using the Internet to gather data, performing repetitive calculations, electronic recordkeeping, storing and retrieving information in memory, electronically scanning or extracting data from a physical document, a web browser's back and forward button functionality, recording a customer's order, shuffling and dealing a standard deck of cards, restricting public access to media by requiring a consumer to view an advertisement, presenting offers and gathering statistics, determining an estimated outcome and setting a price, arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price, which is well understood, routine, and conventional. See MPEP 2106.05(d), subsection II and the Berkheimer Memo.
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer, insignificant extra-solution activity and generally linking the use of the judicial exception to a particular technological environment or field of use, which do not provide an inventive concept. The claim is not patent eligible.
Claim 20 recites a system comprising: one or more processors; and memory storing code comprising instructions, wherein the instructions, when executed by the one or more processors, cause the one or more processors to: store a taxonomy of items that are connected through a hierarchy based on attributes of the items; receive, at an online system, a query directed at a search interface; apply a first neural network, by the online system, to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network, wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network; apply a second neural network to extract item embeddings for each of a plurality of items maintained in an item database by the online system, each item embedding corresponding to an item offered by the online system and representing the item in the latent space of the second neural network, wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network; compare the embedding for the query to the item embeddings based on the taxonomy of items to identify a set of items; and generate, as a response to the query directed at the search interface, a query result comprising the set of items.
Pursuant to Step 2A, part 1, claims are analyzed to determine whether they are directed to an abstract idea. Pursuant to MPEP 2106, claims are deemed to be directed to an abstract idea if, under their broadest reasonable interpretation, they fall within one of the enumerated categories of (a) mathematical concepts, (b) certain methods of organizing human activity, and (c) mental processes. Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. See MPEP 2111.
The limitations of storing a taxonomy of items that are connected through a hierarchy based on attributes of the items; extracting an embedding for the query; extracting item embeddings for each of a plurality of items maintained in an item database, each item embedding corresponding to an item offered by the online system; comparing the embedding for the query to the item embeddings based on the taxonomy of items to identify a set of items; and generating, as a response to the query, a query result comprising the set of items, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind (including an observation, evaluation, judgment, or opinion) except for the recitation of generic computer components. For example, these limitations depict creating vectors for a query and for items in a catalog and then comparing the query vector to the item vectors, and also to the taxonomy, to generate a list of items that meet the query. If limitations, under their broadest reasonable interpretation, cover the performance of the limitation in the mind except for the recitation of generic computer components, then they fall within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Pursuant to Step 2A, part 2, claims are analyzed to determine whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). One way to determine integration into a practical application is when the claimed invention improves the functioning of a computer or improves another technology or technical field. To evaluate an improvement to a computer or technical field, the specification must set forth an improvement in technology and the claim itself must reflect the disclosed improvement. See MPEP 2106.04(d)(1).
This judicial exception is not integrated into a practical application. The claim recites the additional elements of memory, code, processors, an online system, a search interface, a first neural network and a second neural network. The elements are recited at a high level of generality (i.e., a generic computer performing the generic computer functions of storing, receiving, applying, comparing and generating) such that they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). The additional elements of applying a first neural network to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network, wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network and applying a second neural network to extract item embeddings for each of a plurality of items, representing the item in the latent space of the second neural network, wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). These are inherent attributes and functions of a deep learning neural network. The claim also recites the additional element of receiving, at an online system, a query directed at a search interface. This element adds insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)) since the element is receiving data. The term "extra-solution activity" can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim.
Extra-solution activity includes both pre-solution and post-solution activity. An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent. An example of post-solution activity is an element that is not integrated into the claim as a whole, e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent. MPEP 2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Pursuant to Step 2B, claims are analyzed to determine whether the claim as a whole amounts to significantly more than the recited exception i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim recites the additional elements of a memory, code, processors, an online system, a search interface, a first neural network and a second neural network. The elements are recited at a high level of generality (i.e., a generic computer performing the generic computer functions of storing, receiving, applying, comparing and generating) such that they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). The additional elements of applying a first neural network to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network, wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network and applying a second neural network to extract item embeddings for each of a plurality of items, representing the item in the latent space of the second neural network, wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). These are inherent attributes and functions of a deep learning neural network. The claim also recites the additional element of receiving, at an online system, a query directed at a search interface. This element adds insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)) since the element is receiving data. The term "extra-solution activity" can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim.
Extra-solution activity includes both pre-solution and post-solution activity. An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent. An example of post-solution activity is an element that is not integrated into the claim as a whole, e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent. MPEP 2106.05(g). At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). The limitation is directed to IESA of receiving data, e.g., using the Internet to gather data, performing repetitive calculations, electronic recordkeeping, storing and retrieving information in memory, electronically scanning or extracting data from a physical document, a web browser’s back and forward button functionality, recording a customer’s order, shuffling and dealing a standard deck of cards, restricting public access to media by requiring a consumer to view an advertisement, presenting offers and gathering statistics, determining an estimated outcome and setting a price, arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price, which is well understood, routine, and conventional. See MPEP 2106.05(d), subsection II and the Berkheimer Memo. 
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer, insignificant extra-solution activity and generally linking the use of the judicial exception to a particular technological environment or field of use, which do not provide an inventive concept. The claim is not patent eligible.
Claims 2, 3, 7, 10, 12, 13 and 17 are directed to the abstract idea of “Mental Processes.” Each claim fails to provide any additional elements. This judicial exception is not integrated into a practical application because there are no additional elements to integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no additional elements. The claims are not patent eligible.
Claims 4 and 14 are directed to the abstract idea of “Mental Processes.” The additional limitations of each of the claims are directed to adding insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). The limitation is directed to IESA, e.g., using the Internet to gather data, performing repetitive calculations, electronic recordkeeping, storing and retrieving information in memory, electronically scanning or extracting data from a physical document, a web browser’s back and forward button functionality, recording a customer’s order, shuffling and dealing a standard deck of cards, restricting public access to media by requiring a consumer to view an advertisement, presenting offers and gathering statistics, determining an estimated outcome and setting a price, arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price, which is well understood, routine, and conventional. See MPEP 2106.05(d), subsection II and the Berkheimer Memo. Even when considered in combination, these additional elements represent insignificant extra-solution activity which does not provide an inventive concept.
Claims 5, 6, 8, 9, 15, 16, 18 and 19 are directed to the abstract idea of “Mental Processes.” The additional limitations of each of the claims amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered in combination, these additional elements represent mere instructions to apply the exception using a generic computer component which does not provide an inventive concept.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent No. 11,682,060 to Chaidaroon et al. (hereafter Chaidaroon) in view of US PGPub 2010/0293234 to Schmidt (hereafter Schmidt) and further in view of US PGPub 2023/0028553 to Kelley et al. (hereafter Kelley).
Referring to claim 1, Chaidaroon discloses a computer-implemented method, comprising:
receiving, at an online system, a query [user query 102] directed at a search interface (see column 5, lines 1-2; column 7, line 52 – column 8, line 7 – In the example shown, the user query is “running gear.”);
applying a first neural network [query encoder 110], by the online system, to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network (see column 5, lines 2-8; column 9, lines 27-44; column 16, line 66 – column 17, line 16 – This query can be input into a query encoder that can determine a query vector that can be projected into the semantic space.);
applying a second neural network [item encoder 112] to extract item embeddings for each of a plurality of items maintained in an item database by the online system, each item embedding corresponding to an item offered by the online system and representing the item in the latent space of the second neural network (see column 5, lines 8-19; column 9, lines 45-61 – For each item that is available in the catalog of items for an ecommerce marketplace, the items, such as item 104 and 106, can be processed by the product retrieval systems of the present disclosure by processing them through an item encoder 112. The item encoder 112 can determine an item vector that is projected onto the semantic space 114. The item 104 is projected onto the semantic space 114 and is represented by the projection 118.);
comparing the embedding for the query to the item embeddings based on the taxonomy of items to identify a set of items (see column 5, lines 19-23; column 8, lines 30-40; column 9, lines 27-61; column 16, line 66 – column 17, line 16 – The distance between the projection 116 and the projection 118 can be measured. The separation between the projections can correspond to a relevancy between the query and the items. The feature generator can operate to obtain information such as query information and item information. The item information can include information that characterizes an item that is available on the ecommerce marketplace. The item information can include taxonomy information (e.g., department, category, or other organizational information). The feature generator is used during the generation of the vectors/embeddings and uses the taxonomy. Therefore, the comparison is considered to be based on the taxonomy.); and
generating, as a response to the query directed at the search interface, a query result comprising the set of items (see column 15, lines 1-3 – The retrieval computing device 202 can determine a ranked list of items that it has determined are most relevant to the query entered by the customer.).
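The two-tower retrieval flow recited in the limitations mapped above (encode the query and each item into a shared semantic space, then rank items by the distance between projections) can be sketched as follows. This is an illustrative toy only: the vectors are hand-picked and the names are hypothetical, not drawn from Chaidaroon.

```python
import numpy as np

# Hand-picked toy embeddings; in the claimed arrangement these would come from
# trained query/item encoder networks. All values here are invented.
table = {
    "running": np.array([1.0, 0.0, 0.0]),
    "gear":    np.array([0.8, 0.2, 0.0]),
    "shoes":   np.array([0.9, 0.1, 0.0]),
    "blender": np.array([0.0, 0.0, 1.0]),
}

def encode(tokens):
    """Toy encoder: average the token vectors and project onto the unit sphere."""
    v = np.mean([table[t] for t in tokens], axis=0)
    return v / np.linalg.norm(v)

query_vec = encode(["running", "gear"])                # projection of the query
item_vecs = {"running shoes": encode(["running", "shoes"]),
             "blender":       encode(["blender"])}     # projections of the items

# Smaller distance between projections corresponds to higher relevancy.
ranked = sorted(item_vecs, key=lambda n: np.linalg.norm(query_vec - item_vecs[n]))
print(ranked)   # 'running shoes' ranks ahead of 'blender'
```

Ranking the items by this distance and returning the top of the list corresponds to the "query result comprising the set of items" step.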
While Chaidaroon discloses that item information includes taxonomy information, which is used in the generation of embeddings, and that item information is stored in a database (see column 8, lines 30-54), Chaidaroon fails to explicitly disclose the further limitation of storing a taxonomy of items that are connected through a hierarchy based on attributes of the items. Schmidt teaches the use of a taxonomy associated with a catalog of products for filtering products including the further limitation of storing a taxonomy of items that are connected through a hierarchy based on attributes of the items (see [0011], lines 8-13; [0028]; [0032]; and Fig 2 – Fig 2 is an example taxonomy for Desktop PCs.).
Chaidaroon and Schmidt are analogous art since they both teach allowing a user to search a catalog of goods based on taxonomy information. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to store the taxonomy information of Chaidaroon in a taxonomy in the manner taught by Schmidt. One would have been motivated to do so since items in a catalog are often described by a taxonomy since a taxonomy describes the set of products with a set of information that consists of a set of attributes that assume values (Schmidt: see [0004]).
While the combination of Chaidaroon and Schmidt (hereafter Chaidaroon/Schmidt) teaches neural networks for generating a query latent vector and an item latent vector, Chaidaroon/Schmidt fails to explicitly teach the further limitation of wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network; and wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network. Kelley teaches the use of neural networks to create a latent vector including the further limitations of
wherein applying the first neural network to extract the embedding for the query [the query is just the input] comprises: extracting a query [the query is just the input] latent vector in a first hidden layer of the first neural network (see [0019] - In order to generate such latent vectors, a set of values are extracted from one or more respective fields in a communication and applied to a neural network. Values of a first set of fields are applied at an input layer of the trained neural network, while values of a second set of fields are applied at an output layer of the neural network. A latent vector is generated at a hidden layer of the neural network based on the application of the values to the input and output layers.); and
wherein applying the second neural network to extract an item embedding for an item [the item is just the input] comprises: extracting an item latent vector in a second hidden layer of the second neural network (see [0019] – In order to generate such latent vectors, a set of values are extracted from one or more respective fields in a communication and applied to a neural network. Values of a first set of fields are applied at an input layer of the trained neural network, while values of a second set of fields are applied at an output layer of the neural network. A latent vector is generated at a hidden layer of the neural network based on the application of the values to the input and output layers.).
While Chaidaroon/Schmidt teaches a neural network, Chaidaroon/Schmidt fails to explicitly teach the basic structure of a neural network. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention for the neural network of Chaidaroon/Schmidt to have the structure of Kelley and for the hidden layers to extract the latent vector as taught by Kelley. One would have been motivated to do so since a standard neural network has an input layer, an output layer and at least one hidden layer (Kelley: see [0019]).
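The basic structure referenced here, an input layer, at least one hidden layer, and an output layer, with the latent vector taken at a hidden layer, can be sketched as a minimal feed-forward pass. The layer sizes and random weights below are arbitrary placeholders for illustration, not Kelley's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)
W_in = rng.normal(size=(6, 4))     # input layer -> hidden layer weights
W_out = rng.normal(size=(4, 3))    # hidden layer -> output layer weights

def forward(x):
    hidden = np.tanh(x @ W_in)     # activations at the hidden layer
    output = hidden @ W_out        # activations at the output layer
    return hidden, output

x = np.array([1.0, 0.0, 0.5, 0.0, 0.0, 1.0])   # e.g., an encoded query
latent, _ = forward(x)             # the latent vector is the hidden activation
print(latent.shape)
```

The point the rejection relies on is visible in the sketch: the latent vector is generated at the hidden layer as an inherent part of any forward pass through such a network.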
Referring to claim 2, the combination of Chaidaroon/Schmidt and Kelley (hereafter Chaidaroon/Schmidt/Kelley) teaches the method of claim 1, further comprising: ranking items included in the set of items based on corresponding measures of similarity between the embedding for the query and the item embeddings corresponding to each item included in the set of items (Chaidaroon: see column 15, lines 1-3 – The retrieval computing device can determine a ranked list of items that it has determined are most relevant to the query entered by the customer.); identifying items of the set of items having at least a threshold position in the ranking (Chaidaroon: see column 15, lines 3-5 – top 10 items); and generating an approved list that includes the items having at least the threshold position in the ranking (Chaidaroon: see column 15, lines 1-3 – In one example, the list of items can include ten items ranked by the relevance to the query.).
Referring to claim 3, Chaidaroon/Schmidt/Kelley teaches the method of claim 1, wherein comparing the embedding for the query to the item embeddings comprises: determining distances between the embedding for the query and each item embedding (Chaidaroon: see column 5, lines 19-23; column 9, lines 45-61 – The distance between the projection 116 and the projection 118 can be measured. The separation between the projections can correspond to a relevancy between the query and the items.); and generating the set of items based on the determined distances (Chaidaroon: see column 5, lines 19-31; column 15, lines 1-3).
Referring to claim 4, Chaidaroon/Schmidt/Kelley teaches the method of claim 1, wherein receiving, at the online system, a query directed at the search interface comprises: receiving a request to create an order from a user, the request identifying a warehouse (Chaidaroon: see column 4, lines 1-15; column 6, lines 11-29); and receiving the query to identify one or more items offered by the warehouse satisfying the query (Chaidaroon: see column 5, lines 1-2; column 7, line 52 – column 8, line 7 – In the example shown, the user query is “running gear.”).
Referring to claim 5, Chaidaroon/Schmidt/Kelley teaches the method of claim 1, wherein extracting the embedding for the query further comprises:
tokenizing the query into a plurality of terms (Chaidaroon: see column 8, lines 55-56 and column 13, lines 6-23 – The feature generator can also operate to tokenize a query. For example, if the input query is “running shoes,” the input query can be converted to a list of token identifiers, e.g., [10, 15], where 10 represents “running” and 15 represents “shoes.”);
applying the first neural network to transform each term into a respective term vector (Chaidaroon: see column 13, lines 6-23 – A vector is created for each token identifier.); and
aggregating the term vectors into the query latent vector using a pooling function (Chaidaroon: see column 13, lines 31-34 - The sum of all word vectors that are returned from the word embedding lookup table 808.).
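The tokenize, embed, and pool pipeline quoted in the claim-5 mapping above can be sketched directly: the query "running shoes" becomes token identifiers [10, 15], each identifier is looked up in a word-embedding table, and the word vectors are summed into a single query vector. The table contents below are invented for illustration.

```python
import numpy as np

token_ids = {"running": 10, "shoes": 15}               # tokenizer mapping
embedding_table = {10: np.array([0.2, 0.4]),           # invented word vectors
                   15: np.array([0.1, -0.3])}

def query_latent_vector(query):
    ids = [token_ids[w] for w in query.split()]        # "running shoes" -> [10, 15]
    term_vectors = [embedding_table[i] for i in ids]   # per-term vectors
    return np.sum(term_vectors, axis=0)                # sum-pooling

v = query_latent_vector("running shoes")
print(v)   # approximately [0.3, 0.1]
```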
Referring to claim 6, Chaidaroon/Schmidt/Kelley teaches the method of claim 1, wherein extracting the item embeddings for each of a plurality of items maintained in the item database comprises: processing item attribute data through an attribute encoder to generate attribute vectors (Chaidaroon: see column 11, line 43 – column 12, line 34); applying a transformation function to combine the attribute vectors into a preliminary item latent vector (Chaidaroon: see column 11, line 43 – column 12, line 34); and refining the preliminary item latent vector using the second neural network to obtain an item embedding (Chaidaroon: see column 11, line 43 – column 12, line 34).
Referring to claim 7, Chaidaroon/Schmidt/Kelley teaches the method of claim 1, wherein storing the taxonomy of items comprises: defining a hierarchical structure with multiple levels of item categories (Schmidt: see [0028]; [0032]; Fig 2); associating each item with at least one category within the hierarchical structure (Schmidt: see [0028]; [0032]; Fig 2); and maintaining relationships between categories to enable category-based filtering (Schmidt: see [0028]; [0032]; Fig 2).
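The claim-7 limitations describe a hierarchy of categories with items attached and category-based filtering. A minimal sketch of such a store follows; the category and item names echo Schmidt's Desktop-PC example only loosely and are otherwise hypothetical.

```python
# Each category's parent defines the hierarchical structure; each item is
# associated with exactly one category. All names here are illustrative.
parent = {"Electronics": None, "Desktop PCs": "Electronics", "Monitors": "Electronics"}
item_category = {"Gaming Tower": "Desktop PCs", "27in Display": "Monitors"}

def in_subtree(cat, ancestor):
    """Walk parent links to test whether `cat` falls under `ancestor`."""
    while cat is not None:
        if cat == ancestor:
            return True
        cat = parent[cat]
    return False

def items_under(category):
    """Category-based filtering: items in the category or any descendant."""
    return sorted(i for i, c in item_category.items() if in_subtree(c, category))

print(items_under("Electronics"))   # ['27in Display', 'Gaming Tower']
print(items_under("Monitors"))      # ['27in Display']
```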
Referring to claim 8, Chaidaroon/Schmidt/Kelley teaches the method of claim 1, wherein applying the first neural network to extract an embedding for the query further comprises: preprocessing the query to remove stop words and normalize text (Chaidaroon: see column 9, lines 33-44 – The query information can be tokenized and trimmed to include no more than a predetermined amount of tokens.); applying a sequence-to-vector transformation using the neural network (Chaidaroon: see column 13, lines 24-40); and refining the extracted query embedding based on contextual information (Chaidaroon: see column 13, lines 24-40).
Referring to claim 9, Chaidaroon/Schmidt/Kelley teaches the method of claim 1, wherein applying the second neural network to extract item embeddings further comprises: training the second neural network on a dataset of item descriptions and associated user interactions (Chaidaroon: see column 11, line 43 – column 12, line 34); encoding item attributes as auxiliary features in the item embeddings (Chaidaroon: see column 11, line 43 – column 12, line 34); and periodically updating the second neural network to adapt to new items and user preferences (Chaidaroon: see column 11, line 43 – column 12, line 34).
Referring to claim 10, Chaidaroon/Schmidt/Kelley teaches the method of claim 1, wherein comparing the embedding for the query to the item embeddings based on the taxonomy of items further comprises: generating an initial candidate list of items using an approximate nearest neighbors search (Chaidaroon: see column 13, lines 41-59); determining a confidence score for each candidate item based on historical search relevance (Chaidaroon: see column 13, lines 41-59 – determining a ranking); and filtering out items with confidence scores below a predefined threshold (Chaidaroon: see column 15, lines 1-5 – top ten rankings).
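The claim-10 flow can be sketched as follows: retrieve nearest-neighbor candidates for a query embedding, score each candidate, then drop candidates below a threshold. A real system would use an approximate nearest-neighbor index; brute-force search stands in for it here purely for illustration, and all embeddings and scores are invented.

```python
import numpy as np

query = np.array([1.0, 0.0])
item_embeddings = {
    "running shoes": np.array([0.9, 0.1]),
    "trail shoes":   np.array([0.8, 0.3]),
    "blender":       np.array([0.0, 1.0]),
}
history_score = {"running shoes": 0.92, "trail shoes": 0.75, "blender": 0.10}

# Step 1: initial candidate list by distance (stand-in for an ANN search).
candidates = sorted(item_embeddings,
                    key=lambda n: np.linalg.norm(query - item_embeddings[n]))[:2]

# Steps 2-3: a confidence score per candidate, then threshold filtering.
THRESHOLD = 0.8
result = [n for n in candidates if history_score[n] >= THRESHOLD]
print(result)   # ['running shoes']
```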
Referring to claim 11, Chaidaroon discloses a non-transitory computer-readable medium configured to store code comprising instructions, wherein the instructions, when executed by one or more processors (see column 2, lines 56-65), cause the one or more processors to:
receive, at an online system, a query [user query 102] directed at a search interface (see column 5, lines 1-2; column 7, line 52 – column 8, line 7 – In the example shown, the user query is “running gear.”);
apply a first neural network [query encoder 110], by the online system, to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network (see column 5, lines 2-8; column 9, lines 27-44; column 16, line 66 – column 17, line 16 – This query can be input into a query encoder that can determine a query vector that can be projected into the semantic space.);
apply a second neural network [item encoder 112] to extract item embeddings for each of a plurality of items maintained in an item database by the online system, each item embedding corresponding to an item offered by the online system and representing the item in the latent space of the second neural network (see column 5, lines 8-19; column 9, lines 45-61 – For each item that is available in the catalog of items for an ecommerce marketplace, the items, such as item 104 and 106, can be processed by the product retrieval systems of the present disclosure by processing them through an item encoder 112. The item encoder 112 can determine an item vector that is projected onto the semantic space 114. The item 104 is projected onto the semantic space 114 and is represented by the projection 118.);
compare the embedding for the query to the item embeddings based on the taxonomy of items to identify a set of items (see column 5, lines 19-23; column 8, lines 30-40; column 9, lines 27-61; column 16, line 66 – column 17, line 16 – The distance between the projection 116 and the projection 118 can be measured. The separation between the projections can correspond to a relevancy between the query and the items. The feature generator can operate to obtain information such as query information and item information. The item information can include information that characterizes an item that is available on the ecommerce marketplace. The item information can include taxonomy information (e.g., department, category, or other organizational information). The feature generator is used during the generation of the vectors/embeddings and uses the taxonomy. Therefore, the comparison is considered to be based on the taxonomy.); and
generate, as a response to the query directed at the search interface, a query result comprising the set of items (see column 15, lines 1-3 – The retrieval computing device 202 can determine a ranked list of items that it has determined are most relevant to the query entered by the customer.).
While Chaidaroon discloses that item information includes taxonomy information, which is used in the generation of embeddings, and that item information is stored in a database (see column 8, lines 30-54), Chaidaroon fails to explicitly disclose the further limitation of storing a taxonomy of items that are connected through a hierarchy based on attributes of the items. Schmidt teaches the use of a taxonomy associated with a catalog of products for filtering products, including the further limitation of “store a taxonomy of items that are connected through a hierarchy based on attributes of the items” (see [0011], lines 8-13; [0028]; [0032]; and Fig 2 – Fig 2 is an example taxonomy for Desktop PCs.).
Chaidaroon and Schmidt are analogous art since they both teach allowing a user to search a catalog of goods based on taxonomy information. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to store the taxonomy information of Chaidaroon in a taxonomy in the manner taught by Schmidt. One would have been motivated to do so because items in a catalog are often described by a taxonomy, which describes the set of products with a set of information consisting of attributes that assume values (Schmidt: see [0004]).
While the combination of Chaidaroon and Schmidt (hereafter Chaidaroon/Schmidt) teaches neural networks for generating a query latent vector and an item latent vector, Chaidaroon/Schmidt fails to explicitly teach the further limitation of wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network; and wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network. Kelley teaches the use of neural networks to create a latent vector including the further limitations of
wherein applying the first neural network to extract the embedding for the query [the query is just the input] comprises: extracting a query [the query is just the input] latent vector in a first hidden layer of the first neural network (see [0019] - In order to generate such latent vectors, a set of values are extracted from one or more respective fields in a communication and applied to a neural network. Values of a first set of fields are applied at an input layer of the trained neural network, while values of a second set of fields are applied at an output layer of the neural network. A latent vector is generated at a hidden layer of the neural network based on the application of the values to the input and output layers.); and
wherein applying the second neural network to extract an item embedding for an item [the item is just the input] comprises: extracting an item latent vector in a second hidden layer of the second neural network (see [0019] – In order to generate such latent vectors, a set of values are extracted from one or more respective fields in a communication and applied to a neural network. Values of a first set of fields are applied at an input layer of the trained neural network, while values of a second set of fields are applied at an output layer of the neural network. A latent vector is generated at a hidden layer of the neural network based on the application of the values to the input and output layers.).
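The hidden-layer extraction that Kelley describes in [0019] can be sketched as follows. The single hidden layer, the weights, and the input values are illustrative stand-ins for a trained network; the sketch only shows reading the latent vector off a hidden layer's activations.

```python
# Hypothetical weights connecting a 3-unit input layer to a 2-unit
# hidden layer (one row per input, one column per hidden unit).
W_HIDDEN = [[0.5, -0.2],
            [0.1, 0.4],
            [0.3, 0.3]]

def relu(x):
    # Standard rectified-linear activation.
    return x if x > 0 else 0.0

def hidden_latent_vector(inputs):
    # Apply the input values at the input layer; the activations of the
    # hidden layer form the latent vector for the input (query or item).
    n_hidden = len(W_HIDDEN[0])
    return [relu(sum(row[j] * x for row, x in zip(W_HIDDEN, inputs)))
            for j in range(n_hidden)]
```

For example, `hidden_latent_vector([1.0, 2.0, 3.0])` yields a 2-dimensional latent vector (approximately `[1.6, 1.5]` with these weights); in Kelley's arrangement a second, separately weighted network would produce the item latent vector the same way.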
While Chaidaroon/Schmidt teaches a neural network, Chaidaroon/Schmidt fails to explicitly teach the basic structure of a neural network. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention for the neural network of Chaidaroon/Schmidt to have the structure of Kelley and for the hidden layers to extract the latent vector as taught by Kelley. One would have been motivated to do so since a standard neural network has an input layer, an output layer and at least one hidden layer (Kelley: see [0019]).
Referring to claim 12, Chaidaroon/Schmidt/Kelley teaches the non-transitory computer-readable medium of claim 11, wherein the instructions, when executed, further cause the one or more processors to: rank items included in the set of items based on corresponding measures of similarity between the embedding for the query and the item embeddings corresponding to each item included in the set of items (Chaidaroon: see column 15, lines 1-3 – The retrieval computing device can determine a ranked list of items that it has determined are most relevant to the query entered by the customer.); identify items of the set of items having at least a threshold position in the ranking (Chaidaroon: see column 15, lines 3-5 – top 10 items); and generate an approved list that includes the items having at least the threshold position in the ranking (Chaidaroon: see column 15, lines 1-3 – In one example, the list of items can include ten items ranked by the relevance to the query.).
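The rank-then-threshold mapping for claim 12 (a ranked list of items with a top-10 cutoff) can be illustrated with the following sketch; the similarity scores and item names are hypothetical.

```python
def approved_list(scored_items, threshold_rank=10):
    # Rank items by similarity to the query (higher = more relevant),
    # then keep only the items at or above the threshold position.
    ranked = sorted(scored_items, key=lambda pair: pair[1], reverse=True)
    return [item for item, _score in ranked[:threshold_rank]]
```

For example, `approved_list([("mat", 0.2), ("shoes", 0.9), ("socks", 0.5)], threshold_rank=2)` keeps only the two highest-ranked items.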
Referring to claim 13, Chaidaroon/Schmidt/Kelley teaches the non-transitory computer-readable medium of claim 11, wherein comparing the embedding for the query to the item embeddings comprises: determining distances between the embedding for the query and each item embedding (Chaidaroon: see column 5, lines 19-23; column 9, lines 45-61 – The distance between the projection 116 and the projection 118 can be measured. The separation between the projections can correspond to a relevancy between the query and the items.); and generating the set of items based on the determined distances (Chaidaroon: see column 5, lines 19-31; column 15, lines 1-3).
Referring to claim 14, Chaidaroon/Schmidt/Kelley teaches the non-transitory computer-readable medium of claim 11, wherein receiving, at the online system, a query directed at the search interface comprises: receiving a request to create an order from a user, the request identifying a warehouse (Chaidaroon: see column 4, lines 1-15; column 6, lines 11-29); and receiving the query to identify one or more items offered by the warehouse satisfying the query (Chaidaroon: see column 5, lines 1-2; column 7, line 52 – column 8, line 7 – In the example shown, the user query is “running gear.”).
Referring to claim 15, Chaidaroon/Schmidt/Kelley teaches the non-transitory computer-readable medium of claim 11, wherein extracting the embedding for the query further comprises:
tokenizing the query into a plurality of terms (Chaidaroon: see column 8, lines 55-56 and column 13, lines 6-23 – The feature generator can also operate to tokenize a query. For example, if the input query is “running shoes,” the input query can be converted to a list of token identifiers, e.g., [10, 15], where 10 represents “running” and 15 represents “shoes.”);
applying the first neural network to transform each term into a respective term vector (Chaidaroon: see column 13, lines 6-23 – A vector is created for each token identifier.); and
aggregating the term vectors into the query latent vector using a pooling function (Chaidaroon: see column 13, lines 31-34 - The sum of all word vectors that are returned from the word embedding lookup table 808.).
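The tokenize/embed/sum-pool pipeline mapped above for claim 15 can be sketched as follows, reusing the token identifiers from Chaidaroon's quoted example; the embedding-table values are hypothetical stand-ins for the word embedding lookup table 808.

```python
# Hypothetical token-identifier mapping and word embedding lookup table.
TOKEN_IDS = {"running": 10, "shoes": 15}
EMBEDDING_TABLE = {10: [0.4, 0.1], 15: [0.2, 0.3]}

def query_latent_vector(query):
    # Step 1: tokenize the query into terms and map each to its
    # token identifier, e.g., "running shoes" -> [10, 15].
    token_ids = [TOKEN_IDS[t] for t in query.lower().split()]
    # Step 2: transform each term into its respective term vector.
    vectors = [EMBEDDING_TABLE[i] for i in token_ids]
    # Step 3: sum pooling - the elementwise sum of all word vectors
    # yields the single query latent vector.
    return [sum(components) for components in zip(*vectors)]
```

With these values, `query_latent_vector("running shoes")` is approximately `[0.6, 0.4]`, the elementwise sum of the two term vectors.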
Referring to claim 16, Chaidaroon/Schmidt/Kelley teaches the non-transitory computer-readable medium of claim 11, wherein extracting the item embeddings for each of a plurality of items maintained in the item database comprises: processing item attribute data through an attribute encoder to generate attribute vectors (Chaidaroon: see column 11, line 43 – column 12, line 34); applying a transformation function to combine the attribute vectors into a preliminary item latent vector (Chaidaroon: see column 11, line 43 – column 12, line 34); and refining the preliminary item latent vector using the second neural network to obtain an item embedding (Chaidaroon: see column 11, line 43 – column 12, line 34).
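The three steps mapped for claim 16 (encoding attributes to vectors, combining them into a preliminary item latent vector, then refining) can be sketched as follows. The attribute vectors, the mean-based combination, and the scale-and-shift "refinement" are all illustrative stand-ins for Chaidaroon's trained components.

```python
# Hypothetical attribute encoder output: each attribute maps to a
# fixed 2-D vector (a trained encoder would produce these).
ATTRIBUTE_VECTORS = {"color:red": [0.1, 0.2], "brand:acme": [0.3, 0.4]}

def item_embedding(attributes):
    # Step 1: process item attribute data into attribute vectors.
    vecs = [ATTRIBUTE_VECTORS[a] for a in attributes]
    # Step 2: transformation function - the elementwise mean combines
    # the attribute vectors into a preliminary item latent vector.
    prelim = [sum(c) / len(vecs) for c in zip(*vecs)]
    # Step 3: stand-in "refinement" (a trained second network in the
    # reference): a simple scale-and-shift of the preliminary vector.
    return [2 * x + 0.1 for x in prelim]
```

For example, `item_embedding(["color:red", "brand:acme"])` combines the two attribute vectors into `[0.2, 0.3]` and refines that to approximately `[0.5, 0.7]`.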
Referring to claim 17, Chaidaroon/Schmidt/Kelley teaches the non-transitory computer-readable medium of claim 11, wherein storing the taxonomy of items comprises: defining a hierarchical structure with multiple levels of item categories (Schmidt: see [0028]; [0032]; Fig 2); associating each item with at least one category within the hierarchical structure (Schmidt: see [0028]; [0032]; Fig 2); and maintaining relationships between categories to enable category-based filtering (Schmidt: see [0028]; [0032]; Fig 2).
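A minimal sketch of the claim 17 mapping (a multi-level hierarchy of categories, items associated with categories, and category-based filtering), loosely echoing Schmidt's Desktop PC taxonomy of Fig 2; the category and item names are hypothetical.

```python
# Hierarchical structure: each category points to its parent
# (None marks the root of the taxonomy).
PARENT = {"Desktop PCs": None, "Gaming": "Desktop PCs", "Office": "Desktop PCs"}

# Each item is associated with at least one category in the hierarchy.
ITEM_CATEGORY = {"pc-1": "Gaming", "pc-2": "Office"}

def ancestors(category):
    # Walk the maintained parent relationships from a category to the root.
    chain = []
    while category is not None:
        chain.append(category)
        category = PARENT[category]
    return chain

def items_in(category):
    # Category-based filtering: an item matches if the requested category
    # appears anywhere on its path to the root.
    return [i for i, c in ITEM_CATEGORY.items() if category in ancestors(c)]
```

Filtering on the root category returns every item beneath it (`items_in("Desktop PCs")` yields both items), while filtering on a leaf category narrows the result.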
Referring to claim 18, Chaidaroon/Schmidt/Kelley teaches the non-transitory computer-readable medium of claim 11, wherein applying the first neural network to extract an embedding for the query further comprises: preprocessing the query to remove stop words and normalize text (Chaidaroon: see column 9, lines 33-44 – The query information can be tokenized and trimmed to include no more than a predetermined number of tokens.); applying a sequence-to-vector transformation using the neural network (Chaidaroon: see column 13, lines 24-40); and refining the extracted query embedding based on contextual information (Chaidaroon: see column 13, lines 24-40).
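The preprocessing step mapped above for claim 18 (normalize the text, remove stop words) can be sketched as follows; the stop-word list is hypothetical, and the reference's trimming to a predetermined number of tokens is omitted for brevity.

```python
# Illustrative stop-word list; a production system would use a
# fuller list tuned to its query traffic.
STOP_WORDS = {"the", "a", "for", "and"}

def preprocess(query):
    # Normalize to lowercase, tokenize on whitespace, and drop stop words.
    return [t for t in query.lower().split() if t not in STOP_WORDS]
```

For example, `preprocess("Shoes for the Gym")` returns `["shoes", "gym"]`, which would then feed the sequence-to-vector transformation.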
Referring to claim 19, Chaidaroon/Schmidt/Kelley teaches the non-transitory computer-readable medium of claim 11, wherein applying the second neural network to extract item embeddings further comprises: training the second neural network on a dataset of item descriptions and associated user interactions (Chaidaroon: see column 11, line 43 – column 12, line 34); encoding item attributes as auxiliary features in the item embeddings (Chaidaroon: see column 11, line 43 – column 12, line 34); and periodically updating the second neural network to adapt to new items and user preferences (Chaidaroon: see column 11, line 43 – column 12, line 34).
Referring to claim 20, Chaidaroon discloses a system comprising:
one or more processors [processor 302]; and
memory [memory 304] storing code comprising instructions, wherein the instructions, when executed by the one or more processors, cause the one or more processors to:
receive, at an online system, a query [user query 102] directed at a search interface (see column 5, lines 1-2; column 7, line 52 – column 8, line 7 – In the example shown, the user query is “running gear.”);
apply a first neural network [query encoder 110], by the online system, to extract an embedding for the query, the embedding for the query representing the query in a latent space of the first neural network (see column 5, lines 2-8; column 9, lines 27-44; column 16, line 66 – column 17, line 16 – This query can be input into a query encoder that can determine a query vector that can be projected into the semantic space.);
apply a second neural network [item encoder 112] to extract item embeddings for each of a plurality of items maintained in an item database by the online system, each item embedding corresponding to an item offered by the online system and representing the item in the latent space of the second neural network (see column 5, lines 8-19; column 9, lines 45-61 – For each item that is available in the catalog of items for an ecommerce marketplace, the items, such as items 104 and 106, can be processed by the product retrieval systems of the present disclosure by processing them through an item encoder 112. The item encoder 112 can determine an item vector that is projected onto the semantic space 114. The item 104 is projected onto the semantic space 114 and is represented by the projection 118.);
compare the embedding for the query to the item embeddings based on the taxonomy of items to identify a set of items (see column 5, lines 19-23; column 8, lines 30-40; column 9, lines 27-61; column 16, line 66 – column 17, line 16 – The distance between the projection 116 and the projection 118 can be measured. The separation between the projections can correspond to a relevancy between the query and the items. The feature generator can operate to obtain information such as query information and item information. The item information can include information that characterizes an item that is available on the ecommerce marketplace. The item information can include taxonomy information (e.g., department, category, or other organizational information). The feature generator is used during the generation of the vectors/embeddings and uses the taxonomy. Therefore, the comparison is considered to be based on the taxonomy.); and
generate, as a response to the query directed at the search interface, a query result comprising the set of items (see column 15, lines 1-3 – The retrieval computing device 202 can determine a ranked list of items that it has determined are most relevant to the query entered by the customer.).
While Chaidaroon discloses that item information includes taxonomy information, which is used in the generation of embeddings, and that item information is stored in a database (see column 8, lines 30-54), Chaidaroon fails to explicitly disclose the further limitation of storing a taxonomy of items that are connected through a hierarchy based on attributes of the items. Schmidt teaches the use of a taxonomy associated with a catalog of products for filtering products, including the further limitation of “store a taxonomy of items that are connected through a hierarchy based on attributes of the items” (see [0011], lines 8-13; [0028]; [0032]; and Fig 2 – Fig 2 is an example taxonomy for Desktop PCs.).
Chaidaroon and Schmidt are analogous art since they both teach allowing a user to search a catalog of goods based on taxonomy information. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to store the taxonomy information of Chaidaroon in a taxonomy in the manner taught by Schmidt. One would have been motivated to do so because items in a catalog are often described by a taxonomy, which describes the set of products with a set of information consisting of attributes that assume values (Schmidt: see [0004]).
While the combination of Chaidaroon and Schmidt (hereafter Chaidaroon/Schmidt) teaches neural networks for generating a query latent vector and an item latent vector, Chaidaroon/Schmidt fails to explicitly teach the further limitation of wherein applying the first neural network to extract the embedding for the query comprises: extracting a query latent vector in a first hidden layer of the first neural network; and wherein applying the second neural network to extract an item embedding for an item comprises: extracting an item latent vector in a second hidden layer of the second neural network. Kelley teaches the use of neural networks to create a latent vector including the further limitations of
wherein applying the first neural network to extract the embedding for the query [the query is just the input] comprises: extracting a query [the query is just the input] latent vector in a first hidden layer of the first neural network (see [0019] - In order to generate such latent vectors, a set of values are extracted from one or more respective fields in a communication and applied to a neural network. Values of a first set of fields are applied at an input layer of the trained neural network, while values of a second set of fields are applied at an output layer of the neural network. A latent vector is generated at a hidden layer of the neural network based on the application of the values to the input and output layers.); and
wherein applying the second neural network to extract an item embedding for an item [the item is just the input] comprises: extracting an item latent vector in a second hidden layer of the second neural network (see [0019] – In order to generate such latent vectors, a set of values are extracted from one or more respective fields in a communication and applied to a neural network. Values of a first set of fields are applied at an input layer of the trained neural network, while values of a second set of fields are applied at an output layer of the neural network. A latent vector is generated at a hidden layer of the neural network based on the application of the values to the input and output layers.).
While Chaidaroon/Schmidt teaches a neural network, Chaidaroon/Schmidt fails to explicitly teach the basic structure of a neural network. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention for the neural network of Chaidaroon/Schmidt to have the structure of Kelley and for the hidden layers to extract the latent vector as taught by Kelley. One would have been motivated to do so since a standard neural network has an input layer, an output layer and at least one hidden layer (Kelley: see [0019]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Patent No. 10,902,009 to Lenz teaches comparing query and record embeddings.
US PGPub 2023/0080205 to Singh teaches an online concierge system.
US PGPub 2018/00967071 to Green teaches a relevance score for results based on embeddings.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIMBERLY LOVEL WILSON whose telephone number is (571)272-2750. The examiner can normally be reached 8:00 am - 4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Beausoliel can be reached at 571-272-3645. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KIMBERLY L WILSON/Primary Examiner, Art Unit 2167