DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The examiner acknowledges Applicant’s claim of benefit to Provisional Patent Application No. 63/472,981 filed on 6/14/2023.
Status of Claims
Applicant’s communications filed on 2/27/2026 have been considered.
Claims 1-10 have been previously canceled.
Claims 11, 22 and 31 have been amended.
Claims 11-32 are currently pending and have been examined.
Response to Arguments
Applicant’s arguments filed with respect to the rejection of claims under 35 USC 101 have been fully considered and are not persuasive.
Applicant argues that amended claim 11 “solves the problem with conventional LLMs and neural networks used for e-commerce that are not bounded by a catalog of products and that can become confused about facts (e.g., ‘hallucinate’) resulting in returning inaccurate results to end users… the method uses the received library to provide some boundaries to the information to be searched” (Remarks Pages 8 and 9). This argument has been considered but is not persuasive.
The specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement… [furthermore] the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. MPEP 2106.04.
While Applicant’s specification (see at least [0003-0004]) discusses that large language models may “[return] inaccurate results to end users… hamper[ing] the search for the right (or even real) product”, this does not represent a technical improvement. Although the claims include computer technology such as a non-transitory computer readable medium, storing thereon computer readable instructions that when read by a computer cause a processor to perform a method and a content vector database, such elements are merely peripherally incorporated in order to implement the abstract idea of providing content information to customers. Applicant’s specification does not describe a change or improvement to the claimed technology, such that one of ordinary skill would recognize such a technological improvement. It is further noted that independent claims 11, 22 and 31 do not recite a large language model as an additional element, but rather dependent claims 13, 18-19, and 29-30 recite limitations including a large language model.
Accordingly, analysis of the independent claims 11, 22 and 31 does not take the recited large language model into consideration, and the independent claims do not reflect any improvements regarding large language models or neural networks. With regards to dependent claims 13, 18-19, and 29-30, the claims recite limitations that further define the abstract idea, implemented using generic computing components (inputting information into the LLM, receiving an output from the LLM, providing information to the user based on the output), without effecting a change or improvement to the claimed technology. Accordingly, the rejection has been maintained.
With regards to Applicant’s argument that the claimed method uses the received library to provide some boundaries to the information to be searched, which can mitigate the noted problems with LLMs and neural networks (Remarks Page 13), this argument has been considered and is not persuasive. Providing boundaries to information to be searched, in order to return more relevant product information from a catalog is not representative of a technical improvement, but rather an improvement to the abstract idea of providing content information to customers. The rejection has been maintained.
Examiner note: While Applicant’s arguments (see Remarks Page 9) refer to “amended claim 1”, it appears that a typographical error has been made, and the arguments are meant to be made with reference to independent claim 11. Applicant’s arguments have been considered with regards to amended claim 11.
Applicant’s arguments filed with respect to the rejection of claims under 35 USC 102 and 103 have been fully considered but are rendered moot under new grounds of rejection.
Applicant argues that the amended claims overcome the currently cited prior art because “there is no teaching in Wang that the actions that are selected come from a plurality of possible actions” (Remarks Page 10). This argument has been considered but is rendered moot under new grounds of rejection. Independent claim 11 now stands rejected under 35 USC 103 in view of the newly cited combination of Wang/Selvam. Wang has been further relied upon to teach outputting a response to the end user with information, including from the library, based on performing the action sequence (Wang, [0041][0056][0078]), disclosing the identification of items from a database that satisfy a query, based on predicted similarities between items in the database and a term in the query, and recommending said items to a user, where the database stores an item catalog. On the other hand, newly cited Selvam has been relied upon to teach storing content vectors associated with the plurality of content using the embedding vectors, and determining an action sequence from among a plurality of predefined actions, as shown in the 103 rejection, below. The rejection has been maintained.
The rejections of independent claims 22 and 31 have been maintained for similar reasons. In light of the rejections to the independent claims being maintained, the rejections of dependent claims 12-21, 23-30, and 32 have been maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 11-32 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite an abstract idea. The judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Under Step 1 of the Subject Matter Eligibility Test for Products and Processes, the claims must be directed to one of the four statutory categories. See MPEP 2106.03. Claims 11-30 are directed towards a process. Claims 31-32 are directed towards a manufacture. Therefore, claims 11-32 are directed to one of the four statutory categories (Step 1: YES, regarding claims 11-32).
Under Step 2A of the MPEP, it is determined whether the claims are directed to a judicially recognized exception. See MPEP 2106.04. Step 2A is a two-prong inquiry.
Under Prong 1, it is determined whether the claim recites a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception.
Taking Claim 31 as representative, claim 31 recites limitations that fall within the certain methods of organizing human activity groupings of abstract ideas, including:
perform[ing] a customer engagement method comprising:
receiving input from an end user;
determining an action sequence from among a plurality of predefined actions based at least on the input received from the end user or a derivation thereof;
performing the action using a plurality of content of a library of content, wherein the plurality of content includes product information; and
outputting a response to the end user with information, including from the library, based on performing the action sequence.
Claim 11 recites the same limitations believed to be abstract as recited in claim 31, and additionally recites:
receiving a library of content including information associated with a plurality of content, which includes product information;
creating embedding vectors of the information associated with the plurality of content;
storing content vectors associated with the plurality of content using the embedding vectors in a content vector storage;
receiving input from an end user;
determining an action sequence from among a plurality of predefined actions based at least on the input received from the end user or a derivation thereof;
performing the action sequence using the stored content vectors associated with the plurality of content; and
outputting a response to the end user with information, including from the library, based on performing the action sequence.
Claim 22 recites the same limitations believed to be abstract as recited in claim 31.
Claim 31, as exemplary, recites the abstract idea of providing content information to customers. These recited limitations fall within the "Certain Methods of Organizing Human Activities" Grouping of abstract ideas as it relates to commercial interactions of sales activities or behaviors. Accordingly, the claim recites an abstract idea. See MPEP 2106.04.
Accordingly, under Prong One of Step 2A of the Alice/Mayo test, claims 11, 22 and 31 recite an abstract idea (Step 2A, Prong One: YES).
Under Prong 2, it is determined whether the claim recites additional elements that integrate the exception into a practical application of the exception.
Claim 31 recites additional elements beyond the judicial exception(s), including a non-transitory computer readable medium, storing thereon computer readable instructions that when read by a computer cause a processor to perform a method. Claim 11 recites the same additional elements as recited in claim 31, and additionally recites a content vector database. Claim 22 recites the same additional elements as recited in claims 11 and 31.
These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration. As such, these computer-related limitations are not found to be sufficient to integrate the abstract idea into a practical application. Claims 11, 22 and 31 specifying that the abstract idea of providing content information to customers is executed in a computer environment merely indicates a field of use in which to apply the abstract idea because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer. As such, under Prong Two of Step 2A of the Alice/Mayo test, when considered both individually and as a whole, the limitations of claims 11, 22 and 31 are not indicative of integration into a practical application (Step 2A, Prong Two: NO).
Since claims 11, 22 and 31 recite an abstract idea and fail to integrate the abstract idea into a practical application, claims 11, 22 and 31 are “directed to” an abstract idea (Step 2A: YES). Accordingly, the judicial exception is not integrated into a practical application.
Next, under Step 2B, the instant claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the additional elements of a non-transitory computer readable medium, storing thereon computer readable instructions that when read by a computer cause a processor to perform a method, and a content vector database amount to no more than mere instructions to apply the exception using generic computer components. For the same reason, these elements are not sufficient to provide an inventive concept. Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claims, and thus the claims are not patent eligible (Step 2B: NO).
Dependent claims 12-21, 23-30 and 32, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because they do not add “significantly more” to the abstract idea. As for dependent claims 12, 14-17, 20-21, 23-28 and 32, these claims recite limitations that further define the same abstract idea noted in independent claims 11, 22 and 31. Therefore, claims 12, 14-17, 20-21, 23-28 and 32 are considered patent ineligible for the reasons given above.
As for dependent claims 13, 18-19, and 29-30, these claims recite limitations that further define the abstract idea noted in independent claims 11, 22 and 31. Additionally, they recite the following additional limitations:
sending the relevant content files, along with the input received from the user or a derivation thereof and other information, to a large language model;
receiving a completion response from the large language model; and
receiving a completion from a large language model.
The additional elements of sending information to a large language model and receiving information from the large language model are recited at a high level of generality such that they amount to no more than instructions to apply the judicial exception in a generic technological environment. Even in combination, these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself. Accordingly, under the Alice/Mayo test, claims 11-32 are ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 11-15, 20-24, 27-28 and 31-32 are rejected under 35 U.S.C. 103 as being unpatentable over previously cited Wang (US 2023/0146336 A1) in view of newly cited U.S. Patent Application Publication No. 2024/0070577 A1 to Selvam et al., hereinafter Selvam.
Regarding Claim 11, Wang discloses A method of customer engagement comprising ([0031][0054][0089]):
receiving a library of content including information associated with a plurality of content, which includes product information ([0033] The online concierge system 102 includes an inventory management engine 202, which interacts with inventory systems associated with each warehouse 110… the inventory management engine 202 requests and receives inventory information maintained by the warehouse 110… inventory information includes both qualitative and quantitative information about items);
creating embedding vectors of the information associated with the plurality of content ([0065] from the training set of natural language examples including item identifiers and values of attributes for items of the item catalog, the online concierge system 102 trains 420 a corpus model… the corpus model is any model configured to receive natural language input comprising one or more tokens (e.g., words) and to map the tokens into embeddings in a vector space);
storing content vectors associated with the plurality of content in a content vector database ([0089] an online system (e.g., the online concierge system 102, a search provider, a server providing content to users, etc.) maintains a database or other relational table identifying multiple content items. Each content item is associated with a content item identifier, and the database includes an entry for a content item identifier having fields corresponding to different attributes of a content item);
receiving input from an end user ([0041] the order fulfillment engine 206 generates one or more recommendations to a user based on one or more terms in a query received from the user);
determining an action sequence based at least on the input received from the end user or a derivation thereof ([0041] the order fulfillment engine 206 generates one or more recommendations to a user based on one or more terms in a query received from the user; [0079] a model 700 for identifying items from a database satisfying a query from predicted similarities between items in the database and a term in a query; [0082] using the corpus model and the mapping layer to select items for display based on a query term) (The system selects items to display to the user in response to the user query);
performing the action sequence using the stored content vectors associated with the plurality of content ([0041] the model outputs a predicted similarity between one or more terms in the query and each item of the item warehouse 110. Based on the predicted similarities, the order fulfillment engine 206 selects a set of items for display to the user; [0078] the online concierge system 102 receives 435 a query including one or more terms… The embedding for the term generated by the corpus model is input into the mapping layer, which generate 440 a predicted similarity between the embedding for the term of the query and each item identifier, based on the token embedding corresponding to each item identifier. The online concierge system 102 selects 445 a set of items based on the predicted similarities); and
outputting a response to the end user with information, including from the library, based on performing the action sequence ([0041] the order fulfillment engine 206 generates one or more recommendations to a user based on one or more terms in a query received from the user… the model outputs a predicted similarity between one or more terms in the query and each item of the item warehouse 110; [0078] the online concierge system 102 ranks items based on the predicted similarity of their corresponding item identifier to the embedding for the term of the query and selects 445 items having item identifiers having at least a threshold position in the ranking (e.g., having item identifiers within the top 10 positions of the rankings) and displays 450 information identifying the selected items to a user via an interface… see [0056][0058] identifying items from a database satisfying a query from predicted similarities between items in the database and a term in a query, where the database stores an item catalog);
Wang does not explicitly disclose storing content vectors associated with the plurality of content using the embedding vectors; and determining an action sequence from among a plurality of predefined actions.
Selvam, on the other hand, teaches storing content vectors associated with the plurality of content using the embedding vectors ([0035] The data collection module 200 also collects item data, which is information or data that identifies and describes items that are available at a retailer location. The item data may include item identifiers for items; [0040] the item selection model uses item embeddings describing items to score items. These item embeddings may be generated by separate machine-learning models and may be stored in the data store 240); and
determining an action sequence from among a plurality of predefined actions ([0071] A UI state machine is a state machine that determines which user interface to display to the picker. The UI state machine has a set of UI states and a set of state transitions. A UI state is a state in the state machine that corresponds to a task UI. The state transitions specify, for each UI state, a next UI state to transition to in response to receiving a task unit for the picker; [0073] collection tasks, delivery tasks, transport tasks, and bagging tasks).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the method, as taught by Wang, storing content vectors associated with the plurality of content using the embedding vectors; and determining an action sequence from among a plurality of predefined actions, as taught by Selvam, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang, to include the teachings of Selvam, in order to provide automatic updates to a UI as an assigned task changes, as well as accommodate changes to task batches (Selvam, [0004-0005]).
Regarding Claim 12, Wang and Selvam teach the limitations of claim 11.
Wang further discloses further comprising: transforming the library of content into a catalog of content files ([0033] The online concierge system 102 includes an inventory management engine 202, which interacts with inventory systems associated with each warehouse 110… the inventory management engine 202 requests and receives inventory information maintained by the warehouse 110… inventory information includes both qualitative and quantitative information about items);
creating an embedding vector of the information associated with each content file ([0089] the online system creates one or more templates for natural language descriptions of attributes for each content item of the database. Each template includes a content item identifier of a content item, a description of an attribute, a value of the attribute for the content item, and natural language text. From the templates and the database, the online concierge system generates examples for a training set, with each example including a plurality of tokens in different positions and corresponding to a content item); and
storing content vectors in a content vector database ([0089] an online system (e.g., the online concierge system 102, a search provider, a server providing content to users, etc.) maintains a database or other relational table identifying multiple content items. Each content item is associated with a content item identifier, and the database includes an entry for a content item identifier having fields corresponding to different attributes of a content item).
Regarding Claim 13, Wang and Selvam teach the limitations of claim 12.
Wang further discloses wherein the library of content includes a catalog of content files associated with the plurality of content ([0035] the inventory management engine 202 receives an item catalog from a warehouse 110 identifying items offered for purchase by the warehouse 110. From the item catalog, the inventory management engine 202 determines a taxonomy of items offered by the warehouse 110; [0059] The online concierge system 102 generates 410 templates for natural language descriptions of attributes for each item of the item catalog from the database storing the item catalog),
the method further comprising: as a part of performing the action sequence, selecting relevant content files based on a matching of the embedding vector of the input received from the user or a derivation thereof and the embedding vectors of the information associated with the content files in the catalog ([0047] the modeling engine 218 trains and maintains a model to determine predicted similarities between a received query and multiple items… a corpus model maps words in a received query to embeddings in a vector space which are the input to a mapping layer, which outputs predicted similarities between the output of the corpus model and items in the item catalog… see [0003] the online concierge system identifies items with attributes that at least partially match one or more terms in the query);
sending the relevant content files, along with the input received from the user or a derivation thereof and other information, to a large language model ([0047] the modeling engine 218 trains a model to determine predicted similarities… the model comprises a corpus model and a mapping layer; [0069] The corpus model 630 is a masked language model in various embodiments that receives an input of a natural language description 610 for an item; [0082] using the corpus model and the mapping layer to select items for display based on a query term; [0089] generating a model determining measures of similarity between an input query and multiple content items (e.g., documents, web pages, articles) in a database or other relational table);
receiving a completion response from the large language model ([0087] the online concierge system 102 subsequently applies the model comprising the corpus model and the trained mapping layer to a received specific item identifier, generating predicted similarities between the specific item identifier and each item in the item catalog… the online concierge system 102 ranks items of the item catalog based on their predicted similarity to the received item identifier… see [0081] The mapping layer 725 outputs predicted similarities 730A-730N of the term of the query 705 to each item identifier of an item included in the product catalog. Based on the predicted similarities 730A-730N, the model 700 selects a set of items 740 for display) (Examiner notes that a completion response has been interpreted as textual information including product information from a product catalog, per Applicant’s specification [0051]); and
sending a response to the user based on the response received from the large language model ([0087] the online concierge system 102 ranks items of the item catalog based on their predicted similarity to the received item identifier and selects items having at least a threshold position in the ranking for display or displays items of the item catalog in an order based on the ranking… see [0081] Based on the predicted similarities 730A-730N, the model 700 selects a set of items 740 for display. For example, the model 700 ranks the items based on the predicted similarity 730A-730N generated for each item identifier corresponding to an item and selects a set of items 740 having at least a threshold position in the ranking for display).
Regarding Claim 14, Wang and Selvam teach the limitations of claim 11.
Wang further discloses wherein the input includes at least one of a query, or a request ([0041] the order fulfillment engine 206 generates one or more recommendations to a user based on one or more terms in a query received from the user).
Regarding Claim 15, Wang and Selvam teach the limitations of claim 11.
Wang further discloses wherein actions of the action sequence include at least one of
search ([0078] the online concierge system 102 receives 435 a query including one or more terms… The embedding for the term generated by the corpus model is input into the mapping layer, which generate 440 a predicted similarity between the embedding for the term of the query and each item identifier, based on the token embedding corresponding to each item identifier. The online concierge system 102 selects 445 a set of items based on the predicted similarities),
select,
product expert,
knowledge expert,
add to cart, or
remove from cart (Examiner notes that, according to the limitation reciting “at least one of…”, only one of the subsequent limitations must be present).
Regarding Claim 20, Wang and Selvam teach the limitations of claim 11.
Wang further discloses further comprising: receiving context data related to the input received from the end user ([0082] While FIGS. 4 and 7 describe using the corpus model and the mapping layer to select items for display based on a query term, the corpus model and the mapping layer may additionally or alternatively be trained to output items that are related to an item identifier that the corpus model receives as input; [0083] The affinity score between the item corresponding to the item identifier and the additional item corresponding to the additional item identifier query term may be determined from rates at which the item and the additional item co-occur in orders previously received from users of the online concierge system 102 or co-occur in orders previously fulfilled by the online concierge system 102),
wherein determining the action sequence is based at least on the input received from the end user or a derivation thereof and the context data ([0083] The online concierge system 102 normalizes the rate or the frequency at which the item and the additional item co-occur in previously received orders to determine the affinity score between the item and the additional item in various embodiments; [0084] any loss function or combination of loss functions, may be applied to the predicted similarity between an item identifier of the example and the additional item identifier output by the model and the affinity score between the item identifier and the additional item identifier included in the example to generate an error term; [0087] the online concierge system 102 ranks items of the item catalog based on their predicted similarity to the received item identifier and selects items having at least a threshold position in the ranking for display or displays items of the item catalog in an order based on the ranking; [0041] the order fulfillment engine 206 generates one or more recommendations to a user based on one or more terms in a query received from the user).
Regarding Claim 21, Wang and Selvam teach the limitations of claim 20.
Wang further discloses wherein the context data includes at least one of
history of end user inputs or derivations thereof and responses,
shown product list,
selected product list ([0083] describing determination of the affinity score from orders previously received from users of the online concierge system 102),
viewed product list, or
shopping cart list ([0083] describing determination of the affinity score from orders previously received from users of the online concierge system 102… see [0055] the system communication interface 324 receives an order from the system 102 and transmits the contents of a basket of items to the system 102) (According to the limitation reciting “at least one of…”, only one of the subsequent limitations must be present).
Claim 23 recites a method comprising substantially similar limitations as claim 14. The claim is rejected under substantially similar grounds as claim 14.
Claim 24 recites a method comprising substantially similar limitations as claim 15. The claim is rejected under substantially similar grounds as claim 15.
Claim 27 recites a method comprising substantially similar limitations as claim 20. The claim is rejected under substantially similar grounds as claim 20.
Claim 28 recites a method comprising substantially similar limitations as claim 21. The claim is rejected under substantially similar grounds as claim 21.
Claim 31 is directed to a non-transitory computer-readable medium. Claim 31 recites limitations that are substantially parallel in nature to those addressed above for claim 22, which is directed towards a method. The disclosure of Wang/Selvam teaches the limitations of claim 22 as noted above. Wang further discloses A non-transitory computer readable medium, storing thereon computer readable instructions that when read by a computer cause a processor to perform a customer engagement method (Wang: [0092]). Claim 31 is therefore rejected for the reasons set forth above with respect to claim 22 and in this paragraph.
Claim 32 recites a non-transitory computer-readable medium comprising substantially similar limitations as claim 12. The claim is rejected under substantially similar grounds as claim 12.
Claims 16-19, 25-26, and 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Selvam, and further in view of previously cited Weiss (US 11,854,544 B1).
Regarding Claim 16, Wang and Selvam teach the limitations of claim 15.
Wang further discloses wherein performing the search action includes building a structured search query by extracting information from the input received from the end user or a derivation thereof ([0078] the online concierge system 102 receives 435 a query including one or more terms. In various embodiments, the query comprises a query token comprising a word or phrase indicating that subsequent tokens are terms in a query. The online concierge system 102 generates an embedding for a term in the query from the trained corpus model… see [0066] tokens include tokens corresponding to words used for values of certain attributes),
extracting sorting instructions from the input received from the end user or a derivation thereof ([0078] The online concierge system 102 generates an embedding for a term in the query from the trained corpus model… the online concierge system 102 displays information identifying various items in an order corresponding to the ranking of items based on the predicted similarities of their corresponding item identifiers to the embedding for the term of the query), and
extracting attributes and the corresponding attribute values from the input received from the end user or a derivation thereof to obtain filtering expressions ([0078] the online concierge system 102 receives 435 a query including one or more terms. In various embodiments, the query comprises a query token comprising a word or phrase indicating that subsequent tokens are terms in a query. The online concierge system 102 generates an embedding for a term in the query from the trained corpus model. The embedding for the term generated by the corpus model is input into the mapping layer, which generate 440 a predicted similarity between the embedding for the term of the query and each item identifier, based on the token embedding corresponding to each item identifier… the online concierge system 102 displays information identifying various items in an order corresponding to the ranking of items based on the predicted similarities of their corresponding item identifiers to the embedding for the term of the query… see [0066] tokens include tokens corresponding to words used for values of certain attributes) (Examiner notes that using the extracted tokens to determine/filter which items are displayed to the user has been interpreted as obtaining filtering expressions);
But does not explicitly disclose extracting a category from the input received from the end user.
Weiss, on the other hand, discloses extracting a category from the input received from the end user ([Col 3 Ln 55-Col 4 Ln 27] The process 102 may begin at 112 by the user device 106 receiving first input data. The first input data may be typed, voice input, or otherwise provided to user device 106… The first input data may, for example, include a first search request to search the product catalog 128 for products of a particular type; [Col 4 Ln 49-60] the process 102 may include the service provider 108 determining potential departments for searching based on the first input data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system, as taught by Wang and Selvam, extracting a category from the input received from the end user, as taught by Weiss, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang and Selvam, to include the teachings of Weiss, in order to correctly identify user goals from user queries submitted via voice interfaces (Weiss, [Col 1 Ln 6-18]).
Regarding Claim 17, Wang, Selvam and Weiss teach the limitations of claim 16.
Wang does not explicitly disclose wherein performing the search action includes performing a pre-search before building the structured search, wherein the pre-search includes determining whether a sufficient amount of information from at least one of the input received from the end user or a derivation thereof, or the library of content is available to build a structured search.
Weiss, on the other hand, discloses wherein performing the search action includes performing a pre-search before building the structured search ([Col 3 Ln 55-Col 4 Ln 27] The first input data may, for example, include a first search request to search the product catalog 128 for products of a particular type; [Col 5 Ln 60-Col 6 Ln 6] a reference to the same product or product identifier in the first input data and the second input data may be used to aid in determining that the voice input data is a refinement of the first input data… the voice input data may include an explicit instruction to refine or filter the first search results with the second input data; [Col 6 Ln 7-18] the process 102 may include the user device 106 displaying the second search results after refining the search results) (Examiner notes that the first search request that occurs prior to the refinement of the search request has been interpreted as a pre-search before the structured search),
wherein the pre-search includes determining whether a sufficient amount of information from at least one of the input received from the end user or a derivation thereof, or the library of content is available to build a structured search ([Col 4 Ln 49-60] after the first search query is received by the search provider 108, the search provider 108 searches within a product catalog and identifies the departments in which products fitting the first search query may reside; [Col 5 Ln 36-53] The service provider 108 may apply a rule to prioritize remaining in a current department or sub-department of the product catalog as laid out in the structured graph when the refined set of filters are associated with the current department. In the event that the refined set of filters do not match the current department, the rule may prioritize moving to a sub-department where the refined filters may match).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system, as taught by Wang and Selvam, performing a pre-search before building the structured search, as taught by Weiss, for the same reasons discussed above with respect to claim 16.
Regarding Claim 18, Wang, Selvam and Weiss teach the limitations of claim 17.
Wang further discloses further comprising: receiving a completion from a large language model ([0087] the online concierge system 102 subsequently applies the model comprising the corpus model and the trained mapping layer to a received specific item identifier, generating predicted similarities between the specific item identifier and each item in the item catalog… the online concierge system 102 ranks items of the item catalog based on their predicted similarity to the received item identifier… see [0081] The mapping layout 725 outputs predicted similarities 730A-730N of the term of the query 705 to each item identifier of an item included in the product catalog. Based on the predicted similarities 730A-730N, the model 700 selects a set of items 740 for display); and
generating a response ([0087] the online concierge system 102 ranks items of the item catalog based on their predicted similarity to the received item identifier and selects items having at least a threshold position in the ranking for display or displays items of the item catalog in an order based on the ranking… see [0081] Based on the predicted similarities 730A-730N, the model 700 selects a set of items 740 for display. For example, the model 700 ranks the items based on the predicted similarity 730A-730N generated for each item identifier corresponding to an item and selects a set of items 740 having at least a threshold position in the ranking for display);
But does not explicitly disclose generating a response that includes a request for more information from the end user when performing the pre-search action reveals that sufficient information is not available.
Weiss, on the other hand, discloses generating a response that includes a request for more information from the end user when performing the pre-search action reveals that sufficient information is not available ([Col 5 Ln 36-53] If no match may be found within the current department, the rule may cause the service provider 108 to either suggest switching departments to a new department of the catalog or initiating a new search).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system, as taught by Wang, generating a response that includes a request for more information from the end user when performing the pre-search action reveals that sufficient information is not available, as taught by Weiss, for the same reasons discussed above with respect to claim 16.
Regarding Claim 19, Wang, Selvam and Weiss teach the limitations of claim 17.
Wang further discloses further comprising: receiving a completion from a large language model ([0087] the online concierge system 102 subsequently applies the model comprising the corpus model and the trained mapping layer to a received specific item identifier, generating predicted similarities between the specific item identifier and each item in the item catalog… the online concierge system 102 ranks items of the item catalog based on their predicted similarity to the received item identifier… see [0081] The mapping layout 725 outputs predicted similarities 730A-730N of the term of the query 705 to each item identifier of an item included in the product catalog. Based on the predicted similarities 730A-730N, the model 700 selects a set of items 740 for display); and
generating a response ([0087] the online concierge system 102 ranks items of the item catalog based on their predicted similarity to the received item identifier and selects items having at least a threshold position in the ranking for display or displays items of the item catalog in an order based on the ranking… see [0081] Based on the predicted similarities 730A-730N, the model 700 selects a set of items 740 for display. For example, the model 700 ranks the items based on the predicted similarity 730A-730N generated for each item identifier corresponding to an item and selects a set of items 740 having at least a threshold position in the ranking for display);
But does not explicitly disclose generating a response based on the results of a structured product search query when performing the pre-search action reveals that sufficient information is available.
Weiss, on the other hand, discloses generating a response based on the results of a structured product search query when performing the pre-search action reveals that sufficient information is available ([Col 4 Ln 49-60] after the first search query is received by the search provider 108, the search provider 108 searches within a product catalog and identifies the departments in which products fitting the first search query may reside; [Col 5 Ln 26-35] At 124, the process 102 may include the service provider 108 matching a product attribute extracted from the voice input at 116 to one or more filters of the subset of filters from 122. The matching of the product attributes to the filters enables the system to select relevant values from the various nodes of the structured graph and select the relevant sub-departments and categories for applying the search query. The matched set of filters become a refined set of filters to use to initially reduce the portions of the product catalog to be searched with the refinements; [Col 13 Ln 48-53] a user may provide a first search term to search for "gloves" in a catalog of an online marketplace. The first search term may be provided via a voice input, keyboard input, or other input device. The system may provide a listing of search results matching the request for "gloves.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system, as taught by Wang, generating a response based on the results of a structured product search query when performing the pre-search action reveals that sufficient information is available, as taught by Weiss, for the same reasons discussed above with respect to claim 16.
Claim 25 recites a method comprising substantially similar limitations as claim 16. The claim is rejected under substantially similar grounds as claim 16.
Claim 26 recites a method comprising substantially similar limitations as claim 17. The claim is rejected under substantially similar grounds as claim 17.
Claim 29 recites a method comprising substantially similar limitations as claim 18. The claim is rejected under substantially similar grounds as claim 18.
Claim 30 recites a method comprising substantially similar limitations as claim 19. The claim is rejected under substantially similar grounds as claim 19.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZACHARY R DONAHUE whose telephone number is (571)272-5850. The examiner can normally be reached M-F 8a-5p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZACHARY RYAN DONAHUE/Examiner, Art Unit 3689
/MARISSA THEIN/Supervisory Patent Examiner, Art Unit 3689