Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s amendments and arguments have been considered but are either unpersuasive or moot in view of the new grounds of rejection, which were necessitated by the amendments to the claims.
This action is Final.
DETAILED ACTION
Response to Arguments
35 USC 103
Applicant’s arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s arguments with respect to claim 4 have been fully considered but are not persuasive. Applicant argues that Iyer does not disclose the additional limitations of claim 4 because the attributes are associated only with an item, and are not relevant to user attributes. However, in [0080], Iyer discloses: “[0080] At step 908, a weighted average of the individual probability scores is generated. For example, in the illustrated embodiment, the output of each embedding layer 954a-954e is provided to an attention layer 960 configured to generate a weighted average of the probability scores (referred to as an attention calculation). The output of the attention layer 960 includes a user representation embedding 962 configured to represent user interest in various items and/or user interest in item attributes. The user representation embedding 962 is configured to maximize the probability of a next clicked/purchased item (t) given the historic user data. In some embodiments, the user representation embedding 962 may be used to calculate and/or is representative of user-item attributes.” The output configured to represent user interest in item attributes reads on attributes that are relevant to the user attributes, and the combination would have been obvious as detailed in the rejection below.
Amendments to claim 1 necessitated new grounds of rejection, which also teach the limitations of claims 4-5.
Accordingly, the rejection is maintained.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-24 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. ("Towards Knowledge-Based Personalized Product Description Generation in E-commerce") in view of Selleslagh ("Generate product descriptions with GPT-3"), Huet et al. (US 20200311158 A1), and Jain (US 20240249331 A1).
[Image: media_image1.png (Greyscale, 557 × 1194) – Chen Figure 1]
Regarding claim 1, Chen discloses: 1. A computer system comprising: a processing unit ("The experiments are conducted on a Linux server equipped with an Intel(R) Xeon(R) Platinum 8163 CPU" Appendix A.1 Hardware Configuration)
configured to execute computer-readable instructions ("All models are implemented in PyTorch [29] version 0.4.1 and Python 3.6." Appendix A.1 Software)
to cause the system to: responsive to a request received from a user device for viewing a webpage including a textual description, ("a) A user clicks on a product" Figure 1 – clicking a product is understood as a request for the product information; the Abstract discloses that the system has been deployed in Taobao, which is a website that would be displayed on a user device.)
retrieve a user record for a user associated with the request, ("In our e-commerce setting, each user is labeled with “interest tags”, which derive from his/her basic information, e.g., browsing history and shopping records." Section 3.2 User Categories)
wherein the textual description is for presentation on the website; (Fig. 1 shows that an input to the model is information from Wikipedia, which is a website, but not the website which will display the result.)
obtain two or more user attributes associated with the user, based on the user record; (Fig. 1 shows the attributes "Function" and "Housewife" for the Target User. See also Section 3.2 – User Categories “In our e-commerce setting, each user is labeled with “interest tags”, which derive from his/her basic information, e.g., browsing history and shopping records.” – tags is plural.)
generate a prompt to a large language model (LLM) for generating a user-specific textual description, (Fig. 2 shows the transformer model architecture which would be considered a large language model.)
the prompt including the two or more user attributes to include in the generated user-specific textual description (Fig. 2 shows "Attribute Emb" is one of the inputs to the model.)
and the textual description included in data for displaying the webpage; (Fig. 2 shows Knowledge Base, which can be considered a source text, is used to generate the Product Description.)
provide the prompt to the LLM (Not explicitly disclosed)
and receive a generated user-specific textual description that is a modification of the textual description included in the data for displaying the webpage; (Fig. 1 shows “Product Description”, and “Our Results”, which is user-specific.)
and responsive to the request and to receiving the generated user-specific textual description, provide a modified webpage including the generated user-specific textual description for display via the user device, ("The framework has been deployed in Taobao, the largest online e-commerce platform in China." Abstract – deploying the system in an e-commerce platform implies that the description would be displayed on a user device.)
wherein providing the modified webpage comprises substituting the generated user-specific textual description for the textual description in the data for displaying the webpage. (not explicitly disclosed)
Chen does not explicitly disclose that the input to the LLM is in the format of a prompt, or that two or more user attributes are used to generate the description (Chen discloses in Section 3.2 that the most important tag is used). However, in the description of Figure 1, Chen states that “The user focuses on the “function” product aspect”. It does not appear that the system determines which aspect is most relevant to the user, but it would have been obvious to one of ordinary skill in the art to do so. Nevertheless, for purposes of compact prosecution, Huet is relied on to teach this limitation rather than Chen.
Chen also does not teach that the prompt includes the textual description included in data for displaying the webpage, or receiving a textual description that is a modification of the textual description included in the data for displaying the webpage, which is substituted with the user-specific textual description.
Selleslagh discloses: generate a prompt to a large language model (LLM) for generating a user-specific textual description, the prompt including the one or more user attributes to include in the generated user-specific textual description ("After the data cleaning steps, we can generate our prompts based on the available attributes for each product record." Section 2. Generating Prompts; pg. 2, para 3 discloses a LLM.)
and provide the prompt to the LLM ("Once we have our questions, we can call the completions endpoint to interact with GPT-3 to get our answers (completions)." Section 3. Calling the model)
Chen and Selleslagh are considered analogous art to the claimed invention because they discuss generating product descriptions with an LLM. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Chen with the GPT-3 model taught by Selleslagh and further using prompts as input. Doing so would have been beneficial because GPT-3 may achieve human-level performance (Selleslagh, pg. 2, para 2), and using prompts in natural language gives the best results for this model (Selleslagh, pg. 5, para 4).
Selleslagh also discloses: the prompt includes the textual description included in data for displaying the webpage (“Write a short marketing description about: A knitted fabric, that is piece dyed and made of 80% Modacrylic and 20% Polyester. It's a heavy fabric that can be used for accessories, animal cushions, carnavals, decorations, home decoes, pillows, and traditional costumes”. Pg. 6. This is a textual description and it is data that is used for displaying the webpage.)
and receiving a textual description that is a modification of the textual description included in the data for displaying the webpage (“This black knitted fabric is a versatile and stylish addition to your wardrobe. Made of 80% acrylic and 20% polyester, it can be used for accessories, animal cushions, carnavals, pillows, and traditional costumes.” Pg. 8. This is a modification of the textual description above.)
Chen and Selleslagh are considered analogous art to the claimed invention because they discuss generating product descriptions with an LLM. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified the system of Chen to include a product description to be modified in the prompt as taught by Selleslagh. Doing so would have been beneficial because the model gives the best results by prompting in natural language. (Selleslagh, pg. 5, para 4).
Selleslagh does not explicitly disclose that two or more user attributes are used to generate the description, or substituting the generated user-specific textual description for the textual description in the data for displaying the webpage.
Huet discloses that two or more user attributes are used to generate the description. (“[0041] The system responds to user input corresponding to browsing or navigating the catalog by obtaining user information (step 2306) including the user's behavior history and language history. The system then selects attributes of the product (step 2308), weighted according to the user information, to be highlighted in the personalized product description. The NLP engine generates text to be added to the description provided by the seller (step 2310).”)
Chen, Selleslagh, and Huet are considered analogous art to the claimed invention because they disclose generating product descriptions for e-commerce. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Chen in view of Selleslagh to include a user’s behavior history and language history in generating the updated description. Doing so would have been beneficial so that the description reflects the preferences or language style of the user (Huet [0002]).
Huet does not disclose substituting the generated user-specific textual description for the textual description in the data for displaying the webpage.
Jain discloses generating a product description, responsive to a request received from a user device for viewing a webpage including a textual description, wherein the textual description is for presentation on the website; (Fig. 4 shows that the Product Description 420 is an input to the Generative Machine Learning model. See also “[0032] Referring to FIG. 1 , according to some aspects, data processing system 100 is used in an e-commerce context. For example, user 105 (such as a merchandiser) possesses a product description for a product that is intended to be displayed on a graphical user interface (such as an app or a website). User 105 provides the product description to data processing apparatus 115 via user device 110. Data processing apparatus 115 retrieves product reviews corresponding to the product description from database 125.”)
substituting the generated user-specific textual description for the textual description in the data for displaying the webpage. (Fig. 5 shows that the existing product description is augmented. See also “[0080] … In some cases, data processing apparatus 330 automatically updates product description 315 without manual intervention from the merchandiser…”; see also “[0003] Embodiments of the present disclosure provide a data processing system that discovers a salient product feature in a review of the product and determines whether an existing description of the product appropriately includes the salient product feature. In some cases, the data processing system uses a generative machine learning model to generate a new product description based on the existing product description, where the new product description includes the salient product feature.”)
Chen, Selleslagh, Huet, and Jain are considered analogous art to the claimed invention because they disclose generating product descriptions for e-commerce. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination so that an existing description on a website is updated. Doing so would have been beneficial in order to avoid time, effort, and expense of manually updating a product description (Jain [0094]) and because information and interest may change over time (Jain [0015]).
Regarding claim 2, Chen discloses: 2. The system of claim 1, wherein the processing unit is configured to execute computer-readable instructions to obtain the two or more user attributes based on the user record by: extracting, by a pre-trained attribute extraction model, the two or more user attributes from the user record. ("Then we train a CNN-based [14] multi-class classifier on Z which takes the text as input and predicts the user category it belongs to." Section 3.2 User Categories)
Regarding claim 3, Chen discloses: 3. The system of claim 2, wherein the processing unit is configured to execute computer-readable instructions to further cause the system to: retrieve an object record for an object associated with the request; (Figure 1 shows that a Product of a "Chinese-style resin lamp" is selected.)
obtain one or more object attributes based on the object record; (Fig. 1 shows that the "Product Title" is obtained, which is an attribute of the object.)
append the two or more user attributes to the one or more object attributes; (Fig. 2 shows that the "Word Emb" for x (title) is combined with the "Attribute Emb" and input to the Transformer Encoder.)
and generate the prompt to the LLM for generating the user-specific textual description, the prompt including the two or more user attributes to include in the generated user-specific textual description appended to the one or more object attributes to include in the generated user-specific textual description. (Fig. 1 shows that the Product Title and User Aspect and User Category are included in generating the Product Description.)
Chen does not explicitly disclose generating a prompt for the input.
Selleslagh discloses generating a prompt as explained in claim 1. See claim 1 for motivation statement.
Regarding claim 4, Chen discloses: 4. The system of claim 3, wherein the processing unit is configured to execute computer-readable instructions to obtain the one or more object attributes by: extracting, by the pre-trained attribute extraction model, the one or more object attributes that are relevant to the user attributes, from the object record. ("We thus extract the aspect from the description using a heuristic method based on semantic similarity." Section 3.2 Aspects)
Chen does not disclose a pre-trained attribute extraction model for extracting object attributes that are relevant to user attributes, and neither does Selleslagh nor Huet.
Jain discloses: wherein the processing unit is configured to execute computer-readable instructions to obtain the one or more object attributes by: extracting, by the pre-trained attribute extraction model, the one or more object attributes that are relevant to the user attributes, from the object record. ("[0059] In some examples, attribute inference component 215 performs a semantic analysis of an additional product review for the product to obtain additional review data including an additional attribute of the product. In some examples, attribute inference component 215 obtains user interaction data for the augmented product description. In some examples, attribute inference component 215 updates a set of attributes based on the user interaction data.")
Chen, Selleslagh, Huet, and Jain are considered analogous art to the claimed invention because they disclose generating product descriptions for e-commerce. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination to extract attributes that are relevant to the user. Doing so would have been beneficial so that the description is relevant to the user.
Regarding claim 5, Chen discloses: 5. The system of claim 4, wherein in extracting the one or more object attributes and the one or more user attributes, the processing unit is configured to execute computer-readable instructions to further cause the system to: determine, by the pre-trained attribute extraction model, one or more priority object attributes from the extracted object attributes; ("Based on the statistics of the dataset and empirical support by domain experts, we set the number of product aspects |A1| to three" Appendix A.2 Aspects Selection and Annotation – limiting the aspects to the top three would make those three priority aspects.)
determine, by the pre-trained attribute extraction model, one or more priority user attributes from the extracted user attributes; ("In practice, we find that user categories with a low appearing frequency can cause some noise during training, for there are too few corresponding descriptions. To solve this problem, we replace user categories appearing less than 5 times with the <UNK> token." Appendix A.2 User Categories Collection – user categories appearing 5 times or more are priority categories.)
append the one or more priority user attributes to the one or more priority object attributes; (Fig. 3 shows the different embeddings are appended. See also Section 3.2, Attribute Fusion Model.)
and generate the prompt to the LLM for generating the user-specific textual description, the prompt including the one or more priority user attributes to include in the generated user-specific textual description appended to the one or more priority object attributes to include in the generated user-specific textual description. (Fig. 3 shows the appended embeddings are the Input.)
Chen does not disclose a pre-trained attribute extraction model for object attributes, or generating a prompt.
Selleslagh discloses a prompt as explained in claim 1. Selleslagh does not disclose a pre-trained attribute extraction model. Neither does Huet.
Jain discloses: a pre-trained attribute extraction model which determines priority attributes (“[0058] In some examples, attribute inference component 215 combines a rule-based saliency score and a learning-based saliency score to obtain a confidence score for the attribute. In some examples, attribute inference component 215 identifies a set of attributes based on the set of product reviews. In some examples, attribute inference component 215 ranks the set of attributes based on the confidence score.”; see also “[0060] According to some aspects, attribute inference component 215 comprises a saliency machine learning model. According to some aspects, the saliency machine learning model comprises one or more artificial neural networks (ANNs).”)
Rationale for combination as provided for Claim 4.
Regarding claim 6, Chen discloses: 6. The system of claim 1, wherein the two or more user attributes are embeddings. (Fig. 2 shows "Attribute Emb" is one of the inputs to the model.)
Regarding claim 7, Chen discloses: 7. The system of claim 3, wherein the one or more object attributes is an embedding. (Fig. 2 shows "Word Emb" (for x, which is the product title) is one of the inputs to the model.)
Regarding claim 8, Chen discloses: 8. The system of claim 1, wherein the textual description is a product description for a product associated with the request, (Not explicitly disclosed)
the prompt to the LLM includes instructions to generate a user-specific product description for the product, ("… (b) The goal of KOBE is to generate a product description, given 1) the product title, 2) the desired product aspect and user category …" Figure 1)
and the generated user-specific textual description is a generated user-specific product description. (Fig. 1 shows Product Description, and Our Results, which is user-specific.)
Chen does not disclose that the textual description is a product description for the product associated with the request, or that the input to the LLM is a prompt.
Selleslagh discloses: wherein the textual description is a product description for a product associated with the request, ("Write a short marketing description about: A knitted fabric, that is piece dyed and made of 80% Modacrylic and 20% Polyester. It's a heavy fabric that can be used for accessories, animal cushions, carnavals, decorations, home decoes, pillows, and traditional costumes" Section 2)
and the prompt to the LLM includes instructions to generate a user-specific product description for the product, ("Write a short marketing description" Section 2)
Rationale for combination as provided for Claim 1. Selleslagh was cited for teaching the “prompt” limitation of Claim 1, and Claim 8 provides further details regarding the same step, which are combined under the same rationale.
Regarding claim 9, Chen discloses: 9. The system of claim 1, wherein the user record comprises at least one of: a current browsing activity record; a previous transaction event record; a previous browsing activity record; a previous search query; or a user profile. ("In our e-commerce setting, each user is labeled with “interest tags”, which derive from his/her basic information, e.g., browsing history and shopping records." Section 3.2 User Categories)
Regarding claim 10, Chen discloses: 10. The system of claim 1, wherein the user attributes include at least two of: a user demographic attribute; a user preference attribute; or a user need attribute. (Fig. 1 shows the target user's attributes of "Function" (user preference) and "Housewife" (demographic))
Regarding claim 11, Chen discloses: 11. The system of claim 1, wherein the processing unit is configured to execute computer-readable instructions to further cause the system to provide the prompt to the LLM as a set of tokens. ("The average lengths of product titles and descriptions are 31.4 and 90.2 tokens, respectively" Section A.2)
Chen discloses the use of tokens, but does not explicitly disclose a prompt that is a set of tokens.
Selleslagh discloses: wherein the processing unit is configured to execute computer-readable instructions to further cause the system to provide the prompt to the LLM as a set of tokens. (“We can use the transformers package to tokenize our input prompts” pg. 2, para 5)
Rationale for combination as provided for Claim 1. Selleslagh was cited for teaching the “prompt” limitation of Claim 1, and Claim 11 provides further details regarding the same step, which are combined under the same rationale.
Regarding claim 12, Chen does not disclose the additional limitations.
Selleslagh discloses: 12. The system of claim 1, wherein the LLM is a generative pre-trained transformer LLM. ("Various companies are building LLMs, but for now, the two most famous models are GPT-3 and Google LaMBDA. Since Google has not yet opened its model to the public, I decided to try GPT-3." Pg. 2, para 3)
Rationale for combination as provided for Claim 1. Selleslagh was cited for teaching the “prompt” limitation of Claim 1, which refers to generating a prompt for GPT-3, and Claim 12 provides further details regarding the same step, which are combined under the same rationale.
Claim 13 is a method claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale.
Claim 14 is a method claim with limitations corresponding to the limitations of Claim 2 and is rejected under similar rationale.
Claim 15 is a method claim with limitations corresponding to the limitations of Claim 3 and is rejected under similar rationale.
Claim 16 is a method claim with limitations corresponding to the limitations of Claim 4 and is rejected under similar rationale.
Claim 17 is a method claim with limitations corresponding to the limitations of Claim 5 and is rejected under similar rationale.
Claim 18 is a method claim with limitations corresponding to the limitations of Claim 6 and is rejected under similar rationale.
Claim 19 is a method claim with limitations corresponding to the limitations of Claim 7 and is rejected under similar rationale.
Regarding claim 20, Chen discloses: 20. The method of claim 13, wherein the webpage is associated with a product. (Fig. 1 "a) A user clicks on a product")
Regarding claim 21, Chen does not disclose the additional limitations. Neither does Selleslagh.
Huet discloses: 21. The method of claim 13, wherein the data for displaying the webpage is stored on the system, and wherein the modified webpage provided to the user device has the user-specific textual description substituted for the textual description, the substitution occurring in real-time in response to the request. ("[0036] ... In this embodiment, the system dynamically modifies the product description to reflect the current ordering of the attribute list, so that the user is presented in real time with an updated product description that has greater appeal.")
Chen, Selleslagh, Huet, and Jain are considered analogous art to the claimed invention because they disclose generating product descriptions for e-commerce. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Chen in view of Selleslagh to update a source text with a user-specific description in real time. Doing so would have been beneficial so that product recommendations surface the content to the user at the right time (Chen, Section 1).
Claim 22 is a method claim with limitations corresponding to the limitations of Claim 8 and is rejected under similar rationale.
Claim 23 is a method claim with limitations corresponding to the limitations of Claim 9 and is rejected under similar rationale.
Claim 24 is a computer-readable medium claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. Additionally, the “computer-readable medium” of the claim is taught by Chen ("512GB RAM" Appendix A.1 Hardware Configuration).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Wright (US 20230259692 A1). Wright is not available as prior art under the exception of 35 U.S.C. 102(b)(2)(C) and is included for reference only. Wright discloses a method for generating modified product descriptions.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JON C MEIS whose telephone number is (703)756-1566. The examiner can normally be reached Monday - Thursday, 8:30 am - 5:30 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan can be reached on 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JON CHRISTOPHER MEIS/Examiner, Art Unit 2654
/HAI PHAN/Supervisory Patent Examiner, Art Unit 2654