DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the amendment filed on 12/31/2025. Claims 1-20 are pending. Claims 1-20 are amended. No claims have been added. No claims have been cancelled.
Response to Arguments
Applicant's arguments filed 12/31/2025 have been fully considered but they are not persuasive. The applicant has argued against the previous 101 rejection, specifically: “The Examiner's characterization of the claims as merely "generating feedback and displaying the feedback" constituting "Certain Methods of Organizing Human Activity" is overly broad and does not accurately reflect what the claims actually recite. The claims are not directed to the abstract idea of providing feedback between users, but rather to a specific technological solution that employ machine learning to intelligent generate personalized item review text from structured rating data within a webservice platform.” The examiner respectfully disagrees. The invention is directed to certain methods of organizing human activity. Specifically, the claim involves managing interactions between people (“determining... an interaction… between a first user and a second user”), facilitating commercial or social behavior, and collecting and using feedback. The claims appear to be organizing how users evaluate items (i.e., a survey) while generating and submitting feedback. The claims are directed to collecting, analyzing, and presenting user feedback, and facilitating the creation and submission of reviews.
The applicant has argued “Machine Learning-Based Attribute Generation from Collective User Data: The claims specifically recite "generating, by a machine learning model, a plurality of attributes related to the item based on the interaction and feedback from a plurality of users of the web service." This is a technical operation where the machine learning model analyzes aggregated data from multiple users to intelligently determine which item attributes are most relevant to present for rating. This goes beyond generic computing-it represents a specific computational analysis function that processes collective user experience to extract meaningful item characteristics.” The examiner respectfully disagrees. Although the claim recites machine learning for generating attributes, as claimed the ML is merely being used as a tool to perform the steps of the invention. The claims do not specifically recite how the technology (ML) operates, merely, in generic terms, what it does. At most the improvement is to the abstract idea (attribute quality and review generation), not to the technology. A human operator is able to read reviews, identify common data, and summarize it. The claimed additional element of machine learning merely represents a generic computer implementation and is not a technological improvement.
The applicant has argued “Web Service Platform Integration: The claims are specifically directed to operations occurring “in a web service about an item," which grounds the invention in a specific technological environment. The method is not merely about human interactions, but about improving the technical operation of web-based e-commerce platforms.” The examiner respectfully disagrees. The web service platform is a generic computing device that performs the steps of the abstract idea. The web service and platform are conventional components and are not directed to a technical improvement. Even though the claim includes a platform, the claim is still directed to organizing and generating user reviews; the platform is merely a place to host them.
The applicant has argued “Machine Learning-Based Personalized Text Generation: The claims recite "automatically generating, by the machine learning model, a feedback text for a review of the item based on the received one or more ratings and one or more feedback items previously submitted by the first user." This limitation requires the machine learning model to: (1) access historical feedback data from the specific user, (2) analyze that historical data to learn user preferences and patterns, (3) combine the learned patterns with current rating data, and (4) automatically generate natural language review text. This is a specific application of natural language processing and machine learning technology, not an abstract idea.” The examiner respectfully disagrees. The personalization as argued would merely tailor content to a user using preferences and history; this would be an improvement to the content, not to the technology. Although the claim recites machine learning for generating text, as claimed the ML is merely being used as a tool to perform the steps of the invention. The claims do not specifically recite how the technology (ML) operates, merely, in generic terms, what it does. At most the improvement is to the abstract idea (attribute quality and review generation), not to the technology.
The applicant has argued “Technical Data Processing Pipeline: The claims recite a specific sequence of technical operations: (1) determining interactions within a web service, (2) using machine learning to generate relevant attributes from multi-user feedback data, (3) collecting structured rating inputs through a user interface, (4) employing machine learning to automatically generate review text based on both current ratings and historical user data, and (5) submitting the generated text to the web service. This represents a defined technical process for data collection, machine learning analysis, and automated content generation within a web-based platform, not merely organizing human activity.” The examiner respectfully disagrees. Applicant’s claim is directed to detecting an interaction, generating attributes, collecting ratings, generating review text, receiving a selection, and submitting feedback. The claimed limitations merely describe the functionality and not a technical improvement. The improvement is, at best, better reviews. The invention is merely standard data analysis within a conventional pipeline. Applicant’s arguments are not persuasive.
The applicant has argued “Multi-User Data Aggregation and Analysis: The machine learning model generates attributes "based on...feedback from a plurality of users of the web service," indicating the system performs computational analysis of data from multiple sources to determine relevance. This is a technical function involving data aggregation, pattern recognition, and relevance ranking- computational operations that improve how the web service identifies important item characteristics.” The examiner respectfully disagrees. The claimed limitation merely recites generic data analysis. There is no claimed technical improvement. Even with data aggregation, the claims merely involve collecting opinions, analyzing the opinions, and generating a review. This is merely organizing human-generated information.
The applicant has argued “Automated Natural Language Generation: The automatic generation of "feedback text for a review" represents the application of natural language generation technology. The machine learning model must process structured rating data and historical user feedback to produce coherent natural language text-a sophisticated computational operation involving linguistic analysis and text synthesis.” The examiner respectfully disagrees. The applicant does not specifically claim natural language generation. The limitation of generating feedback text for a review is merely analyzing data and generating text. The applicant appears to be claiming the function of the limitation without claiming technical details or a technical improvement. Improvements to the data are, at most, merely an improvement to the abstract idea.
The applicant has argued “Personalization Based on Historical User Data: The claim requires the machine learning model to consider "one or more feedback items previously submitted by the first user," meaning the system maintains user profiles, analyzes historical patterns, and applies learned preferences to current operations. This represents a technical improvement in how web services provide personalized experiences through machine learning.” The examiner respectfully disagrees. Although the claim recites machine learning, as claimed the ML is merely being used as a tool to perform the steps of the invention. The claims do not specifically recite how the technology (ML) operates, merely, in generic terms, what it does. At most the improvement is to the abstract idea (attribute quality and review generation), not to the technology. A human operator is able to read reviews, identify common data, and summarize it. The claimed additional element of machine learning merely represents a generic computer implementation and is not a technological improvement.
The applicant has argued “The claims are directed to the practical application of improving the technical operation and data quality of web-based item review systems through machine learning-based intelligent attribute selection and automated personalized review text generation. This represents a specific improvement over prior art review systems that either: (1) require users to manually compose review text from scratch, leading to lower participation rates and inconsistent review quality; (2) provide static, pre-defined attribute lists that may not be relevant to the specific item or current user priorities; or (3) fail to leverage historical user data to personalize the review generation process.” The examiner respectfully disagrees. The claimed invention involves the use of a web platform that applies a machine learning model and generates attributes and text. The computer as claimed is a tool to perform the steps of the invention; the claims are not directed to a technical improvement. For argument's sake, even if the claims do improve the quality of the data, this would not be a technical improvement. It would be an improvement to the abstract idea. There is no improvement to the technical field, the technology, or computer functionality.
The applicant has argued “Similar to Example 42 from the USPTO's guidance, where a medical records system providing "a specific improvement over prior art systems by allowing remote users to share information in real time in a standardized format regardless of the format in which the information was input by the user" was found eligible, the present claims recite a specific improvement over prior art review systems by allowing the webservice to dynamically generate relevant item attributes from collective user feedback and automatically generate personalized review text from simple rating inputs based on learned user preferences.” The examiner respectfully disagrees. Example 42 is directed to training and using a neural network to classify or detect content based on specific training and feature processing steps. Although applicant’s claims involve machine learning and text processing, applicant’s invention merely deals with generating reviews. Applicant’s claim also uses the concept of machine learning but does not define how it is done or used. Although applicant’s claim uses a web service, it is not an improvement to how the system operates.
The applicant has argued “Improvement to Web Service Functionality: The claims improve how web-based review platforms function by: (1) dynamically identifying relevant item attributes through machine learning analysis of collective user feedback, rather than using static attribute lists, and (2) automatically generating high-quality, personalized review text from simple rating inputs, increasing both the quantity and quality of reviews available on the platform. This directly improves the technical operation of the web service.” The examiner respectfully disagrees. The claim is directed to interactions, ratings, attributes, and reviews using the webservice. The webservice in the claim is being used as a tool to perform the steps of the invention. The webservice itself is not improved by the claim. The claim does not involve any change or improvement to the webservice; there is no performance improvement. The claim is, at best, improving the quality of the feedback. Applicant’s arguments are not found persuasive.
The applicant has argued “Machine Learning as Technological Tool for Data Analysis: The use of machine learning to analyze "Feedback from a plurality of users" to determine relevant attributes represents a specific technological implementation that provides concrete improvements to data processing capabilities. The system performs computational analysis across multiple user interactions to extract patterns and identify which attributes are most meaningful for the specific item, which is a technical improvement in how web services process and utilize collective user data.” The examiner respectfully disagrees. Applicant’s arguments merely state what is achieved but do not show how it is computed, which makes the limitations generic data analysis. The invention does not define how the interactions are represented or what kind of model is used. The claims lack a technical improvement. Applicant’s invention is directed to generating attributes and generating feedback. These limitations do not improve the functioning of a computer or another technology.
The applicant has argued that “Solving Technical Problem of Review Generation and Data Quality: The claims address the technical problem of how to automatically generate personalized, high-quality item reviews from structured rating data while leveraging historical user feedback patterns. This is analogous to the format conversion in Example 42, but here the system converts: (1) unstructured multi-user feedback into structured attributes, and (2) structured ratings plus historical user data into personalized natural language text. Both conversions improve data quality and system usability.” The examiner respectfully disagrees. The claim is directed to interactions, ratings, attributes, and reviews using the webservice. The webservice in the claim is being used as a tool to perform the steps of the invention. The webservice itself is not improved by the claim. The claim does not involve any change or improvement to the webservice; there is no performance improvement. The claim is, at best, improving the quality of the feedback. Example 42 is directed to training and using a neural network to classify or detect content based on specific training and feature processing steps. Although applicant’s claims involve machine learning and text processing, applicant’s invention merely deals with generating reviews. Applicant’s claim also uses the concept of machine learning but does not define how it is done or used. Although applicant’s claim uses a web service, it is not an improvement to how the system operates. Applicant’s arguments are not found persuasive.
The applicant has argued “Reduction of User Interface Burden: By automatically generating review text from simple rating inputs combined with historical user data, the system reduces the cognitive and time burden on users while still producing high-quality reviews. This improves the efficiency of the user interface and increases user engagement with the web service platform-a technical improvement to system usability.” The examiner respectfully disagrees. The applicant is merely claiming the use of an interface without it being tied to any technical improvement. The applicant appears to be merely automating a user task. Any efficiency gain would be an improvement to the abstract idea and would not be a technical improvement.
The applicant has argued “Enhanced Data Collection Through Intelligent Personalization: The machine learning model's use of "one or more feedback items previously submitted by the first user" to generate current review text represents a technical improvement in data collection methodology. By learning from past user behavior and applying those learned patterns, the system can generate more authentic and consistent reviews that better reflect user preferences, improving the quality of data stored in the web service.” The examiner respectfully disagrees. The personalization as argued would merely tailor content to a user using preferences and history; this would be an improvement to the content, not to the technology. Although the claim recites machine learning, as claimed the ML is merely being used as a tool to perform the steps of the invention. The claims do not specifically recite how the technology (ML) operates, merely, in generic terms, what it does. At most the improvement is to the abstract idea (attribute quality and review generation), not to the technology.
The applicant has argued “Dynamic Attribute Selection Based on Collective Intelligence: Unlike static review systems, the claims recite dynamic generation of "a plurality of attributes related to the item based on feedback from a plurality of users." This means the system adapts which attributes to present based on computational analysis of what other users found important, representing a technical improvement in how web services identify and prioritize relevant item characteristics through crowd-sourced data analysis.” The examiner respectfully disagrees. Stating that the system identifies and prioritizes relevant characteristics describes what the system does but not how the steps are technically performed. The claimed invention is directed to merely analyzing user data to output data. There is no technical improvement to the invention.
The applicant has argued “Real-Time Web Service Integration with Machine Learning Processing: The claims recite operations occurring within "a web service" and "submitting the feedback text for the item on the web service," indicating the machine learning operations are integrated directly into the web service's technical architecture. The system processes multi-user data, generates attributes, collects ratings, generates personalized text, and submits results--all within the web service platform-- representing a technical improvement to web-based e-commerce systems.” The examiner respectfully disagrees. The steps of the invention are merely directed to generic processing. The claims are performed on a computer in a conventional manner. The claims lack any specific techniques or technical improvement.
The applicant has argued “Specific Machine Learning Implementation with Dual Technical Functions: The claims require a machine learning model to perform two distinct, sophisticated technical functions: (i) analyzing collective feedback from multiple users to intelligently generate relevant item attributes, and (ii) analyzing historical user-specific feedback data to automatically generate personalized review text. This is not a routine or conventional use of computer technology, but rather a specific application of advanced machine learning techniques for both collective data analysis and individual personalization. The specification describes the machine learning model as using transformer neural networks and deep learning algorithms, which represents a technological advancement beyond generic computing.” The examiner respectfully disagrees. Simply naming transformers and deep learning does not describe how the model is applied in a way that would constitute an improvement to the technology. The machine learning model as claimed is merely a generic model used to generate content; there is no improvement to the technology or technical field.
The applicant has argued “Unconventional Ordered Combination: The specific sequence of operations-(a) ML-based dynamic attribute generation from multi-user collective data, (b) collection of user ratings for those dynamically-generated attributes, (c) ML-based analysis of historical user feedback to learn preferences, (d) ML-based automated natural language text generation combining learned preferences with current ratings, and (e) submission to web service-represents an unconventional ordered combination that provides a specific technological solution to the problem of generating high-quality, personalized item reviews from minimal structured user input while leveraging both collective intelligence and individual preferences.” The examiner respectfully disagrees. The argument of high quality and minimal input is, at best, an improvement to content and possibly convenience for the user, and is not a technical improvement.
The applicant has argued “Technical Solution to Technical Problem: The claims address the technical problem of how to: (1) dynamically identify relevant item attributes from collective user feedback, and (2) automatically generate authentic, personalized review text from structured rating data while maintaining consistency with the user's historical feedback patterns. This technical solution improves the functioning of web-based review systems by increasing review quality, quantity, and relevance without requiring extensive manual effort from users.” The examiner respectfully disagrees. It is unclear how the system is being improved. The invention appears to be an improvement to user experience or content quality, which is not a technical improvement.
The applicant has argued “Sophisticated Multi-Source Data Integration and Processing: The claims require the machine learning model to process and integrate data from multiple sources: (i) interaction data within the web service, (ii) feedback from multiple users to determine relevant attributes through collective analysis, (iii) historical feedback from the specific user to learn individual patterns, and (iv) current rating inputs. This multi-source data integration, pattern learning, and synthesis represents significantly more than abstract idea implementation-it is a specific technical operation that improves how web services process and utilize diverse data sources.” The examiner respectfully disagrees. Aggregating data from multiple sources is not a technical improvement. Without showing how the system processes the multiple sources differently, the claimed limitations are merely generic or purely functional.
The applicant has argued “Improvement to Computer Functionality: Similar to BASCOM Global Internet Servs., Inc. v. AT&T Mobility LLC, 827 F.3d 1341 (Fed. Cir. 2016), and DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245 (Fed. Cir. 2014), the present claims recite a specific implementation of technology that improves how web-based systems operate. The claims improve the technical functioning of web service platforms by: (a) providing intelligent, dynamic attribute selection based on machine learning analysis of collective data, (b) automating review text generation while maintaining personalization through learned user patterns, and (c) increasing both the quantity and quality of review data collected by the platform.” The examiner respectfully disagrees. The claims of BASCOM are tied to a specific arrangement that constitutes a technical system improvement. Applicant’s claims describe an improvement to the feedback. Applicant’s claims do not involve a technical improvement.
The applicant has argued “Non-Routine, Unconventional Activity: The combination of: (a) analyzing collective multi-user feedback to dynamically generate relevant attributes, (b) learning individual user patterns from historical feedback, and (c) automatically generating personalized natural language review text that combines learned user preferences with current structured ratings, represents a non-routine, unconventional activity that goes well beyond generic computer functions. The specification details the sophisticated machine learning operations involved in both collective data analysis and individual personalization.” The examiner respectfully disagrees. It appears as though the machine learning model is merely used to generate better reviews. This would be an improvement to the abstract idea and not a technical improvement. There is no improvement to the computer functionality, efficiency, or pipeline.
The applicant has argued “Specific Technical Implementation Yielding Concrete Benefits: The ordered combination of limitations results in specific technical benefits: (i) improved data quality through intelligent attribute selection, (ii) increased user participation through reduced interface burden, (iii) enhanced review authenticity through personalization based on learned user patterns, and (iv) better utilization of collective user experience data. These concrete technical benefits demonstrate that the claims recite significantly more than any alleged abstract idea.” The examiner respectfully disagrees. The claims are not directed to a technical solution to a technical problem. There is no improvement to the functioning of the technology, and the arrangement of elements is generic. The claims are merely directed to collecting and analyzing user input to generate and present content. This is not significantly more than the abstract idea.
Applicant’s arguments are not found persuasive and the previous 101 rejection is updated below.
Applicant’s arguments, filed 12/31/2025, with respect to the previous 112(b) rejection of claims 4, 11, and 17 have been fully considered and are persuasive. The 112(b) rejection of claims 4, 11, and 17 has been withdrawn.
Applicant’s arguments, filed 12/31/2025, with respect to the previous 103 rejections of claims 4, 11, and 17 have been fully considered and are persuasive. The 103 rejection of claims 4, 11, and 17 has been withdrawn.
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot in view of the updated prior art search. An updated search was conducted and an updated 103 rejection is below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Step 1: Claims 1-7 are directed to a method, claims 8-14 are directed to a non-transitory machine-readable medium, and claims 15-20 are directed to a device. Therefore, claims 1-20 are directed to patent eligible categories of invention.
Step 2A, Prong 1: Claims 1, 8, and 15 recite generating feedback and displaying the feedback, constituting an abstract idea based on “Certain Methods of Organizing Human Activity” related to managing personal behavior or interactions between individuals, including social activities. Claim 1 recites abstract limitations including “determining an occurrence of an interaction … about an item between a first user and a second user; generating, …, a plurality of attributes related to the item based on the interaction and feedback from a plurality of users …; providing a … presentation to the first user for rating the plurality of attributes; receiving one or more ratings for the plurality of attributes from the first user; automatically generating, …, a feedback text for a review of the item based on the received one or more ratings and one or more feedback items previously submitted by the first user; providing, …, the feedback text to the first user; receiving a selection from the first user for the feedback text; and submitting the feedback text for the item ...” Claim 8 recites abstract limitations including “determining an occurrence of an interaction … about an item between a first user and a second user; generating, …, a plurality of attributes related to the item based on the interaction and feedback from a plurality of users …; providing … for presentation to the first user for rating the plurality of attributes; receiving one or more ratings for the plurality of attributes from the first user; automatically generating, …, a feedback text for a review of the item based on the received one or more ratings and one or more feedback items previously submitted by the first user; providing, …, the feedback text to the first user; receiving a selection from the first user for the feedback text; submitting the feedback text for the item ...” Claim 15 recites abstract limitations including “determining an occurrence of an interaction … about an item between a first user and a second user; generating, …, a plurality of attributes related to the item based on the interaction and feedback from a plurality of users …; providing a … presentation to the first user for rating the plurality of attributes; receiving one or more ratings for the plurality of attributes from the first user; automatically generating, …, a feedback text for a review of the item based on the received one or more ratings and one or more feedback items previously submitted by the first user; providing, …, the feedback text to the first user; receiving a selection from the first user for the feedback text; and submitting the feedback text for the item ...” These limitations, as drafted, describe a process that, under its broadest reasonable interpretation, covers an abstract idea but for the recitation of generic computer components such as “a processor” and “an interface.” That is, other than reciting “a processor” and “an interface,” nothing in the claim elements precludes the steps from being interpreted as an abstract idea. For example, with the exception of the “processor” language, the claim steps in the context of the claim encompass an abstract idea directed to a “Mental Process” and “Certain Methods of Organizing Human Activity.”
Dependent claims 5, 6, 12, 13, 18, and 19 further narrow the abstract idea identified in the independent claims and do not introduce further additional elements for consideration.
Dependent claims 2, 3, 7, 9, 10, 14, 16, and 20 will be evaluated under Step 2A, Prong 2 below.
Step 2A, Prong 2: Independent claims 1, 8, and 15 do not integrate the judicial exception into a practical application. Claim 1 is a method that recites limitations performed “in a web service,” “by a machine learning model,” and via a “user interface.” Claim 8 further recites the additional elements of “a non-transitory machine-readable medium having instructions, the instructions executable by a processor of a machine to perform operations,” a web service, a machine learning model, and a user interface. Claim 15 further recites the additional elements of “a device, comprising: a processor; and memory including instructions that, when executed by the processor, cause the device to perform operations,” a web service, a machine learning model, and a user interface. These additional elements are mere instructions to implement an abstract idea using a computer in its ordinary capacity, or merely use the computer as a tool to perform the identified abstract idea. Use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to provide, receive, and display data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity), does not integrate a judicial exception into a practical application. See MPEP 2106.05(f). The claim employs generic computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment. This type of general linking is not sufficient to establish integration into a practical application. See MPEP 2106.05(h).
Therefore, the additional elements of the independent claims, when considered both individually and in combination, are not sufficient to integrate the abstract idea into a practical application.
Dependent claims 5, 6, 12, 13, 18, and 19 further narrow the abstract idea identified in the independent claims and do not introduce further additional elements for consideration; accordingly, they do not integrate the judicial exception into a practical application.
Dependent claims 2 and 9 introduce the additional element of “wherein the machine learning model is based on characteristics that relate to a grammatical style used by the first user, wherein the feedback incorporates the grammatical style used by the first user.” Use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity), does not integrate a judicial exception into a practical application. See MPEP 2106.05(f).
Dependent claims 3 and 10 introduce the additional element of “accessing information relating to the interaction; and automatically generating the feedback to combine the accessed information with the grammatical style used by the first user.” Use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity), does not integrate a judicial exception into a practical application. See MPEP 2106.05(f).
Dependent claims 4, 11, and 17 introduce the additional element of “wherein the interaction relates to a product and the feedback includes an aspect of the product, the method further comprising: accessing an image displaying the product; highlighting the product within the image; and the interactive user interface being a first user interface, displaying the image, the highlighted product, and the feedback on a second user interface.” This limitation does not integrate the judicial exception into a practical application because it is nothing more than generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(h).
Dependent claims 7, 14, and 20 introduce the additional element of “wherein the interaction relates to a product, and the method further comprises: displaying on the user interface a plurality of aspects of the products listed according to a relevancy, the plurality of aspects being listed as a plurality of selectable elements on the user interface; receiving a selection of a selectable aspect of the plurality of selectable elements; and generating the feedback text to include an aspect associated with the selectable aspect of the plurality of selectable elements.” This limitation does not integrate the judicial exception into a practical application because it is nothing more than generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(h).
Dependent claim 16 introduces the additional element of “wherein the machine learning model is based on characteristics that relate to a grammatical style used by the first user, wherein the feedback incorporates the grammatical style used by the first user, wherein the operations further comprise: accessing information relating to the interaction; and automatically generating the feedback to combine the accessed information with the grammatical style used by the first user.” Use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity) does not integrate a judicial exception into a practical application. See MPEP 2106.05(f).
Therefore, the additional elements of the dependent claims, when considered both individually and in the context of the independent claims, do not integrate the judicial exception into a practical application.
Step 2B: Independent claims 1, 8, and 15 do not amount to significantly more than the judicial exception. As can be seen above with respect to Step 2A, Prong 2, Claim 1 is a method that recites limitations performed “in a web service, by a machine learning model, user interface.” Claim 8 further recites the additional elements of “A non-transitory machine-readable medium having instructions, the instructions executable by a processor of a machine to perform operations, a web service, a machine learning model, a user interface.” Claim 15 further recites the additional elements of “A device, comprising: a processor; and memory including instructions that, when executed by the processor, cause the device to perform operations including: in a web service, by a machine learning model, user interface.” These additional elements are mere instructions to implement an abstract idea using a computer in its ordinary capacity, or merely use the computer as a tool to perform the identified abstract idea. Use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to provide, receive, and display data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity), does not amount to significantly more than the judicial exception. See MPEP 2106.05(f). The claims employ generic computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment. This type of general linking does not amount to significantly more than the judicial exception. See MPEP 2106.05(h).
The additional elements of the independent claims, when considered both individually and in combination, do not amount to significantly more than the judicial exception.
Dependent claims 5, 6, 12, 13, 18, and 19 further narrow the abstract idea identified in the independent claims and do not introduce further additional elements for consideration; accordingly, they do not amount to significantly more than the judicial exception.
Dependent claims 2 and 9 introduce the additional element of “wherein the machine learning model is based on characteristics that relate to a grammatical style used by the first user, wherein the feedback incorporates the grammatical style used by the first user.” Use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity), does not amount to significantly more than the judicial exception. See MPEP 2106.05(f).
Dependent claims 3 and 10 introduce the additional element of “accessing information relating to the interaction; and automatically generating the feedback to combine the accessed information with the grammatical style used by the first user.” Use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity), does not amount to significantly more than the judicial exception. See MPEP 2106.05(f).
Dependent claims 4, 11, and 17 introduce the additional element of “wherein the interaction relates to a product and the feedback includes an aspect of the product, the method further comprising: accessing an image displaying the product; highlighting the product within the image; and the interactive user interface being a first user interface, displaying the image, the highlighted product, and the feedback on a second user interface.” This limitation does not amount to significantly more than the judicial exception because it is nothing more than generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(h).
Dependent claims 7, 14, and 20 introduce the additional element of “wherein the interaction relates to a product, and the method further comprises: displaying on the user interface a plurality of aspects of the products listed according to a relevancy, the plurality of aspects being listed as a plurality of selectable elements on the user interface; receiving a selection of a selectable aspect of the plurality of selectable elements; and generating the feedback text to include an aspect associated with the selectable aspect of the plurality of selectable elements.” This limitation does not amount to significantly more than the judicial exception because it is nothing more than generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(h).
Dependent claim 16 introduces the additional element of “wherein the machine learning model is based on characteristics that relate to a grammatical style used by the first user, wherein the feedback incorporates the grammatical style used by the first user, wherein the operations further comprise: accessing information relating to the interaction; and automatically generating the feedback to combine the accessed information with the grammatical style used by the first user.” Use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity), does not amount to significantly more than the judicial exception. See MPEP 2106.05(f).
The additional elements of the dependent claims, when considered both individually and in the context of the independent claims, do not amount to significantly more than the judicial exception.
Accordingly, claims 1-20 are rejected under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 8, 11, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Paulino et al. (US 20250371251 A1) in view of Ryan et al. (US 20140272898 A1).
Regarding claim 1, Paulino teaches determining an occurrence of an interaction in a web service about an item between a first user and a second user (¶ 5 discloses an interaction between a consumer and a merchant; ¶ 86-87 disclose a user purchasing items from an online store; ¶ 31-33 disclose review summaries of purchased items; see also ¶ 16, 46);
generating, by a machine learning model, a plurality of attributes related to the item based on the interaction and feedback from a plurality of users of the web service (the abstract and ¶ 89, 93 disclose using a machine learning model to generate attributes based on the feedback; ¶ 67-68 disclose receiving reviews and outputting attributes; see also ¶ 49, 101, 109);
providing … presentation to the first user for rating the plurality of attributes (¶ 49 discloses generating a review based on attributes; ¶ 44, 49-55 disclose details about a review data obtainer; see also ¶ 76, 89-91, 93);
receiving one or more ratings for the plurality of attributes from the first user (¶ 38, 57, 84, 91, 110 disclose review ratings);
automatically generating, by the machine learning model, a feedback text for a review of the item based on the received one or more ratings and one or more feedback items previously submitted by the first user (¶ 19, 21, 63, 89, 103 disclose generating a summary using a machine learning model);
providing … the feedback text to the first user (Fig. 3-4B and ¶ 9-10, 49-50 disclose feedback on a user interface; see also ¶ 34, 58, 61);
receiving a selection from the first user for the feedback text (¶ 19, 32, 35, 89-91 disclose selecting a set of reviews);
and submitting the feedback text for the item on the web service (¶ 35-39 disclose submitting user reviews).
Paulino does not specifically teach displaying the various items via an interface.
However, Ryan teaches a user interface for displaying (Fig. 4A-4H and ¶ 27-30 disclose various interfaces for displaying and collecting data; see also ¶ 29-36, 38, 41). Ryan also teaches feedback text.
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Paulino to include a user interface for displaying, as taught/suggested by Ryan. This known technique is applicable to the system of Paulino as they both share characteristics and capabilities; namely, they are directed to digital surveys. One of ordinary skill in the art would have recognized that applying the known technique of Ryan would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Ryan to the teachings of Paulino would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such interface display features into similar systems. Further, displaying on a user interface would have been recognized by those of ordinary skill in the art as resulting in an improved system that would give the user additional ease of filling out data when reviewing or giving feedback.
Regarding claims 4, 11, and 17, the combination of Paulino and Ryan teaches the limitations of claims 1, 8, and 15.
Paulino further teaches wherein the interaction relates to a product and the feedback includes an aspect of the product, the method further comprising: accessing an image displaying the product (¶ 34, 49, 80 disclose displaying an image of a product);
and the interactive user interface being a first user interface, displaying the image, and the feedback on a second user interface (¶ 34, 49, 80 disclose displaying an image of a product along with the feedback).
Paulino does not specifically teach highlighting an image of a product.
However, Ryan teaches wherein the interaction relates to a product and the feedback includes an aspect of the product, the method further comprising: accessing an image displaying the product (Fig. 4A-4G, 6; ¶ 23 explains that the question subject may take many forms: a company, a product, a person, an idea, a concept, a place, a brand, or any other subject suitable for questioning in an online survey. The brand is the product under the broadest reasonable interpretation (BRI), as can be seen in the language “I’ve never used it” and “I don’t use it anymore”; ¶ 28, 33 disclose displaying an image of a brand); highlighting the product within the image (Fig. 4A-4G, 6 and ¶ 23, 28, 33, as explained for the preceding limitation; specifically, in Fig. 3B one brand (product) is highlighted); and the interactive user interface being a first user interface, displaying the image, the highlighted product, and the feedback on the second user interface (Fig. 4A-4G, 6 disclose a display of the brands (products); Fig. 3B discloses an image of one brand (product); ¶ 41-42 disclose displaying follow-up question responses; see also ¶ 24-27, 38).
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Paulino to display a highlighted image of the product, as taught/suggested by Ryan. This known technique is applicable to the system of Paulino as they both share characteristics and capabilities; namely, they are directed to online surveys. One of ordinary skill in the art would have recognized that applying the known technique of Ryan would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Ryan to the teachings of Paulino would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such display features into similar systems. Further, displaying an image of a product would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow a user to better visualize the product being reviewed.
Regarding claim 8, Paulino teaches a non-transitory machine-readable medium having instructions, the instructions executable by a processor of a machine to perform operations comprising (Fig. 1, 8; ¶ 27, 32, 93):
determining an occurrence of an interaction in a web service about an item between a first user and a second user (¶ 5 discloses an interaction between a consumer and a merchant; ¶ 86-87 disclose a user purchasing items from an online store; ¶ 31-33 disclose review summaries of purchased items; see also ¶ 16, 46);
generating, by a machine learning model, a plurality of attributes related to the item based on the interaction and feedback from a plurality of users of the web service (the abstract and ¶ 89, 93 disclose using a machine learning model to generate attributes based on the feedback; ¶ 67-68 disclose receiving reviews and outputting attributes; see also ¶ 49, 101, 109);
providing … presentation to the first user for rating the plurality of attributes (¶ 49 discloses generating a review based on attributes; ¶ 44, 49-55 disclose details about a review data obtainer; see also ¶ 76, 89-91, 93);
receiving one or more ratings for the plurality of attributes from the first user (¶ 38, 57, 84, 91, 110 disclose review ratings);
automatically generating, by the machine learning model, a feedback text for a review of the item based on the received one or more ratings and one or more feedback items previously submitted by the first user (¶ 19, 21, 63, 89, 103 disclose generating a summary using a machine learning model);
providing … the feedback text to the first user (Fig. 3-4B and ¶ 9-10, 49-50 disclose feedback on a user interface; see also ¶ 34, 58, 61);
receiving a selection from the first user for the feedback text (¶ 19, 32, 35, 89-91 disclose selecting a set of reviews);
and submitting the feedback text for the item on the web service (¶ 35-39 disclose submitting user reviews).
Paulino does not specifically teach displaying the various items via an interface.
However, Ryan teaches a user interface for presentation (Fig. 4A-4H and ¶ 27-30 disclose various interfaces for displaying and collecting data; see also ¶ 29-36, 38, 41). Ryan also teaches feedback text.
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Paulino to include a user interface for presentation, as taught/suggested by Ryan. This known technique is applicable to the system of Paulino as they both share characteristics and capabilities; namely, they are directed to digital surveys. One of ordinary skill in the art would have recognized that applying the known technique of Ryan would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Ryan to the teachings of Paulino would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such interface display features into similar systems. Further, displaying on a user interface would have been recognized by those of ordinary skill in the art as resulting in an improved system that would give the user additional ease of filling out data when reviewing or giving feedback.
Regarding claim 15, Paulino teaches a device, comprising: a processor; and memory including instructions that, when executed by the processor, cause the device to perform operations including (Fig. 1, 8; ¶ 27, 32, 93):
determining an occurrence of an interaction in a web service about an item between a first user and a second user (¶ 5 discloses an interaction between a consumer and a merchant; ¶ 86-87 disclose a user purchasing items from an online store; ¶ 31-33 disclose review summaries of purchased items; see also ¶ 16, 46);
generating, by a machine learning model, a plurality of attributes related to the item based on the interaction and feedback from a plurality of users of the web service (the abstract and ¶ 89, 93 disclose using a machine learning model to generate attributes based on the feedback; ¶ 67-68 disclose receiving reviews and outputting attributes; see also ¶ 49, 101, 109);
providing … presentation to the first user for rating the plurality of attributes (¶ 49 discloses generating a review based on attributes; ¶ 44, 49-55 disclose details about a review data obtainer; see also ¶ 76, 89-91, 93);
receiving one or more ratings for the plurality of attributes from the first user (¶ 38, 57, 84, 91, 110 disclose review ratings);
automatically generating, by the machine learning model, a feedback text for a review of the item based on the received one or more ratings and one or more feedback items previously submitted by the first user (¶ 19, 21, 63, 89, 103 disclose generating a summary using a machine learning model);
providing … the feedback text to the first user (Fig. 3-4B and ¶ 9-10, 49-50 disclose feedback on a user interface; see also ¶ 34, 58, 61);
receiving a selection from the first user for the feedback text (¶ 19, 32, 35, 89-91 disclose selecting a set of reviews);
and submitting the feedback text for the item on the web service (¶ 35-39 disclose submitting user reviews).
Paulino does not specifically teach displaying the various items via an interface.
However, Ryan teaches a user interface for displaying (Fig. 4A-4H and ¶ 27-30 disclose various interfaces for displaying and collecting data; see also ¶ 29-36, 38, 41). Ryan also teaches feedback text.
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Paulino to include a user interface for displaying, as taught/suggested by Ryan. This known technique is applicable to the system of Paulino as they both share characteristics and capabilities; namely, they are directed to digital surveys. One of ordinary skill in the art would have recognized that applying the known technique of Ryan would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Ryan to the teachings of Paulino would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such interface display features into similar systems. Further, displaying on a user interface would have been recognized by those of ordinary skill in the art as resulting in an improved system that would give the user additional ease of filling out data when reviewing or giving feedback.
Claims 2, 3, 9, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Paulino et al. (US 20250371251 A1) in view of Ryan et al. (US 20140272898 A1) in further view of Luzhnica et al. (US 11516158 B1).
Regarding claims 2 and 9, the combination of Paulino and Ryan teaches the limitations of claims 1 and 8. The combination does not specifically teach a grammatical style.
However, Luzhnica teaches wherein the machine learning model is based on characteristics that relate to a grammatical style used by the first user, wherein the feedback incorporates the grammatical style used by the first user (col. 20, line 50 through col. 21, line 16 discloses adhering to grammatical rules; col. 26, lines 7-53 and col. 27, lines 25-45 disclose the use of grammatical forms and structures; col. 70, lines 10-33 discloses the use of machine learning with the responses).
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Paulino to include the use of a grammatical style, as taught/suggested by Luzhnica. This known technique is applicable to the system of Paulino as they both share characteristics and capabilities; namely, they are directed to recording user messages. One of ordinary skill in the art would have recognized that applying the known technique of Luzhnica would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Luzhnica to the teachings of Paulino would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such grammatical features into similar systems. Further, applying a grammatical style would have been recognized by those of ordinary skill in the art as resulting in an improved system that would give the user a personalized and more engaging experience.
Regarding claims 3 and 10, the combination of Paulino, Ryan, and Luzhnica teaches the limitations of claims 2 and 9. The combination of Paulino and Ryan does not specifically teach a grammatical style.
However, Luzhnica teaches the method further comprising: accessing information relating to the interaction; and automatically generating the feedback to combine the accessed information with the grammatical style used by the first user (col. 20, line 50 through col. 21, line 16 discloses adhering to grammatical rules; col. 26, lines 7-53 and col. 27, lines 25-45 disclose the use of grammatical forms and structures; col. 70, lines 10-33 discloses the use of machine learning with the responses; col. 99, line 65 through col. 100, line 40 and col. 101, line 25 through col. 103, line 15 disclose generating feedback).
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Paulino to include the use of a grammatical style, as taught/suggested by Luzhnica. This known technique is applicable to the system of Paulino as they both share characteristics and capabilities; namely, they are directed to recording user messages. One of ordinary skill in the art would have recognized that applying the known technique of Luzhnica would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Luzhnica to the teachings of Paulino would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such grammatical features into similar systems. Further, applying a grammatical style would have been recognized by those of ordinary skill in the art as resulting in an improved system that would give the user a personalized and more engaging experience.
Regarding claim 16, the combination of Paulino and Ryan teaches the limitations of claim 15. The combination does not specifically teach a grammatical style.
However, Luzhnica teaches wherein the machine learning model is based on characteristics that relate to a grammatical style used by the first user, wherein the feedback incorporates the grammatical style used by the first user, wherein the operations further comprise: accessing information relating to the interaction; and automatically generating the feedback to combine the accessed information with the grammatical style used by the first user (col. 20, line 50 through col. 21, line 16 discloses adhering to grammatical rules; col. 26, lines 7-53 and col. 27, lines 25-45 disclose the use of grammatical forms and structures; col. 70, lines 10-33 discloses the use of machine learning with the responses; col. 99, line 65 through col. 100, line 40 and col. 101, line 25 through col. 103, line 15 disclose generating feedback).
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Paulino to include the use of a grammatical style, as taught/suggested by Luzhnica. This known technique is applicable to the system of Paulino as they both share characteristics and capabilities; namely, they are directed to recording user messages. One of ordinary skill in the art would have recognized that applying the known technique of Luzhnica would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Luzhnica to the teachings of Paulino would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such grammatical features into similar systems. Further, applying a grammatical style would have been recognized by those of ordinary skill in the art as resulting in an improved system that would give the user a personalized and more engaging experience.
Claims 5, 6, 12, 13, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Paulino et al. (US 20250371251 A1) in view of Ryan et al. (US 20140272898 A1) in further view of Brondstetter et al. (US 20150356579 A1).
Regarding claims 5, 12, and 18, the combination of Paulino and Ryan teaches the limitations of claims 1, 8, and 15. The combination does not specifically teach ignoring a first feedback response element.
However, Brondstetter teaches determining if the feedback text has been selected; removing the feedback text when a determination is made that the feedback text is not selected; and presenting a different feedback text (¶ 26 discloses a first feedback request, “take a survey,” and, when that is ignored, a second feedback request, “click here for a reward for being our customer”; ¶ 47-48 disclose what may happen if a feedback request is ignored).
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Paulino to include ignoring a first feedback response element, as taught/suggested by Brondstetter. This known technique is applicable to the system of Paulino as they both share characteristics and capabilities; namely, they are directed to surveys that ask follow-up questions. One of ordinary skill in the art would have recognized that applying the known technique of Brondstetter would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Brondstetter to the teachings of Paulino would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such response features into similar systems. Further, ignoring a first feedback response element would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow the company the ability to rephrase and retry getting feedback.
Regarding claims 6, 13, and 19, the combination of Paulino and Ryan teaches the limitations of claims 5, 12, and 18. The combination teaches feedback text but does not specifically teach the sale of a good.
However, Brondstetter teaches wherein the interaction is a sale of a product, the feedback text is a first aspect of the product, and the different feedback text is a second aspect of the product different from the first aspect of the product (¶ 12, discloses feedback from the sale of goods. Specifically, feedback based on satisfaction with a product and a survey related to a post-purchase service, which are handled separately. ¶ 15, discloses different feedback events.).
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Paulino to include/perform the sale of a good, as taught/suggested by Brondstetter. This known technique is applicable to the system of Paulino as they both share characteristics and capabilities, namely, they are directed to online surveys. One of ordinary skill in the art would have recognized that applying the known technique of Brondstetter would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Brondstetter to the teachings of Paulino would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such response element features into similar systems. Further, applying the sale of a good would have been recognized by those of ordinary skill in the art as resulting in an improved system that would give the company the ability to define the type of subject to which the feedback relates.
Claim(s) 7, 14, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Paulino et al. (US 20250371251 A1) in view of Ryan et al. (US 20140272898 A1) in further view of Lu et al. (US 20220092652 A1).
Regarding claims 7, 14, and 20, the combination of Paulino and Ryan teaches the limitations of claims 1, 8, and 15. The combination does not specifically teach displaying a second user interface that lists a plurality of aspects of the products according to a relevancy.
However, Lu teaches displaying on the user interface a plurality of aspects of the products listed according to a relevancy, the plurality of aspects being listed as a plurality of selectable elements on the user interface; receiving a selection of a selectable aspect of the plurality of selectable elements; and generating the feedback text to include an aspect associated with the selectable aspect of the plurality of selectable elements (Fig. 4, Fig. 6, ¶ 67-68, 72, discloses receiving feedback on specific features of the product. ¶ 66, 71, discloses selecting a product to provide feedback for.).
It would have been obvious to one of ordinary skill in the art at the time of filing to modify Paulino to include/perform displaying a second user interface that lists a plurality of aspects of the products according to a relevancy, as taught/suggested by Lu. This known technique is applicable to the system of Paulino as they both share characteristics and capabilities, namely, they are directed to user feedback. One of ordinary skill in the art would have recognized that applying the known technique of Lu would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Lu to the teachings of Paulino would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such response aspect features into similar systems. Further, applying the technique of displaying a second user interface that lists a plurality of aspects of the products according to a relevancy would have been recognized by those of ordinary skill in the art as resulting in an improved system that would give the publisher of the feedback survey the ability to pinpoint features of the product/service.
Other pertinent prior art includes Dowell et al. (US 20230186333 A1), which discloses collecting and curating survey responses; Parikh et al. (US 20170364967 A1), which discloses using machine learning to evaluate and sort product feedback; Tu et al. (US 20200218770 A1), which discloses determining, using a machine learning model, a feedback sensitivity score associated with the content creator; and Hudda et al. (US 20170046775 A1), which discloses that, for each particular user feedback question, the system generates user feedback graphics based on stored user feedback associated with the particular user feedback question.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMIE H AUSTIN whose telephone number is (571)272-7363. The examiner can normally be reached Monday, Tuesday, Thursday, Friday 7am-2pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein, can be reached at (571) 270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JAMIE H. AUSTIN
Examiner
Art Unit 3625
/JAMIE H AUSTIN/Primary Examiner, Art Unit 3625