DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to an abstract idea, without “significantly more”. The claims recite a method for optimizing feedback, comprising: a. providing a dynamic feedback system for a good or service, the dynamic feedback system based on one or more parameters, where the one or more parameters relate to a rating device, a distributed feedback monitoring method, implementation of a machine learning algorithm for optimization of rating system features, or a combination thereof; b. setting or adjusting multiple parameters dynamically; c. capturing a set of feedback from users using the dynamic feedback system, where a dashboard showing at least one previous rating, an average rating, a rating trend, information from related feedback, or a combination thereof is displayed to at least one of the users before the set of feedback is captured, where the information from related feedback is evaluated for similarity between information received from one or more of the users and/or a provider of the service; d. repeating steps (b)-(c) at least once; e. providing feedback information to the provider, the feedback information including at least one previous rating, an average rating, a rating trend, the captured sets of feedback, one or more values derived from the captured sets of feedback, one or more values derived from a ground truth and the captured sets of feedback, or a combination thereof, where the feedback information provided to the provider is configured to allow an early detection of a change in cooperation, crowd-wisdom, and/or quality; f. changing the service based on the feedback information
and
a server, comprising: a processor; and a non-transitory computer readable medium containing instructions that, when executed, cause the processor to: allow users on clients to access a dynamic feedback system for a service, the dynamic feedback system capable of gathering input from each user, the dynamic feedback system based on one or more parameters, the parameters relating to a rating device, a distributed feedback monitoring method, implementation of a machine learning algorithm for optimization of rating system features, or a combination thereof; send data to display to the users prior to each user sending feedback, the data comprising at least one previous rating, an average rating, a rating trend, information from related feedback, or a combination thereof, and receiving first feedback from the users, where the information from related feedback is evaluated for similarity between information received from one or more of the users and/or a provider of the service; allow a provider to set or adjust at least one of the one or more parameters; send data to display to the users prior to each user sending feedback, the data comprising at least one previous rating, an average rating, a rating trend, information from related feedback, or a combination thereof, and receiving second feedback from the users, where the information from related feedback is evaluated for similarity between information received from one or more of the users and/or a provider of the service; display feedback information to the provider, the feedback information including at least one previous rating, an average rating, a rating trend, the captured sets of feedback, one or more values derived from the captured sets of feedback, one or more values derived from a ground truth and the captured sets of feedback, or a combination thereof, where the feedback information provided to the provider is configured to allow an early detection of a change in cooperation, crowd-wisdom, and/or quality
and
a system comprising: a server according to claim 6, configured to provide a dynamic feedback system; and two or more clients, each client comprising: a client processor; and a non-transitory computer readable medium containing instructions that, when executed, cause the client processor to: allow a user to enter feedback; receive data from the server related to the feedback before or during a time when the feedback is entered, the data comprising at least one previous rating, an average rating, a rating trend, information from related feedback, or a combination thereof, and receiving first feedback from the users, where the information from related feedback was evaluated for similarity between information received from one or more of the users and/or a provider of the service; display the received data; and send the feedback to the server.
and
a client for optimizing feedback, comprising: a client processor; and a non-transitory computer readable medium containing instructions that, when executed, cause the client processor to: allow a user to enter feedback as part of a dynamic feedback system, where multiple parameters are set or adjusted dynamically; receive data from a server related to the feedback before or during a time when the feedback is entered, the data comprising at least one previous rating, an average rating, a rating trend, information from related feedback, or a combination thereof, and receiving first feedback from the users, where the information from related feedback was evaluated for similarity between information received from one or more of the users and/or a provider of the service; display the received data; and send the feedback to the server, where the feedback is configured to allow an early detection of a change in cooperation, crowd-wisdom, and/or quality
which are an abstract idea that falls under "Certain Methods of Organizing Human Activity", specifically "Commercial or Legal Interactions (Including Agreements in the form of Contracts; Legal Obligations; Advertising, Marketing, or Sales Activities or Behaviors; Business Relations)", as discussed in MPEP § 2106.04(a)(2) Parts (I) and (II), and in the 2019 Revised Patent Subject Matter Eligibility Guidance.
This judicial exception is not integrated into a practical application because the recited "server", "processor", "computer readable medium", and "client processor" are generically recited computer elements that do not add a meaningful limitation to the abstract idea; they amount to simply implementing the abstract idea on a computer.
There is no improvement to computer technology, since the claims are directed to collecting, analyzing, and displaying customer feedback, which is not related to a long-standing problem in computer technology. Additionally, there is no practical application because no particular machine is used to implement the claim language; instead, only generic computer components are used to perform the invention. In addition, there is no transformation of an article into a different state or thing. Lastly, the claims do not attempt to apply the abstract idea in a meaningful way beyond simply using the claimed machine.
All dependent claims are also rejected, because they merely further detail the abstract idea.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 6 and 8 recite “information…is configured to allow an early detection of a change” or “feedback... is configured to allow an early detection of a change” but it is unclear what could constitute such a configuration.
The various dependent claims inherit the above issues from their respective parent claims.
In addition, claim 1 recites a "feedback system for a good or service" yet also recites "changing the service," which raises the question of whether the latter limitation is required, since a feedback system directed to a good would not require it.
Finally, claim 2 recites “occasionally receiving feedback” but it is unclear what would constitute “occasionally”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3 and 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Hudda et al. (pub. no. 20210264507) in view of Silverstein et al. (pub. no. 20220318861).
Regarding claim 1, Hudda discloses a method for optimizing feedback, comprising: a. providing a dynamic feedback system for a good or service, the dynamic feedback system based on one or more parameters, where the one or more parameters relate to a rating device, a distributed feedback monitoring method, implementation of a machine learning algorithm for optimization of rating system features, or a combination thereof; b. setting or adjusting multiple parameters dynamically (“User interface choices that encourage users to leave feedback can improve network-based commerce sites. For example, when a user purchases a product from a network-based commerce system, the commerce presents one or more feedback options to the user (e.g., send a follow-up email or other message to a user after records show that the product has been delivered).
In some example embodiments, the network-based commerce system determines one or more questions associated with the product. In some example embodiments, the questions are determined based on product type, user interests, previous product feedback (e.g., text-mining user comments about the product to determine subjects of interest), product specifications, information provided by a producer of the product, and so on”, [0017] & [0018]; questions are interpreted to be parameters);
c. capturing a set of feedback from users using the dynamic feedback system, where a dashboard showing at least one previous rating, an average rating, a rating trend, information from related feedback, or a combination thereof is displayed to at least one of the users before the set of feedback is captured, where the information from related feedback is evaluated for similarity between information received from one or more of the users and/or a provider of the service (“Once the one or more questions have been determined, the network-based commerce system presents the questions to the user as part of a web page generated by the network-based commerce system. Each question has an associated visual feedback image presented as part of a user feedback section of a displayed web page (this user feedback section of a displayed web page may be called an aspect card). In some example embodiments, the visual feedback image is a radial graph (e.g., an annulus that has a base color and a filled in section of another color representing a percentage or amount of the data being represented). In one example embodiment, the radial graph represents the portion of users who like Movie A. Thus, if forty percent of users like movie A the radial graph is an annulus (e.g., a circle with a concentric circle removed from the middle) with forty percent of the circle filled in with the color blue while the rest remains the color white”, [0019]);
d. repeating steps (b)-(c) at least once (additional users who use the system to provide feedback on a particular item); e. providing feedback information to the provider, the feedback information including at least one previous rating, an average rating, a rating trend, the captured sets of feedback, one or more values derived from the captured sets of feedback, one or more values derived from a ground truth and the captured sets of feedback, or a combination thereof, where the feedback information provided to the provider is configured to allow an early detection of a change in cooperation, crowd-wisdom, and/or quality (“Once the user responds to the one or more feedback questions by selecting one of the presented possible answers, the client system (at which the one or more feedback questions are displayed) transmits the information back to the network-based commerce system. In some example embodiments, there are two possible answers (e.g., yes or no). In other example embodiments, more than two options are displayed. The network-based commerce system updates feedback information in real-time based on a most recent user answer (e.g., an answer that was just received), and sends an update to the visual feedback image back to the client system for presentation to the user. Thus, the user sees the image update in real time based on the answer selected by the user. In this way, the user is more likely to give feedback”, [0020]).
Regarding claim 1, it is noted that Hudda does not explicitly disclose making a change to the system based on the feedback. Silverstein, however, teaches making a change to the system based on the feedback ("Customer feedback is a vital part of any modern enterprise. Companies, products, services, etc. are judged by potential customers based on the feedback from other customers through reviews. There are many avenues from which people provide feedback, such as through ratings websites, such as Yelp®, or social media websites such as Facebook® or Instagram®. Companies also gauge their own success or weakness based on such reviews. These reviews help direct people and companies in how to change, update, and fine-tune their products, services, and business plans, etc. for improved future implementations", [0002]).
Exemplary rationales that may support a conclusion of obviousness include combining prior art elements according to known methods to yield predictable results. Here, both Hudda and Silverstein are directed to systems that generate feedback on user purchases. To use the feedback in the Hudda invention to improve products and services as taught by Silverstein would be to combine prior art elements according to known methods to yield predictable results. Therefore, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the claimed invention to modify Hudda to use the received feedback to improve the products and services as taught by Silverstein. To do so would provide guidance to a company as to how to improve its business.
Regarding claim 2, Hudda discloses occasionally receiving feedback from each user relating to that user's preferences in governance space (“server data module(s) 340, storing data related to the server system 120, including but not limited to:
user profile data 342, including both data provided by the user, who will be prompted to provide some personal information, such as his or her name, age (e.g., birth date), gender, interests, contact information, home town, address, educational background (e.g., schools, majors), current job title, job description, industry, employment history, skills, professional organizations, memberships to other social networks, customers, past business relationships, and seller preferences; and inferred user information based on user activity, social graph data, remaining power threshold value, and so on”, [0059] & [0060]).
Regarding claim 3, Hudda discloses setting or adjusting multiple parameters includes a step change or a trajectory parameter transition over time to a parameter (“The network-based commerce system updates feedback information in real-time based on a most recent user answer (e.g., an answer that was just received), and sends an update to the visual feedback image back to the client system for presentation to the user”, [0020]; updating feedback information in response to the most recent user answer to a question is interpreted to be a step change).
Regarding claim 6, Hudda discloses a server, comprising: a processor; and a non-transitory computer readable medium containing instructions that, when executed, cause the processor to: allow users on clients to access a dynamic feedback system for a service, the dynamic feedback system capable of gathering input from each user (“In some embodiments, the method 700 is performed at a server system (e.g., server system 120 in FIG. 1) including one or more processors and memory storing one or more programs for execution by the one or more processors.
The server system (e.g., server system 120 in FIG. 1) processes, in operation 702, a purchase request for a product from a user. For example, any time a user purchases a product through the network-based commerce system, the purchase is noted by the server system (e.g., server system 120 in FIG. 1).
In response to detecting that a product has been purchased, the server system (e.g., server system 120 in FIG. 1) prepares to request user feedback from the user for the product that was purchased. This request can be delivered to the user immediately or at a later time. The request can be sent via email or through a messaging service of the network-based commerce system”, [0088] - [0090]),
the dynamic feedback system based on one or more parameters, the parameters relating to a rating device, a distributed feedback monitoring method, implementation of a machine learning algorithm for optimization of rating system features, or a combination thereof (“ In some example embodiments, the server system (e.g., server system 120 in FIG. 1) identifies, in operation 704, one or more user feedback questions for the product. Identifying the one or more user feedback questions includes the server system (e.g., server system 120 in FIG. 1) determining, in operation 706, whether there are any predetermined user feedback questions already associated with the product. The server system (e.g., server system 120 in FIG. 1) stores a database of user feedback question. Each question in the database is associated with one or more products or product categories”, [0091];
“In accordance with a determination that there are predetermined user feedback questions already associated with the product, the server system (e.g., server system 120 in FIG. 1) selects, in operation 708, one or more of the predetermined user feedback questions based on the amount of user feedback previously received for each user feedback question and the preferences of the user. For example, if the user's preferences indicate that the user values battery life highly when evaluating electronic products, the server system 120 is more likely to select user feedback questions related to battery life.
In some example embodiments, a predetermined number of questions can be displayed simultaneously and only the number of user feedback questions that can be simultaneously displayed are selected.
In some example embodiments, if there are more available questions for a particular product than are needed, the server system (e.g., server system 120 in FIG. 1) selects the user feedback questions that have the fewest received responses. In other example embodiments, the user feedback questions are ranked by topic based on the preferences of the user who purchased the product. For example, if a particular user is price sensitive, user feedback questions associated with the price of a product are ranked more highly than other user feedback questions.
In some example embodiments, user feedback questions are selected based on a determination of the relative importance of the questions in evaluating a product. In some example embodiments, the server system (e.g., the server system 120 in FIG. 1) determines, for each user feedback question, the relative impact the user feedback question has on future user purchase decisions.
For example, the server system (e.g., the server system 120 in FIG. 1) analyzes the purchase trends for cameras to determine which user feedback questions were most predictive of user purchase decisions. It determines that Question A was very important to user purchase decisions for cameras because cameras that scored highly on Question A sold a high number of units while cameras that scored lowly on Question A sold a low number of units. In contract Question B was not important to user purchase decisions because cameras that scored highly on Question B sold at similar levels to cameras that scored lowly on Question B, when controlled for other factors. In this example, Question A would be determined to be more important than Question B.
In other example embodiments, the server system (e.g., the server system 120 in FIG. 1) selects questions that are not considered to have reached consensus. For example, if Question C has a high number of high ratings and a high number of low ratings, but relatively few intermediate ratings, the server system (e.g., the server system 120 in FIG. 1) determines that no consensus has been reached for the question and will prioritize Question C for further feedback”, [0094] – [0099]);
send data to display to the users prior to each user sending feedback, the data comprising at least one previous rating, an average rating, a rating trend, information from related feedback, or a combination thereof, and receiving first feedback from the users, where the information from related feedback is evaluated for similarity between information received from one or more of the users and/or a provider of the service (“For each particular user feedback question, the server system (e.g., server system 120 in FIG. 1) generates, in operation 718, a user feedback image based on stored user feedback associated with the particular user feedback question. The user feedback image is a graphic that represents a percentage of users who have selected each possible response to the particular user feedback question. For example, if the user feedback question is “The product provides good quality for the money,” the user feedback image will represent the percentage of users that answered yes and the percentage of users that answered no.
In some example embodiments, generating a user feedback image includes determining a particular type of user feedback image. For example, assume a pie chart is the determined user feedback image. The server system (e.g., the server system 120 in FIG. 1) then assigns a particular color to each potential answer of the user feedback question. The server system (e.g., the server system 120 in FIG. 1) then fills in the pie chart using previously received user feedback data such that each respective answer receives an appropriate area of the pie chart based on the percentage of the users that responded with the particular answer.
In some example embodiments, the server system (e.g., server system 120 in FIG. 1) transmits, in operation 720, the one or more selected user feedback questions and the generated user feedback images to a client system associated with the user for display. The server system (e.g., server system 120 in FIG. 1) then receives, in operation 722, user feedback for a user feedback question of the selected one or more user feedback questions”, [0108] – [0110]);
allow a provider to set or adjust at least one of the one or more parameters (“In some example embodiments, if there are more available questions for a particular product than are needed, the server system (e.g., server system 120 in FIG. 1) selects the user feedback questions that have the fewest received responses. In other example embodiments, the user feedback questions are ranked by topic based on the preferences of the user who purchased the product. For example, if a particular user is price sensitive, user feedback questions associated with the price of a product are ranked more highly than other user feedback questions.
In some example embodiments, user feedback questions are selected based on a determination of the relative importance of the questions in evaluating a product. In some example embodiments, the server system (e.g., the server system 120 in FIG. 1) determines, for each user feedback question, the relative impact the user feedback question has on future user purchase decisions.
For example, the server system (e.g., the server system 120 in FIG. 1) analyzes the purchase trends for cameras to determine which user feedback questions were most predictive of user purchase decisions. It determines that Question A was very important to user purchase decisions for cameras because cameras that scored highly on Question A sold a high number of units while cameras that scored lowly on Question A sold a low number of units. In contract Question B was not important to user purchase decisions because cameras that scored highly on Question B sold at similar levels to cameras that scored lowly on Question B, when controlled for other factors. In this example, Question A would be determined to be more important than Question B.
In other example embodiments, the server system (e.g., the server system 120 in FIG. 1) selects questions that are not considered to have reached consensus. For example, if Question C has a high number of high ratings and a high number of low ratings, but relatively few intermediate ratings, the server system (e.g., the server system 120 in FIG. 1) determines that no consensus has been reached for the question and will prioritize Question C for further feedback", [0096] – [0099]).
Regarding claim 6, it is noted that Hudda does not explicitly disclose displaying feedback to the provider and improving the system based on the feedback. Silverstein, however, teaches displaying feedback to the provider and improving the system based on the feedback ([0002]).
Exemplary rationales that may support a conclusion of obviousness include combining prior art elements according to known methods to yield predictable results. Here, both Hudda and Silverstein are directed to systems that generate feedback on user purchases. To use the feedback in the Hudda invention to improve products and services as taught by Silverstein would be to combine prior art elements according to known methods to yield predictable results. Therefore, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the claimed invention to modify Hudda to use the received feedback to improve the products and services as taught by Silverstein. To do so would provide guidance to a company as to how to improve its business.
Regarding claim 7, Hudda discloses a system comprising: a server according to claim 6, configured to provide a dynamic feedback system; and two or more clients, each client comprising: a client processor; and a non-transitory computer readable medium containing instructions (“With reference to FIG. 1, an example embodiment of a high-level client-server-based network architecture 100 is shown. A server system 120, in the example forms of a network-based publication system or payment system, provides server-side functionality via a network 104 (e.g., the Internet or wide area network (WAN)) to one or more client devices 102. FIG. 1 illustrates, for example, a web client 112 (e.g., a browser, such as the Internet Explorer® browser developed by Microsoft® Corporation of Redmond, Wash. State), client application(s) 114, and a programmatic client 116 executing on the client device 102”, [0021])
that, when executed, cause the client processor to: allow a user to enter feedback; receive data from the server related to the feedback before or during a time when the feedback is entered, the data comprising at least one previous rating, an average rating, a rating trend, information from related feedback, or a combination thereof, and receiving first feedback from the users, where the information from related feedback was evaluated for similarity between information received from one or more of the users and/or a provider of the service; display the received data ([0019]);
and send the feedback to the server ([0020]).
Regarding claim 8, Hudda discloses a client for optimizing feedback, comprising: a client processor; and a non-transitory computer readable medium containing instructions ([0021])
that, when executed, cause the client processor to: allow a user to enter feedback as part of a dynamic feedback system, where multiple parameters are set or adjusted dynamically ([0094] – [0099]);
receive data from a server related to the feedback before or during a time when the feedback is entered, the data comprising at least one previous rating, an average rating, a rating trend, information from related feedback, or a combination thereof, and receiving first feedback from the users, where the information from related feedback was evaluated for similarity between information received from one or more of the users and/or a provider of the service; display the received data ([0019]);
and send the feedback to the server ([0020]).
Regarding claim 8, it is noted that Hudda does not explicitly disclose displaying feedback to the provider and improving the system based on the feedback. Silverstein, however, teaches displaying feedback to the provider and improving the system based on the feedback ([0002]).
Exemplary rationales that may support a conclusion of obviousness include combining prior art elements according to known methods to yield predictable results. Here, both Hudda and Silverstein are directed to systems that generate feedback on user purchases. To use the feedback in the Hudda invention to improve products and services as taught by Silverstein would be to combine prior art elements according to known methods to yield predictable results. Therefore, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the claimed invention to modify Hudda to use the received feedback to improve the products and services as taught by Silverstein. To do so would provide guidance to a company as to how to improve its business.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAWRENCE STEFAN GALKA whose telephone number is (571)270-1386. The examiner can normally be reached M-F 6-9 & 12-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Lewis can be reached at 571-272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LAWRENCE S GALKA/Primary Examiner, Art Unit 3715