Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of the Application
1. The following is a Non-Final Office Action in response to communication received on 12/12/2025. Claims 1-12 and 15-22 are pending in this action.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/12/2025 has been entered.
Response to Amendment
3. Applicant’s amendments to claims 1, 11, and 19 are acknowledged. Applicant’s cancellation of claims 13-14 is acknowledged.
Response to Arguments
4. On remarks pages 15-16, Applicant argues the cited prior art in view of Applicant’s amendments.
The Examiner notes that the amended limitation of “wherein acquiring the sentiment data further includes extracting feedback comprising (i) verbal cues, (ii) visual cues, and (iii) textual cues, all three of which are included in video content describing the client’s interactions with the entity, such that the verbal cues, the visual cues, and the textual cues are extracted from the video content, and wherein the verbal cues, the visual cues, and the textual cues are subsequently ranked differently relative to one another” as recited in claim 1 is a broad result-oriented (results-based) claim.
Here the limitation does not disclose how the extraction takes place, whether it relates to one video content interaction or to multiple different video content interactions over time, or how the cues are ranked differently other than being later (subsequently) ranked differently. This could mean merely ranking determined information differently over time or ranking different interactions differently. Further, the claims do not disclose how the cues are determined other than that they relate to verbal, visual, and textual content.
Therefore the Examiner interprets this limitation to recite/require receiving feedback relating to text, visual, and audio sentiment from a user interacting with a video or videos of the entity, and ranking that information differently over time or ranking different interactions differently.
The previously cited prior art of Pinckney teaches this. Specifically, Pinckney teaches that information recommended or displayed in the system can be video (see paragraphs 0072 and 0082). Pinckney goes on to teach receiving feedback related to that recommendation or video over time (see paragraphs 0104-0105); weighting those determined feedbacks (e.g., subsequently, in that it is after the determination) differently based on constraints like time (e.g., ranking differently relative to one another) (see paragraph 0105); and that feedback can be things like clicks after being asked questions (see paragraph 0139) (which could be visual, verbal (as it relates to words), or textual cues as broadly recited in the claims), text answers to questions on a screen (see paragraph 0139 and Figure 8A) (which could be visual, verbal (as it relates to words), or textual cues as broadly recited in the claims), and voice responses (which could be verbal cues or verbal (as it relates to words)) (see paragraph 0137).
Further, Applicant’s additional amendment of “wherein applying the one or more weighting factors includes identifying first normalized scoring data associated with the verbal cues, second normalized scoring data associated with the visual cues, and third normalized scoring data associated with the textual cues, and wherein the first, second, and third normalized scoring data are ranked differently relative to one another based, at least in part, on determined reliability weighting metrics associated with the verbal cues, the visual cues, and the textual cues,” as recited in claim 1, is again a broad result-oriented (results-based) claim. Here the claim does not disclose how the scoring data associated with verbal cues, visual cues, and textual cues is identified, or how the first, second, and third scoring data are ranked differently relative to one another other than that it is based, at least in part, on determined reliability weighting metrics associated with the visual cues, verbal cues, and textual cues. How the determined reliability weighting metrics are determined is not recited; further, it is not required that they be determined here, as they could be merely a weight previously determined in the system, like weighting feedback less than a previous time (see Pinckney paragraph 0105).
Therefore the Examiner interprets the limitation to recite/require identifying scoring data associated with verbal cues, textual cues, and visual cues, and weighting them differently based on defined weights in the system, like how well the user answered questions, when the response occurs (e.g., weighting feedback less the ninth time), etc. This is taught in paragraph 0105 of Pinckney.
The only element not specifically disclosed in Pinckney is that the scoring data is “normalized”; however, the secondary reference of Pawar has been previously relied upon to teach this.
If Applicant were to amend the claims to further define how these functions are performed, then in the interest of compact prosecution the Examiner notes the newly cited reference of Moudy et al. (United States Patent Number: US 9,336,268), which is a system that receives feedback from users to determine sentiment scores (see abstract). Moudy et al. specifically discusses receiving text, audio, and image/video feedback and then weighting them separately to determine sentiment (see column 31, line 64 through column 32, line 55 and corresponding Figures 12A-12C). The Examiner notes that Moudy et al. is not relied upon for prior art purposes here; rather, the above is provided in the interest of compact prosecution.
Therefore the Examiner respectfully disagrees with Applicant’s arguments.
Claim Rejections - 35 USC § 103
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claim(s) 1-3, 5-9, 11-12, and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Pinckney et al. (United States Patent Application Publication Number: US 2011/0302117) in view of Pawar (United States Patent Application Publication Number: US 2018/0349499).
As per claim 1, Pinckney et al. teaches A computer system configured to generate and dynamically update an experience score for a client, where the experience score operates as a quantitative indicator describing a relationship between the client and an entity, the computer system being further configured to use the experience score to modify one or more subsequent interactions the entity has with the client so as to improve the relationship, said computer system comprising: (see abstract, paragraphs 0115 and 0198, Examiner’s note: teaches a computer based recommendation system where information may be updated over time (see paragraph 0198) and the recommendation may be based on a score (see paragraph 0115)).
one or more processors; and one or more computer-readable hardware storage devices that store instructions that are executable by the one or more processors to cause the computer system to: (see paragraph 0191, Examiner’s note: computer readable medium being executed by a processor).
train a machine learning (ML) engine (see paragraphs 0058, 0066, 0079, 0088, 0119, Examiner’s note: teaches the system is trained (see paragraphs 0079, 0088, 0119) where the system is implemented by decisions made by machine learning (see paragraphs 0058 and 0066)).
to generate one or more weighting factors that assign relative importance levels to different types of interactions between clients and entities; (see paragraphs 0070, 0105, 0208, 0233, Examiner’s note: teaches different types of weights used in the machine learning system).
Wherein the ML engine is further tasked with learning, based on a plurality of interactions between the client and the entity, which mode of communication is preferred by the client when the client interacts with the entity (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when, where, and how to provide information to users).
acquire sentiment data detailing the relationship between the client and the entity, wherein the sentiment data is acquired from different types of interactions the client had relative to the entity, and wherein the sentiment data includes structured sentiment data and unstructured sentiment data; (see paragraphs 0059-0060, 0176, and 0187-0188, Examiner’s note: teaches here the information input in the system can be either unstructured or structured data that is used to determine information about the user to provide recommendations to the user or other users).
Wherein acquiring the sentiment data further includes extracting feedback data comprising (i) verbal cues, (ii) visual cues, (iii) textual cues, all three of which are included in video content describing the client’s interactions with the entity, such that the verbal cues, the visual cues, and the textual cues are extracted from the video content, and wherein the verbal cues, the visual cues, and the textual cues are subsequently ranked differently relative to one another; (see paragraphs 0072-0082, 0104-0105, 0139, and Figure 8A, Examiner’s note: Here the limitation does not disclose how the extraction takes place, whether it relates to one video content interaction or to multiple different video content interactions over time, or how the cues are ranked differently other than being later (subsequently) ranked differently. This could mean merely ranking determined information differently over time or ranking different interactions differently. Further, the claims do not disclose how the cues are determined other than that they relate to verbal, visual, and textual content. Therefore the Examiner interprets this limitation to recite/require receiving feedback relating to text, visual, and audio sentiment from a user interacting with a video or videos of the entity, and ranking that information differently over time or ranking different interactions differently. Pinckney teaches this: Pinckney teaches that information recommended or displayed in the system can be video (see paragraphs 0072 and 0082). Pinckney goes on to teach receiving feedback related to that recommendation or video over time (see paragraphs 0104-0105); weighting those determined feedbacks (e.g., subsequently, in that it is after the determination) differently based on constraints like time (e.g., ranking differently relative to one another) (see paragraph 0105); and that feedback can be things like clicks after being asked questions (see paragraph 0139) (which could be visual, verbal (as it relates to words), or textual cues as broadly recited in the claims), text answers to questions on a screen (see paragraph 0139 and Figure 8A) (which could be visual, verbal (as it relates to words), or textual cues as broadly recited in the claims), and voice responses (which could be verbal cues or verbal (as it relates to words)) (see paragraph 0137)).
detect a first type of interaction the client had relative to the entity, the first type of interaction including the client navigating a user interface of the entity, (see paragraphs 0058-0059 and Figure 2, Examiner’s note: teaches Q and A where a user may be provided a recommendation after 10 questions).
The first type of interaction, which includes involvement of the user interface, is included among the plurality of interactions between the client and the entity and is used by the ML engine when learning which mode of communication is preferred by the client (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when and where to provide information to users).
wherein the user interface is initially structured in accordance with a first layout such that, during said navigation of the user interface, a number of navigations that is required to reach a particular target displayed by the user interface is a first number of navigations; (see paragraphs 0058-0059 and Figure 2, Examiner’s note: teaches Q and A where a user may be provided a recommendation after 10 questions).
use natural language processing (NLP) to provide structure to the unstructured sentiment data such that a second set of structured sentiment data is acquired, wherein the structured sentiment data and the second set of structured sentiment data constitute an initial set of scoring data; (see paragraphs 0187, 0190-0191, Examiner’s note: determining preferences through natural language processing, for example determining whether the content is about electronics).
apply the one or more weighting factors to the initial set of scoring data to generate a set of weighted scores; (see paragraphs 0070, 0105, 0208, 0233, Examiner’s note: teaches different types of weights used in the machine learning system).
Wherein applying the one or more weighting factors includes identifying first scoring data associated with the verbal cues, second scoring data associated with the visual cues, and third scoring data associated with textual cues, and wherein the first, second, and third scoring data are ranked differently relative to one another based, at least in part, on determined reliability weighting metrics associated with the verbal cues, the visual cues, and the textual cues (see paragraph 0105, Examiner’s note: Here the claim does not disclose how the scoring data associated with verbal cues, visual cues, and textual cues is identified, or how the first, second, and third scoring data are ranked differently relative to one another other than that it is based, at least in part, on determined reliability weighting metrics associated with the visual cues, verbal cues, and textual cues. How the determined reliability weighting metrics are determined is not recited; further, it is not required that they be determined here, as they could be merely a weight previously determined in the system, like weighting feedback less than a previous time (see Pinckney paragraph 0105). Therefore the Examiner interprets the limitation to recite/require identifying scoring data associated with verbal cues, textual cues, and visual cues, and weighting them differently based on defined weights in the system, like how well the user answered questions, when the response occurs (e.g., weighting feedback less the ninth time), etc. This is taught in paragraph 0105 of Pinckney.).
after generating the set of weighted scores, generate the experience score by aggregating the set of weighted scores; (see paragraph 0115 and Figures 4, 5, and 11, Examiner’s note: here shows providing a score (see paragraph 0115), and the Figures show providing a percentage or hunch that the recommendation is correct or that a user is similar).
use the experience score to modify a subsequent interaction the client has with the entity, (see paragraphs 0090, 0117, 0123, Figures 6-7, and 16, Examiner’s note: teaches this in multiple ways, for example the top list (see paragraph 0090 and Figure 6), user contributions to the system and other user contributions to the system on the home page (see paragraph 0123 and Figure 16), and providing a user recent activity and topics they might like on the homepage (see paragraph 0117 and Figure 7)).
Examiner’s note: this is consistent with Applicant’s invention (see paragraph 0089) of providing information on what a user particularly likes or frequently purchases in fewer steps.
The subsequent interaction, which further includes involvement of the user interface, is also included among the plurality of interactions between the client and the entity and is also used by the ML engine when learning which mode of communication is preferred by the client (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when and where to provide information to users).
wherein modifying the subsequent interaction the client has with the entity includes modifying the user interface from having the first layout to having a second layout that is different than the first layout, wherein, as a result of the user interface being modified, the number of navigations that are required to reach the particular target displayed by the user interface is reduced, such that a second, reduced number of navigations is now required to reach the particular target displayed by the user interface; (see paragraphs 0090, 0117, 0123, Figures 6-7, and 16, Examiner’s note: teaches this in multiple ways, for example the top list (see paragraph 0090 and Figure 6), user contributions to the system and other user contributions to the system on the home page (see paragraph 0123 and Figure 16), and providing a user recent activity and topics they might like on the homepage (see paragraph 0117 and Figure 7)).
Examiner’s note: this is consistent with Applicant’s invention (see paragraph 0089) of providing information on what a user particularly likes or frequently purchases in fewer steps.
in response to acquiring new training data, cause the ML engine to learn a new set of one or more weighting factors that assign newly learned relative importance levels to the different types of interactions between the clients and the entities; and subsequent to the ML engine being further trained using the new training data, cause the ML engine to update the experience score using the new set of one or more weighting factors (see paragraphs 0058, 0060, 0104-0105, Examiner’s note: system learns by feedback (see paragraphs 0058, 0060, 0104-0105), and weights may be based on a user’s history in the system (see paragraph 0105)).
based on the ML engine’s learning as to which mode of communication is preferred by the client, determine that the client prefers an alternative mode of communication relative to communicating via the interface; and setting the alternative mode of communication as a default mode of communication for the client, such that the alternative mode of communication will be used for subsequent communications with the client (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when and where to provide information to users).
based on the ML engine’s learning of a type of information that the client finds offensive or off-putting, modify the user interface to prevent the type of information that the client finds offensive or off-putting from being displayed (see paragraphs 0131, 0216-0217, 0222, and Figure 24, Examiner’s note: removing content that is voted to be objectionable, irrelevant, or low quality (see paragraph 0131 and Figure 21); not providing information based on it not being interesting (see paragraphs 0216-0217 and 0222); further teaches not providing information based on it being sold out, not arriving in time, or current weather (see paragraphs 0233-0234)).
Pinckney et al. does not expressly teach normalizing data before applying weights or, more specifically as recited in the claims, normalize the initial set of scoring data; after normalizing the initial set of scoring data, apply one or more weights to normalized scoring data.
However, Pawar, which is in the art of sending information to a user based on an engagement score (see abstract), teaches normalizing data before applying weights or, more specifically as recited in the claims, normalizing the initial set of scoring data and, after normalizing the initial set of scoring data, applying one or more weights to normalized scoring data (see paragraphs 0049-0050, Examiner’s note: normalizing data and then applying weights).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Pinckney et al. with the aforementioned teachings from Pawar with the motivation of using a commonly known mathematical calculation to compare various types of data of relative importance in order to make a determination (see Pawar paragraphs 0049-0050), when Pinckney clearly teaches that using various types of data to make a determination is known (see Pinckney et al. paragraphs 0059-0060, 0176, 0187-0188).
As per claim 2, Pinckney et al. teaches
wherein structures for all data included in the initial set of scoring data are set to match one another (see paragraphs 0176, 0187, Examiner’s note: teaches using structured and unstructured data to make preference decisions, meaning they “are set to match one another”).
As per claim 3, Pinckney et al. teaches
wherein applying the one or more weighting factors includes applying a first weighting factor included in said one or more weighting factors to a first portion of the initial set of scoring data, the first portion being associated with a first type of interaction the client had relative to the entity, and wherein applying the one or more weighting factors further includes applying a second weighting factor included in said one or more weighting factors to a second portion of the initial set of scoring data, the second portion being associated with a second type of interaction the client had relative to the entity. (see paragraphs 0070, 0105, 0208, 0233, Examiner’s note: teaches different types of weights used in the machine learning system).
As per claim 5, Pinckney et al. teaches
wherein the unstructured sentiment data includes all the following: a type-written client review about the entity; a type-written client comment, wherein the type-written client comment is included in one or more of a chat message, a text message, or a social media message; a voice message; (see paragraphs 0068, 0089, 0096, 0098, 0113, 0136-0137, 0139, 0176, Examiner’s note: here shows numerous different question and answer sessions, where they can be unstructured text and be instant message or voice. Further teaches they can be reviews or comments about specific items).
or a type-written client comment in a survey sent by the entity. (see paragraphs 0089, 0176, and 0187, Examiner’s note: Examiner interprets the survey limitation to be at least taught by paragraph 0089. Further teaches the user information may be unstructured or freeform (see paragraph 0176 and 0187)).
As per claim 6, Pinckney et al. teaches
wherein the structured sentiment data includes a quantified rating of the entity by the client (see paragraphs 0104-0105, 0125, Examiner’s note: teaches different ways users’ ratings can be determined (see paragraphs 0104-0105) and that ratings can include a yes/no button for feedback).
As per claim 7, Pinckney et al. teaches
wherein each of the one or more interest factors includes a corresponding timing aspect, and wherein sentiment data that is relatively older is less interesting than sentiment data that is relatively newer (see paragraphs 0219-0220, Examiner’s note: teaches time impacts what is decided to be shown).
Pinckney does not expressly teach one or more weighting factors related to time where older data is weighted less than newer data.
However, Pawar, which is in the art of sending information to a user based on an engagement score (see abstract), teaches one or more weighting factors related to time where older data is weighted less than newer data (see paragraph 0065, Examiner’s note: weight can include things like time since the information was accessed, decay factors, frequency of access, relationship to information, or relationship to the object about which information was accessed).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Pinckney in view of Pawar with the aforementioned teachings from Pawar with the motivation of using a known type of factor to provide more importance to certain data, like more recent interactions (see Pawar paragraph 0065), when providing more or less importance to different types of time data in Pinckney (see paragraphs 0219-0220) as well as weighting (see paragraphs 0233 and 0105) are both known.
As per claim 8, Pinckney et al. teaches
wherein the one or more weighting factors include a first weighting factor and a second weighting factor, the first weighting factor corresponds to a survey response type of interaction the client had with the entity, and the second weighting factor corresponds to a survey type of interaction, and wherein the first weighting factor is greater than the second weighting factor. (see paragraphs 0070, 0105, 0208, 0233, and 0089, Examiner’s note: teaches different types of weights used in the machine learning system (see paragraphs 0070, 0105, 0208, and 0233). It’s a survey in paragraph 0089).
Pinckney et al. does not expressly teach different weighting factors for different types of online interactions like a webchat.
However, Pawar, which is in the art of sending information to a user based on an engagement score (see abstract), teaches different weighting factors for different types of online interactions like a webchat (see paragraphs 0065-0066, Examiner’s note: different weights for different types of interactions, where one type of interaction can include webchat).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Pinckney in view of Pawar with the aforementioned teachings from Pawar with the motivation of using a known type of factor to provide more importance to certain data, like the type of interaction medium (see Pawar paragraph 0065), when providing more or less importance to different types of data in Pinckney (see paragraphs 0219-0222) as well as weighting (see paragraphs 0233 and 0105) are both known.
As per claim 9, Pinckney et al. teaches
wherein modifying the subsequent interaction the client has with the entity includes sending a referral request. (see paragraphs 0090, 0117, 0123, Figures 6-7, and 16, Examiner’s note: teaches this in multiple ways, for example the top list (see paragraph 0090 and Figure 6), user contributions to the system and other user contributions to the system on the home page (see paragraph 0123 and Figure 16), and providing a user recent activity and topics they might like on the homepage (see paragraph 0117 and Figure 7). Examiner notes that these can all be interpreted as a referral request as they refer a user to information and a user can request information by clicking on it).
As per claim 11, Pinckney et al. teaches A method for generating and dynamically updating an experience score for a client, where the experience score operates as a quantitative indicator describing a relationship between the client and an entity, the method further using the experience score to modify one or more subsequent interactions the client has with the entity so as to improve the relationship, said method comprising: (see abstract, paragraphs 0011, 0115, and 0198, Examiner’s note: teaches a computer based recommendation system where information may be updated over time (see paragraph 0198) and the recommendation may be based on a score (see paragraph 0115). Further teaches this is a method (see paragraph 0011)).
training a machine learning (ML) engine (see paragraphs 0058, 0066, 0079, 0088, 0119, Examiner’s note: teaches the system is trained (see paragraphs 0079, 0088, 0119) where the system is implemented by decisions made by machine learning (see paragraphs 0058 and 0066)).
to generate one or more weighting factors that assign relative importance levels to different types of interactions between clients and entities; (see paragraphs 0070, 0105, 0208, 0233, Examiner’s note: teaches different types of weights used in the machine learning system).
Wherein the ML engine is further tasked with learning, based on a plurality of interactions between the client and the entity, which mode of communication is preferred by the client when the client interacts with the entity; (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when, where, and how to provide information to users).
acquiring sentiment data detailing the relationship between the client and the entity, wherein the sentiment data is acquired from different types of interactions the client had relative to the entity, and wherein the sentiment data includes structured sentiment data and unstructured sentiment data; (see paragraphs 0059-0060, 0176, 0187-0188, Examiner’s note: teaches here the information input in the system can be either unstructured or structured data that is used to determine information about the user to provide recommendations to the user or other users).
Wherein acquiring the sentiment data further includes extracting feedback data comprising (i) verbal cues, (ii) visual cues, (iii) textual cues, all three of which are included in video content describing the client’s interactions with the entity, such that the verbal cues, the visual cues, and the textual cues are extracted from the video content, and wherein the verbal cues, the visual cues, and the textual cues are subsequently ranked differently relative to one another; (see paragraphs 0072-0082, 0104-0105, 0139, and Figure 8A, Examiner’s note: Here the limitation does not disclose how the extraction takes place, whether it relates to one video content interaction or to multiple different video content interactions over time, or how the cues are ranked differently other than being later (subsequently) ranked differently. This could mean merely ranking determined information differently over time or ranking different interactions differently. Further, the claims do not disclose how the cues are determined other than that they relate to verbal, visual, and textual content. Therefore the Examiner interprets this limitation to recite/require receiving feedback relating to text, visual, and audio sentiment from a user interacting with a video or videos of the entity, and ranking that information differently over time or ranking different interactions differently. Pinckney teaches this: Pinckney teaches that information recommended or displayed in the system can be video (see paragraphs 0072 and 0082). Pinckney goes on to teach receiving feedback related to that recommendation or video over time (see paragraphs 0104-0105); weighting those determined feedbacks (e.g., subsequently, in that it is after the determination) differently based on constraints like time (e.g., ranking differently relative to one another) (see paragraph 0105); and that feedback can be things like clicks after being asked questions (see paragraph 0139) (which could be visual, verbal (as it relates to words), or textual cues as broadly recited in the claims), text answers to questions on a screen (see paragraph 0139 and Figure 8A) (which could be visual, verbal (as it relates to words), or textual cues as broadly recited in the claims), and voice responses (which could be verbal cues or verbal (as it relates to words)) (see paragraph 0137)).
detecting a first type of interaction the client had relative to the entity, the first
type of interaction including the client navigating a user interface of the entity, (see paragraphs 0058-0059 and Figure 2, Examiner’s note: teaches Q and A where a user may be provided a recommendation after 10 questions).
The first type of interaction, which includes involvement of the user interface, is included among the plurality of interactions between the client and the entity and is used by the ML engine when learning which mode of communication is preferred by the client, (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when, where, and how to provide information to users).
Wherein the user interface is initially structured in accordance with a first layout such that during said navigation of the user interface, a number of navigations that are required to reach a particular target displayed by the user interface is a first number of navigations; (see paragraphs 0058-0059 and Figure 2, Examiner’s note: teaches Q and A where a user may be provided a recommendation after 10 questions).
using natural language processing (NLP) to provide structure to the unstructured
sentiment data such that a second set of structured sentiment data is acquired, wherein
the structured sentiment data and the second set of structured sentiment data constitute
an initial set of scoring data; (see paragraphs 0187, 0190-0191, Examiner’s note: determining preferences through natural language processing, e.g., determining for each item of content whether it is about electronics).
applying the one or more weighting factors to the initial set of scoring data to generate a set of weighted scores, (see paragraphs 0070, 0105, 0208, 0233, Examiner’s note: teaches different types of weights used in the machine learning system).
Wherein applying the one or more weighting factors includes identifying first scoring data associated with the verbal cues, second scoring data associated with the visual cues, and third scoring data associated with textual cues, and wherein the first, second, and third scoring data are ranked differently relative to one another based, at least in part, on determined reliability weighting metrics associated with the verbal cues, the visual cues, and the textual cues (see paragraph 0105, Examiner’s note: Here the claim does not disclose how the scoring data associated with verbal cues, visual cues, and textual cues is identified, or how the first, second, and third scoring data are ranked differently relative to one another other than that the ranking is based at least in part on determined reliability weighting metrics associated with the verbal cues, the visual cues, and the textual cues. How the determined reliability weighting metrics are determined is not recited; further, the claim does not require that the metrics be determined here, so they could be merely weights previously determined in the system, like weighting feedback less than a previous time (see Pinckney paragraph 0105). Therefore, the Examiner interprets the limitation to recite/require identifying scoring data associated with verbal cues, textual cues, and visual cues, and weighting them differently based on defined weights in the system, such as how well the user answered questions, when the response occurs (e.g., weighting feedback less the ninth time), etc. This is taught in paragraph 0105 of Pinckney.).
after generating the set of weighted scores, generating the experience score by aggregating the set of weighted scores; (see paragraph 0115 and Figures 4, 5, 11, Examiner’s note: here shows providing a score (see paragraph 0115) and the Figures show providing a percentage likelihood ("hunch") that the recommendation is correct or that a user is similar).
using the experience score to modify a subsequent interaction the client has with
the entity, (see paragraphs 0090, 0117, 0123, Figures 6-7 and 16, Examiner’s note: teaches this in multiple ways, for example the top list (see paragraph 0090 and Figure 6), user contributions and other users’ contributions to the system on the home page (see paragraph 0123 and Figure 16), and providing a user recent activity and topics they might like on the homepage (see paragraph 0117 and Figure 7)).
The subsequent interaction, which further includes involvement of the user interface, is also included among the plurality of interactions between the client and the entity and is also used by the ML engine when learning which mode of communication is preferred by the client (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when, where, and how to provide information to users).
wherein modifying the subsequent interaction the client has with the entity
includes modifying the user interface from having the first layout to having a second layout that is different than the first layout, wherein, as a result of the user interface being modified, the number of navigations that are required to reach the particular target
displayed by the user interface is reduced, such that a second, reduced number of
navigations are now required to reach the particular target displayed by the user interface; (see paragraphs 0090, 0117, 0123, Figures 6-7 and 16, Examiner’s note: teaches this in multiple ways, for example the top list (see paragraph 0090 and Figure 6), user contributions and other users’ contributions to the system on the home page (see paragraph 0123 and Figure 16), and providing a user recent activity and topics they might like on the homepage (see paragraph 0117 and Figure 7)).
in response to acquiring new training data, causing the ML engine to learn a new
set of one or more weighting factors that assign newly learned relative importance levels to the different types of interactions between the clients and the entities; and subsequent to the ML engine being further trained using the new training data, causing the ML engine to update the experience score using the new set of one or more weighting factors. (see paragraphs 0058, 0060, 0104-0105, Examiner’s note: the system learns by feedback (see paragraphs 0058, 0060, 0104-0105); weights may be based on a user’s history in the system (see paragraph 0105)).
based on the ML engine’s learning as to which mode of communication is preferred by the client, determine that the client prefers an alternative mode of communication relative to communicating via the user interface; and setting the alternative mode of communication as a default mode of communication for the client, such that the alternative mode of communication will be used for subsequent communications with the client (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when, where, and how to provide information to users).
And based on the ML engine’s learning of a type of information that the client finds offensive or off-putting, modifying the user interface to prevent the type of information that the client finds offensive or off-putting from being displayed (see paragraphs 0131, 0216-0217, 0222, and Figure 24, Examiner’s note: removing content that is voted to be objectionable, irrelevant, or low quality (see paragraph 0131 and Figure 21); not providing information based on it not being interesting (see paragraphs 0216-0217 and 0222); further teaches not providing information based on it being sold out, not arriving in time, or current weather (see paragraphs 0233-0234)).
Pinckney et al. does not expressly teach normalizing data before applying weights or, more specifically as recited in the claims, normalizing the initial set of scoring data; after normalizing the initial set of scoring data, applying weights to normalized scoring data.
However, Pawar, which is in the art of sending information to a user based on an engagement score (see abstract), teaches normalizing data before applying weights or, more specifically as recited in the claims, normalizing the initial set of scoring data; after normalizing the initial set of scoring data, applying weights to normalized scoring data (see paragraphs 0049-0050, Examiner’s note: normalizing data and then generating weights).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Pinckney et al. in view of Pawar with the aforementioned teachings from Pawar with the motivation of using a commonly known mathematical calculation to compare various types of data in order to make a determination (see Pawar paragraphs 0049-0050), when Pinckney et al. clearly teaches that using various types of data to make a determination is known (see Pinckney et al. paragraphs 0059-0060, 0176, 0187-0188).
As per claim 12, Pinckney et al. teaches
wherein a public network is crawled to acquire at least some of the sentiment data (see paragraphs 0072, 0107-0108, 0193, 0214, Examiner’s note: crawling websites to gain information).
As per claim 19, Pinckney teaches A method for generating and dynamically updating an experience score for a client, where the experience score operates as a quantitative indicator describing a relationship between the client and an entity, the method further using the experience score to modify one or more subsequent interactions the client has with the entity so as to improve the relationship, said method comprising: (see abstract, paragraphs 0011, 0115, and 0198, Examiner’s note: teaches a computer-based recommendation system where information may be updated over time (see paragraph 0198) and the recommendation may be based on a score (see paragraph 0115); further teaches this is a method (see paragraph 0011)).
training a machine learning (ML) engine (see paragraphs 0058, 0066, 0079, 0088, 0119, Examiner’s note: teaches the system is trained (see paragraphs 0079, 0088, 0119) where the system is implemented by decisions made by machine learning (see paragraphs 0058 and 0066)).
to generate one or more weighting factors that assign relative importance levels to different types of interactions between clients and entities; (see paragraphs 0070, 0105, 0208, 0233, Examiner’s note: teaches different types of weights used in the machine learning system).
Wherein the ML engine is further tasked with learning, based on a plurality of interactions between the client and the entity, which mode of communication is preferred by the client when the client interacts with the entity; (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when, where, and how to provide information to users).
using an interactions engine to acquire sentiment data detailing the relationship
between the client and the entity, wherein the interactions engine acquires the sentiment data from different types of interactions the client had relative to the entity, and wherein sentiment data is structured to generate an initial set of scoring data; (see paragraphs 0059-0060, 0176, 0187-0188, Examiner’s note: teaches here the information input in the system can be either unstructured or structured data that is used to determine information about the user to provide recommendations to the user or other users).
Wherein acquiring the sentiment data further includes extracting feedback data comprising (i) verbal cues, (ii) visual cues, and (iii) textual cues, all three of which are included in video content describing the client’s interactions with the entity, such that the verbal cues, the visual cues, and the textual cues are extracted from the video content, and wherein the verbal cues, the visual cues, and the textual cues are subsequently ranked differently relative to one another; (see paragraphs 0072-0082, 0104-0105, 0139, and Figure 8A, Examiner’s note: Here the limitation does not disclose how the extraction takes place, whether this relates to one video content interaction or to multiple different video content interactions over time, or how the cues are ranked differently other than being later (subsequently) ranked differently. This could mean merely ranking determined information differently over time or ranking different interactions differently. Further, the claims do not disclose how the cues are determined other than that they relate to verbal, visual, and textual content. Therefore, the Examiner interprets this limitation to recite/require receiving feedback relating to textual, visual, and audio sentiment from a user interacting with a video or videos of an entity and ranking that information differently over time or across different interactions. Pinckney teaches this: Pinckney teaches that information recommended or displayed in the system can be video (see paragraphs 0072 and 0082). Pinckney goes on to teach receiving feedback related to that recommendation or video over time (see paragraphs 0104-0105); weighting those determined feedbacks (e.g., subsequently, in that it occurs after the determination) differently based on constraints like time (e.g., ranking differently relative to one another) (see paragraph 0105); and that feedback can be things like clicks after being asked questions (see paragraph 0139), which could be visual, verbal (as it relates to words), or textual cues as broadly recited in the claims; text answers to questions on a screen (see paragraph 0139 and Figure 8A), which could likewise be visual, verbal, or textual cues as broadly recited in the claims; and voice responses (see paragraph 0137), which could be verbal cues.)
detecting a first type of interaction the client had relative to the entity, the first
type of interaction including the client navigating a user interface of the entity, (see paragraphs 0058-0059 and Figure 2, Examiner’s note: teaches Q and A where a user may be provided a recommendation after 10 questions).
The first type of interaction, which includes involvement of the user interface, is included among the plurality of interactions between the client and the entity and is used by the ML engine when learning which mode of communication is preferred by the client, (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when, where, and how to provide information to users).
Wherein the user interface is initially structured in accordance with a first layout such that, during said navigation of the user interface, a number of navigations that are required to reach a particular target displayed by the user interface is a first number of navigations; (see paragraphs 0058-0059 and Figure 2, Examiner’s note: teaches Q and A where a user may be provided a recommendation after 10 questions).
applying the one or more weighting factors to the initial set of scoring data to generate a set of weighted scores, (see paragraphs 0070, 0105, 0208, 0233, Examiner’s note: teaches different types of weights used in the machine learning system).
Wherein applying the one or more weighting factors includes identifying first scoring data associated with the verbal cues, second scoring data associated with the visual cues, and third scoring data associated with textual cues, and wherein the first, second, and third scoring data are ranked differently relative to one another based, at least in part, on determined reliability weighting metrics associated with the verbal cues, the visual cues, and the textual cues (see paragraph 0105, Examiner’s note: Here the claim does not disclose how the scoring data associated with verbal cues, visual cues, and textual cues is identified, or how the first, second, and third scoring data are ranked differently relative to one another other than that the ranking is based at least in part on determined reliability weighting metrics associated with the verbal cues, the visual cues, and the textual cues. How the determined reliability weighting metrics are determined is not recited; further, the claim does not require that the metrics be determined here, so they could be merely weights previously determined in the system, like weighting feedback less than a previous time (see Pinckney paragraph 0105). Therefore, the Examiner interprets the limitation to recite/require identifying scoring data associated with verbal cues, textual cues, and visual cues, and weighting them differently based on defined weights in the system, such as how well the user answered questions, when the response occurs (e.g., weighting feedback less the ninth time), etc. This is taught in paragraph 0105 of Pinckney.).
after generating the set of weighted scores, generating the experience score by
aggregating the set of weighted scores; (see paragraph 0115 and Figures 4, 5, 11, Examiner’s note: here shows providing a score (see paragraph 0115) and the Figures show providing a percentage likelihood ("hunch") that the recommendation is correct or that a user is similar).
using the experience score to modify a subsequent interaction the client has with
the entity, (see paragraphs 0090, 0117, 0123, Figures 6-7 and 16, Examiner’s note: teaches this in multiple ways, for example the top list (see paragraph 0090 and Figure 6), user contributions and other users’ contributions to the system on the home page (see paragraph 0123 and Figure 16), and providing a user recent activity and topics they might like on the homepage (see paragraph 0117 and Figure 7)).
The subsequent interaction, which further includes involvement of the user interface, is also included among the plurality of interactions between the client and the entity and is also used by the ML engine when learning which mode of communication is preferred by the client (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when, where, and how to provide information to users).
wherein modifying the subsequent interaction the client has with the entity
includes modifying the user interface from having the first layout to having a second
layout that is different than the first layout, wherein, as a result of the user interface being modified, the number of navigations that are required to reach the particular target
displayed by the user interface is reduced, such that a second, reduced number of navigations are now required to reach the particular target displayed by the user
interface; (see paragraphs 0090, 0117, 0123, Figures 6-7 and 16, Examiner’s note: teaches this in multiple ways, for example the top list (see paragraph 0090 and Figure 6), user contributions and other users’ contributions to the system on the home page (see paragraph 0123 and Figure 16), and providing a user recent activity and topics they might like on the homepage (see paragraph 0117 and Figure 7)).
in response to acquiring new training data, causing the ML engine to learn a new set of one or more weighting factors that assign newly learned relative importance levels to the different types of interactions between the clients and the entities; and subsequent to the ML engine being further trained using the new training data and in response to the interactions engine acquiring new sentiment data, causing the ML engine to update the client's experience score using the new set of one or more weighting factors. (see paragraphs 0058, 0060, 0104-0105, Examiner’s note: the system learns by feedback (see paragraphs 0058, 0060, 0104-0105); weights may be based on a user’s history in the system (see paragraph 0105)).
based on the ML engine’s learning as to which mode of communication is preferred by the client, determine that the client prefers an alternative mode of communication relative to communicating via the user interface; and setting the alternative mode of communication as a default mode of communication for the client, such that the alternative mode of communication will be used for subsequent communications with the client (see paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239, Examiner’s note: here teaches collecting data and making determinations of when, where, and how to provide information to users).
And based on the ML engine’s learning of a type of information that the client finds offensive or off-putting, modifying the user interface to prevent the type of information that the client finds offensive or off-putting from being displayed (see paragraphs 0131, 0216-0217, 0222, and Figure 24, Examiner’s note: removing content that is voted to be objectionable, irrelevant, or low quality (see paragraph 0131 and Figure 21); not providing information based on it not being interesting (see paragraphs 0216-0217 and 0222); further teaches not providing information based on it being sold out, not arriving in time, or current weather (see paragraphs 0233-0234)).
Pinckney et al. does not expressly teach normalizing data before applying weights or more specifically as recited in the claims normalizing the initial set of scoring data; after normalizing the initial set of scoring data, applying weights to normalized scoring data.
However, Pawar, which is in the art of sending information to a user based on an engagement score (see abstract), teaches normalizing data before applying weights or, more specifically as recited in the claims, normalizing the initial set of scoring data; after normalizing the initial set of scoring data, applying weights to normalized scoring data (see paragraphs 0049-0050, Examiner’s note: normalizing data and then generating weights).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Pinckney et al. in view of Pawar with the aforementioned teachings from Pawar with the motivation of using a commonly known mathematical calculation to compare various types of data in order to make a determination (see Pawar paragraphs 0049-0050), when Pinckney et al. clearly teaches that using various types of data to make a determination is known (see Pinckney et al. paragraphs 0059-0060, 0176, 0187-0188).
As per claim 20, Pinckney teaches
wherein the interactions engine acquires at least some of the sentiment data from one or more third party sources by crawling a public network. (see paragraphs 0072, 0107-0108, 0193, 0214, Examiner’s note: crawling websites to gain information).
As per claim 21, Pinckney teaches
wherein modifying the user interface further includes causing the particular target to be visible upon the user interface being displayed. (see paragraphs 0090, 0117, 0123, Figures 6-7 and 16, Examiner’s note: teaches this in multiple ways, for example the top list (see paragraph 0090 and Figure 6), user contributions and other users’ contributions to the system on the home page (see paragraph 0123 and Figure 16), and providing a user recent activity and topics they might like on the homepage (see paragraph 0117 and Figure 7)).
8. Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over Pinckney et al. (United States Patent Application Publication Number: US 2011/0302117) in view of Pawar (United States Patent Application Publication Number: US 2018/0349499) further in view of Terry et al. (United States Patent Application Publication Number: US 2019/0179903).
As per claim 4, Pinckney et al. teaches
Wherein the plurality of interactions, which are used by the ML engine to learn which mode of communication is preferred by the client when the client interacts with the entity, include all of the following: an interaction where the client exchanged chat messages; (see paragraphs 0073, 0098, 0137, 0146, and 0156, Examiner’s note: collecting chat information where the chats can be about entities).
an interaction where the client completed a survey; (see paragraph 0089, Examiner’s note: the Examiner interprets the survey limitation to be at least taught by paragraph 0089).
an interaction where the client posted a review about the entity on a public network; an interaction where the client posted information about the entity on a social media account; (see paragraphs 0194, 0198, 0203, 0245, 0249-0250, Examiner’s note: creating and sharing reviews in the system where the reviews are on the internet and on social networks).
an interaction where the client referred the entity to another client; (see paragraphs 0194, 0198, 0203, 0245, 0249-0250, Examiner’s note: creating and sharing reviews in the system where the reviews are on the internet).
or an interaction where the client visited a website of the entity (see paragraphs 0249, 0255, 01345, 0143, and 0189, Examiner’s note: tracking a user’s browsing history and where a user interacts with websites).
Pinckney et al. in view of Pawar does not expressly teach (1) the client exchanging chat messages with the entity; an interaction where the client exchanged an email with the entity; an interaction where the client called the entity and/or left a voicemail; an interaction in which the client received a message from the entity and ignored the message; an interaction where the client completed a payment;
However, Terry, which is in the art of AI knowledge delivery (see abstract), teaches (1) the client exchanging chat messages with the entity; an interaction where the client exchanged an email with the entity; an interaction where the client called the entity and/or left a voicemail; an interaction in which the client received a message from the entity and ignored the message; an interaction where the client completed a payment (see paragraphs 0020, 0132-0133, 0162, and Figure 28E, Examiner’s note: teaches communicating via email, SMS, social networks, telephones, etc.; teaches communicating, for example, via email unless a user becomes unresponsive, then making a phone call; further teaches over time using the channel that is most effective for a given contact (see paragraphs 0132-0133); further teaches communications may be for payment (see paragraphs 0020, 0162, and Figure 28E)).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Pinckney et al. in view of Pawar with the aforementioned teachings from Terry with the motivation of providing a way to collect additional relevant information for machine learning to determine relevant information to provide to a user (see Terry paragraphs 0020, 0132-0133, 0162, and Figure 28E), when collecting information to determine relevant information to provide to the user is known (see Pinckney et al. paragraphs 0062-0063, 0066, 0137, 0186, 0225-0226, 0233-0234, 0239).
9. Claim(s) 10 is rejected under 35 U.S.C. 103 as being unpatentable over Pinckney et al. (United States Patent Application Publication Number: US 2011/0302117) in view of Pawar (United States Patent Application Publication Number: US 2018/0349499) further in view of Hui et al. (United States Patent Application Publication Number: US 2019/0073596).
As per claim 10, Pinckney et al. teaches
wherein the ML engine performs analysis on the initial set of scoring data in an attempt to identify which one or more leading factors had a largest impact on the relationship between the client and the entity (see paragraphs 0058, 0063, 0066, and 0074, Examiner’s note: teaches performing analysis over time to determine the factors that have the most impact).
Pinckney et al. in view of Pawar does not expressly teach performing regression analysis to determine largest impact.
However, Hui et al., which is in the art of estimating relationships between online content and users’ reactions to determine user engagement (see paragraph 0001), teaches performing regression analysis to determine largest impact (see paragraphs 0022 and 0057-0058, Examiner’s note: regression to determine the most important predictor).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Pinckney et al. in view of Pawar with the aforementioned teachings from Hui et al. with the motivation of using a common calculation to determine the most important predictor (see Hui et al. paragraphs 0022 and 0057-0058), when determining the factors that have the most impact over time is known (see Pinckney et al. paragraphs 0058, 0063, 0066, and 0074).
10. Claim(s) 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Pinckney et al. (United States Patent Application Publication Number: US 2011/0302117) in view of Pawar (United States Patent Application Publication Number: US 2018/0349499) further in view of Aminian et al. (United States Patent Application Publication Number: US 2010/0106730).
As per claim 15, Pinckney et al. teaches
wherein the method further includes displaying a client interface that has a particular visual layout, (see Figures 1-10, Examiner’s note: shows various client interfaces in the system of Pinckney et al.).
Pinckney et al. in view of Pawar does not expressly teach and wherein the particular visual layout includes displaying the experience score at a location that is proximate to a name of the client
However, Aminian et al., which is in the art of exposing users to relevant content (see abstract and title), teaches and wherein the particular visual layout includes displaying the experience score at a location that is proximate to a name of the client (see Figures 5 and 7 and paragraph 0061, Examiner’s note: the user is logged in as "zeno" with the "my score" and "my credit" values displayed next to it).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Pinckney et al. in view of Pawar with the aforementioned teachings from Aminian et al. with the motivation of providing a known way to display information, like a score next to a name, so that one can determine what the score relates to (see Aminian et al. Figures 5 and 7 and paragraph 0061), when displaying various scores and user names in various places in the screens of Pinckney et al. (see Figures 5, 7, 8) is known.
As per claim 16, Pinckney et al. teaches
wherein the client interface is configured to rank clients based on their corresponding experience scores, (see paragraph 0115 and Figures 4-5, 11, Examiner’s note: shows similar hunches and percentage likelihood).
Pinckney does not expressly teach wherein a threshold score is defined, and wherein targeted notices are transmitted to clients whose experience scores are below or above the threshold score.
However, Pawar, which is in the art of sending information to a user based on an engagement score (see abstract), teaches wherein a threshold score is defined, and wherein targeted notices are transmitted to clients whose experience scores are below or above the threshold score (see paragraphs 0050, 0058, and 0062, Examiner’s note: sending content when an engagement score is above a threshold).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Pinckney et al. in view of Pawar in view of Aminian et al. with the aforementioned teachings from Pawar with the motivation of using a commonly known way to determine which information to send and to then send the relevant information when it is above a threshold or limit (see Pawar paragraphs 0050, 0058, and 0062), when Pinckney et al. clearly teaches that selecting from various information to send, and displaying it based on it being relevant or similar information, is known (see Pinckney paragraphs 0115, 0125, 0189, 0220-0222, and Figures 4-5, 11).
As per claim 17, Pinckney et al. teaches
wherein a time factor is included as a part of each interesting factor included in the one or more interesting factors, and wherein execution of the time factor causes relatively older sentiment data to be less interesting than relatively newer sentiment data, and wherein the time factor includes one or more of a non-linear time decay algorithm, a linear decay algorithm, or an algorithm based on calendar time. (see paragraphs 0219-0220, Examiner’s note: teaches time impacts what is decided to be shown; further teaches this may be performed by an analytical, mathematical rule-based, or heuristic technique, which the Examiner interprets to be at least an algorithm based on calendar time).
Pinckney does not expressly teach one or more weighting factors related to time where older data is weighted less than new data.
However, Pawar, which is in the art of sending information to a user based on an engagement score (see abstract), teaches one or more weighting factors related to time where older data is weighted less than new data (see paragraph 0065; Examiner’s note: weight can include things like time since the information was accessed, decay factors, frequency of access, relationship to information, or relationship to the object about which information was accessed).
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Pinckney et al. in view of Pawar in view of Aminian et al. with the aforementioned teachings from Pawar with the motivation of using a known type of factor to give more importance to certain data, such as more recent data (see Pawar paragraph 0065), when both giving more or less importance to different types of time data (see Pinckney paragraphs 0219-0220) and weighting (see Pinckney paragraphs 0105 and 0233) are known.
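For context on the time-decay limitation addressed above, the distinction between a linear decay algorithm and a non-linear decay algorithm can be sketched as follows. This sketch is illustrative only; it is not drawn from Pinckney, Pawar, or Aminian, and all function names and parameters (e.g., horizon_days, half_life_days) are hypothetical:

```python
def linear_decay(age_days: float, horizon_days: float = 365.0) -> float:
    """Linear decay: the weight falls off uniformly, reaching zero at the horizon."""
    return max(0.0, 1.0 - age_days / horizon_days)

def exponential_decay(age_days: float, half_life_days: float = 90.0) -> float:
    """Non-linear (exponential) decay: the weight halves every half-life."""
    return 0.5 ** (age_days / half_life_days)

def weighted_score(samples, decay):
    """Combine (score, age_days) samples so newer data contributes more.

    Each sample's score is multiplied by its decay weight, and the
    result is normalized by the total weight.
    """
    total_weight = sum(decay(age) for _, age in samples)
    return sum(score * decay(age) for score, age in samples) / total_weight
```

Under either decay, a sample's contribution shrinks monotonically with its age, which is the effect recited in the claim: relatively older sentiment data becomes less interesting than relatively newer sentiment data.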
As per claim 18, Pinckney teaches
wherein an interactions engine at least periodically monitors for new sentiment data, and wherein the client's experience score is updated based on the new sentiment data (see paragraphs 0115, 0194, 0198, 0211, and 0258, Examiner’s note: teaches generating a user score (see paragraph 0115) where user information may be constantly updated based on a user’s interaction with the system (see paragraphs 0194, 0198, and 0211). Further paragraph 0258 teaches that this may happen over distributed computing).
11. Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Pinckney et al. (United States Patent Application Publication Number: US 2011/0302117) further in view of Pawar (United States Patent Application Publication Number: US 2018/0349499) further in view of Kong et al. (United States Patent Application Publication Number: US 2015/0242374).
As per claim 22, Pinckney et al. in view of Pawar does not expressly teach
wherein modifying the user interface further includes all of: adjusting a size of text that is displayed in the user interface, and adjusting a size of an image that is displayed in the user interface, and preventing display of a previous user interface element that was displayed when the user interface was structured to have the first layout.
However, Kong et al., which is in the art of internet interface layouts (see paragraphs 0002-0003), teaches wherein modifying the user interface further includes all of: adjusting a size of text that is displayed in the user interface, and adjusting a size of an image that is displayed in the user interface, and preventing display of a previous user interface element that was displayed when the user interface was structured to have the first layout (see paragraphs 0044, 0051-0052, 0054, 0064, and 0066; Examiner’s note: teaches that different layouts can have different image and text sizes as well as include different information altogether, which reads on “preventing display of a previous user interface element that was displayed when the user interface was structured to have the first layout”).
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Pinckney et al. in view of Pawar with the aforementioned teachings from Kong et al. with the motivation of providing a known way to include interfaces with different layouts in the Internet environment in order to include the information needed in each webpage (see Kong et al. paragraphs 0002-0003 and 0064-0066) and keep it visually appealing to users by having different layouts on different pages, when Pinckney clearly teaches in Figures 1-10 displaying different information over the internet (see paragraph 0193).
Conclusion
12. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Golshan (United States Patent Application Publication Number: US 2014/0019443) teaches a system for discovering the predicted interest of the user (see abstract and title).
Paglia et al. (United States Patent Application Publication Number: US 2015/0293916) teaches a system for providing content and for allowing less interaction and navigation in the user interface by filtering based on a set of content items and user history (see abstract and paragraph 0015).
Heit et al. (United States Patent Application Publication Number: US 2011/0190594) teaches regression analysis to determine the attributes that have the biggest impact on quality (see paragraph 0104).
13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIERSTEN SUMMERS whose telephone number is (571) 272-6542. The examiner can normally be reached Monday through Friday, 7:00 am-3:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber, can be reached at 571-270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KIERSTEN V SUMMERS/Primary Examiner, Art Unit 3626