DETAILED ACTION
The following is a Final Office action. In response to Examiner’s communication of 10/10/2025, Applicant, on 1/8/2026, amended claims 1, 4, 9, 12, and 16-19 and cancelled claims 5 and 14. Claims 1-4, 6-13, and 15-20 are now pending and have been rejected as indicated below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
No Information Disclosure Statement has yet been filed. Accordingly, no Information Disclosure Statement has been considered.
Response to Amendment
Applicant’s amendments are acknowledged.
The 35 USC 101 rejection of claims 1-4, 6-13, and 15-20 has been withdrawn in light of Applicant’s amendments and explanations.
Revised 35 USC § 103 rejections of claims 1-4, 6-13, and 15-20 are applied in light of Applicant’s explanations.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 6-10, 12, 13, 15, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2021/0123823 to Finkelstein et al. (hereafter referred to as Finkelstein) in view of U.S. Patent Application Publication Number 2024/0296044 to Day et al. (hereafter referred to as Day).
As per claim 1, Finkelstein teaches:
upon detecting a feedback trigger from a client device, cause transmission of a prompt message to the client device to initiate a feedback session related to an application, with a user of the client device (Paragraph Number [0059] teaches a simplified flowchart indicating the steps associated with using a system which provides structured feedback forms, for providing feedback on a user interaction session with a computer software installation process. As seen in FIG. 6, a user initiates a software installation process on a computer, using a software installation program. Upon encountering difficulties in completing the software installation process, the user abandons the software installation process. The software installation program then prompts the user, requesting the user to submit relevant feedback. Upon agreeing to provide feedback, the software installation program then presents to the user a structured feedback form comprising feedback categories predefined by the software vendor. The user, in turn, selects a feedback category, and is preferably presented by the system with a list of feedback subcategories predefined by the software vendor. The user then selects a feedback subcategory, and submits relevant feedback. The user also submits user contact information. The system records the user's feedback and at a later time may display all aggregated user feedback to the software vendor).
transmit, from a repository, a first questionnaire comprising a set of questions related to the application to the client device (Paragraph Number [0061] teaches the system provides the software vendor with the ability to generate categorized and nested structured feedback forms to be displayed in a software installation program for the purpose of collecting feedback regarding the user's experience while navigating through the software installation process. As seen in FIG. 7E, responsive to the user choosing to provide negative feedback regarding the user experience with the software installation process, the software installation program provides the user with a list of specific feedback subcategories from which he may choose to more precisely describe his experience with the software installation process, such as feedback relating to installation process or general suggestions. The user selects the "Installation process" category, which includes various feedback subcategories, as shown in FIG. 7E. The user then selects the "Error messages" subcategory as the most relevant to the problems he has experienced).
wherein the set of questions are selected based on a user attribute (Paragraph Number [0061] teaches the system provides the software vendor with the ability to generate categorized and nested structured feedback forms to be displayed in a software installation program for the purpose of collecting feedback regarding the user's experience while navigating through the software installation process. As seen in FIG. 7E, responsive to the user choosing to provide negative feedback regarding the user experience with the software installation process, the software installation program provides the user with a list of specific feedback subcategories from which he may choose to more precisely describe his experience with the software installation process, such as feedback relating to installation process or general suggestions. The user selects the "Installation process" category, which includes various feedback subcategories, as shown in FIG. 7E. The user then selects the "Error messages" subcategory as the most relevant to the problems he has experienced).
receive the user's response for a first question from amongst the first questionnaire (Paragraph Number [0049] teaches the system provides the website administrator with the ability to generate categorized and nested structured feedback forms to be displayed on the website for the purpose of collecting feedback regarding the user's experience while navigating through a website-based process. As seen in FIG. 3D, responsive to the user agreeing to fill in a feedback form, the system displays to the user a structured feedback form, questioning the user regarding the reason for terminating the transaction process, and provides the user with a list of specific feedback categories which he may choose from to more precisely describe his reasons for terminating the transaction process, such as feedback relating to usability issues, security issues or general suggestions. The user selects the "Usability Issues" category which includes various feedback subcategories, as shown in FIG. 3D. The user then selects the "Address form" subcategory as the most relevant to the problems he has experienced).
category of the first question (Paragraph Number [0061] teaches the system provides the software vendor with the ability to generate categorized and nested structured feedback forms to be displayed in a software installation program for the purpose of collecting feedback regarding the user's experience while navigating through the software installation process. As seen in FIG. 7E, responsive to the user choosing to provide negative feedback regarding the user experience with the software installation process, the software installation program provides the user with a list of specific feedback subcategories from which he may choose to more precisely describe his experience with the software installation process, such as feedback relating to installation process or general suggestions. The user selects the "Installation process" category, which includes various feedback subcategories, as shown in FIG. 7E. The user then selects the "Error messages" subcategory as the most relevant to the problems he has experienced).
role of the user (Paragraph Number [0055] teaches a user initiates a software installation process using a software installation program. After selecting the desired installation directory, the user attempts to continue the installation process at 10:00 AM by clicking on the "next" button. However, as seen in FIG. 5B, at 10:03 AM the software installation program is still requesting that the user wait for the installation program to continue. As seen in FIG. 5C, at 10:05 AM the software installation process has still not yet been resumed. The user therefore decides to terminate the software installation process by clicking on the "cancel" button. As shown in FIG. 5D, responsive to the user canceling the software installation process, the user is prompted by the software installation program, which requests that the user fill in a feedback form. Upon agreeing to fill in a feedback form, the system displays to the user a structured feedback form, as seen in FIG. 5E. (Examiner asserts that the person installing the program constitutes a role (user installer, etc.))).
and pattern in user's response (Paragraph Number [0055] teaches a user initiates a software installation process using a software installation program. After selecting the desired installation directory, the user attempts to continue the installation process at 10:00 AM by clicking on the "next" button. However, as seen in FIG. 5B, at 10:03 AM the software installation program is still requesting that the user wait for the installation program to continue. As seen in FIG. 5C, at 10:05 AM the software installation process has still not yet been resumed. The user therefore decides to terminate the software installation process by clicking on the "cancel" button. As shown in FIG. 5D, responsive to the user canceling the software installation process, the user is prompted by the software installation program, which requests that the user fill in a feedback form. Upon agreeing to fill in a feedback form, the system displays to the user a structured feedback form, as seen in FIG. 5E. (Examiner asserts that the person installing the program constitutes a role (user installer, etc.))).
automatically cause, by the processor, to transition a current state of the user having access to a first set of functions to an augmented state having access to a second set of functions within the application (Paragraph Number [0052] teaches a simplified flowchart indicating the steps associated with using a system which provides structured feedback forms, for providing feedback on another user interaction session with a website-based process. As seen in FIG. 4, a user browses a website and initiates a business transaction process on the website. Upon encountering difficulties in completing the transaction process, the user abandons the transaction. The system then prompts the user, requesting the user to submit relevant feedback. Upon agreeing to provide feedback, the system then presents to the user a structured feedback form comprising feedback categories predefined by the website administrator. The user, in turn, selects a feedback category, and is preferably presented by the system with a list of feedback subcategories predefined by the website administrator. The user then selects a feedback subcategory, and submits relevant feedback. The user also submits user contact information. The system records the user's feedback and contact information and at a later time may display all aggregated user feedback to the website administrator. Paragraph Number [0080] teaches the quality heatmap 500 may be updated automatically by the pull request module 230. For instance, the pull request module 230 may update the quality heatmap 500 after new user feedback on the pull request workflow is received or after the pull request module 230 modifies the pull request workflow).
Finkelstein teaches providing feedback for computer applications, but does not explicitly teach quantifying responses via metric scores, which is taught by the following citations from Day:
A system comprising: processor; and a machine-readable storage medium comprising instructions executable by the processor to: (Paragraph Number [0022] teaches as will be appreciated by one skilled in the art, aspects of the present disclosure, in particular aspects of AV sensor calibration, described herein, may be embodied in various manners (e.g., as a method, a system, a computer program product, or a computer-readable storage medium). Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Functions described in this disclosure may be implemented as an algorithm executed by one or more hardware processing units, e.g., one or more microprocessors, of one or more computers. In various embodiments, different steps and portions of the steps of each of the methods described herein may be performed by different processing units. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable medium(s), preferably non-transitory, having computer-readable program code embodied, e.g., stored, thereon. In various embodiments, such a computer program may, for example, be downloaded (updated) to the existing devices and systems (e.g., to the existing perception system devices or their controllers, etc.) or be stored upon manufacturing of these devices and systems).
assign, by the processor, a first metric score to the user on receiving the user's response for the first question (Paragraph Number [0060] teaches the aggregation of individual quality scores for determining the quality score of either a pull request workflow or an operation may be weighted. For instance, the quality module 340 may determine weights of the users based on user data. The quality score may be a weighted aggregation of the individual quality scores. For instance, the quality module 340 may compute a weighted individual quality score for each user, e.g., by multiplying the weight with the individual quality score of the user, and determine a sum of all the weighted individual quality scores).
wherein the metric score is assigned to the user based on a weight associated with the first question of the first questionnaire (Paragraph Number [0060] teaches the aggregation of individual quality scores for determining the quality score of either a pull request workflow or an operation may be weighted. For instance, the quality module 340 may determine weights of the users based on user data. The quality score may be a weighted aggregation of the individual quality scores. For instance, the quality module 340 may compute a weighted individual quality score for each user, e.g., by multiplying the weight with the individual quality score of the user, and determine a sum of all the weighted individual quality scores).
wherein the weight is associated with the first question based on a relevance or potential impact of the first question (Paragraph Number [0060] teaches the aggregation of individual quality scores for determining the quality score of either a pull request workflow or an operation may be weighted. For instance, the quality module 340 may determine weights of the users based on user data. The quality score may be a weighted aggregation of the individual quality scores. For instance, the quality module 340 may compute a weighted individual quality score for each user, e.g., by multiplying the weight with the individual quality score of the user, and determine a sum of all the weighted individual quality scores).
based on the first metric score (Paragraph Number [0060] teaches the aggregation of individual quality scores for determining the quality score of either a pull request workflow or an operation may be weighted. For instance, the quality module 340 may determine weights of the users based on user data. The quality score may be a weighted aggregation of the individual quality scores. For instance, the quality module 340 may compute a weighted individual quality score for each user, e.g., by multiplying the weight with the individual quality score of the user, and determine a sum of all the weighted individual quality scores).
analyze the user's response to determine a task-related parameter indicative of a required modification to ... application workflow (Paragraph Number [0056] teaches the survey datastore 360 may modify a pre-generated survey or generate a new survey based on the critical event that triggers the survey, the pull request, the pull request workflow, one or more attributes of the user, the type of application or programming code repository associated with the pull request, other factors, or some combination thereof. The modified survey or new survey can be stored in the survey datastore 360).
a machine-executable application workflow (Paragraph Number [0052] teaches the user module 320 may predict an action of a user. For instance, the user module 320 may include one or more prediction modules (not shown in FIG. 3) to predict user actions based on user data in the user datastore 350. For example, a prediction module may learn a pattern of how a user interacts with the infrastructure system 120 and predict an action that is likely to be taken by the user based on the pattern. For example, the prediction module may learn that the user makes a pull request at a predetermined frequency, e.g., by using a model trained with machine learning techniques. The prediction module can then predict the time when the user will make the next pull request with the infrastructure system 120. As other examples, the user module 320 may predict the time when a user would respond to feedback for a pull request, the time when a user would respond to a survey, and so on).
generate a control instruction to be transmitted to a workflow and task management engine to automatically update the existing workflow of the application based on the determined task-related parameter (Paragraph Number [0038] teaches the interface module 210 may receive information of applications, e.g., from the client devices 130. In some embodiments, the information of an application may include programming codes of the application. The programming codes may be source codes that can be compiled to generate object codes. Additionally or alternatively, the information of the application may include other information associated with the application, such as software version, configuration, instruction for compiling the source codes, instruction for deploying the application, instruction for generating a container for virtually running the application (e.g., running the application in a Cloud), and so on. Paragraph Number [0062] teaches the quality module 340 may also determine whether to modify one or more operations in a pull request workflow based on quality scores or quality heatmap. In some embodiments (e.g., embodiments where the quality module 340 determines quality scores for operations in the pull request workflow), the quality module 340 may determine to modify the codes of an operation based on that the quality score of the operation is below a threshold or that the quality score of the operation is ranked lower than one or more other operations. In other embodiments (e.g., embodiments where the quality module 340 generates a quality heatmap for the pull request workflow), the quality module 340 may determine to modify an operation based on the quality heatmap. For instance, the quality module 340 may determine to modify the codes of an operation, the element of which overlays with more than a threshold number of elements representing negative evaluations).
execute the control instruction to update the existing workflow of the application by changing at least one of an execution sequence, enabled application functions, and resource allocation of the application (Paragraph Number [0062] teaches the quality module 340 may also determine whether to modify one or more operations in a pull request workflow based on quality scores or quality heatmap. In some embodiments (e.g., embodiments where the quality module 340 determines quality scores for operations in the pull request workflow), the quality module 340 may determine to modify the codes of an operation based on that the quality score of the operation is below a threshold or that the quality score of the operation is ranked lower than one or more other operations. In other embodiments (e.g., embodiments where the quality module 340 generates a quality heatmap for the pull request workflow), the quality module 340 may determine to modify an operation based on the quality heatmap. For instance, the quality module 340 may determine to modify the codes of an operation, the element of which overlays with more than a threshold number of elements representing negative evaluations. (Examiner asserts that modification of codes of an operation constitutes changing at least an execution sequence of the application). Paragraph Number [0088] teaches the fleet management system 620 may also provide software (“AV software”) to the fleet of AVs 610. The software, when executed by processors, may control operations of the AVs 610, e.g., based on the operational plan. The fleet management system 620 may provide different software to different AVs 610. The fleet management system 620 may also update software, e.g., by changing one or more components in a version of the AV software and releasing a new software version. Paragraph Number [0129] teaches the control module 940 controls behaviors of the AV based on an operational plan. The control module 940 can dynamically update the operational plan based on information obtained during the operation of the AV and control one or more behaviors of the AV based on the updated operational plan).
Both Finkelstein and Day are directed to the analysis of customer responses. Finkelstein discloses providing feedback for computer applications. Day improves upon Finkelstein by disclosing quantifying responses via metric scores. One of ordinary skill in the art would have been motivated to further include quantifying responses via metric scores in order to efficiently quantify response data, providing for analysis that can be standardized and manipulated. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of providing feedback for computer applications in Finkelstein to further utilize quantifying responses via metric scores as disclosed in Day, since the claimed invention is merely a combination of old elements, in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 9, claim 9 recites a method that is substantially similar to that performed by the system of claim 1 and is rejected for the same reasons put forth in regard to claim 1.
As per claim 17, Day teaches:
A non-transitory computer-readable medium comprising instructions, the instructions being executable by a processing resource of a system, to (Paragraph Number [0022] teaches as will be appreciated by one skilled in the art, aspects of the present disclosure, in particular aspects of AV sensor calibration, described herein, may be embodied in various manners (e.g., as a method, a system, a computer program product, or a computer-readable storage medium). Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Functions described in this disclosure may be implemented as an algorithm executed by one or more hardware processing units, e.g., one or more microprocessors, of one or more computers. In various embodiments, different steps and portions of the steps of each of the methods described herein may be performed by different processing units. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable medium(s), preferably non-transitory, having computer-readable program code embodied, e.g., stored, thereon. In various embodiments, such a computer program may, for example, be downloaded (updated) to the existing devices and systems (e.g., to the existing perception system devices or their controllers, etc.) or be stored upon manufacturing of these devices and systems).
The remainder of the claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1.
A person of ordinary skill in the art would have been motivated to combine these references as described in regard to claim 1.
As per claims 2 and 10, the combination of Finkelstein and Day teaches each of the limitations of claims 1 and 9 respectively.
In addition, Finkelstein teaches:
wherein the user attribute is determined based on one of a profile of the user comprising a user's designated role in the application (Paragraph Number [0055] teaches a user initiates a software installation process using a software installation program. After selecting the desired installation directory, the user attempts to continue the installation process at 10:00 AM by clicking on the "next" button. However, as seen in FIG. 5B, at 10:03 AM the software installation program is still requesting that the user wait for the installation program to continue. As seen in FIG. 5C, at 10:05 AM the software installation process has still not yet been resumed. The user therefore decides to terminate the software installation process by clicking on the "cancel" button. As shown in FIG. 5D, responsive to the user canceling the software installation process, the user is prompted by the software installation program, which requests that the user fill in a feedback form. Upon agreeing to fill in a feedback form, the system displays to the user a structured feedback form, as seen in FIG. 5E. (Examiner asserts that the person installing the program constitutes a role (user installer, etc.))).
data corresponding to a user's historical interaction with the application (Paragraph Number [0055] teaches a user initiates a software installation process using a software installation program. After selecting the desired installation directory, the user attempts to continue the installation process at 10:00 AM by clicking on the "next" button. However, as seen in FIG. 5B, at 10:03 AM the software installation program is still requesting that the user wait for the installation program to continue. As seen in FIG. 5C, at 10:05 AM the software installation process has still not yet been resumed. The user therefore decides to terminate the software installation process by clicking on the "cancel" button. As shown in FIG. 5D, responsive to the user canceling the software installation process, the user is prompted by the software installation program, which requests that the user fill in a feedback form. Upon agreeing to fill in a feedback form, the system displays to the user a structured feedback form, as seen in FIG. 5E. (Examiner asserts that the various steps that the installer has taken to install and cancel the install constitute historical interactions)).
and frequency of interaction with the application (Paragraph Number [0043] teaches the system provides the website administrator with the ability to generate categorized and nested structured feedback forms to be displayed on the website for the purpose of collecting feedback regarding the user's experience while navigating through a website-based process. As seen in FIG. 1E, responsive to the user providing negative feedback regarding the user experience with the website-based process, the system provides the user with a list of specific negative feedback categories from which he may choose to more precisely describe his experience with the website-based process, such as feedback relating to bugs, website content or general suggestions. The user selects the "Bugs" category, which includes various negative feedback subcategories, as shown in FIG. 1E. The user then selects the "Response time" subcategory as the most relevant to the problems he has experienced. Paragraph Number [0044] teaches upon selecting the "Response time" subcategory, the user is presented with various more specific feedback options and may select the specific issue he has encountered, such as the length of time that has elapsed while the website has been processing the transaction. To complete the feedback process, the user clicks on the "send" button, after which the system displays to the user a user contact information form, as seen in FIG. 1G. (See also Paragraph Number [0059])).
As per claims 6, 12, and 19, the combination of Finkelstein and Day teaches each of the limitations of claims 1, 9, and 17 respectively.
Finkelstein teaches providing feedback for computer applications, but does not explicitly teach quantifying responses via metric scores, which is taught by the following citations from Day:
compare the first metric score assigned to the user with one of a threshold milestone of a plurality of threshold milestones (Paragraph Number [0062] teaches the quality module 340 may also determine whether to modify one or more operations in a pull request workflow based on quality scores or quality heatmap. In some embodiments (e.g., embodiments where the quality module 340 determines quality scores for operations in the pull request workflow), the quality module 340 may determine to modify the codes of an operation based on that the quality score of the operation is below a threshold or that the quality score of the operation is ranked lower than one or more other operations. In other embodiments (e.g., embodiments where the quality module 340 generates a quality heatmap for the pull request workflow), the quality module 340 may determine to modify an operation based on the quality heatmap. For instance, the quality module 340 may determine to modify the codes of an operation, the element of which overlays with more than a threshold number of elements representing negative evaluations).
on determining the first metric score to exceed one of the threshold milestones, cause to transition the user from the current state having access to the first set of functions to the augmented state having access to the second set of functions. (Paragraph Number [0062] teaches the quality module 340 may also determine whether to modify one or more operations in a pull request workflow based on quality scores or quality heatmap. In some embodiments (e.g., embodiments where the quality module 340 determines quality scores for operations in the pull request workflow), the quality module 340 may determine to modify the codes of an operation based on that the quality score of the operation is below a threshold or that the quality score of the operation is ranked lower than one or more other operations. In other embodiments (e.g., embodiments where the quality module 340 generates a quality heatmap for the pull request workflow), the quality module 340 may determine to modify an operation based on the quality heatmap. For instance, the quality module 340 may determine to modify the codes of an operation, the element of which overlays with more than a threshold number of elements representing negative evaluations).
A person of ordinary skill in the art would have been motivated to combine these references as described in regard to claim 1.
As per claims 7 and 13, the combination of Finkelstein and Day teaches each of the limitations of claims 1 and 6, and 9 and 12, respectively.
Finkelstein teaches providing feedback for computer applications, but does not explicitly teach quantifying responses via metric scores, which is taught by the following citations from Day:
on determining the first metric score to be less than each of the threshold milestones of the plurality of threshold milestones, continue to receive user's response for a subsequent question present within the questionnaire (Paragraph Number [0068] teaches the operation 420 is for committing change. The operation 420 is to add commits to keep track of the progress of completing the pull request. Commits may create a history of the work to indicate what has been done. In some embodiments, each commit may be a version of the new branch. A commit may be generated by making one or more changes to a previous commit. A commit may be associated with a message that explains why the changes were made. In some embodiments, each commit may be a separate unit of change. This can facilitate the user to reverse changes).
assign a second metric score to the user on receiving user's response for the subsequent question (Paragraph Number [0058] teaches the quality module 340 evaluates quality of pull request workflows based on user responses to surveys provided by the survey module 330. The quality module 340 may determine a quality score that indicates the quality of a pull request workflow based on user response to one or more surveys associated with the pull request workflow. Each of the one or more surveys may be triggered based on a dataset of a trace of the pull request workflow. In some embodiments, the quality module 340 may receive responses from multiple users. The quality module 340 may determine an individual quality score based on the response from an individual user and aggregates the individual quality scores of all the users to determine the quality score of the pull request workflow).
accumulate the second metric score with the first metric score to obtain an accumulated metric score (Paragraph Number [0059] teaches in addition to or in lieu of a quality score of the pull request workflow, the quality module 340 may also evaluate the quality of one or more operations in the pull request workflow. The quality module 340 may determine a quality score for each operation. In embodiments where responses from multiple users to one or more surveys associated with an operation were received, the quality module 340 may determine an individual quality score based on the response from an individual user and aggregates the individual quality scores of all the users to determine the quality score of the operation).
compare the accumulated metric scores assigned to the user with the plurality of threshold milestones (Paragraph Number [0060] teaches the aggregation of individual quality scores for determining the quality score of either a pull request workflow or an operation may be weighted. For instance, the quality module 340 may determine weights of the users based on user data. The quality score may be a weighted aggregation of the individual quality scores. For instance, the quality module 340 may compute a weighted individual quality score for each user, e.g., by multiplying the weight with the individual quality score of the user, and determine a sum of all the weighted individual quality scores).
on determining the accumulated metric scores to exceed one of the threshold milestones of the plurality of threshold milestones, cause to transition the current state of the user having access to the first set of functions to the augmented state having access to the second set of functions (Paragraph Number [0066] teaches a pull request workflow 400 for which trace-based surveys can be provided to users of the pull request workflow, according to some embodiments of the present disclosure. The pull request workflow 400 may be a workflow provided by the pull request module 230. The pull request workflow 400 may be used for making changes to code repositories of AV applications. The pull request workflow 400 includes seven operations: 410, 420, 430, 440, 450, 460, and 470. The operations are arranged in a sequence. An operation may be triggered by the completion of the preceding operation in the sequence. In other embodiments, the pull request workflow 400 may include fewer, more, or different operations. Also, the operations may have different relationships. For the purpose of simplicity and illustration, the pull request workflow 400 in the embodiments of FIG. 4 is used to add a feature to an AV application. In other embodiments, the pull request workflow 400 can be used to process other pull requests, such as modifying features in AV applications, removing features from AV applications, and so on).
A person of ordinary skill in the art would have been motivated to combine these references as described in regard to claim 1.
As per claims 8 and 15, the combination of Finkelstein and Day teaches each of the limitations of claims 1, and 9, 12, and 13, respectively.
In addition, Finkelstein teaches:
wherein the second set of functions are different from the first set of functions reflecting an increase in features, functions, and services offered by the application as the user state transitions from the current state to the augmented state (Paragraph Number [0052] teaches a simplified flowchart indicating the steps associated with using a system which provides structured feedback forms, for providing feedback on another user interaction session with a website-based process. As seen in FIG. 4, a user browses a website and initiates a business transaction process on the website. Upon encountering difficulties in completing the transaction process, the user abandons the transaction. The system then prompts the user, requesting the user to submit relevant feedback. Upon agreeing to provide feedback, the system then presents to the user a structured feedback form comprising feedback categories predefined by the website administrator. The user, in turn, selects a feedback category, and is preferably presented by the system with a list of feedback subcategories predefined by the website administrator. The user then selects a feedback subcategory, and submits relevant feedback. The user also submits user contact information. The system records the user's feedback and contact information and at a later time may display all aggregated user feedback to the website administrator).
Claims 3, 4, 11, 16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2021/0123823 to Finkelstein et al. (hereafter referred to as Finkelstein) in view of U.S. Patent Application Publication Number 2024/0296044 to Day et al. (hereafter referred to as Day) in further view of U.S. Patent Application Publication Number 2023/0046469 to Sabourin et al. (hereafter referred to as Sabourin).
As per claims 3, 11, and 20, the combination of Finkelstein and Day teaches each of the limitations of claims 1, 9, and 17, respectively.
Finkelstein teaches providing feedback for computer applications, but does not explicitly teach parsing a user’s response to determine response attributes of a respondent, which is taught by the following citations from Sabourin:
parse the user's response to identify and remove non-relevant components from the user's response to obtain a parsed user's response (Paragraph Number [0060] teaches the process 200 involves the modification module 110 receiving the newly completed assessment dataset at Operation 210. At Operation 215, the modification module 110 parses the newly completed assessment dataset into question/answer pairings. For example, the newly completed assessment dataset may be based on a questionnaire provided as an electronic form. The electronic form may have been viewed by personnel for the vendor on a computing device. The electronic form may have presented the personnel with questions along with fields to provide answers to the questions. Therefore, the modification module 110 parses the questions and corresponding answers provided in the electronic form into the question/answer pairings).
wherein the user's response is obtained in one of a free-text format and as a selection from multiple-choice options; (Paragraph Number [0072] teaches the process 400 involves the dynamic assessment module 120 displaying a question to the personnel in Operation 410. The personnel then enters an answer to the question. The answer to the question can be provided in various formats depending on the types of response requested for the question. For example, the question may be the form of a yes/no question, multiple choice question, freeform question, and/or the like. In addition, the question may request additional information to be provided (e.g., uploaded) along with the answer such as supporting documentation, documentation on certifications, documentation on past data-related incidents, and/or the like).
analyze the parsed user's response to determine a set of response attributes indicating patterns and trends in the user's response (Paragraph Number [0071] as the personnel provides answers to the questions presented for the assessment, the dynamic assessment module 120 detects inconsistencies in the assessment in real time and in response, flags one or more responses related to the inconsistencies for additional action. Paragraph Number [0073] teaches the dynamic assessment module 120 receives the answer to the question in Operation 415 and maps the question/answer pairing to one or more attributes in Operation 420. For example, the dynamic assessment module 120 may use some type of data structure that maps the question/answer pairing to one or more attributes related to computer-implemented functionality provided by the vendor. In addition, the dynamic assessment module 120 maps the one or more attributes to other question/answer pairings found in the assessment that are related to the one or more attributes in Operation 425. Paragraph Number [0078] teaches the dynamic assessment module 120 determines whether the answer to the question contains an inconsistency. If so, then the dynamic assessment module 120 determines whether to address the inconsistency in Operation 440. In various aspects, the dynamic assessment module 120 performs this particular operation by using a decision engine to determine a relevance of a particular identified inconsistency).
execute a search query generated based on the set of response attributes on the repository to extract a set of relevant questions for the user (Paragraph Number [0080] teaches the dynamic assessment module 120 may use a machine-learning model to determine the one or more actions to take to address the inconsistency. For example, the machine-learning model may be a trained model such as a multi-label classification model that processes the particular question/answer pairing and/or the related question/answer pairings and generates a data representation having a set of predictions (e.g., values) in which each prediction is associated with a particular action to take to address the inconsistency. The machine-learning model may be trained, for example, using training data derived from users' responses to follow up requests previously provided for inconsistencies detected in one or more assessment question/answer pairings such as: (1) whether the user ignored the flagged question or related follow up action; (2) a particular type of action the user took in response to the flagged question and/or related follow up action (e.g., providing support for the response, implementing a remediating action, modifying one or more attributes, etc.); (3) one or more frameworks and/or standards that are mapped to the flagged question; and/or (4) any other suitable information).
modify an order and a content of the questions of the first questionnaire based on the set of relevant questions to transmit an updated questionnaire to the client device (Paragraph Number [0081] teaches the dynamic assessment module 120 determines whether the assessment includes another question to ask the personnel in Operation 450. If so, then the dynamic assessment module 120 returns to Operation 410, selects the next question, and performs the operations just discussed for the newly selected question. Once the dynamic assessment module 120 has processed all of the questions for the assessment, the dynamic assessment module 120 records the assessment in Operation 455. For example, the dynamic assessment module 120 may record the assessment by submitting the assessment as a newly completed assessment dataset to the vendor risk management computing system 100).
Both the combination of Finkelstein and Day and Sabourin are directed to response analysis of customers. The combination of Finkelstein and Day discloses providing feedback for computer applications. Sabourin improves upon the combination of Finkelstein and Day by disclosing parsing a user’s response to determine response attributes of a respondent. One of ordinary skill in the art would be motivated to further include parsing a user’s response to determine response attributes of a respondent, to efficiently gather and analyze specific information about a respondent as well as their response so as to properly prepare answers and remedies for the respondent. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of providing feedback for computer applications in the combination of Finkelstein and Day to further utilize parsing a user’s response to determine response attributes of a respondent as disclosed in Sabourin, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claims 4, 16, and 18, the combination of Finkelstein, Day, and Sabourin teaches each of the limitations of claims 1 and 3, 9, and 17, respectively.
Finkelstein teaches providing feedback for computer applications, but does not explicitly teach quantifying responses via metric scores, which is taught by the following citations from Day:
identify an existing workflow within the application that corresponds to the task-related parameter, wherein the existing workflow is a target workflow which is to be modified (Paragraph Number [0058] teaches the quality module 340 evaluates quality of pull request workflows based on user responses to surveys provided by the survey module 330. The quality module 340 may determine a quality score that indicates the quality of a pull request workflow based on user response to one or more surveys associated with the pull request workflow. Each of the one or more surveys may be triggered based on a dataset of a trace of the pull request workflow. In some embodiments, the quality module 340 may receive responses from multiple users. The quality module 340 may determine an individual quality score based on the response from an individual user and aggregates the individual quality scores of all the users to determine the quality score of the pull request workflow).
A person of ordinary skill in the art would have been motivated to combine these references as described in regard to claim 1.
Response to Argument
Applicant’s arguments filed 1/8/2026 have been fully considered, but they are not fully persuasive.
Applicant argues that the previously cited references do not teach the newly amended portions, including the new limitations recited by the independent claims. (See Applicant’s Remarks, 1/8/2026, pgs. 17-19). Examiner respectfully disagrees. Examiner notes that new citations from the previously cited references have been applied to the newly presented claim limitations, as indicated above in the new 35 USC § 103 rejection. Examiner has added and emphasized specific portions of the Finkelstein and Day references to read on the new independent claims. As such, Applicant’s arguments directed towards the previous rejection are moot. In response to Applicant’s arguments, Examiner directs Applicant to review the new citations and explanations provided in the new 35 USC § 103 rejection presented above.
Conclusion
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW H DIVELBISS whose telephone number is (571)270-0166. The examiner can normally be reached 7:30 AM - 6:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor, can be reached at (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about PAIR, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/M. H. D./
Examiner, Art Unit 3624
/Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624