DETAILED ACTION
Status of the Application
The following is a Final Office Action. In response to the Examiner's communication of June 24, 2025, Applicant, on December 23, 2025, amended claims 1, 8, and 15. Claims 1-21 are now pending in this application and are rejected below.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Amendment
Applicant's amendments are not sufficient to overcome the 35 USC 101 rejections set forth in the previous action. Therefore, these rejections are maintained below.
Applicant's amendments are not sufficient to overcome the prior art rejections set forth in the previous action. Therefore, these rejections are maintained below.
Response to Arguments - 35 USC § 101
Applicant’s arguments with respect to the 35 USC 101 rejections have been fully considered, but they are not persuasive.
Applicant argues that the pending claims are directed to patentable subject matter under 35 U.S.C. § 101 at least because the claims effect an improvement in the functioning of a computer configured to perform automated process configuration and information storage; thus, assuming solely for the sake of argument there is an exception, the claims, when considered as a whole, integrate the exception into a practical application; and the claims involve a practical application of automated process configuration and information storage by a computer by reciting:
accessing a database of information describing a plurality of steps identified by a blueprint, …;
retrieving, from the database, information describing a step in the plurality of steps;
presenting, via a user interface, the information describing the step;
requesting, via the user interface, a user input in response to the presented information;
receiving, from the user interface, information describing the user input;
storing, in the database, the information describing the user input, wherein the storing includes affiliating the information describing user input and the information describing the step with a corresponding identifier; and
identifying, based upon the user input, a next step in the plurality of steps.
(emphasis added by Applicant). Examiner respectfully disagrees.
Pursuant to 2019 Revised Patent Subject Matter Eligibility Guidance, in order to determine whether a claim is directed to an abstract idea, under Step 2A, we first (1) determine whether the claims recite limitations, individually or in combination, that fall within the enumerated subject matter groupings of abstract ideas (mathematical concepts, certain methods of organizing human activity, or mental processes), and (2) determine whether any additional elements beyond the recited abstract idea, individually and as an ordered combination, integrate the judicial exception into a practical application. 84 Fed. Reg. 52, 54-55. Next, if a claim (1) recites an abstract idea and (2) does not integrate that exception into a practical application, in order to determine whether the claim recites an “inventive concept,” under Step 2B, we then determine whether any of the additional elements beyond the recited abstract idea, individually and in combination, are significantly more than the abstract idea itself. 84 Fed. Reg. 56.
As noted above, under Prong 2 of Step 2A, Examiners determine whether any additional elements beyond the recited abstract idea, individually and as an ordered combination, integrate the judicial exception into a practical application.
For the reasons detailed below under Prong 1 of Step 2A, the limitations Applicant refers to as allegedly improving automated process configuration and information storage manage the human behavior of persons in organizations who are presented the information regarding the steps in order to guide human behavior, record responsive human behavior, and assess the organization in the process change based on human behavior; thus, the limitations allegedly improving the function of a computer recite a certain method of organizing human activity. Further, but for the generic computer components implementing the limitations referred to by Applicant, in view of the broadest reasonable interpretation, the limitations referred to by Applicant could all be reasonably interpreted as a human making observations of information regarding the plurality of steps (a succession of activities to assess a current state of an organizational process, assess a prospective impact of a candidate change, and direct user communications to implement the candidate change), a human presenting the information manually and/or with a pen and paper, a human observing and recording the users' responses to the information mentally and/or with a pen and paper, and a human performing an evaluation and using judgment based on the observed information to identify the next step based on the responses; therefore, the limitations allegedly improving the function of a computer recite mental processes. Accordingly, the limitations Applicant alleges improve the function of a computer are recitations of an abstract idea.
For the reasons set forth above, aside from the generic computer components of the user interface and database implementing the limitations referred to by Applicant, the limitations referred to by Applicant are not additional elements beyond the recited abstract idea, nor are they necessarily rooted in computer technology; rather, the limitations referred to by Applicant recite abstract ideas, namely a certain method of organizing human activity, because the limitations manage human behavior, and also a mental process, because they can be performed by a human mentally and/or with a pen and paper.
The limitations Applicant alleges recite the improvement to technology are improvements to an abstract idea and recitations of the recited abstract idea; however, the MPEP makes clear that "an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology" and that "[m]ere automation of manual processes" is not an improvement in computer technology. See MPEP 2106.05(a). Therefore, the limitations referred to by Applicant are not an improvement to computer technology.
With respect to the recitations of the user interface and database, the elements are nothing more than generic computer components implementing the recited abstract idea, which is not sufficient to integrate an abstract idea into a practical application.
As in the claims at issue in Electric Power Group, the present claims are not focused on a specific improvement in computers or any other technology, but instead on certain independently abstract ideas that simply invoke computers as tools to implement the abstract idea. Electric Power Group, LLC v. Alstom S.A., et al., No. 2015-1778, slip op. at 8 (Fed. Cir. Aug. 1, 2016); MPEP 2106.05(a).
Under Prong 1 of Step 2A, claim 1, and similarly claims 2-21, recites “guiding organizational change management … comprising: accessing … information describing a plurality of steps identified by a blueprint, wherein the steps are ordered and comprise information describing a succession of activities configured to guide a process to assess a current state of an organizational process, assess a prospective impact of a candidate change to the organizational process, and direct user communications to implement the candidate change to the organizational process; retrieving … information describing a step in the plurality of steps; presenting … the information describing the step; requesting … a user input in response to the presented information; receiving … information describing the user input; storing … the information describing the user input, wherein the storing includes affiliating the information describing user input and the information describing the step with a corresponding identifier; and identifying, based upon the user input, a next step in the plurality of steps.” Claims 1-21, in view of the claim limitations, recite the abstract idea of guiding organizations through the steps in changing organizational processes by accessing information describing a plurality of steps, a succession of activities to assess a current state of an organizational process, assess a prospective impact of a candidate change, and direct user communications to implement the candidate change, retrieving the information, presenting the information, requesting, receiving, and storing users' responses to the information, and identifying a next step in the plurality of steps.
Each of the above limitations manages business interactions and provides instructions or rules to follow to manage the human behavior of persons in organizations being presented the information regarding the steps to guide and assess the organization in the process change; thus, the claims recite certain methods of organizing human activity. In addition, as a whole, in view of the claim limitations, but for the computer components and systems performing the claimed functions, the broadest reasonable interpretation of the recited accessing information describing a plurality of steps, a succession of activities to assess a current state of an organizational process, assess a prospective impact of a candidate change, and direct user communications to implement the candidate change, retrieving the information, presenting the information, requesting, receiving, and storing users' responses to the information, and identifying a next step in the plurality of steps could all be reasonably interpreted as a human making observations of information regarding the plurality of steps, a succession of activities to assess a current state of an organizational process, assess a prospective impact of a candidate change, and direct user communications to implement the candidate change, a human presenting the information manually and/or with a pen and paper, a human observing the users' responses to the information and recording the responses mentally and/or with a pen and paper, and a human performing an evaluation and using judgment based on the observed information to identify the next step based on the responses; therefore, the claims recite mental processes.
Further, with respect to the dependent claims, aside from the additional elements beyond the recited abstract idea addressed below under the second prong of Step 2A and under Step 2B, the limitations of dependent claims 2-7, 9-14, & 16-21 recite similar further abstract limitations to those discussed above that narrow the abstract idea recited in the independent claims because, aside from the computer components and systems performing the claimed functions, the limitations of the claims recite mental processes that can be practically performed mentally by observing, evaluating, and judging information mentally and/or with a pen and paper and recite a certain method of organizing human activity that manages business interactions and personal human behavior. Accordingly, since the claims recite a certain method of organizing human activity and mental processes, the claims recite an abstract idea under the first prong of Step 2A.
This judicial exception is not integrated into a practical application under the second prong of Step 2A. In particular, the claims recite the additional elements beyond the recited abstract idea of “[a] computer-implemented method …, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising,” “a database,” “from the database,” “via a user interface,” “from the user interface,” and “in the database” in claim 1, “[a] system …, comprising: an electronic processor configured to execute a set of computer-executable instructions; and a memory communicatively coupled to the electronic processor and storing the set of computer-executable instructions, wherein the set of computer-executable instructions are configured to cause the electronic processor to,” “a database,” “from the database,” “via a user interface,” “from the user interface,” and “in the database” in claim 8, and “[a] non-transitory computer-readable medium, comprising processor-executable instructions stored thereon configured to cause a processor to,” “a database,” “from the database,” “via a user interface,” “from the user interface,” and “in the database” in claim 15; however, individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea on a computer (i.e. apply it), and thus, the elements are no more than applying the abstract idea with generic computer components. Moreover, aside from the aforementioned additional elements, the remaining elements of dependent claims 2-7, 9-14, & 16-21 do not integrate the abstract idea into a practical application because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B. As noted above, the aforementioned additional elements beyond the recited abstract idea, as an ordered combination, are no more than mere instructions to implement the idea using generic computer components (i.e. apply it), and further, generally link the abstract idea to a field of use, which is not sufficient to amount to significantly more than an abstract idea; therefore, the additional elements are not sufficient to amount to significantly more than an abstract idea. Additionally, these recitations, as an ordered combination, simply append the abstract idea to recitations of generic computer structure performing generic computer functions that are well-understood, routine, and conventional in the field as evinced by Applicant’s Specification at [0080]-[0081] (describing that the computing device suitable for implementing the disclosed subject matter can be implemented in a desktop computer, laptop computer, a server, a mobile device, an electronic device, the like, or a combination thereof). Furthermore, as an ordered combination, these elements amount to generic computer components performing repetitive calculations and receiving or transmitting data over a network, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d); July 2015 Update, p. 7. Moreover, aside from the aforementioned additional elements, the remaining elements of dependent claims 2-7, 9-14, & 16-21 do not transform the recited abstract idea into a patent eligible invention because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea.
Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components and recitations of generic computer structure that perform well-understood, routine, and conventional computer functions that are used to “apply” the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claims as a whole amount to significantly more than the abstract idea itself. Since there are no limitations in these claims that transform the exception into a patent eligible application such that these claims amount to significantly more than the exception itself, claims 1-21 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Response to Arguments - Prior Art
Applicant’s arguments with respect to the prior art rejections have been fully considered, but they are not persuasive.
Applicant argues claims 1-2, 7-9, 14-16, and 21 recite features that distinguish over the applied references because amended independent claim 1, for example, recites "presenting, via a user interface, the information describing the step" and “access a database of information describing a plurality of steps identified by a blueprint, wherein the steps are ordered and comprise information describing a succession of activities configured to guide a process to assess a current state of an organizational process, assess a prospective impact of a candidate change to the organizational process, and direct user communications to implement the candidate change to the organizational process” since Guven does not teach or suggest at least the aforementioned distinguishing features. Examiner respectfully disagrees.
Guven, et al. (US 20210405610 A1), hereinafter Guven, discloses “presenting, via a user interface, the information describing the step” by presenting a series of questions describing steps the user should take to reduce risks of the change in a graphical user interface in paragraphs [0039], the engine asks a user these risk assessment questions 502, and based on answers 504 to these questions 502, the questions 502 may be further refined if necessary a number of times until a final resultant set of questions 502 are produced, [0054], the display unit 1011 may display the above-described risk assessment questions, mitigation questions, and the user interface for entering the change ticket information, etc., [0043], fig. 6 reproduced below illustrates an example of a window 600 that may be presented to an implementer of a change to mitigate the risk associated with that change, e.g., the proposed change 601 is for installing AIX version 6.1, and the corresponding assigned risk 602 is a 4/5, with a selectable mitigation button 605 that can be used to launch a mitigation window 606 that presents mitigation questions to a user based on the high risk factors 514 associated with the change. In this user interface window for implementing a change and mitigating risks, depicted below, the system presents a series of questions describing steps the user should take to reduce risks of the change, displayed in order under sections labeled 1 through 3. Here, for example, the mitigation window presents a series of questions asking whether the user has taken steps including, under section 1, Checked the reasons for the failure and Build the plan to eliminate potential reasons for failure, under section 2, Verified that Account will not be impacted should the change fail, and under section 3, Double-checked the back-out plan by another CR.
[media_image1.png (340 x 438, greyscale): reproduction of Guven, fig. 6 (mitigation window)]
In addition, Guven discloses “access a database of information describing a plurality of steps identified by a blueprint” by accessing a database containing the above series of questions describing steps the user should take to reduce risks of the proposed change based on a category of risk by traversing the tree for each category to filter the set of risk assessment questions for the risk category in paragraphs [0039], fig. 5, in the risk engine, risk is determined by the engine retrieving risk assessment questions 502 from a question database DB 503 based on the input category 501, the questions 502 may be further refined if necessary, with questions that are rendered not relevant based on a user's previous answers filtered out, and the refinement process may repeat a predetermined number of times until a final resultant set of questions 502 are produced, [0042], the risk engine may refine a set of mitigation questions based on the high risk factors 514 identified during risk assessment and define any necessary user actions, wherein the mitigation questions may be stored in a mitigation DB 514, [0036], the generated risk assessment questions are filtered based on the category or type of the proposed change, wherein the invention uses a very finely granular change category tree to map change categories to risk questions, and the entered category or type may be linked to a branch series of directions so that the tree 400 can be traversed until an appropriate leaf node is reached that is linked to a specific set of risk assessment questions.
Here, filtering the set of risk assessment questions in the database by traversing the tree mapping categories to sets of questions based on the category of the proposed change discloses “access a database of information describing a plurality of steps identified by a blueprint” because, pursuant to the broadest reasonable interpretation, the series of questions accessed from that database disclosed by Guven “describe a plurality of steps” the user should take to reduce risks of the proposed change, and identifying the questions in the database describing steps the user should take, by filtering the sets of questions based on the selected category and traversing the tree for the selected category as disclosed in Guven, anticipates that the plurality of steps are “identified by a blueprint.”
Furthermore, in addition to the above, Guven discloses “wherein the steps are ordered and comprise information describing a succession of activities configured to guide a process to assess a current state of an organizational process, assess a prospective impact of a candidate change to the organizational process, and direct user communications to implement the candidate change to the organizational process” by describing that the series of questions presented in the window above describing steps the user should take to reduce risks of the proposed change are sequentially determined and redetermined in a particular order by iteratively generating risk mitigation questions based on factors including the risk categories of the proposed change in paragraphs [0027]-[0029], fig. 1, a method of assessing and mitigating a risk of a proposed change in fig. 1 includes determining a risk value from the dynamic change context (S104), mitigating the risk value if possible (S105), and refining the dynamic change context to re-determine the risk value if needed, in fig. 2, a method of assessing a risk used to determine the risk value of FIG. 1 includes determining potential factors that contribute to the risk of making a change (S201), wherein the factors are types or categories of proposed change, e.g., the change may be upgrading an operating system, a database, making changes to a computer program/documents, etc., and determining risk assessment questions from the determined factors (S202), e.g., if one of the factors requires several experts, the question could be "does this change require several experts?", and if one of the factors is a highly constrained timeframe, the question could be "does this change need to be implemented in a highly constrained timeframe?", wherein the risk assessment questions are designed to help predict either the probability or the impact of a failed change, [0035], fig. 3, a method that may be used to mitigate the initially determined risk includes determining the high risk factors that can be eliminated to reduce the risk (S301), and generating mitigation questions from the determined high risk factors (S302), e.g., if it is determined that a factor of a "highly constrained timeframe" can be mitigated, then the mitigation question could be "can the change be broken into stages?".
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1, and similarly claims 2-21, recites “guiding organizational change management … comprising: accessing … information describing a plurality of steps identified by a blueprint, wherein the steps are ordered and comprise information describing a succession of activities configured to guide a process to assess a current state of an organizational process, assess a prospective impact of a candidate change to the organizational process, and direct user communications to implement the candidate change to the organizational process; retrieving … information describing a step in the plurality of steps; presenting … the information describing the step; requesting … a user input in response to the presented information; receiving … information describing the user input; storing … the information describing the user input, wherein the storing includes affiliating the information describing user input and the information describing the step with a corresponding identifier; and identifying, based upon the user input, a next step in the plurality of steps.” Claims 1-21, in view of the claim limitations, recite the abstract idea of guiding organizations through the steps in changing organizational processes by accessing information describing a plurality of steps, a succession of activities to assess a current state of an organizational process, assess a prospective impact of a candidate change, and direct user communications to implement the candidate change, retrieving the information, presenting the information, requesting, receiving, and storing users' responses to the information, and identifying a next step in the plurality of steps.
Each of the above limitations manages business interactions and provides instructions or rules to follow to manage the human behavior of persons in organizations being presented the information regarding the steps to guide and assess the organization in the process change; thus, the claims recite certain methods of organizing human activity. In addition, as a whole, in view of the claim limitations, but for the computer components and systems performing the claimed functions, the broadest reasonable interpretation of the recited accessing information describing a plurality of steps, a succession of activities to assess a current state of an organizational process, assess a prospective impact of a candidate change, and direct user communications to implement the candidate change, retrieving the information, presenting the information, requesting, receiving, and storing users' responses to the information, and identifying a next step in the plurality of steps could all be reasonably interpreted as a human making observations of information regarding the plurality of steps, a succession of activities to assess a current state of an organizational process, assess a prospective impact of a candidate change, and direct user communications to implement the candidate change, a human presenting the information manually and/or with a pen and paper, a human observing the users' responses to the information and recording the responses mentally and/or with a pen and paper, and a human performing an evaluation and using judgment based on the observed information to identify the next step based on the responses; therefore, the claims recite mental processes.
Further, with respect to the dependent claims, aside from the additional elements beyond the recited abstract idea addressed below under the second prong of Step 2A and under Step 2B, the limitations of dependent claims 2-7, 9-14, & 16-21 recite similar further abstract limitations to those discussed above that narrow the abstract idea recited in the independent claims because, aside from the computer components and systems performing the claimed functions, the limitations of the claims recite mental processes that can be practically performed mentally by observing, evaluating, and judging information mentally and/or with a pen and paper and recite a certain method of organizing human activity that manages business interactions and personal human behavior. Accordingly, since the claims recite a certain method of organizing human activity and mental processes, the claims recite an abstract idea under the first prong of Step 2A.
This judicial exception is not integrated into a practical application under the second prong of Step 2A. In particular, the claims recite the additional elements beyond the recited abstract idea of “[a] computer-implemented method …, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising,” “a database,” “from the database,” “via a user interface,” “from the user interface,” and “in the database” in claim 1, “[a] system …, comprising: an electronic processor configured to execute a set of computer-executable instructions; and a memory communicatively coupled to the electronic processor and storing the set of computer-executable instructions, wherein the set of computer-executable instructions are configured to cause the electronic processor to,” “a database,” “from the database,” “via a user interface,” “from the user interface,” and “in the database” in claim 8, and “[a] non-transitory computer-readable medium, comprising processor-executable instructions stored thereon configured to cause a processor to,” “a database,” “from the database,” “via a user interface,” “from the user interface,” and “in the database” in claim 15; however, individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea on a computer (i.e. apply it), and thus, the elements are no more than applying the abstract idea with generic computer components. Moreover, aside from the aforementioned additional elements, the remaining elements of dependent claims 2-7, 9-14, & 16-21 do not integrate the abstract idea into a practical application because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B. As noted above, the aforementioned additional elements beyond the recited abstract idea, as an ordered combination, are no more than mere instructions to implement the idea using generic computer components (i.e. apply it), and further, generally link the abstract idea to a field of use, which is not sufficient to amount to significantly more than an abstract idea; therefore, the additional elements are not sufficient to amount to significantly more than an abstract idea. Additionally, these recitations, as an ordered combination, simply append the abstract idea to recitations of generic computer structure performing generic computer functions that are well-understood, routine, and conventional in the field as evinced by Applicant’s Specification at [0080]-[0081] (describing that the computing device suitable for implementing the disclosed subject matter can be implemented in a desktop computer, laptop computer, a server, a mobile device, an electronic device, the like, or a combination thereof). Furthermore, as an ordered combination, these elements amount to generic computer components performing repetitive calculations and receiving or transmitting data over a network, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d); July 2015 Update, p. 7. Moreover, aside from the aforementioned additional elements, the remaining elements of dependent claims 2-7, 9-14, & 16-21 do not transform the recited abstract idea into a patent eligible invention because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea.
Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components and recitations of generic computer structure that perform well-understood, routine, and conventional computer functions that are used to “apply” the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claims as a whole amount to significantly more than the abstract idea itself. Since there are no limitations in these claims that transform the exception into a patent eligible application such that these claims amount to significantly more than the exception itself, claims 1-21 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 7-9, 14-16, & 21 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Guven, et al. (US 20210405610 A1), hereinafter Guven.
Regarding claim 1, Guven discloses a computer-implemented method for automatically guiding organizational change management, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising ([0012]-[0015], [0053]-[0057]):
accessing a database ([0039], fig. 5, in the risk engine, risk is determined by the engine retrieving risk assessment questions 502 from a question database DB 503 based on the input category 501, the questions 502 may be further refined if necessary and filtered out that are rendered not relevant based on a user's previous answers, and the refinement process may repeat a predetermined number of times until a final resultant set of questions 502 are produced, [0042], the risk engine may refine a set of mitigation questions based on the high risk factors 514 identified during risk assessment and define any necessary user actions, wherein the mitigation questions may be stored in a mitigation DB 514) of information describing a plurality of steps identified by a blueprint ([0036], the generated risk assessment questions are filtered based on the category or type of the proposed change, wherein the invention uses a very finely granular change category tree to map change categories to risk questions, and the entered category or type may be linked to a branch series of directions so that the tree 400 can be traversed until an appropriate leaf node is reached that is linked to a specific set of risk assessment questions), wherein the steps are ordered and comprise information describing a succession of activities configured to guide a process to assess a current state of an organizational process, assess a prospective impact of a candidate change to the organizational process, and direct user communications to implement the candidate change to the organizational process ([0027]-[0029], fig. 1, a method of assessing and mitigating a risk of a proposed change in fig. 1 includes determining a risk value from the dynamic change context (S104), mitigating the risk value if possible (S105), and refining the dynamic change context to re-determine the risk value if needed, in fig. 2 a method of assessing a risk used to determine the risk value of FIG. 
1 includes determining potential factors that contribute to the risk of making a change (S201), wherein the factors are types or categories of proposed change, e.g., the change may be upgrading an operating system, a database, making changes to a computer program/documents, etc., determining risk assessment questions from the determined factors (S202), e.g., if one of the factors requires several experts, the question could be "does this change require several experts?", if one of the factors is a highly constrained timeframe, the question could be "does this change need to be implemented in a highly constrained timeframe?", wherein the risk assessment questions are designed to help predict either the probability or the impact of a failed change, [0035], fig. 3 a method that may be used to mitigate the initially determined risk includes determining the high risk factors that can be eliminated to reduce the risk (S301), and generating mitigations questions from the determined high risk factors (S302), e.g., if it is determined that a factor of a "highly constrained timeframe" can be mitigated, then the mitigation question could be "can the change be broken into stages?");
retrieving, from the database, information describing a step in the plurality of steps ([0039], fig. 5, in the risk engine, risk is determined by the engine retrieving risk assessment questions 502 from a question database DB 503 based on the input category 501, the questions 502 may be further refined if necessary and filtered out that are rendered not relevant based on a user's previous answers, and the refinement process may repeat a predetermined number of times until a final resultant set of questions 502 are produced, [0042], the risk engine may refine a set of mitigation questions based on the high risk factors 514 identified during risk assessment and define any necessary user actions, wherein the mitigation questions may be stored in a mitigation DB 514);
presenting, via a user interface, the information describing the step;
requesting, via the user interface, a user input in response to the presented information ([0039], the engine asks a user these risk assessment questions 502, and based on answers 504 to these questions 502, the questions 502 may be further refined if necessary, and the refinement process may repeat a predetermined number of times until a final resultant set of questions 502 are produced, [0043], fig. 6 illustrates an example of a window 600 that may be presented to an implementer of a change to mitigate the risk associated with that change, e.g., the proposed change 601 is for installing AIX version 6.1, and the corresponding assigned risk 602 is a 4/5, a selectable mitigation button 605 that can be used to launch a mitigation window 606 that presents mitigation questions to a user based on the high risk factors 514 associated with the change, [0054], the display unit 1011 may display the above-described risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc.);
receiving, from the user interface, information describing the user input ([0043], the mitigation window 606 presents mitigation questions to a user based on the high risk factors 514 associated with the change, and once the mitigations questions have been answered, a selectable mitigation button 607 can be used to re-determine the risk based on the answers, [0054], the display unit 1011 may display the above-described risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc.);
storing, in the database, the information describing the user input, wherein the storing includes affiliating the information describing user input and the information describing the step with a corresponding identifier ([0048], a graphical user interface may be provided to a change requester to enter change related information into a data structure associated with the change, and the data structure may be referred to as a change ticket and may be stored in the ticket DB 507 illustrated in FIG. 5, [0054], the hard disk 1008 may store each of the databases illustrated in FIG. 4, an impactXprobability matrix, answers to the risk assessment and mitigation questions, change related information, etc.); and
identifying, based upon the user input, a next step in the plurality of steps ([0027], fig. 1, the method of assessing and mitigating a risk of a proposed change includes mitigating the risk value if possible (S105), and refining the dynamic change context to re-determine the risk value if needed, [0035], the method used to mitigate the initially determined risk using the determined high risk factors generating mitigations questions from the determined high risk factors (S302), prompting the assessor to answer the mitigation questions and take the necessary actions they agreed on to reduce risk (S303), e.g., if the user agreed to take an action item, the system will prompt them to do so before the risk can be reduced, e.g., if a back-out plan is missing and the user indicated while answering the mitigation questions that they agree to put in a back-out plan, the system will prompt them to enter a back-out plan and may not proceed until this is done, and then re-determining the risk and remaining high risk factors (S204), [0043], once the mitigations questions have been answered, a selectable mitigation button 507 can be used to re-determine the risk based on the answers, [0042]-[0043], the mitigations questions are designed to seek the required information or action to remedy the issue indicated in the high risk factors 414, e.g., for a missing back-out plan, the mitigation action could be to add a back-out plan, or indicate that a back-out plan is not possible, and the window 600 further lists an implementation plan 603 for implementing the change and a back-out plan 604 for backing out the change).
Regarding claim 2, Guven discloses the computer-implemented method of claim 1 (as above), wherein: the requesting the user input further comprises presenting, to a user via the user interface, a sentiment survey about the candidate change to the organizational process ([0043], fig. 6 illustrates an example of a window 600 that may be presented to an implementer of a change to mitigate the risk associated with that change, e.g., the corresponding assigned risk 602 is a 4/5, the mitigation window 606 presents mitigation questions to a user based on the high risk factors 514 associated with the change, and once the mitigation questions have been answered, a selectable mitigation button 607 can be used to re-determine the risk based on the answers, [0054], the display unit 1011 may display the above-described risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc., [0051], risk assessment questions that may be asked to a user may include the following: How many users (including the account and their clients) would be impacted in the case of a change failure?; Does this change affect a local, multi-region, or a global service?; Would the failure of this change impact a critical service for the customer?; How many resources are required to implement the change?; Is there enough time allocated in the change window to cover a potential back-out?); and
the user input includes a numerical value describing the user's sentiment about the candidate change to the organizational process ([0032], the risk assessment questions need not be yes/no questions, e.g., if one of the technical factors is "requires experts", the risk assessment question could provide choices, such as "does the change require a single expert, a few experts, or a multitude of experts", where a risk weight could be assigned to each answer choice, wherein if a change requires a single expert, it could be weighted lower than a change that requires multiple experts, [0034], the method further includes determining an initial risk based on the answers to the filtered questions (S204), e.g., a weight can be applied to each answer, the weights can be summed to arrive at an overall risk, the risk rating may be a numerical value, e.g., 1 for BAU, 2 for minor, 3 for medium, 4 for major, 5 for critical, etc.).
Regarding claim 7, Guven discloses the computer-implemented method of claim 1 (as above), wherein: the plurality of steps is a group of steps in a plurality of groups of steps; and each group of steps in the plurality of groups of steps is configured to guide improving a respective type of organizational process ([0050], the engine may use the high risk factors to automatically determine a list of mitigation questions and associated actions to reduce risk. If the risk is high, and a user opts for mitigation, the user can answer the mitigation questions and potentially takes actions to reduce the risk, [0035], e.g., if it is determined that a factor of a "highly constrained timeframe" can be mitigated, then the mitigation question could be "can the change be broken into stages?", and the method includes prompting the assessor to answer the mitigation questions and take the necessary actions they agreed on to reduce risk (S303), e.g., if a back-out plan is missing and the user indicated while answering the mitigation questions that they agree to put in a back-out plan, the system will prompt them to enter a back-out plan and may not proceed until this is done, and the method may include a re-determination of the risk and remaining high risk factors (S304)).
Regarding claims 8, 9, & 14, these claims are substantially similar to claims 1, 2, & 7, and are, therefore, rejected on the same basis as claims 1, 2, & 7. While claims 8, 9, & 14 are directed to a system comprising an electronic processor configured to execute a set of computer-executable instructions and a memory communicatively coupled to the electronic processor and storing the set of computer-executable instructions, Guven discloses such a system, as claimed. See [0012]-[0015], [0053]-[0057].
Regarding claims 15, 16, & 21, these claims are substantially similar to claims 1, 2, & 7, and are, therefore, rejected on the same basis as claims 1, 2, & 7. While claims 15, 16, & 21 are directed to a non-transitory computer-readable medium comprising processor-executable instructions stored thereon, Guven discloses such a computer-readable medium, as claimed. See [0012]-[0015], [0053]-[0057].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-5, 10-12, & 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Guven, et al. (US 20210405610 A1), hereinafter Guven, in view of Mehrotra, et al. (US 20070061191 A1), hereinafter Mehrotra.
Regarding claim 3, Guven discloses the computer-implemented method of claim 1 (as above). Further, while Guven discloses the following limitations: presenting, to a plurality of users and via a user interface device, a survey configured to identify respective levels of … of aspects of an organization ([0043], fig. 6 illustrates an example of a window 600 that may be presented to an implementer of a change to mitigate the risk associated with that change, e.g., the corresponding assigned risk 602 is a 4/5, the mitigation window 606 presents mitigation questions to a user based on the high risk factors 514 associated with the change, and once the mitigation questions have been answered, a selectable mitigation button 607 can be used to re-determine the risk based on the answers, [0054], the display unit 1011 may display the above-described risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc., [0051], risk assessment questions that may be asked to a user may include the following: How many users (including the account and their clients) would be impacted in the case of a change failure?; Does this change affect a local, multi-region, or a global service?; Would the failure of this change impact a critical service for the customer?; How many resources are required to implement the change?; Is there enough time allocated in the change window to cover a potential back-out?);
receiving, from the surveyed users, numerical values describing the respective levels of … of the aspects of the organization ([0043], the mitigation window 606 presents mitigation questions to a user based on the high risk factors 514 associated with the change, and once the mitigations questions have been answered, a selectable mitigation button 607 can be used to re-determine the risk based on the answers, [0054], the display unit 1011 may display the above risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc., [0032], the risk assessment questions need not be yes/no, e.g., if one of the technical factors is "requires experts", the risk assessment question could provide choices, such as "does the change require a single expert, a few experts, or a multitude of experts", where a risk weight could be assigned to each answer choice, wherein if a change requires a single expert, it could be weighted lower than a change that requires multiple experts, [0034], the method further includes determining an initial risk based on the answers to the filtered questions (S204), e.g., a weight can be applied to each answer, the weights are summed to arrive at an overall risk, the risk rating may be a numerical value, e.g., 1 for BAU, 2 for min, 3 for med, 4 for maj, 5 for crit, etc.);
calculating a … score for each respective aspect of the organization by … the respective numerical values ([0034], the method further includes determining an initial risk based on the answers to the filtered questions (S204), e.g., as above, a weight can be applied to each answer, and the weights can be summed to arrive at an overall risk, and the risk rating may be a numerical value (e.g., 1 for BAU, 2 for minor, 3 for medium, 4 for major, 5 for critical, etc.));
storing, in the database, the … for each respective aspect of the organization ([0048], a graphical user interface may be provided to a change requester to enter change related information into a data structure associated with the change, and the data structure may be referred to as a change ticket and may be stored in the ticket DB 507 illustrated in FIG. 5, [0054], the hard disk 1008 may store each of the databases illustrated in FIG. 4, an impactXprobability matrix, answers to the risk assessment and mitigation questions, change related information, etc.); and
selecting, based upon the … score for each respective aspect of the organization, the step in the plurality of steps ([0034]-[0035], figs. 2, 3, the step of determining the initial risk (S204) may also provide a list of high risk factors associated with the change (e.g., they are above a predefined risk threshold), and then the method next includes generating mitigation questions from the determined high risk factors (S302), prompting the assessor to answer the mitigation questions and take the necessary actions they agreed on to reduce risk (S303), [0042], the engine attempts to mitigate the high risk factors 514 to reduce the previously determined risk that are above a predefined threshold, e.g., the risk may be on a scale of 1-5, and the engine may be set to mitigate whenever the determined risk is a 4 or 5, and the risk engine may refine a set of mitigation questions based on the high risk factors 514 and define any necessary user actions, [0034], the step of determining the initial risk may provide a list of high risk factors associated with the change, e.g., if a proposed change upgrading an OS is associated with a factor that the change must be performed by a multitude of experts, these factors are considered high risk (e.g., above a predefined risk threshold), and these high risk factors can also be identified in addition to the initial risk). Guven does not necessarily disclose the following remaining limitations, which, however, are taught by Mehrotra.
Mehrotra teaches presenting, to a plurality of users …, a survey configured to identify respective levels of maturity of aspects of an organization;
receiving, from the surveyed users, numerical values describing the respective levels of maturity of the aspects of the organization ([0041]-[0045], in operation 201 current change management information is gathered/received from the organization, wherein a set of questions are derived for each of the predefined maturity levels and used as part of the maturity model tool, and when these questions are answered in light of the current change management information of an organization (e.g., obtained in operation 201), they elucidate the current change management maturity level of the organization. FIG. 3 is an example of a maturity model tool, answers to the questions of the maturity model tool may produce a sub-process score for each question, wherein scores may range from 0 to 5, with a score of 5 may indicate that activities are "Optimized"
);
calculating a maturity score for each respective aspect of the organization by averaging the respective numerical values ([0042], [0044]-[0046], in operation 203, a current change management maturity level may be identified for the organization using the current change management information using a "maturity model tool," wherein answers to the questions of the maturity model tool may produce a sub-process score for each question, sub-process scores may range from 0 to 5, average of all of the individual sub-process scores provides an aggregate score at each maturity level, after completing the aggregate scores for each predefined maturity level, the maturity level of the organization may be determined, e.g., by calculating an average score from the aggregate scores);
… the maturity score for each respective aspect of the organization ([0042], [0044]-[0046], in operation 203, a current change management maturity level may be identified for the organization using the current change management information using a "maturity model tool," wherein answers to the questions of the maturity model tool may produce a sub-process score for each question, sub-process scores may range from 0 to 5, average of all of the individual sub-process scores provides an aggregate score at each maturity level, after completing the aggregate scores for each predefined maturity level, the maturity level of the organization may be determined, e.g., by calculating an average score from the aggregate scores); and
selecting, based upon the maturity score for each respective aspect of the organization, the step in the plurality of steps ([0049]-[0051], once the current change management maturity level of an organization is established using the operations described above, operation 205 identifies a target change management process maturity level, wherein the "target" maturity level may be the maturity level immediately above the current level or several increments higher in the hierarchy, and in operation 207, one or more improvement operations may be devised that, when performed, will raise the change management maturity level higher in the hierarchy of change management maturity levels, e.g., the solution architecture specification may include the detailed documentation regarding the one or more improvement operations performed to achieve the target maturity level, including: the steps that are to be taken to achieve the target maturity level, [0058], transition roadmap 400b illustrates improvement operations 411 that detailing the transition from an Efficient to a Responsive maturity level that may include performing a solution leading assessment of the organization to determine the organization's current state, creating gap analysis (the difference between the current state and what is needed to get to the target maturity level), creating the solution architecture overview, or performing other steps, wherein operations 411 may serve to define/identify the one or more improvement operations in terms of a set of processes 413 and a set of products 415).
Guven and Mehrotra are analogous art because both address the problem of generating scores categorizing changes in business processes. At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to include in the system of Guven the ability to calculate a maturity score, as taught by Mehrotra, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the combination would produce the predictable result of calculating a maturity score, as claimed. Further, it would have been obvious to one of ordinary skill in the art to have modified Guven with the aforementioned teachings of Mehrotra in order to produce the added benefit of improving the quality of incident or change handling. See Mehrotra at [0003].
Regarding claim 4, Guven discloses the computer-implemented method of claim 1 (as above). Further, while Guven discloses the following limitations: presenting, to a plurality of users and via a user interface device, a survey configured to identify a respective level of impact due to aspects of the candidate change to the organizational process ([0043], fig. 6 illustrates an example of a window 600 that may be presented to an implementer of a change to mitigate the risk associated with that change, e.g., the corresponding assigned risk 602 is a 4/5, the mitigation window 606 presents mitigation questions to a user based on the high risk factors 514 associated with the change, and once the mitigation questions have been answered, a selectable mitigation button 607 can be used to re-determine the risk based on the answers, [0054], the display unit 1011 may display the above-described risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc., [0051], risk assessment questions that may be asked to a user may include the following: How many users (including the account and their clients) would be impacted in the case of a change failure?; Does this change affect a local, multi-region, or a global service?; Would the failure of this change impact a critical service for the customer?; How many resources are required to implement the change?; Is there enough time allocated in the change window to cover a potential back-out?);
receiving, from the surveyed users, numerical values describing the respective levels of impact due to the aspects of the candidate change to the organizational process ([0043], the mitigation window 606 presents mitigation questions to a user based on the high risk factors 514 associated with the change, and once the mitigations questions have been answered, a selectable mitigation button 607 can be used to re-determine the risk based on the answers, [0054], the display unit 1011 may display the above risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc., [0032], the risk assessment questions need not be yes/no, e.g., if one of the technical factors is "requires experts", the risk assessment question could provide choices, such as "does the change require a single expert, a few experts, or a multitude of experts", where a risk weight could be assigned to each answer choice, wherein if a change requires a single expert, it could be weighted lower than a change that requires multiple experts, [0034], the method further includes determining an initial risk based on the answers to the filtered questions (S204), e.g., a weight can be applied to each answer, the weights are summed to arrive at an overall risk, the risk rating may be a numerical value, e.g., 1 for BAU, 2 for min, 3 for med, 4 for maj, 5 for crit, etc.);
calculating a change impact score for each respective aspect of the candidate change to the organizational process by … the respective numerical values ([0034], the method further includes determining an initial risk based on the answers to the filtered questions (S204), e.g., as above, a weight can be applied to each answer);
storing, in the database, … for each respective aspect of the candidate change to the organizational process ([0048], a graphical user interface may be provided to a change requester to enter change related information into a data structure associated with the change, and the data structure may be referred to as a change ticket and may be stored in the ticket DB 507 illustrated in FIG. 5, [0054], the hard disk 1008 may store each of the databases illustrated in FIG. 4, an impactXprobability matrix, answers to the risk assessment and mitigation questions, change related information, etc.);
calculating an assessment score by adding the change impact score to a risk impact score ([0034], the method further includes determining an initial risk based on the answers to the filtered questions (S204), e.g., the weights can be summed to arrive at an overall risk, and the risk rating may be a numerical value (e.g., 1 for BAU, 2 for minor, 3 for medium, 4 for major, 5 for critical, etc.));
comparing the assessment score to a threshold value; and
selecting, when the assessment score is lower than the threshold value, a reduced number of steps to perform in the plurality of steps relative to the number of steps performed in the plurality of steps when the assessment score meets or exceeds the threshold value ([0034]-[0035], figs. 2, 3, the step of determining the initial risk (S204) may also provide a list of high risk factors associated with the change (e.g., they are above a predefined risk threshold), and then the method next includes generating mitigation questions from the determined high risk factors (S302), prompting the assessor to answer the mitigation questions and take the necessary actions they agreed on to reduce risk (S303), [0042], the engine attempts to mitigate the high risk factors 514 to reduce the previously determined risk that are above a predefined threshold, e.g., the risk may be on a scale of 1-5, and the engine may be set to mitigate whenever the determined risk is a 4 or 5, and the risk engine may refine a set of mitigation questions based on the high risk factors 514 and define any necessary user actions, [0034], the step of determining the initial risk may provide a list of high risk factors associated with the change, e.g., if a proposed change upgrading an OS is associated with a factor that the change must be performed by a multitude of experts, these factors are considered high risk (e.g., above a predefined risk threshold), and these high risk factors can also be identified in addition to the initial risk
PNG
media_image2.png
596
438
media_image2.png
Greyscale
),
Guven does not necessarily disclose the following remaining limitations, which, however, are taught by the further teachings of Mehrotra.
Mehrotra teaches presenting, to a plurality of users …, a survey configured to identify a respective level of impact due to aspects of the candidate change to the organizational process;
receiving, from the surveyed users, numerical values describing the respective levels of impact due to the aspects of the candidate change to the organizational process ([0041]-[0045], in operation 201 current change management information is gathered/received from the organization, wherein a set of questions is derived for each of the predefined maturity levels and used as part of the maturity model tool, and when these questions are answered in light of the current change management information of an organization (e.g., obtained in operation 201), they elucidate the current change management maturity level of the organization, FIG. 3 is an example of a maturity model tool, answers to the questions of the maturity model tool may produce a sub-process score for each question, wherein scores may range from 0 to 5, with a score of 5 indicating that activities are "Optimized");
calculating a change impact score for each respective aspect of the candidate change to the organizational process by averaging the respective numerical values ([0042], [0044]-[0046], in operation 203, a current change management maturity level may be identified for the organization using the current change management information using a "maturity model tool," wherein answers to the questions of the maturity model tool may produce a sub-process score for each question, sub-process scores may range from 0 to 5, average of all of the individual sub-process scores provides an aggregate score at each maturity level);
… the change impact score for each respective aspect of the candidate change to the organizational process ([0042], [0044]-[0046], in operation 203, a current change management maturity level may be identified for the organization using the current change management information using a "maturity model tool," wherein answers to the questions of the maturity model tool may produce a sub-process score for each question, sub-process scores may range from 0 to 5, average of all of the individual sub-process scores provides an aggregate score at each maturity level);
calculating an assessment score by adding the change impact score to a risk impact score ([0042], [0044]-[0046], after completing the aggregate scores for each predefined maturity level, the maturity level of the organization may be determined, e.g., by calculating an average score from the aggregate scores);
comparing the assessment score to a threshold value; and
selecting, when the assessment score is lower than the threshold value, a reduced number of steps to perform in the plurality of steps relative to the number of steps performed in the plurality of steps when the assessment score meets or exceeds the threshold value ([0049]-[0051], once the current change management maturity level of an organization is established using the operations described above, operation 205 identifies a target change management process maturity level, wherein the "target" maturity level may be the maturity level immediately above the current level or several increments higher in the hierarchy, and in operation 207, one or more improvement operations may be devised that, when performed, will raise the change management maturity level higher in the hierarchy of change management maturity levels, e.g., the solution architecture specification may include the detailed documentation regarding the one or more improvement operations performed to achieve the target maturity level, including: the steps that are to be taken to achieve the target maturity level, [0058], transition roadmap 400b illustrates improvement operations 411 detailing the transition from an Efficient to a Responsive maturity level, which may include performing a solution leading assessment of the organization to determine the organization's current state, creating a gap analysis (the difference between the current state and what is needed to get to the target maturity level), creating the solution architecture overview, or performing other steps, wherein operations 411 may serve to define/identify the one or more improvement operations in terms of a set of processes 413 and a set of products 415).
Guven and Mehrotra are analogous art because both address the problem of generating scores categorizing changes in business processes. At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to include in the system of Guven the ability to calculate a change impact score and an assessment score, as taught by Mehrotra, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the combination would produce the predictable results of calculating a change impact score and an assessment score, as above. Further, it would have been obvious to one of ordinary skill in the art to have modified Guven with the aforementioned teachings of Mehrotra in order to produce the added benefit of improving quality of incident or change handling (Mehrotra, [0003]).
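For illustration only, and not as part of the record, the scoring arithmetic recited in the limitations mapped above — averaging surveyed numerical values into a change impact score, adding that score to a risk impact score, and comparing the sum to a threshold to select a reduced number of steps — may be sketched as follows; all function names, step names, and numeric values are hypothetical:

```python
# Illustrative sketch of the claimed scoring arithmetic; every name,
# threshold, and survey value below is a hypothetical example.

def change_impact_score(survey_values):
    """Average the surveyed numerical impact values (the claimed averaging step)."""
    return sum(survey_values) / len(survey_values)

def assessment_score(change_impact, risk_impact):
    """Add the change impact score to the risk impact score (the claimed addition step)."""
    return change_impact + risk_impact

def select_steps(steps, score, threshold):
    """When the score is below the threshold, select a reduced number of steps
    relative to the full plurality of steps (the claimed selection step);
    halving the list is an illustrative choice only."""
    if score < threshold:
        return steps[: len(steps) // 2]
    return steps  # full set when the score meets or exceeds the threshold

# Hypothetical worked example:
survey = [3, 4, 5]                  # e.g., 0-5 impact ratings from surveyed users
ci = change_impact_score(survey)    # (3 + 4 + 5) / 3 = 4.0
total = assessment_score(ci, 2.0)   # 4.0 + 2.0 = 6.0
steps = ["plan", "approve", "implement", "verify"]
selected = select_steps(steps, total, threshold=7.0)  # 6.0 < 7.0 -> reduced subset
```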
Regarding claim 5, Guven discloses the computer-implemented method of claim 1 (as above). Further, Guven discloses the following: presenting, to a plurality of users and via a user interface device, a survey configured to identify a respective level of risk due to aspects of the candidate change to the organizational process ([0043], fig. 6 illustrates an example of a window 600 that may be presented to an implementer of a change to mitigate the risk associated with that change, e.g., the corresponding assigned risk 602 is a 4/5, the mitigation window 606 presents mitigation questions to a user based on the high risk factors 514 associated with the change, and once the mitigation questions have been answered, a selectable mitigation button 607 can be used to re-determine the risk based on the answers, [0054], the display unit 1011 may display the above-described risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc., [0051], risk assessment questions that may be asked of a user may include the following: How many users (including the account and their clients) would be impacted in the case of a change failure?; Does this change affect a local, multi-region, or a global service?; Would the failure of this change impact a critical service for the customer?; How many resources are required to implement the change?; Is there enough time allocated in the change window to cover a potential back-out?);
receiving, from the surveyed users, numerical values describing the respective levels of risk due to the aspects of the candidate change to the organizational process ([0043], the mitigation window 606 presents mitigation questions to a user based on the high risk factors 514 associated with the change, and once the mitigation questions have been answered, a selectable mitigation button 607 can be used to re-determine the risk based on the answers, [0054], the display unit 1011 may display the above risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc., [0032], the risk assessment questions need not be yes/no, e.g., if one of the technical factors is "requires experts", the risk assessment question could provide choices, such as "does the change require a single expert, a few experts, or a multitude of experts", where a risk weight could be assigned to each answer choice, wherein if a change requires a single expert, it could be weighted lower than a change that requires multiple experts, [0034], the method further includes determining an initial risk based on the answers to the filtered questions (S204), e.g., a weight can be applied to each answer, the weights are summed to arrive at an overall risk, and the risk rating may be a numerical value, e.g., 1 for BAU, 2 for minor, 3 for medium, 4 for major, 5 for critical, etc.);
calculating a risk score for each respective aspect of the candidate change to the organizational process by … the respective numerical values ([0034], the method further includes determining an initial risk based on the answers to the filtered questions (S204), e.g., as above, a weight can be applied to each answer);
storing, in the database, … for each respective aspect of the candidate change to the organizational process ([0048], a graphical user interface may be provided to a change requester to enter change related information into a data structure associated with the change, and the data structure may be referred to as a change ticket and may be stored in the ticket DB 507 illustrated in FIG. 5, [0054], the hard disk 1008 may store each of the databases illustrated in FIG. 4, an impactXprobability matrix, answers to the risk assessment and mitigation questions, change related information, etc.);
calculating an assessment score by adding the risk impact score to a change impact score ([0034], the method further includes determining an initial risk based on the answers to the filtered questions (S204), e.g., the weights can be summed to arrive at an overall risk, and the risk rating may be a numerical value (e.g., 1 for BAU, 2 for minor, 3 for medium, 4 for major, 5 for critical, etc.));
comparing the assessment score to a threshold value; and
selecting, when the assessment score is lower than the threshold value, a reduced number of steps to perform in the plurality of steps relative to the number of steps performed in the plurality of steps when the assessment score meets or exceeds the threshold value ([0034]-[0035], fig. 2, 3, the step of determining the initial risk (S204) may also provide a list of high risk factors associated with the change (e.g., they are above a predefined risk threshold), and then the method next includes generating mitigation questions from the determined high risk factors (S302), prompting the assessor to answer the mitigation questions and take the necessary actions they agreed on to reduce risk (S303), [0042], the engine attempts to mitigate the high risk factors 514 that are above a predefined threshold in order to reduce the previously determined risk, e.g., the risk may be on a scale of 1-5, and the engine may be set to mitigate whenever the determined risk is a 4 or 5, and the risk engine may refine a set of mitigation questions based on the high risk factors 514 and define any necessary user actions, [0034], the step of determining the initial risk may provide a list of high risk factors associated with the change, e.g., if a proposed change upgrading an OS is associated with a factor that the change must be performed by a multitude of experts, these factors are considered high risk (e.g., above a predefined risk threshold), and these high-risk factors can also be identified in addition to the initial risk),
Guven does not necessarily disclose the following remaining limitations, which, however, are taught by the further teachings of Mehrotra.
Mehrotra teaches presenting, to a plurality of users …, a survey configured to identify a respective level of risk due to aspects of the candidate change to the organizational process;
receiving, from the surveyed users, numerical values describing the respective levels of risk due to the aspects of the candidate change to the organizational process ([0041]-[0045], in operation 201 current change management information is gathered/received from the organization, wherein a set of questions is derived for each of the predefined maturity levels and used as part of the maturity model tool, and when these questions are answered in light of the current change management information of an organization (e.g., obtained in operation 201), they elucidate the current change management maturity level of the organization, FIG. 3 is an example of a maturity model tool, answers to the questions of the maturity model tool may produce a sub-process score for each question, wherein scores may range from 0 to 5, with a score of 5 indicating that activities are "Optimized");
calculating a risk score for each respective aspect of the candidate change to the organizational process by averaging the respective numerical values ([0042], [0044]-[0046], in operation 203, a current change management maturity level may be identified for the organization using the current change management information using a "maturity model tool," wherein answers to the questions of the maturity model tool may produce a sub-process score for each question, sub-process scores may range from 0 to 5, average of all of the individual sub-process scores provides an aggregate score at each maturity level);
… the risk score for each respective aspect of the candidate change to the organizational process ([0042], [0044]-[0046], in operation 203, a current change management maturity level may be identified for the organization using the current change management information using a "maturity model tool," wherein answers to the questions of the maturity model tool may produce a sub-process score for each question, sub-process scores may range from 0 to 5, average of all of the individual sub-process scores provides an aggregate score at each maturity level);
calculating an assessment score by adding the risk impact score to a change impact score ([0042], [0044]-[0046], after completing the aggregate scores for each predefined maturity level, the maturity level of the organization may be determined, e.g., by calculating an average score from the aggregate scores);
comparing the assessment score to a threshold value; and
selecting, when the assessment score is lower than the threshold value, a reduced number of steps to perform in the plurality of steps relative to the number of steps performed in the plurality of steps when the assessment score meets or exceeds the threshold value ([0049]-[0051], once the current change management maturity level of an organization is established using the operations described above, operation 205 identifies a target change management process maturity level, wherein the "target" maturity level may be the maturity level immediately above the current level or several increments higher in the hierarchy, and in operation 207, one or more improvement operations may be devised that, when performed, will raise the change management maturity level higher in the hierarchy of change management maturity levels, e.g., the solution architecture specification may include the detailed documentation regarding the one or more improvement operations performed to achieve the target maturity level, including: the steps that are to be taken to achieve the target maturity level, [0058], transition roadmap 400b illustrates improvement operations 411 detailing the transition from an Efficient to a Responsive maturity level, which may include performing a solution leading assessment of the organization to determine the organization's current state, creating a gap analysis (the difference between the current state and what is needed to get to the target maturity level), creating the solution architecture overview, or performing other steps, wherein operations 411 may serve to define/identify the one or more improvement operations in terms of a set of processes 413 and a set of products 415).
Guven and Mehrotra are analogous art because both address the problem of generating scores categorizing changes in business processes. At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to include in the system of Guven the ability to calculate a risk score and an assessment score, as taught by Mehrotra, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the combination would produce the predictable results of calculating a risk score and an assessment score, as above. Further, it would have been obvious to one of ordinary skill in the art to have modified Guven with the aforementioned teachings of Mehrotra in order to produce the added benefit of improving quality of incident or change handling (Mehrotra, [0003]).
Regarding claims 10-12, these claims are substantially similar to claims 3-5, and are, therefore, rejected on the same basis as claims 3-5. While claims 10-12 are directed to a system comprising an electronic processor configured to execute a set of computer-executable instructions and a memory communicatively coupled to the electronic processor and storing the set of computer-executable instructions, Guven discloses a system, as claimed. [0012]-[0015], [0053]-[0057].
Regarding claims 17-19, these claims are substantially similar to claims 3-5, and are, therefore, rejected on the same basis as claims 3-5. While claims 17-19 are directed to a non-transitory computer-readable medium comprising processor-executable instructions stored thereon, Guven discloses a computer-readable medium, as claimed. [0012]-[0015], [0053]-[0057].
Claims 6, 13, & 20 are rejected under 35 U.S.C. 103 as being unpatentable over Guven et al. (US 20210405610 A1), hereinafter Guven, in view of Friedlander et al. (US 20020165763 A1), hereinafter Friedlander.
Regarding claim 6, Guven discloses the computer-implemented method of claim 1 (as above). Further, Guven discloses the following: … presenting, to users on the list of users and via a user interface device, a survey configured to identify actual respective levels of loyalty, influence, and interest in the candidate change to the organizational process by the users on the list of users ([0043], fig. 6 illustrates an example of a window 600 that may be presented to an implementer of a change to mitigate the risk associated with that change, e.g., the corresponding assigned risk 602 is a 4/5, the mitigation window 606 presents mitigation questions to a user based on the high risk factors 514 associated with the change, and once the mitigation questions have been answered, a selectable mitigation button 607 can be used to re-determine the risk based on the answers, [0054], the display unit 1011 may display the above-described risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc., [0051], risk assessment questions that may be asked of a user may include the following: How many users (including the account and their clients) would be impacted in the case of a change failure?; Does this change affect a local, multi-region, or a global service?; Would the failure of this change impact a critical service for the customer?; How many resources are required to implement the change?; Is there enough time allocated in the change window to cover a potential back-out?);
receiving, from the surveyed users, numerical values describing the actual respective levels of loyalty, influence, and interest in the candidate change process ([0043], the mitigation window 606 presents mitigation questions to a user based on the high risk factors 514 associated with the change, and once the mitigation questions have been answered, a selectable mitigation button 607 can be used to re-determine the risk based on the answers, [0054], the display unit 1011 may display the above risk assessment questions, determined risks, mitigation questions, the user interface for entering the change ticket information, etc., [0032], the risk assessment questions need not be yes/no, e.g., if one of the technical factors is "requires experts", the risk assessment question could provide choices, such as "does the change require a single expert, a few experts, or a multitude of experts", where a risk weight could be assigned to each answer choice, wherein if a change requires a single expert, it could be weighted lower than a change that requires multiple experts, [0034], the method further includes determining an initial risk based on the answers to the filtered questions (S204), e.g., a weight can be applied to each answer, the weights are summed to arrive at an overall risk, and the risk rating may be a numerical value, e.g., 1 for BAU, 2 for minor, 3 for medium, 4 for major, 5 for critical, etc.); …
storing, in the database, … ([0048], a graphical user interface may be provided to a change requester to enter change related information into a data structure associated with the change, and the data structure may be referred to as a change ticket and may be stored in the ticket DB 507 illustrated in FIG. 5, [0054], the hard disk 1008 may store each of the databases illustrated in FIG. 4, an impactXprobability matrix, answers to the risk assessment and mitigation questions, change related information, etc.),
Guven does not necessarily disclose the following remaining limitations, which, however, are taught by the further teachings of Friedlander.
Friedlander teaches receiving a list of users impacted by the candidate change to the organizational process;
storing the list of users in the database ([0043], analysis system 22 comprises a system and method for implementing technical change in an organization 26 having multiple hierarchies 28, wherein database 24 provides storage for information 30 necessary to carry out the present invention, wherein such information could include (2) organizational information (such as the quantity, names, and types of hierarchies), [0044], each hierarchy 28 could represent a different management level, department, position type, individual, etc., within organization 26);
presenting, to users on the list of users and via a user interface device, a survey configured to identify ([0071], fig. 9 depicts a flow chart including the first step 702 to query each of the hierarchies in the organization, [0051], the queries are sent electronically to the selected hierarchies by query system 42, and the selected hierarchies could view queries at an interface by directly accessing the query system 42) actual respective levels of loyalty, influence, and interest in the candidate change to the organizational process by the users on the list of users ([0050], each query comprises a "set" of questions that are developed to measure a potential response to the technical change being proposed or implemented, wherein the queries are grouped into various query topics such as, for example, Leadership, Planning, Administration, Operations, Quality Assurance, Communications, Project Management, and Skills/Training);
receiving, from the surveyed users, numerical values describing the actual respective levels of loyalty, influence, and interest in the candidate change ([0071], the flow chart in fig. 9 includes second step 704 to receive a set of hierarchy responses to the querying, third step 706 to quantify the set of responses into a raw score, and fourth step 708 to modify the raw score to yield a skill score, [0051], a set of hierarchy responses to the queries will be received by the hierarchy response system 44, wherein the set of hierarchy responses could include any number of responses (i.e., 0, 1, 2 . . . N), and the set of hierarchy responses could be electronically transmitted to response system 44, or submitted directly by the queried hierarchies at an interface);
retrieving, from the database, numerical values describing targeted respective levels of loyalty, influence, and interest in the candidate change ([0057], score system 50 would simply access the database to retrieve the inputted required score, [0071], the fifth step 710 of method 700 is to compare the skill score to a predetermined required score, [0043], analysis system 22 comprises a system and method for implementing technical change in an organization 26 having multiple hierarchies 28, wherein database 24 provides storage for information 30 necessary to carry out the present invention, wherein such information could include (3) score information (e.g., values attributable to required scores, etc.), [0053], e.g., possible responses to a particular query question could be YES, NO, or SOMETIMES, in which case quantification system 46 could assign, for example, a value of 3 to YES, a value of 1 to NO, and a value of 2 to SOMETIMES);
calculating a stakeholder gap score for the candidate change to the organizational process by computing a difference between the received numerical values describing the actual respective levels of loyalty, influence, and interest in the candidate change and the retrieved numerical values describing targeted respective levels of loyalty, influence, and interest in the candidate change ([0058], once retrieved from the database, the comparison system 52 will then compare the required score to the skill score to determine any difference between the two, wherein this is preferably accomplished by taking the mathematical difference between the two scores, [0071], fifth step 710 of method 700 is to compare the skill score to a predetermined required score to determine a predicted response to the technical change, [0039], the skill score is then compared to a predetermined required score to determine any difference between the two, wherein the difference can then be examined to pinpoint potential problems resulting from the technical change (i.e., identify a potential response to the change), and to recommend corrective actions to the organization); and
… the stakeholder gap score ([0058], once retrieved from the database, the comparison system 52 will then compare the required score to the skill score to determine any difference between the two, wherein this is preferably accomplished by taking the mathematical difference between the two scores).
Guven and Friedlander are analogous art because both address the problem of generating scores categorizing changes in business processes. At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to include in the system of Guven the ability to receive a list of users impacted by a candidate change, store the list of users in the database, and calculate a stakeholder gap score, as taught by Friedlander, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the combination would produce the predictable results of receiving a list of users impacted by a candidate change, storing the list of users in the database, and calculating a stakeholder gap score, as above. Further, it would have been obvious to one of ordinary skill in the art to have modified Guven with the aforementioned teachings of Friedlander in order to produce the added benefit of implementing corrective actions to circumvent any adverse reactions to the change (Friedlander, [0004]).
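For illustration only, and not as part of the record, the gap computation described in the mapped limitations — a mathematical difference between received actual levels and retrieved targeted levels of loyalty, influence, and interest, per Friedlander's required-score-versus-skill-score comparison — may be sketched as follows; the aspect names and numeric levels are hypothetical:

```python
# Illustrative sketch of the claimed stakeholder gap computation; the
# aspect names and numeric levels below are hypothetical examples.

def stakeholder_gap_score(actual, targeted):
    """Compute, per aspect, the difference between the actual surveyed level
    and the retrieved targeted level (the claimed difference step)."""
    return {aspect: actual[aspect] - targeted[aspect] for aspect in targeted}

# Hypothetical worked example:
actual = {"loyalty": 2, "influence": 4, "interest": 3}    # surveyed values
targeted = {"loyalty": 3, "influence": 3, "interest": 5}  # retrieved targets
gaps = stakeholder_gap_score(actual, targeted)
# gaps -> {"loyalty": -1, "influence": 1, "interest": -2}
```

A negative gap for an aspect would indicate the actual level falls short of the target, which is the kind of difference Friedlander examines to pinpoint potential problems and recommend corrective actions.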
Regarding claim 13, this claim is substantially similar to claim 6, and is, therefore, rejected on the same basis as claim 6. While claim 13 is directed to a system comprising an electronic processor configured to execute a set of computer-executable instructions and a memory communicatively coupled to the electronic processor and storing the set of computer-executable instructions, Guven discloses a system, as claimed. [0012]-[0015], [0053]-[0057].
Regarding claim 20, this claim is substantially similar to claim 6, and is, therefore, rejected on the same basis as claim 6. While claim 20 is directed to a non-transitory computer-readable medium comprising processor-executable instructions stored thereon, Guven discloses a computer-readable medium, as claimed. [0012]-[0015], [0053]-[0057].
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES A GUILIANO whose telephone number is (571)272-9859. The examiner can normally be reached Mon-Fri 10:00 am - 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CHARLES GUILIANO
Primary Examiner
Art Unit 3623
/CHARLES GUILIANO/Primary Examiner, Art Unit 3623