DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the America Invents Act (AIA).
General Information Matter
Please note that the instant non-provisional application (18/950,511), under prosecution at the United States Patent and Trademark Office (USPTO), has been assigned to David Zarka (Examiner) in Art Unit 2449. To aid in correlating any papers for 18/950,511, all further correspondence regarding the instant application should be directed to the Examiner.
Joint Inventors
This application currently names joint inventors. In considering patentability of the claims the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicants are advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential § 102(a)(2) prior art against the later invention.
Claim Rejections – 35 U.S.C. § 112
The following is a quotation of 35 U.S.C. § 112(b): “The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.”
The MPEP recites “[d]uring examination, after applying the broadest reasonable interpretation consistent with the specification to the claim, if the metes and bounds of the claimed invention are not clear, the claim is indefinite and should be rejected.” MPEP § 2173.02(I) (citing In re Packard, 751 F.3d 1307, 1311 (Fed. Cir. 2014)). “For example, if the language of a claim, given its broadest reasonable interpretation, is such that a person of ordinary skill in the relevant art would read it with more than one reasonable interpretation, then a rejection under 35 U.S.C. 112(b) . . . is appropriate.” Id. See also id. § 2173.05(e) (discussing indefiniteness arising from terms lacking proper antecedent basis).
Claim 4 is rejected under § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
In particular, "recommendation actions" in claim 4, line 9, renders the claim ambiguous because the Examiner is uncertain whether the limitation refers to (A) the first set of recommended actions introduced in claim 1, line 5; or (B) other recommendation actions.
If (A), the Examiner recommends amending to recite "the first set of recommended actions." If (B), the Examiner recommends amending to recite "other recommendation actions."
It is assumed for examination purposes that the limitation refers to (B). See MPEP § 2173.06 (reciting “When making a rejection over prior art in these circumstances, it is important that the examiner state on the record how the claim term or phrase is being interpreted with respect to the prior art applied in the rejection.”; emphasis omitted).
Claim Rejections – 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Gruber and Ahuja
Claims 1, 2, 4–7, 9, 10, 12, and 16–20 are rejected under 35 U.S.C. § 103 as being unpatentable over Gruber et al. (US 2013/0304758 A1; filed Mar. 15, 2013) in view of Ahuja et al. (US 2018/0024578 A1; filed Dec. 30, 2016).
Regarding claim 1, while Gruber teaches a method, comprising:
receiving an alert (“the digital assistant registers a failure to provide a satisfactory response to a user request” at ¶ 103; fig. 4, item 402) triggered by an event (“failure to provide a satisfactory response to a user request” at ¶ 103) in a managed information technology (IT) environment (fig. 1, item 100);
identifying an IT component (“digital assistant” at ¶ 103; “The digital assistant includes a client-side portion 102a, 102b . . . executed on a user device 104a, 104b, and a server-side portion 106 . . . executed on a server system 108.” at ¶ 38) associated with the alert using a component extraction tool (fig. 3A, items 302, 304);
outputting a first set of recommended actions (fig. 4, item 404; “one or more real-time remedy options” at ¶¶ 104–105) for the alert;
receiving a user-selected action (Yes to fig. 4, item 406; “the user has accepted one or more of the real-time remedy option(s) presented to the user” at ¶ 106) for resolving the alert (intended use in italics);
collecting feedback data (No to fig. 4, item 410; “the user is unsatisfied with the real-time remedial response(s) provided to the user (e.g., shown as the ‘No’ branch of the decision 410)” at ¶ 108) regarding whether the user-selected action resolved the alert;
updating, using algorithms (fig. 3A, items 318–358), action-to-component likelihoods (fig. 4, item 418; “performing the information crowd sourcing process (418). . . . More details on information crowd sourcing for user requests are provided with reference to FIGS. 5, 6A-6C, and 7.” at ¶ 110; fig. 6B, item 628 and Yes to item 630; “able to formulate a response to the user request with the help of the additional information obtained from the crowd sourcing process, the digital assistant can prepare to enter the final stage of providing the crowd sourced response to the user” at ¶ 174) based on the collected feedback data;
modifying future action recommendations (“if the digital assistant determines that the user is satisfied with the crowd sourced response (e.g., based on the user’s feedback), the digital assistant (e.g., the knowledge-base building module 512 shown in FIG. 5) proceeds to record the crowd sourced response, the user request, and/or the queries and answers that contributed to the successful fulfillment of the user request to the crowd-sourced knowledge base (660)” at ¶ 181; “presents the options to the user (404). Examples of real-time remedy options include . . . searching the crowd-sourced knowledge base” at ¶ 104) based on the updated action-to-component likelihoods; and
storing the future action recommendations in an actions library (“crowd-sourced knowledge base” at ¶¶ 104, 181; fig. 6C, item 660) for subsequent alert resolutions (intended use in italics),
Gruber does not teach the algorithms being learning algorithms.
Ahuja teaches learning algorithms (“a machine-learning-based algorithm” at ¶ 51).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for Gruber’s algorithms to be learning algorithms as taught by Ahuja “to train a machine-learning-based algorithm to predict” the optimal future action recommendations. Ahuja ¶ 71.
Regarding claim 2, Gruber teaches wherein the first set of the recommended actions (“presents the options to the user (404). Examples of real-time remedy options include . . . searching the crowd-sourced knowledge base” at ¶ 104; fig. 6B, item 622) is output based on stored action-to-component associations (fig. 3A, item 358; “compile and integrate the answers received for the queries, and formulate a response to the user request based on the integrated answers (628)” at ¶ 167), wherein each action-to-component association represents a relationship between a particular action (“the answers” at ¶ 160; fig. 5, item 522) and a particular IT component (fig. 5, items 518, 520; “those queries” at ¶ 160) that indicates a potential relevance of the particular action for resolving alerts associated with the particular IT component (intended use in italics).
Regarding claim 4, Gruber teaches wherein collecting the feedback data comprises:
receiving explicit user input (No to fig. 4, item 410; “the user is unsatisfied with the real-time remedial response(s) provided to the user (e.g., shown as the ‘No’ branch of the decision 410)” at ¶ 108) indicating whether the user-selected action resolved the alert.
Regarding claim 5, Gruber teaches wherein collecting the feedback data (No to fig. 4, item 410) comprises: determining that the user-selected action (Yes to fig. 4, item 406; “the user has accepted one or more of the real-time remedy option(s) presented to the user” at ¶ 106) resolved the alert based on detecting an absence of additional action requests (Gruber does not teach additional action requests between fig. 4, item 408 and Yes to fig. 6C, item 658) between execution of the user-selected action (fig. 4, item 408) and receipt of alert resolution confirmation (Yes to fig. 6C, item 658).
Regarding claim 6, Gruber teaches wherein collecting the feedback data comprises: tracking a sequence of actions (fig. 4, items 416–420; fig. 6, items 602–656) performed between the user-selected action (fig. 4, item 408) and alert resolution (Yes to fig. 6C, item 658).
Regarding claim 7, Gruber teaches wherein updating the action-to-component likelihoods comprises: increasing a likelihood score when1 the user-selected action resolves the alert.
Regarding claim 9, Gruber teaches wherein modifying the future action recommendations comprises:
reordering recommended actions (“if the digital assistant determines that the user is satisfied with the crowd sourced response (e.g., based on the user’s feedback), the digital assistant (e.g., the knowledge-base building module 512 shown in FIG. 5) proceeds to record the crowd sourced response, the user request, and/or the queries and answers that contributed to the successful fulfillment of the user request to the crowd-sourced knowledge base (660)” at ¶ 181; “presents the options to the user (404). Examples of real-time remedy options include . . . searching the crowd-sourced knowledge base” at ¶ 104) based on respective historical success rates (fig. 4, items 406, 410; fig. 6C, item 658).
Regarding claim 10, Gruber does not teach further comprising: executing the learning algorithms at predefined intervals to update an action-to-component likelihood (intended use in italics).
Ahuja teaches executing a learning algorithm at predefined intervals (“continually update the parameters of the machine-learning-based algorithm every 15 minutes” at ¶ 76) to update an action-to-component likelihood (intended use in italics).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for Gruber to further comprise executing learning algorithms at predefined intervals to update an action-to-component likelihood (intended use in italics) as taught by Ahuja “to train a machine-learning-based algorithm to predict” the optimal future action recommendations. Ahuja ¶ 71.
Regarding claim 12, Gruber teaches further comprising: maintaining separate action-to-component likelihoods (fig. 4, item 418; “More details on information crowd sourcing for user requests are provided with reference to FIGS. 5, 6A-6C, and 7.” at ¶ 110; Yes to fig. 6B, item 630; “able to formulate a response to the user request with the help of the additional information obtained from the crowd sourcing process, the digital assistant can prepare to enter the final stage of providing the crowd sourced response to the user” at ¶ 174) for different alert types (“if the digital assistant determines that the user is satisfied with the crowd sourced response (e.g., based on the user’s feedback), the digital assistant (e.g., the knowledge-base building module 512 shown in FIG. 5) proceeds to record the crowd sourced response, the user request, and/or the queries and answers that contributed to the successful fulfillment of the user request to the crowd-sourced knowledge base (660)” at ¶ 181).
Regarding claim 16, Gruber teaches further comprising: automatically executing actions (fig. 4, item 404) with success rates exceeding a predetermined threshold (Yes at fig. 6C, item 658, and not No at fig. 6C, item 658) for similar future alerts.
Regarding claim 17, Gruber teaches wherein collecting the feedback data comprises: analyzing chains of actions (fig. 4, items 404–414; fig. 6, items 602–656) performed before alert resolution (Yes at fig. 6C, item 658) to identify partially effective actions for future recommendations (intended use in italics).
Regarding claim 18, Gruber teaches a system (fig. 1, item 106; fig. 3A, item 300; “FIG. 4 is a flow diagram illustrating an example process 400 undertaken by a failure management module of a digital assistant (e.g., the failure management module 340 in FIGS. 3A-3B).” at ¶ 102), comprising: a memory (fig. 3A, item 302); and a processor (fig. 3A, item 304), the processor configured to execute instructions stored in the memory to perform operations according to claim 1. Thus, references/arguments equivalent to those present for claim 1 are equally applicable to claim 18.
Regarding claim 19, claim 5 recites substantially similar features. Thus, references/arguments equivalent to those present for claim 5 are equally applicable to claim 19.
Regarding claim 20, Gruber teaches a non-transitory computer readable medium (fig. 3A, item 302) storing instructions operable to cause a processor (fig. 3A, item 304) to perform operations according to claim 1. Thus, references/arguments equivalent to those present for claim 1 are equally applicable to claim 20.
Gruber, Ahuja, and Michelitsch
Claim 13 is rejected under 35 U.S.C. § 103 as being unpatentable over Gruber in view of Ahuja, and further in view of Michelitsch et al. (US 2018/0024578 A1; filed Dec. 30, 2016).
Regarding claim 13, while Gruber inherently teaches a timing between action execution (fig. 4, item 402) and alert resolution (Yes to fig. 4, item 410; Yes to fig. 6C, item 658),
Gruber does not teach identifying implicit feedback based on the timing.
Michelitsch teaches identifying implicit feedback based on a timing (“implicit feedback might be the time it takes from the time the user is presented the recommended item, e.g. a song starts playing, until a user takes action by an explicit feedback, e.g. pressing `like` or `dislike`.” at 7:52–55).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for Gruber to identify implicit feedback based on the timing between action execution and alert resolution as taught by Michelitsch “to provide an improved content recommendation system.” Michelitsch 1:42.
Gruber, Ahuja, and Hobson
Claim 14 is rejected under 35 U.S.C. § 103 as being unpatentable over Gruber in view of Ahuja, and further in view of Hobson et al. (US 2009/0222430 A1; filed Feb. 28, 2008).
Regarding claim 14, Gruber does not teach wherein modifying the future action recommendations comprises removing actions with success rates below a predetermined threshold.
Hobson teaches removing actions with success rates below a predetermined threshold (“replacing one or more recommendations of the initially generated recommendation set by a new recommendation.” at ¶ 70).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for Gruber’s modifying the future action recommendations to comprise removing actions with success rates below a predetermined threshold as taught by Hobson to provide “an improved recommendation system [that] would be advantageous and in particular a recommendation system allowing increased flexibility, facilitated operation, consideration and/or balancing of different characteristics, parameters and/or preferences (such as from users and content providers), improved diversity and/or accuracy of recommendations and/or improved performance.” Hobson ¶ 8.
Allowable Subject Matter
Claims 3, 8, 11, and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicants’ disclosure: US-9465685-B2; US-20210157577-A1; and US-7890483-B1.
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to DAVID P. ZARKA, whose telephone number is (703) 756-5746. The Examiner can normally be reached Monday–Friday, 9:30 AM–6:00 PM ET.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Vivek Srivastava, can be reached at (571) 272-7304. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicants are encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
/DAVID P ZARKA/PATENT EXAMINER, Art Unit 2449
1 The increasing method-step is conditional and, therefore, need not be satisfied to meet claim 7. See Ex parte Schulhauser, No. 2013-007847, 2016 WL 6277792, at *3–5 (PTAB Apr. 28, 2016) (precedential) (holding that in a method claim, a step reciting a condition precedent does not need to be performed if the condition precedent is not met) (available at https://www.uspto.gov/sites/default/files/documents/Ex%20parte%20Schulhauser%202016_04_28.pdf; last visited Feb. 19, 2026); see also MPEP § 2111.04(II) (citing Schulhauser).