DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Myron et al. (US 20250218057 A1) in view of Yeddu et al. (US 20230103840 A1).
Regarding Claim 8, Myron discloses A computer system for automatically generating an image notification corresponding to a computer alert (ABST reciting “A notification system may configure components and models to receive report data and analyze the report data for context data. . . The system may generate an image text prompt based on the context data. The system may use the image text prompt as input for an artificial intelligence (AI) image generator and receive an emotion-inciting image as output.”), comprising:
one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage devices, and program instructions stored on at least one of the one or more storage devices for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing operational steps (Figs. 1-3 showing the system. Fig. 2 showing a user device 200 comprising processor(s) 212, storages 214, 216 and memory 202. Fig. 3 showing a serving device 300 comprising processor(s) 316, storages 318, 320 and memory 302. ¶102 reciting “the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations.”) comprising:
based on receiving the computer alert on a computing device (¶22 reciting “the notification system component 108 and the emotion-inciting image generator component 110 may receive report data, determine context data from the report data, and generate an emotion-inciting image based on the context data.” Further, ¶23 reciting “The notification system component 108 may receive report data . . . The report data may be received as internal data generated by the present system . . . Examples of the report data may include, but are not limited to: network performance data,”), automatically determining a computer component corresponding to the received computer alert, and automatically identifying a sentiment associated with the received computer alert (¶43 reciting “The emotion-inciting image generator component 110 determines the context data indicating the alert context as network status, the tone as negative tone, and the severity level as high.”); and
based on the determined computer component and the identified sentiment associated with the received computer alert, automatically generating the image notification corresponding to the received computer alert (¶43 reciting “The emotion-inciting image generator component 110 can use the context data to generate the image text prompt as “large dumpster fire.” The notification system component 108 transmits the notification with the example emotion-inciting image 114 to the example user device 104(1). The example emotion-inciting image 114 illustrates a dumpster fire with the report data to incite action from the user for repairing a total network failure.”),
wherein automatically generating the image notification further comprises automatically generating an image representing the determined computer component, and converting and incorporating the identified sentiment as an image feature in the generated image. (¶66 disclosing automatically generating an image alert, and reciting “The system analyzer component 304 may include functionality to trigger an output of the report data. The triggering of the output may be caused by . . . an automatic trigger in response to meeting one or more notification trigger conditions. The system analyzer component 304 may store the notification trigger conditions, which may include conditions based on data thresholds (e.g., percentage of system failure, data transmission rate falling below a value, etc.), meeting a set of conditions (e.g., completion of a set of tasks), and/or time intervals (e.g., biweekly, quarter, etc.).” Fig. 4 showing a generated image notification including notification elements 402, 404, 406, and 408. Element 402 corresponds to the image 114 recited in ¶43.)
However, Myron does not explicitly disclose generating an image notification in real-time.
It is well known in the art to generate real-time alerts on system/network failures. In addition, Yeddu teaches “systems for automatically alerting system activities and status in real-time based on logs sentiment analysis by cognitive analytics and machine learning.” (¶1). Further, ¶46 recites “The alert generator 170 generates the exemplary alerts 400 subsequent to block 330 by the cognitive alert curator 160, determining the alert as of the Real-time type from block 320 based on a preconfigured set of KPIs”.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system (taught by Myron) to generate the alert notification in real-time (taught by Yeddu). The suggestions/motivations would have been for “improving performance of the IT platforms when a remedial measure according to the logs are timely applied to the IT platforms.” (¶2), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Regarding Claim 9, Myron in view of Yeddu discloses The computer system of claim 8, further comprising: automatically presenting the generated image notification on the computing device. (Myron, ¶53 reciting “the notification system component 310 on the serving device 300 may cause the notification system component 210 to present, via a display of the user device 200, a user interface with the notification with the emotion-inciting image.”)
Regarding Claim 10, Myron in view of Yeddu discloses The computer system of claim 9, wherein automatically determining the computer component corresponding to the received computer alert and automatically identifying the sentiment associated with the received computer alert is further performed using machine learning (ML) algorithms and natural language processing (NLP) algorithms. (Myron, ¶12 reciting “the image-generating model may train one or more machine learning (ML) models, including a text parser, to analyze the report data and determine context data associated with the content of the report data. The context data may include any data associated with contextual information or content of the report data. Examples of the context data may include but are not limited to, alert context, subscriber context, tone, severity level”. In addition, Yeddu, ¶17 reciting “The cognitive alert system 110 is operatively coupled to external tools including, but not limited to, natural language processing (NLP) tools 113 and cognitive analytics/machine learning (CA/ML) tools 115.”; and further ¶21 reciting “the sentiment analyzer 140 may employ SentiWordNet which is an opinion lexicon derived from the WordNet database where each term is associated with numerical scores indicating positive and negative sentiment information often used for sentiment classification in NLP along with the log message lexicon.” The suggestions/motivations would have been the same as that of the Claim 8 rejections.)
Regarding Claim 11, Myron in view of Yeddu discloses The computer system of claim 10, wherein automatically identifying the sentiment associated with the received computer alert further comprises:
training the ML algorithms and the NLP algorithms to identify and associate different sentiments with different text detected in different computer alerts, wherein the different sentiments comprise different emotional tones associated with the different text; (Yeddu, ¶21 reciting “the sentiment analyzer 140 may employ SentiWordNet which is an opinion lexicon derived from the WordNet database where each term is associated with numerical scores indicating positive and negative sentiment information often used for sentiment classification in NLP along with the log message lexicon. Accordingly ordinary words in log messages can be readily scored for a sentiment and classified based on SentiWordNet, while for a sentiment of any context specific words in the log messages a SentiWordNet score would be assessed and/or weighted based on a context of a log message for accuracy. In certain embodiments of the present invention, a sentiment value associated with a distinct message is preconfigured to one of binary values {Positive, Negative}, where the sentiment value of the distinct message is assigned to “Positive” if the distinct message represents a positive event or activity in the computing platform, and if the distinct message represents a negative event or activity in the computing system, then the sentiment value of the distinct message would be assigned to “Negative”. The determination on negative/positive activities and events are trained into a machine learning model based on historical logs and user responses corresponding to each of the historical logs.” The suggestions/motivations would have been the same as that of Claim 8 rejections.) and
applying the trained ML algorithms and the trained NLP algorithms to the received computer alert to automatically identify the sentiment among the different sentiments, and wherein automatically identifying the sentiment further comprises identifying an emotional tone of the received computer alert. (Myron, ¶43 reciting “The emotion-inciting image generator component 110 determines the context data indicating the alert context as network status, the tone as negative tone, and the severity level as high.”)
Regarding Claim 12, Myron in view of Yeddu discloses The computer system of claim 8, wherein automatically generating the image in real-time further comprises:
identifying and associating an image parameter with the determined computer component; (Myron, ¶43 reciting “The emotion-inciting image generator component 110 determines the context data indicating the alert context as network status, the tone as negative tone, and the severity level as high. The emotion-inciting image generator component 110 can use the context data to generate the image text prompt as “large dumpster fire.””) and
automatically generating the image in real-time for the computer component based on the image parameter. (Myron, ¶84 reciting “The emotion-inciting image generator component 312 and/or the notification system component 310 may use the image text prompt as input for an AI image generator and receive an emotion-inciting image as output.”)
Regarding Claim 13, Myron in view of Yeddu discloses The computer system of claim 8, further comprising:
based on receiving a second computer alert associated with the determined computer component, wherein the second computer alert comprises content similar to the received computer alert, automatically generating in real-time a new image notification corresponding to the second computer alert, wherein automatically generating the new image notification further comprises automatically generating a new image different from the generated image associated with the received computer alert. (Myron, ¶11 disclosing a second alert issued from the same type of report data, and reciting “The notification trigger conditions may include conditions based on predetermined data thresholds (e.g., percentage of system failure, data transmission rate falling below a threshold, etc.), meeting a set of conditions (e.g., completion of a group of tasks), and/or time intervals (e.g., biweekly, quarter, etc.).” Further, ¶13 reciting “The severity level may be expressed as a relative term including, but not limited to, extreme, strong, moderate, mild, weak, and the like.” Furthermore, ¶15 reciting “the image-generating model may use any portion of the context data, including one or more of . . ., and the severity level for generating an image text prompt.” In other words, a second alert from the same type of report data (e.g., network status) may contain similar alert context with a different severity level, and thus cause generation of a new image different from the first generated image.)
Regarding Claim 14, Myron in view of Yeddu discloses The computer system of claim 13, wherein the received computer alert and the second computer alert are each presented with text describing the determined computer component and activity associated with the determined computer component. (See the Claim 13 rejections for detailed analysis.)
Claims 1 and 15 recite limitations similar to those of Claim 8 and are therefore rejected under the same rationale as Claim 8.
Claim 2 recites limitations similar to those of Claim 9 and is therefore rejected under the same rationale as Claim 9.
Claims 3 and 16 recite limitations similar to those of Claim 10 and are therefore rejected under the same rationale as Claim 10.
Claims 4 and 17 recite limitations similar to those of Claim 11 and are therefore rejected under the same rationale as Claim 11.
Claims 5 and 18 recite limitations similar to those of Claim 12 and are therefore rejected under the same rationale as Claim 12.
Claims 6 and 19 recite limitations similar to those of Claim 13 and are therefore rejected under the same rationale as Claim 13.
Claims 7 and 20 recite limitations similar to those of Claim 14 and are therefore rejected under the same rationale as Claim 14.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YI WANG, whose telephone number is (571)272-6022. The examiner can normally be reached 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YI WANG/Primary Examiner, Art Unit 2619