Prosecution Insights
Last updated: April 19, 2026
Application No. 18/191,602

AUTOMATED GROUP OF ASSOCIATED ALERTS

Non-Final OA: §101, §102, §103, §112
Filed: Mar 28, 2023
Examiner: BHAT, VIBHA NARAYAN
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Freshworks Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Typical timeline: 3y 3m avg prosecution; 4 applications currently pending
Career history: 4 total applications across all art units

Statute-Specific Performance

§101: 28.6% (-11.4% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§103: 35.7% (-4.3% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)
TC avg = estimated Tech Center average • Based on career data from 0 resolved cases

Office Action

Grounds of rejection: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to the application filed on March 28, 2023. Claims 1-20 are pending and have been examined. Claims 1-20 are rejected.

Information Disclosure Statement

Acknowledgment is made of the information disclosure statements filed March 28, 2023, which comply with 37 CFR 1.97. As such, the information disclosure statements have been placed in the application file and the information referred to therein has been considered by the examiner.

Claim Objections

Claims 3-4, 9-11, and 17-18 are objected to because of the following informalities:

In dependent Claims 3, 10, and 17, the recitation of “the two or more associated alerts are characterized by repeated, simultaneous appearance” is grammatically incorrect and appears to use the word “appearance” instead of the plural word “appearances” to describe repeated (multiple) events in succession. It appears these recitations should read “the two or more associated alerts are characterized by repeated, simultaneous appearances”. Appropriate correction is required.

In Claims 4, 11, and 18, the recitation of “the alert-vectors are numerical representation of the two or more associated alerts” is grammatically incorrect and appears to use the word “representation” instead of the plural word “representations” to describe multiple alert-vectors representing the two or more associated alerts. It appears these recitations should read “the alert-vectors are numerical representations of the two or more associated alerts”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. § 112(b):

(b) CONCLUSION – The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. § 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 8, and 15 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding Claims 1, 8, and 15, the limitation “based on the learning of the learning of the alert-vectors and the n-dimensional representation” does not clearly set the metes and bounds of the patent protection desired. There is insufficient antecedent basis for this limitation in the claims, rendering the claims indefinite because “the learning of the learning” is not clearly defined and does not set forth what is being learned from the second learning process.

Claim Rejections - 35 USC § 101

35 U.S.C. § 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

Step 1: The claim does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter); or,

Step 2: The claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:

Step 2A, Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

MPEP 2106.04(a)(2)(I) states: “The mathematical concepts grouping is defined as mathematical relationships, mathematical formulas or equations, and mathematical calculations.” MPEP 2106.04(a)(2)(III) states: “Accordingly, the ‘mental processes’ abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgements, and opinions.” Further, the MPEP states: “The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation.”

Using the two-step inquiry, it is clear that Claims 1-20 are each directed to an abstract idea as shown below. Please note the following: Claims 1-6 recite identical limitations as Claims 8-13 and Claims 15-20, respectively. Additionally, Claim 7 recites identical limitations as Claim 14. Each group is expressed in a different statutory category.

Claims 1-7 are directed to a computer-implemented method for grouping two or more associated alerts comprising a machine learning method. Claims 8-14 are directed to a computer program embodied on a non-transitory computer readable medium, with the computer program being configured to cause at least one processor to execute a machine learning method. Claims 15-20 are directed to an apparatus configured to group two or more associated alerts comprising: memory comprising a set of instructions; and at least one processor, wherein the set of instructions is configured to cause the at least one processor to execute a machine learning method.

With respect to Claims 1, 8, and 15, which are independent claims with identical claim limitations:

Step 1: Claim 1 is directed to a method, also known as a process, which is one of the four statutory categories of patentable subject matter. Claim 8 is directed to a non-transitory computer readable medium on which a computer program is stored, corresponding to an article of manufacture, which is one of the four statutory categories of patentable subject matter. Claim 15 is directed to an apparatus comprised of memory with a set of instructions and at least one processor, wherein the set of instructions is configured to cause the at least one processor to execute a machine learning method, corresponding to a machine, which is one of the four statutory categories of patentable subject matter.
Step 2A, Prong 1: A judicial exception is recited in the claims as they recite mathematical calculations, data analysis, and mental processes:

“applying, by the ML model, a cosine-similarity or vector similarity metrics to determine the frequency of the occurrence and co-occurrence patterns in the repeat data;”

Recites a mathematical calculation. The use of cosine-similarity or vector similarity metrics to determine the frequency of occurrence and co-occurrence patterns in a set of data consists of a set of mathematical calculations that quantify how alike two vectors are. Determining a frequency of occurrence or co-occurrence patterns of repeat data means data is being repeatedly counted and analyzed for patterns (relationships between the data), which is essentially data analysis.

“and grouping, by the ML model, the repeated data based on the learning of the learning of the alert-vectors and the n-dimensional representation and the applying of the cosine-similar or vector similarity metrics”

Grouping repeated data together based on vector representations and similarity metrics involves the analysis and organization of relationships between data elements, which is essentially data analysis. Grouping related data together can also be performed as a mental process within the human mind. The application of cosine-similarity or vector similarity metrics is essentially a mathematical calculation since it is calculating the numerical similarities (distances) between vector entities.

Step 2A, Prong 2: No, the abstract ideas in the limitations are not integrated into a practical application:

“learning, by a machine learning (ML) model, alert-vectors and a n-dimensional representation in a vector space to determine a frequency of occurrence and cooccurrence patterns of repeat data;”

Recites the training of an ML model in order to determine the patterns within repeated data, which only amounts to “apply it” and is using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)(1). Determining a frequency of occurrence or co-occurrence patterns of repeat data means data is being repeatedly counted and analyzed for patterns (relationships between the data), which is well-understood, routine, and conventional activity of observing patterns within data – see MPEP 2106.05(d).

Step 2B: No, the additional elements of Claims 1, 8, and 15 do not provide significantly more than the abstract idea itself. An ML model learning (being trained) on alert-vectors and n-dimensional representations to determine a frequency of co-occurrence patterns only amounts to “apply it” and mere instructions to implement an abstract idea on a computer – see MPEP 2106.05(f)(1). An ML model applying cosine or vector similarity metrics to determine the frequency of co-occurrence patterns in repeat data only amounts to “apply it” and mere instructions to implement an abstract idea on a computer – see MPEP 2106.05(f)(1). An ML model grouping repeated data based on its learning of alert-vectors and n-dimensional representations and applying cosine or vector similarity metrics only amounts to “apply it” and mere instructions to implement an abstract idea on a computer – see MPEP 2106.05(f)(1).

Therefore, Claims 1, 8, and 15 are directed to non-statutory subject matter and rejected.
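For reference, the cosine similarity the rejected limitation names is the textbook vector-space formula, stated here purely for context (it is the standard definition, not a characterization of the applicant's particular implementation). For alert-vectors a and b in an n-dimensional space:

\[
\operatorname{sim}(\mathbf{a},\mathbf{b}) = \cos\theta = \frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert} = \frac{\sum_{i=1}^{n} a_i b_i}{\sqrt{\sum_{i=1}^{n} a_i^{2}}\,\sqrt{\sum_{i=1}^{n} b_i^{2}}}
\]

Values near 1 indicate near-parallel (highly similar) vectors; values near 0 indicate orthogonal (unrelated) vectors.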
With respect to Claims 2, 9, and 16, which have identical claim limitations and are dependent upon Claims 1, 8, and 15, respectively:

Step 1: Claim 2 is directed to a method, also known as a process, which is one of the four statutory categories of patentable subject matter. Claim 9 is directed to a non-transitory computer readable medium on which a computer program is stored, corresponding to an article of manufacture, which is one of the four statutory categories of patentable subject matter. Claim 16 is directed to an apparatus comprised of memory with a set of instructions and at least one processor, wherein the set of instructions is configured to cause the at least one processor to execute a machine learning method, corresponding to a machine, which is one of the four statutory categories of patentable subject matter.

Step 2A, Prong 1: A judicial exception is not recited in the claims as they do not recite an abstract idea (mathematical concepts, certain methods of organizing human activity, or mental processes), law of nature, or natural phenomenon.

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application:

“wherein the learning of the alert-vectors and the n-dimensional representation comprising learning, by the ML model, using co-occurrence patterns of the two or more associated alerts”

Recites the training of an ML model on alert-vectors and n-dimensional representation through the use of co-occurrence patterns of the two or more associated alerts, which only amounts to “apply it” and is using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)(1).

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception. The training of an ML model on alert-vectors and n-dimensional representation through the use of co-occurrence patterns of two or more associated alerts involves mere instructions that are implemented by a computer – see MPEP 2106.05(f).

Therefore, Claims 2, 9, and 16 are directed to non-statutory subject matter and rejected.
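As context for the “co-occurrence patterns” limitation just analyzed, the following minimal sketch shows one way co-occurrence of alerts can be counted. It is illustrative only (not taken from the application or the cited art); the alert names and the time-window scheme are hypothetical.

```python
from collections import Counter
from itertools import combinations

# Hypothetical input: each inner list holds the alert types observed in one
# time window (e.g., one 5-minute monitoring interval).
windows = [
    ["disk_full", "db_latency"],
    ["disk_full", "db_latency", "cpu_spike"],
    ["cpu_spike"],
    ["disk_full", "db_latency"],
]

# Count how often each pair of distinct alerts appears in the same window.
co_occurrence = Counter()
for window in windows:
    for a, b in combinations(sorted(set(window)), 2):
        co_occurrence[(a, b)] += 1

# Pairs that repeatedly appear together suggest an association.
print(co_occurrence.most_common())
# [(('db_latency', 'disk_full'), 3), (('cpu_spike', 'db_latency'), 1), ...]
```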
With respect to Claims 3, 10, and 17, which have identical claim limitations and are dependent upon Claims 2, 9, and 16, respectively:

Step 1: Claim 3 is directed to a method, also known as a process, which is one of the four statutory categories of patentable subject matter. Claim 10 is directed to a non-transitory computer readable medium on which a computer program is stored, corresponding to an article of manufacture, which is one of the four statutory categories of patentable subject matter. Claim 17 is directed to an apparatus comprised of memory with a set of instructions and at least one processor, wherein the set of instructions is configured to cause the at least one processor to execute a machine learning method, corresponding to a machine, which is one of the four statutory categories of patentable subject matter.

Step 2A, Prong 1: The claims recite an abstract idea. A judicial exception is recited in the claims as they recite mental processes:

“wherein the two or more associated alerts are characterized by repeated, simultaneous appearance, indicating a significant correlation between the two or more associated alerts”

Observing the repetition of two or more associated alerts and figuring out if there is a relationship between them is a mental process that can be performed in the human mind (this includes an observation, evaluation, judgement, or opinion).

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception.

Therefore, Claims 3, 10, and 17 are directed to non-statutory subject matter and rejected.

With respect to Claims 4, 11, and 18, which have identical claim limitations and are dependent upon Claims 2, 9, and 16, respectively:

Step 1: Claim 4 is directed to a method, also known as a process, which is one of the four statutory categories of patentable subject matter. Claim 11 is directed to a non-transitory computer readable medium on which a computer program is stored, corresponding to an article of manufacture, which is one of the four statutory categories of patentable subject matter. Claim 18 is directed to an apparatus comprised of memory with a set of instructions and at least one processor, wherein the set of instructions is configured to cause the at least one processor to execute a machine learning method, corresponding to a machine, which is one of the four statutory categories of patentable subject matter.

Step 2A, Prong 1: The claims recite an abstract idea. A judicial exception is recited in the claims as they recite mathematical calculations:

“wherein the alert-vectors are numerical representation of the two or more associated alerts, where n numerical values to represent the two or more associated alerts”

Alerts represented as numerical representations (alert-vectors) involve a mathematical calculation where words from alert messages are converted into a set of numbers arranged as vectors with n dimensions.

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception.

Therefore, Claims 4, 11, and 18 are directed to non-statutory subject matter and rejected.
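To illustrate what an “alert-vector” of n numerical values can look like in practice, here is a minimal count-vectorization sketch in the general style Li's Paragraph 0138 mentions. It is illustrative only; the alert messages and whitespace tokenization are hypothetical, and the application's actual vectorization may differ.

```python
# Hypothetical alert messages; vocabulary and names are illustrative only.
alerts = [
    "database connection timeout on host-a",
    "database connection refused on host-b",
]

# Build a shared vocabulary: one axis per distinct token (n dimensions).
vocab = sorted({token for msg in alerts for token in msg.split()})

def count_vectorize(message: str) -> list[int]:
    """Map an alert message to n numerical values (a count vector)."""
    tokens = message.split()
    return [tokens.count(term) for term in vocab]

vectors = [count_vectorize(msg) for msg in alerts]
print(len(vocab), vectors)  # n, and one n-dimensional vector per alert
```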
With respect to Claims 5, 12, and 19, which have identical claim limitations and are dependent upon Claims 4, 11, and 18, respectively:

Step 1: Claim 5 is directed to a method, also known as a process, which is one of the four statutory categories of patentable subject matter. Claim 12 is directed to a non-transitory computer readable medium on which a computer program is stored, corresponding to an article of manufacture, which is one of the four statutory categories of patentable subject matter. Claim 19 is directed to an apparatus comprised of memory with a set of instructions and at least one processor, wherein the set of instructions is configured to cause the at least one processor to execute a machine learning method, corresponding to a machine, which is one of the four statutory categories of patentable subject matter.

Step 2A, Prong 1: The claims recite an abstract idea. A judicial exception is recited in the claims as they recite a mathematical calculation:

“selecting the n numerical values by computing a cosine similarity of the two or more associated alerts to produce a large value when the two or more associated alerts are related”

Recites the computing of a cosine similarity of the two or more associated alerts, which is a mathematical calculation.

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception.

Therefore, Claims 5, 12, and 19 are directed to non-statutory subject matter and rejected.

With respect to Claims 6, 13, and 20, which have identical claim limitations and are dependent upon Claims 2, 9, and 16, respectively:

Step 1: Claim 6 is directed to a method, also known as a process, which is one of the four statutory categories of patentable subject matter. Claim 13 is directed to a non-transitory computer readable medium on which a computer program is stored, corresponding to an article of manufacture, which is one of the four statutory categories of patentable subject matter. Claim 20 is directed to an apparatus comprised of memory with a set of instructions and at least one processor, wherein the set of instructions is configured to cause the at least one processor to execute a machine learning method, corresponding to a machine, which is one of the four statutory categories of patentable subject matter.

Step 2A, Prong 1: The claims recite an abstract idea. A judicial exception is recited in the claims as they recite mathematical calculations:

“wherein the n-dimensional vector space comprises a vector in the space having n components and n axes representing the space”

An “n-dimensional vector space” is a mathematical construct in which a vector in the space, having n components and n axes representing the space, represents data through a mathematical model. Data is represented through abstract, mathematical calculations.

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception.

Therefore, Claims 6, 13, and 20 are directed to non-statutory subject matter and rejected.

With respect to Claims 7 and 14, which have identical claim limitations and are dependent upon Claims 1 and 8, respectively:

Step 1: Claim 7 is directed to a method, also known as a process, which is one of the four statutory categories of patentable subject matter. Claim 14 is directed to a non-transitory computer readable medium on which a computer program is stored, corresponding to an article of manufacture, which is one of the four statutory categories of patentable subject matter.

Step 2A, Prong 1: The claims recite an abstract idea. A judicial exception is recited in the claims as they recite mental processes and data classification:

“refreshing, by the ML model, the grouping of the repeated data when one or more associated alerts are detached from one or more incidents”

An ML model refreshing a grouping of repeated data triggered by the detachment of one or more associated alerts from one or more incidents is a mental process. Observing a list of alerts, making a judgement of which alerts are no longer associated with an incident, and then regrouping the updated data is a manual and mental process that can be performed in the human mind. Regrouping repeated data can also be considered data classification.

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception.

Therefore, Claims 7 and 14 are directed to non-statutory subject matter and rejected.
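For context on the grouping step analyzed above (the grouping limitation of Claims 1, 8, and 15 and the “large value when related” cosine test of Claims 5, 12, and 19), the sketch below shows one simple way such a step can be realized: a greedy rule that puts alerts in the same group when their cosine similarity clears a threshold. This is an illustrative assumption, not the applicant's or Li's actual algorithm; the threshold value and greedy strategy are hypothetical.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def group_alerts(vectors: list[list[float]], threshold: float = 0.8) -> list[list[int]]:
    """Greedy single-pass grouping: an alert joins the first existing group
    whose representative it matches above the threshold, else starts a new group."""
    groups: list[list[int]] = []
    for i, vec in enumerate(vectors):
        for group in groups:
            if cosine_similarity(vec, vectors[group[0]]) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

# Example: alerts 0 and 1 are near-duplicates; alert 2 is unrelated.
print(group_alerts([[1, 1, 0], [1, 0.9, 0], [0, 0, 1]]))  # [[0, 1], [2]]
```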
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6, 8-13, and 15-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Li (U.S. Patent Application Publication No. US20240256951A1). Li was filed on 1/27/2023, and this date is before the earliest effective filing date of this application, i.e., 3/28/2023. Therefore, Li constitutes prior art under 35 U.S.C. 102(a)(2).

With respect to Claims 1, 8, and 15, Li teaches:

“learning, by a machine learning (ML) model, alert-vectors and a n-dimensional representation in a vector space to determine a frequency of occurrence and cooccurrence patterns of repeat data;”

(Paragraph 0032 teaches the training of an ML model using historical alerts (repeated data) and the generation of embeddings (also known as numerical vector representations in dimensional spaces within the field of natural language processing (NLP)) that can be compared to each other in order to identify related alerts. Paragraph 0209 teaches the existence of training datum that includes a series of alert texts obtained from historical alerts, meaning the ML model is trained on repeated data, inherently capturing occurrence and co-occurrence patterns of the repeated data.)

“applying, by the ML model, a cosine-similarity or vector similarity metrics to determine the frequency of the occurrence and co-occurrence patterns in the repeat data;”

(Paragraph 0214 teaches the application of an ML model to determine whether an embedding corresponding to the alert meets a similarity threshold. Paragraph 0138 further clarifies: “Any number of techniques can be used to vectorize the word vector, such as count vectorization…or other techniques…cosine similarities or cosine similarity is a metric used to measure the similarity between features of two items, documents, or the like, represented as feature vectors. The technique of cosine similarities measures the cosine of an angle between two vectors of data projected in multi-dimensional space. This measurement allows a measure the similarity of a document of any type, such as of the alerts disclosed herein.”)

“and grouping, by the ML model, the repeated data based on the learning of the learning of the alert-vectors and the n-dimensional representation and the applying of the cosine-similar or vector similarity metrics”

(Paragraph 0183 teaches using a neural network (a type of ML model) to generate embeddings (vectors in a dimensional space) through the application of vector similarity techniques.
Paragraph 0032 further teaches using ML to identify related alerts (groupable objects) and group alerts, reducing the number of incidents triggered by alerts.)

Therefore, Claims 1, 8, and 15 are rejected.

With respect to Claims 2, 9, and 16, Li teaches:

“wherein the learning of the alert-vectors and the n-dimensional representation comprising learning, by the ML model, using co-occurrence patterns of the two or more associated alerts”

(Paragraph 0032 teaches the training of an ML model using historical alerts (repeated data) and the generation of embeddings (also known as numerical vector representations in dimensional spaces within the field of natural language processing (NLP)) that can be compared to each other in order to identify related alerts. Paragraph 0209 teaches the existence of training datum that includes a series of alert texts obtained from historical alerts, meaning the ML model is trained on repeated data and embeddings, inherently capturing occurrence and co-occurrence patterns of the repeated data. Further, Paragraph 0141 describes the ML model as being trained on historical correlations and co-occurrence of alerts to determine whether two alerts are related.)

Therefore, Claims 2, 9, and 16 are rejected.

With respect to Claims 3, 10, and 17, Li teaches:

“wherein the two or more associated alerts are characterized by repeated, simultaneous appearance, indicating a significant correlation between the two or more associated alerts”

(Paragraph 0027 teaches a high number of alerts or incidents received within a period of time (time window) having related causes or symptoms. In this case, these repeated events appearing may be similar or correlated, in which case it would be beneficial to identify and group the related events.)

Therefore, Claims 3, 10, and 17 are rejected.

With respect to Claims 4, 11, and 18, Li teaches:

“wherein the alert-vectors are numerical representation of the two or more associated alerts, where n numerical values to represent the two or more associated alerts”

(Paragraph 0141 teaches a graph-based neural network, which is an ML model, trained to build a model that can obtain a numerical representation of each alert.)

Therefore, Claims 4, 11, and 18 are rejected.

With respect to Claims 5, 12, and 19, Li teaches:

“selecting the n numerical values by computing a cosine similarity of the two or more associated alerts to produce a large value when the two or more associated alerts are related”

(Paragraph 0214 teaches the application of an ML model to determine whether an embedding corresponding to the alert meets a similarity threshold. Paragraph 0138 further clarifies: “Any number of techniques can be used to vectorize the word vector, such as count vectorization…or other techniques…cosine similarities or cosine similarity is a metric used to measure the similarity between features of two items, documents, or the like, represented as feature vectors. The technique of cosine similarities measures the cosine of an angle between two vectors of data projected in multi-dimensional space. This measurement allows a measure the similarity of a document of any type, such as of the alerts disclosed herein.” A cosine similarity can be computed to measure the similarity between two or more alerts to produce a measurement value signaling similarity strength.)

Therefore, Claims 5, 12, and 19 are rejected.
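As a reading aid for Li's Paragraph 0214 (checking whether a new alert's embedding meets a similarity threshold before grouping it with an incident), here is a hedged sketch of that routing pattern. The function names, data layout, and threshold value are hypothetical, not Li's actual code.

```python
# Illustrative only: thresholded alert-to-incident routing in the style
# Li ¶ 0214 describes. All names and the 0.85 threshold are hypothetical.
SIMILARITY_THRESHOLD = 0.85

def route_alert(alert_embedding, incidents, similarity_fn):
    """Attach the alert to the best-matching incident if its embedding
    meets the similarity threshold; otherwise open a new incident."""
    best_incident, best_score = None, 0.0
    for incident in incidents:
        score = similarity_fn(alert_embedding, incident["embedding"])
        if score > best_score:
            best_incident, best_score = incident, score
    if best_incident is not None and best_score >= SIMILARITY_THRESHOLD:
        best_incident["alerts"].append(alert_embedding)  # group with incident
        return best_incident
    new_incident = {"embedding": alert_embedding, "alerts": [alert_embedding]}
    incidents.append(new_incident)
    return new_incident
```

The `similarity_fn` parameter can be any vector metric, e.g., the `cosine_similarity` helper sketched earlier.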
With respect to Claims 6, 13, and 20, Li teaches:

“wherein the n-dimensional vector space comprises a vector in the space having n components and n axes representing the space”

(Paragraph 0197 teaches an example of a graph-based learning method that can learn latent representations of vertices in a graph that encode relations in a continuous vector space. Having n components and n axes representing the space is standard in describing a vector. “n” is just a symbol for a set number: the number of features (dimensions) in the vector.)

Therefore, Claims 6, 13, and 20 are rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Li (U.S. Patent Application Publication No. US20240256951A1, filed on 1/27/2023), in view of Israel et al. (WO Patent Application Publication No. WO2021211212A1, with an earliest priority date of 4/17/2020, hereinafter “Israel”).
With respect to Claims 7 and 14, Li does not appear to explicitly disclose:

“refreshing, by the ML model, the grouping of the repeated data when one or more associated alerts are detached from one or more incidents”

However, Israel teaches:

“refreshing, by the ML model, the grouping of the repeated data when one or more associated alerts are detached from one or more incidents”

(Paragraph 0054 teaches the ability of an analyst (user) to perform at least one alert-incident grouping action, such as removing (detaching) an alert from an incident. Paragraph 0071 teaches an example of training the ML model based on data that explicitly groups alerts with incidents. Explicit grouping means an analyst (user) performed a custom action that identifies both an alert and an incident, such as detachment of the alert from the incident or “remove alert 456 from incident”. This means the ML model can be refreshed (trained) with updated groupings of repeated data when an analyst detaches one or more associated alerts from one or more incidents.)

It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the present application to implement Claims 7 and 14 by combining the teachings of Li and the teachings of Israel, which are both in the same field of invention. Li teaches grouping related alerts with incidents by an ML model, but does not explicitly describe the grouping refreshing when an alert is detached from an incident. However, Israel teaches training an ML model on historical grouping actions by an analyst (like removing an alert from an incident) and using the trained model to produce periodic incident updates to predict grouping actions. A PHOSITA would have been motivated to modify the associated alert grouping system taught by Li with the machine learning training method taught by Israel in order to improve alert-incident grouping accuracy when an alert-incident grouping is manually modified (detaching an alert from an incident) and to improve overall incident management system responsiveness and automation (instead of relying on static, manual rules). This would incorporate a learning process where a machine learning model takes negative feedback into account, which is a known procedure in ML.

Therefore, Claims 7 and 14 are rejected.
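To make the combined Li/Israel teaching concrete (a detachment treated as negative feedback that triggers a refresh of the grouping), here is an illustrative sketch. The `model.train` and `model.regroup` calls are hypothetical stand-ins for whatever update procedure an actual system would use; nothing here is taken from either reference's code.

```python
# Illustrative only: log an analyst's detach action as a negative training
# pair, then refresh the grouping, in the style Israel describes.
feedback_pairs: list[tuple[str, str, int]] = []

def detach_alert(alert_id: str, incident_id: str, incidents: dict) -> None:
    """Remove an alert from an incident and log it as negative feedback."""
    incidents[incident_id]["alerts"].remove(alert_id)
    feedback_pairs.append((alert_id, incident_id, 0))  # label 0 = unrelated

def refresh_groupings(model, incidents: dict):
    """Retrain on accumulated feedback, then regroup the affected alerts."""
    model.train(feedback_pairs)      # hypothetical model API
    return model.regroup(incidents)  # hypothetical model API
```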
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Vibha Bhat, whose telephone number is (571) 272-7091. The examiner can normally be reached Monday – Thursday from 8:00 AM to 5:00 PM EST and every other Friday from 8:00 AM to 4:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or (571) 272-1000.

/Vibha Bhat/
Examiner, Art Unit 2142

/Mariela Reyes/
Supervisory Patent Examiner, Art Unit 2142

Prosecution Timeline

Mar 28, 2023
Application Filed
Feb 02, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
