Prosecution Insights
Last updated: April 19, 2026
Application No. 18/756,493

APPARATUSES, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR PROCESSING SERVICE MESSAGE DATA OBJECTS VIA SUPERVISED MACHINE LEARNING TO PROVIDE SERVICE MESSAGE CLASSIFICATIONS

Non-Final OA: §101, §102, §103
Filed: Jun 27, 2024
Examiner: LAM, PHILIP HUNG FAI
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Atlassian Inc.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (107 granted / 129 resolved), above average, +20.9% vs TC avg
Interview Lift: +45.5% in resolved cases with an interview (chart compared allowance with vs. without interview)
Typical Timeline: 2y 8m average prosecution; 29 applications currently pending
Career History: 158 total applications across all art units

Statute-Specific Performance

§101: 23.7% (-16.3% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 5.3% (-34.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 129 resolved cases
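As a sanity check on the figures above, each statute's delta is simply the examiner's rate minus the Tech Center average, so the baseline can be recovered as rate minus delta. Interestingly, all four listed deltas imply the same ~40% baseline; the snippet below is only an illustration of that arithmetic, not a published USPTO statistic:

```python
# Per-statute rates and "vs TC avg" deltas, as listed above (percentages).
rates = {"101": 23.7, "103": 53.7, "102": 11.1, "112": 5.3}
deltas = {"101": -16.3, "103": 13.7, "102": -28.9, "112": -34.7}

# Implied Tech Center baseline for each statute: rate minus delta.
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```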

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Introduction

This office action is in response to Applicant’s submission filed on 6/27/2024. As such, claims 1-20 have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 8-12, and 15-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites an apparatus that, under the broadest reasonable interpretation, claims limitations that cover performance of the limitations in the human mind with the assistance of physical aids (e.g., pen and paper), but for the recitation of generic or well-known or conventional computer components. That is, other than reciting “at least one processor, storage device storing instructions, application framework, a supervised natural language processing model and electronic interface,” nothing in these claim limitations precludes the steps from practically being performed in the mind. As a whole, claim 1 pertains to processing information, classifying or categorizing text data, and presenting the result, which is a mental process that a human can do. Individually, each of the limitations also pertains to a mental process and/or insignificant extra-solution activity, for example:

extract a feature set from a plurality of service message data objects associated with an application framework; (e.g., a processing step; a human can read a service message and extract features (identify keywords).)
apply a supervised natural language processing model to the feature set to generate a plurality of classification data objects associated with the plurality of service message data objects that classify a respective service message data object as belonging to a predefined class of a plurality of predefined classes; (e.g., analysis/judgment of a message; a human categorizes data according to the category or class it belongs to.) [a supervised NLP model is a generic computing component processing an input without details of how it is specially being used or trained.]

and initiate a rendering of a dashboard visualization via an electronic interface based at least in part on the plurality of classification data objects. (e.g., presentation of the result; a human can draw the result using paper and pen.) [an electronic interface is a generic computing component to display the result without specifics on the presentation of the result]

The judicial exception is not integrated into a practical application. In particular, the claims recite only generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of receiving, determining, or outputting information) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations of using generic computer components amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Claim 1 is not patent eligible.

The examiner further notes that the use of claimed generic computer components (“at least one processor, storage device storing instructions, application framework, a supervised natural language processing model and electronic interface”) invokes such generic computer components “merely as a tool to perform an existing process”. MPEP 2106.05(f). MPEP 2106.05(f) further explains: Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015).
Claim 1 recites generic computer components (“at least one processor, storage device storing instructions, application framework, a supervised natural language processing model and electronic interface”) with respect to performing tasks. MPEP 2106.05(d) and (f) further provide examples of court decisions where the courts found generic computing components to be mere instructions to apply a judicial exception, and further explain that “increased speed” (e.g., using a computer to increase the speed of an otherwise mental process) does not provide an inventive concept. For example:

A commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).

A process for monitoring audit log data that is executed on a general-purpose computer where the increased speed in the process comes solely from the capabilities of the general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016) (emphasis added).

Performing repetitive calculations, Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012) ("The computer required by some of Bancorp’s claims is employed only for its most basic function, the performance of repetitive calculations, and as such does not impose meaningful limits on the scope of those claims.")

Claim 8 recites a method claim that corresponds to the apparatus of claim 1 and is therefore rejected under the same grounds as claim 1 above. Claim 8 is not patent eligible. Claim 15 recites a computer program product claim that corresponds to the apparatus of claim 1 and is therefore rejected under the same grounds as claim 1 above.
While claim 15 further recites a “computer readable storage medium” and “program code instructions”, these are merely generic computer components recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Therefore, none of these limitations (a) integrate the abstract idea into a practical application, because they do not impose any meaningful limits on practicing the abstract idea, or (b) amount to significantly more than the judicial exception, because in either case the additional limitations merely utilize generic computer components that amount to no more than mere instructions to apply the exception using generic computer functions. Claim 15 is not patent eligible.

Claims 2-5 depend from independent claim 1, do not remedy any of the deficiencies of claim 1, and therefore are rejected on the same grounds as claim 1 above.

Claim 2 further recites: wherein the plurality of service message data objects respectively comprise at least a description data field associated with a service request by a user identifier, and wherein the one or more storage devices store instructions are operable, when executed by the one or more processors, to further cause the one or more processors to: extract the feature set from the plurality of service message data objects by extracting the description data field from the respective service message data objects. (e.g., a processing step; a human can identify the description from the description data field.)

Claim 3 further recites: wherein the predefined class is representative of a reason for making a service request. (e.g., the human classifying information)

Claim 4 further recites: wherein the dashboard visualization comprises at least one module, and wherein the at least one module is configured to display a predetermined format for displaying data based on the plurality of classification data objects.
(e.g., formatting data; a human can format data for display.) [module in this context can be a generic computer component]

Claim 5 further recites: wherein the dashboard visualization comprises a module configured to display a proportion of service message data objects associated with a respective classification. (e.g., displaying the result; a human can display the part of the messages associated with the category.) [module in this context can be a generic computer component]

Claims 9-12 recite computer-implemented method claims that correspond to the apparatus claims 2-5 and are therefore rejected under the same grounds as claims 2-5 above. Claims 16-19 recite computer program product claims that correspond to the apparatus claims 2-5 and are therefore rejected under the same grounds as claims 2-5 above.

In sum, claims 2-5, 9-12 and 16-19 depend from claims 1, 8 and 15, and further recite mental processes as explained above. None of the additional limitations recited in claims 2-5, 9-12 and 16-19 amount to anything more than the same or a similar abstract idea as recited in claims 1, 8 and 15. Nor do any limitations in claims 2-5, 9-12 and 16-19: (a) integrate the abstract idea into a practical application, because they do not impose any meaningful limits on practicing the abstract idea, or (b) amount to significantly more than the judicial exception, because the additional limitations of using generic computer components amount to no more than mere instructions to apply the exception using generic computer components. Claims 2-5, 9-12 and 16-19 are not patent eligible.

Claims 6-7, 13-14 and 20 involve adjustment of parameters of the supervised NLP model and/or recite a model that has been fine-tuned for multi-class text classification; these claims therefore do not fall under abstract ideas and are patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim(s) 1-4, 8-11 and 15-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Muralidharan (US 20230094373). Regarding Claim 1, Muralidharan discloses: 1. An apparatus comprising one or more processors and one or more storage devices storing instructions that are operable, when executed by the one or more processors, to cause the one or more processors to: extract a feature set from a plurality of service message data objects associated with an application framework; ([0003] In accordance with one aspect, a computer-implemented method is provided. In one embodiment, the computer-implemented method comprises: determining, based on one or more natural language data fields of a software incident data object for a software application framework and using a natural language feature extraction machine learning model, a natural language feature data object for the software incident data object;) also see para 0039. 
[software incident data object is a type of service message data object] apply a supervised natural language processing model to the feature set to generate a plurality of classification data objects associated with the plurality of service message data objects that classify a respective service message data object as belonging to a predefined class of a plurality of predefined classes; ([0039] the predicted incident severity level is a discrete classification output that is selected from a set of candidate classes (e.g., a set comprising a low predicted incident severity level class, a medium predicted incident severity level class, and a high predicted incident severity level class). In some embodiments, the predicted incident severity level is a continuous regression output (e.g., a value selected from the range 0-1). In some embodiments, the predicted incident severity level is a multi-valued output describing, for each candidate class of a set of candidate classes, a likelihood that the corresponding software incident data object is associated with the candidate class. [0040] the incident severity level detection machine learning model is a supervised trained machine learning model (e.g., a fully-connected supervised trained machine learning model).) and initiate a rendering of a dashboard visualization via an electronic interface based at least in part on the plurality of classification data objects. ([0128] At operation 504, the software monitoring data management computing device 106 performs one or more prediction-based actions based on the one or more incident signatures. For example, performing the one or more prediction-based actions may include enabling display of a prediction output user interface that displays the one or more incident signatures. 
As another example, performing the one or more prediction-based actions may include enabling display of a prediction output user interface that is configured to receive one or more user feedback data objects for the one or more incident signatures.) Also see figs. 6, 9 and 12, which show data visualization based on classification data objects.

Regarding Claim 2, Muralidharan discloses all the elements of claim 1. Muralidharan further discloses: wherein the plurality of service message data objects respectively comprise at least a description data field associated with a service request by a user identifier, and wherein the one or more storage devices store instructions are operable, when executed by the one or more processors, to further cause the one or more processors to: extract the feature set from the plurality of service message data objects by extracting the description data field from the respective service message data objects. ([0005] In accordance with yet another aspect, an apparatus comprising at least one processor and at least one memory including computer program code is provided. In one embodiment, the at least one memory and the computer program code may be configured to, with the processor, cause the apparatus to: determine, based on one or more natural language data fields of a software incident data object for a software application framework and using a natural language feature extraction machine learning model, a natural language feature data object for the software incident data object; determine, based on one or more structured data fields of the software incident data object and using a structured data feature extraction machine learning model, a structured data feature data object for the software incident data object;) [natural language data field functions as a description]. Para 0115 also discloses a description data field.
Regarding Claim 3, Muralidharan discloses all the elements of claim 1. Muralidharan further discloses: wherein the predefined class is representative of a reason for making a service request. ([0111] In some embodiments, the incident analysis features describe one or more analysis features for a software incident data object, including a root cause category, a feature describing whether the incident was detected by monitoring, and/or the like. In some embodiments, the root cause category feature may take one of the following values: Change, Dependency, Scale, Architecture, Bug, Unknown.) [Root cause category reads on a type of predefined class that is representative of a reason for making a service request]

Regarding Claim 4, Muralidharan discloses all the elements of claim 1. Muralidharan further discloses: wherein the dashboard visualization comprises at least one module, and wherein the at least one module is configured to display a predetermined format for displaying data based on the plurality of classification data objects. ([0079] FIG. 6 depicts an operational example of a software monitoring data display user interface 600 that includes user interface segments 611-615 that each describe various properties of a corresponding software incident data object. As further depicted in FIG. 6, because of the selection of the user interface element 601, the software monitoring data display user interface 600 currently displays information about “open” (i.e., unresolved) software incident data objects. User interface segments 611-615 are described in greater detail below, in relation to describing exemplary data fields of a software incident data object.)

Claim 8 recites a computer-implemented method claim that corresponds to the apparatus of claim 1 and is therefore rejected under the same grounds as claim 1 above. Claims 9-11 are computer-implemented method claims that correspond to claims 2-4 and are therefore also rejected under the same grounds as claims 2-4.
Regarding Claim 15, Muralidharan discloses: 15. A computer program product comprising at least one non-transitory computer readable storage medium having computer executable code portions stored therein, the computer executable code portions comprising program code instructions configured to: ([0004] In accordance with another aspect, a computer program product is provided. The computer program product may comprise at least one computer-readable storage medium having computer-readable program code portions stored therein, the computer-readable program code portions comprising executable portions configured to:) Claims 16-18 are computer program product claims that correspond to claims 2-4 and are therefore also rejected under the same grounds as claims 2-4.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5-6, 12-13, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Muralidharan, in view of Dunn (US 20200394360). Regarding Claim 5, Muralidharan discloses all of claim 1. Muralidharan does not explicitly disclose the feature recited below. Dunn discloses: wherein the dashboard visualization comprises a module configured to display a proportion of service message data objects associated with a respective classification. ([0131] FIGS. 11A-11F show examples of aspects of dashboard reports… For example, FIG.
11A can include data that illustrates summary information about a number of conversations, an average duration of a communication session, an intent score, intents by conversations, intent trends, intent durations, or other such information as data 1114 using data summary interface 1110, data metric 1108 graphics, and chart 1116 data. These analytics can become available as users contact the intent-driven contact center and their intents are ascertained. Interface elements for adding filters, selecting filters, and setting value ranges for filters can be used to select the displayed data using add filter element 1102, filter type selections 1104, filter value selections 1106 and other such user interface selections. For example, in the illustrated filter interface 1100, data summary interfaces 1110 can show different summary data types in associated data 1114 areas of each interface area, along with an associated trend arrow 1112 for each interface. Chart 1116 can show a volume of intent category assignments over time. For example, each line of chart 1116 can indicate a volume of intent assignments in a given hour of a day for a given geographic area (e.g. where intent assignments drop to near zero in the middle of the night). Each intent category can have a one or more associated data summary interface 1110 that shows statistical values about the intent category, such as a daily average volume, a weekly average volume, a weekly average trend (e.g. change) over the past year, or any other such metrics.) Muralidharan and Dunn are considered analogous art. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Muralidharan to combine the teaching of Dunn, because examples described herein improve the operation of devices in a communication system by improving the efficiency of AI and machine based communications, reducing the processing resources used to facilitate responses to user communications, and improving the quality of machine driven communications in such a system (Dunn, [0007]). Regarding Claim 6, Muralidharan discloses: All the elements of claim 1, Muralidharan does not explicitly disclose the feature recited below. Dunn discloses: the one or more storage devices store instructions are operable, when executed by the one or more processors, to further cause the one or more processors to: evaluate performance of the supervised natural language processing model using one or more performance metrics at a predetermined time interval; ([0086] As described herein, multiple different neural networks can be used in the course of a conversation (e.g. multiple back and forth communications between a user and a system), and data for such communications can be used in machine learning operations to update the neural networks or other systems used for future interactions with users and operations to associate intent categories and actions with words from a user communication. Usage data by users can be used to adjust weights in a neural network to improve intent category assignments and track changes in user intent trends (e.g. final user intent results identified at the end of a user conversation with a system as compared with assigned intents based on initial user communications). 
Data generated by intent management engine 615 can be stored with associated message data in message data store 620, and this data can be used for various updates, including managing data for continuous real-time analysis updates or other dynamic feedback and modifications to a system,) and adjust one or more parameters of the supervised natural language processing model based on the one or more performance metrics. ([0086] As described herein, multiple different neural networks can be used in the course of a conversation (e.g. multiple back and forth communications between a user and a system), and data for such communications can be used in machine learning operations to update the neural networks or other systems used for future interactions with users and operations to associate intent categories and actions with words from a user communication. Usage data by users can be used to adjust weights in a neural network to improve intent category assignments and track changes in user intent trends (e.g. final user intent results identified at the end of a user conversation with a system as compared with assigned intents based on initial user communications). Data generated by intent management engine 615 can be stored with associated message data in message data store 620, and this data can be used for various updates, including managing data for continuous real-time analysis updates or other dynamic feedback and modifications to a system,) Muralidharan and Dunn are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Muralidharan to combine the teaching of Dunn, because the technique described would improve intent category assignments and track changes in user intent trends (Dunn, [0086]). 
Claims 12 and 13 are computer-implemented method claims with limitations similar to the limitations of Claims 5 and 6 and are rejected under similar rationale. Claims 19 and 20 are computer program product claims with limitations similar to the limitations of Claims 5 and 6 and are rejected under similar rationale.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Muralidharan, in view of Ni (US 20220308952). Regarding Claim 7, Muralidharan discloses all of claim 1. Muralidharan does not explicitly disclose the feature recited below. Ni discloses: wherein the supervised natural language processing model is a bidirectional transformer model that is fine-tuned for multi-class text classification. ([0069] A log segment representation model is built by BERT that learns a feature representation from log pattern ID sequences (e.g., each log line in an obtained log file may be translated to a log pattern ID) in a pre-training manner by a Masked Language Model (MLM). The BERT-based log segment representation is used as a feature representation for training a multi-class text classifier from labeled log segments (e.g., with each log segment labeled by an issue as its class) in a fine-tuning manner.) Muralidharan and Ni are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Muralidharan to combine the teaching of Ni, because a machine learning model can analyze the log files automatically, such that the efficiency of a support or monitoring and analytics platform (or technical support engineers thereof) may be significantly improved. (Ni, [0067]).

Claim 14 is a computer-implemented method claim with limitations similar to the limitations of Claim 7 and is rejected under similar rationale.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Yang (US 20250068671) – discloses “improving classification accuracy of document classification systems by configuring a generative machine learning model to identify one or more evidence text portions comprising one or more bases relied on by the generative machine learning model for assigning a plurality of categorical identifiers to a plurality of text segment data objects associated with a document data object, and verifying the one or more evidence text portions with a verifier machine learning model to generate one or more classifications of the document data object and provide the one or more verified evidence text portions along with the one or more classifications.” See Abstract, para 0008 and figs 4-5, 8-11 for additional details.

Prajesh (US 20240135111) – discloses a method for determining topics of data objects and generating analytical dashboards. See Abstract and figs. 5-8 for additional details.

Grinberg (US 20250063083) – discloses visualization of data on a dashboard and classification/categorization based on analyzing queries to infer intents. See Abstract and para 0032, 0080, 0099, and 0162 for details.

Pham (US 20210084145) – discloses an intent recognition module using a trained supervised machine learning classification model on labeled data. See para 0024 and 0064.

Narechania, A., Srinivasan, A., & Stasko, J. (2020). NL4DV: A toolkit for generating analytic specifications for data visualization from natural language queries. IEEE Transactions on Visualization and Computer Graphics, 27(2), 369-379. – discloses a data object visualization dashboard for handling natural language queries. See Abstract and figs. 1-2, 4-5, 8 and 10 for additional details.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip H Lam whose telephone number is (571)272-1721.
The examiner can normally be reached 9 AM-3 PM Pacific time. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached on 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PHILIP H LAM/ Examiner, Art Unit 2656
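For readers outside NLP: the claim 1 pipeline at issue (extract features from service message data objects, apply a supervised model to assign each message one of several predefined classes, then render a dashboard summary) can be sketched in a few lines. This is a deliberately generic illustration with toy data and a simple word-overlap classifier; it is not the applicant's actual model or Muralidharan's system, and every message, class name, and function here is hypothetical:

```python
from collections import Counter

# Toy "service message data objects", each labeled with a predefined class
# (hypothetical data, for illustration only).
TRAIN = [
    ("cannot log in to my account", "access"),
    ("password reset link expired", "access"),
    ("invoice charged twice this month", "billing"),
    ("refund for duplicate payment", "billing"),
    ("app crashes when opening dashboard", "bug"),
    ("error 500 on report page", "bug"),
]

def extract_features(text):
    """Extract a feature set: lowercase word counts."""
    return Counter(text.lower().split())

# Supervised training step: accumulate one word-count profile per class.
profiles = {}
for text, label in TRAIN:
    profiles.setdefault(label, Counter()).update(extract_features(text))

def classify(text):
    """Score each predefined class by word overlap; return the best match."""
    feats = extract_features(text)
    scores = {label: sum(min(feats[w], prof[w]) for w in feats)
              for label, prof in profiles.items()}
    return max(scores, key=scores.get)

# Dashboard stand-in: proportion of messages per class (cf. claim 5).
incoming = ["why was I charged twice", "login password not working"]
results = Counter(classify(m) for m in incoming)
print(dict(results))
```

A production system would swap the word-overlap scorer for a trained model (e.g., a fine-tuned bidirectional transformer, as in claim 7) and render the class proportions in a dashboard module rather than printing them.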

Prosecution Timeline

Jun 27, 2024
Application Filed
Feb 01, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591626: SEARCH STRING ENHANCEMENT (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572735: DOMAIN-SPECIFIC DOCUMENT VALIDATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572747: MULTI-TURN DIALOGUE RESPONSE GENERATION WITH AUTOREGRESSIVE TRANSFORMER MODELS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12562158: ELECTRONIC APPARATUS AND CONTROLLING METHOD THEREOF (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561194: ROOT CAUSE PATTERN RECOGNITION BASED MODEL TRAINING (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
Grant Probability With Interview: 99% (+45.5% lift)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 129 resolved cases by this examiner. Grant probability derived from career allow rate.
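The note above says the grant probability is derived from the career allow rate; with the examiner's figures shown earlier (107 granted of 129 resolved), the arithmetic works out to the displayed 83%:

```python
# Career figures shown on this page.
granted, resolved = 107, 129

# Grant probability = career allow rate, rounded to a whole percentage.
grant_probability = round(100 * granted / resolved)
print(f"{grant_probability}%")  # 83%
```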
