Prosecution Insights
Last updated: April 19, 2026
Application No. 17/535,323

SYSTEMS AND METHODS FOR IMPLEMENTING PLAYBOOKS

Status: Non-Final OA (§103)
Filed: Nov 24, 2021
Examiner: TAN, DAVID H
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: SuccessKPI, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 31% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 1m
With Interview: 46%

Examiner Intelligence

Career Allow Rate: 31% (30 granted / 98 resolved; -24.4% vs TC avg)
Interview Lift: +15.8% among resolved cases with interview
Avg Prosecution: 4y 1m (typical timeline)
Total Applications: 139 across all art units (41 currently pending)

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 63.5% (+23.5% vs TC avg)
§102: 19.8% (-20.2% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)
Tech Center average is an estimate • Based on career data from 98 resolved cases
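As a sanity check on the chart data above, each examiner rate minus its reported delta recovers the implied Tech Center baseline. The baseline values below are derived by this arithmetic, not stated in the source:

```python
# Examiner's per-statute rates (%) and the reported deltas vs. the Tech
# Center average; the implied TC baseline is simply rate - delta.
stats = {
    "101": (8.5, -31.5),
    "103": (63.5, +23.5),
    "102": (19.8, -20.2),
    "112": (6.7, -33.3),
}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
```

All four statutes back out the same implied baseline of 40.0%, which suggests the chart compares each per-statute rate against a single Tech Center average estimate rather than per-statute baselines.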

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Non-Final Rejection is filed in response to Applicant Arguments/Remarks Made in an Amendment filed 12/04/2025. Claims 1, 5, and 13 are amended. Claims 1-20 remain pending.

Response to Arguments

Argument 1: Applicant argues in Applicant Arguments/Remarks Made in an Amendment filed 12/04/2025, pg. 10, that the primary claim limitation, “wherein the plurality of associated characterization parameters are processed in parallel to generate the corresponding score for each characterization”, improves upon the technical problems encountered in processing audio characteristics.

Response to Argument 1: In light of the amendments, the 35 U.S.C. 101 rejections are respectfully withdrawn.

Argument 2: Applicant argues in Applicant Arguments/Remarks Made in an Amendment filed 12/04/2025, pgs. 10-11, that the prior art Bradley fails to teach the primary claim limitation, “wherein the audio characteristics comprise one or more of silence duration within the electronic communication or simultaneous speech duration within the conversation”.

Response to Argument 2: In light of the amendments, a newly found combination of prior art (U.S. Patent Application Publication No. 20210336905 “Bradley”, further in light of U.S. Patent Application Publication No. 20170092294 “Togawa”, and further in light of U.S. Patent Application Publication No. 20220189457 “Shen”) is applied in the updated rejections. The examiner notes that Togawa teaches in para. [0067], “the detecting unit 3 of FIG. 1 may detect…a speech overlapping period where the first speech period overlaps the second speech period…The detecting unit 3 may detect a first silent period contained in the first input signal in response to the first signal intensity, a second silent period contained in the second input signal, and a silent overlapping period where the first silent period overlaps the second silent period”. Thus Togawa teaches the BRI of the claim limitation, “wherein the audio characteristics comprise one or more of silence duration within the electronic communication or simultaneous speech duration within the conversation”, as a silent period and a speech overlapping period may be detected by the detecting unit analyzing a conversation. One would have been motivated to combine the specific audio gap timing data detection of Togawa with the threshold scoring comparison and recommended actions of Bradley, as the combination provides a more detailed study of whether smooth communication is achieved in the conversation.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20210336905 “Bradley”, further in light of U.S. Patent Application Publication No. 20170092294 “Togawa”, and further in light of U.S. Patent Application Publication No. 20220189457 “Shen”.

Claim 1: Bradley teaches a system for generating playbooks, the system comprising: one or more processors (i.e. para. [0111], Fig. 8, the communication server 805 may include a central processing unit (CPU) 807, including a processor 810); and a non-transitory computer-readable storage medium storing instructions, which when executed by the one or more processors (i.e. para. [0005], The system may include one or more data processors; and a non-transitory computer-readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform the methods described above and herein) cause the one or more processors to: receive a plurality of electronic communications comprising user interactions (i.e. para. [0096], A communication session can facilitate the exchange of one or more messages between network device 705 and terminal device 715.
The present disclosure is not limited to the exchange of messages during a communication session); input each of the plurality of electronic communications into a first machine learning model to obtain a plurality of characterizations associated with the plurality of electronic communications (i.e. para. [0101], messages exchanged between the network device 705 and the bot 720 may be used as input by the one or more machine learning models or artificial intelligence to generate an output), wherein the first machine learning model has been trained, using a first training dataset that includes characterizations and associated words and phrases labelled with a characterization names, to output one or more characterizations responsive to an input of electronic communication data (i.e. para. [0101], “The machine learning models may be trained using supervised learning techniques. For instance, a dataset of input messages and corresponding outputs specifying the appropriate responding entity (e.g., live agent or bot) can be selected for training of the machine learning models”, wherein the BRI for characterizations encompasses a determined user’s intent from the input messages. The examiner notes that the output user’s intent may be used in routing the user to a destination system based on the intent, predicting or suggesting responses to agents communicating with users, escalating communication sessions to include one or more additional bots or agents, and other suitable capabilities); receive, from an input device, a plurality of groupings for the plurality of characterizations, wherein each grouping of the plurality of groupings corresponds to a playbook (i.e. para. [0106], “the characteristic of a message can be the sentiment associated with the message. The message parameter can represent the sentiment of the message. 
For example, if the sentiment of the message is happy, the message parameter can be a certain value or range of values… Determining whether to switch between the bots and the terminal device can be based on the message parameter”, wherein the BRI for a grouping encompasses the how the output for a user’s intent may be classified into a sentiment group and wherein the classification is used to determine a course of action, such as switching the user between different responding entities); generate a plurality of playbooks based on the plurality of groupings (i.e. para. [0101], “a dataset of input messages and corresponding outputs specifying the appropriate responding entity (e.g., live agent or bot) can be selected for training of the machine learning models”, wherein the BRI for a plurality of playbooks encompasses the plurality of planned routing responses to entities based on the plurality of output user’s intents); receive an electronic communication comprising a user interaction, wherein the user interaction comprises a conversation (i.e. para. [0092], A user may use a network device to initiate a conversation with an agent regarding resolution of an issue); input the electronic communication into the first machine learning model to obtain a set of characterizations associated with the electronic communication (i.e. para. [0092], “The user's intent may be automatically identified”, wherein it is noted that messages exchanged between the network device 705 and the bot 720 may be used as input by the one or more machine learning models or artificial intelligence to generate an output. The output may specify whether the communication session between the network device 705 and the bot 720 is to be switched to a live agent or is to be maintained); compare, the set of characterizations to characterizations within each of the plurality of playbooks (i.e. para. 
[0134], message recommendation system 930 may evaluate the content of messages received from network devices (or messages received at communication server 910 from bots or terminal devices) and compare the results of the evaluation to the one or more clusters of previous messages stored in message data store 935); select, based on the comparing, a matching playbook of the plurality of playbooks (i.e. para. [0134], Once the cluster is identified, message recommendation system 930 can identify the most relevant response messages based on a confidence threshold); generate a matching set of characterizations, wherein the matching set of characterizations comprises those characterizations within the plurality of characterizations that match the characterizations within the matching playbook (i.e. para. [0072], “A message assessment engine 615 may assess the (e.g., extracted or received) message… Examples of category or tag types can include (for example) topic, sentiment, complexity, and urgency”, wherein the BRI for a matching set of characterization encompasses how a user’s intent may be matched to a set of keyword tags that match with appropriate plans of action, such as switching the user’s responding entity from a bot to a person or vice versa); determine, within the matching set of characterizations, a plurality of associated characterization parameters (i.e. para. [0095], “a dynamic sentiment parameter can be generated to represent a sentiment of messages, conversations, entities, agents, and so on…. 
in cases where the dynamic sentiment parameter indicates that the user is frustrated with the bot, the system can automatically switch the bot with a terminal device so that a live agent can communicate with the user”, wherein the characterization of a user’s sentiment matches a subset of “frustration” with a bot, wherein the playbook for this match would be to switch the user from a bot to a human agent), wherein the plurality of associated characterization parameters comprise audio characteristics of the conversation (i.e. para. [0096], “The present disclosure is not limited to the exchange of messages during a communication session. Other forms of communication can be facilitated by the communication session, for example, video communication (e.g., a video feed) and audio communication (e.g., a Voice-Over-IP connection)”, wherein a conversation with an agent may be an audio communication that is analyzed for sentiment. The examiner notes that the BRI for audio characteristics is broad enough to encompass a textual sentiment analysis, as even written words have associated audio characteristics); input each of the plurality of associated characterization parameters comprising the audio characteristics of the conversation into a second machine learning model to generate a corresponding score for each characterization (i.e. para. [0102], “the one or more machine learning models or artificial intelligence may generate, as output, the scores representing sentiment of an input message or series of messages. The communication server 710 may determine whether the resulting score exceeds a threshold value corresponding to allocation of a communication session to a bot 720 or live agent”, wherein the BRI for audio characteristics of the conversation encompasses analyzed sentiment of a conversation, wherein it is noted that para. [0096] notes that an audio communication may be analyzed as part of the exchange of messages), wherein the second machine learning model is a neural network that is trained using a second training dataset (i.e. para. [0093], The feedback may be used and analyzed in aggregate to apply data science as training input to models) to generate scores based on characterization parameters comprising audio characteristics of conversations (i.e. para. [0093], “A sentiment score may be utilized to displayed on the terminal device to facilitate priority of rescuing conversations. Assist features utilizing artificial intelligence such as “intent hint” and “recommended automation” may be shown as actionable inline suggestions within the conversation window. The feedback may be used and analyzed in aggregate to apply data science as training input to model”, wherein the second MLM may be a sentiment analysis model trained to output a score); add each corresponding score into a total score for the conversation (i.e. para. [0102], “In some implementations, the one or more machine learning models or artificial intelligence may generate, as output, the scores representing sentiment of an input message or series of messages”, wherein the series of scores may be taken into account and measured against a threshold); and execute, based on the total score, an action of a plurality of actions, wherein each action of the plurality of actions is associated with a corresponding total score of a plurality of total scores (i.e. para. [0102], the scores representing sentiment of an input message or series of messages. The communication server 710 may determine whether the resulting score exceeds a threshold value corresponding to allocation of a communication session to a bot 720 or live agent. For instance, if the score exceeds the threshold value, the communication server 710 may determine that the communication session is best suited for a bot 720).
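The claim 1 flow mapped above — per-parameter scores generated in parallel by the second model, summed into a total, and the total compared against a threshold to select an action — can be sketched as follows. The parameter names, weights, and threshold are hypothetical stand-ins for illustration, not values taken from Bradley, Shen, or the claims:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-parameter scorers; real systems would use trained models.
SCORERS = {
    "sentiment": lambda v: 2.0 * v,            # positive sentiment raises the total
    "silence_duration": lambda v: -0.5 * v,    # long silences lower it
    "talk_over_duration": lambda v: -1.0 * v,  # talk-over lowers it more
}

def score_parameter(item):
    """Score one (name, value) characterization parameter."""
    name, value = item
    return SCORERS[name](value)

def total_score(params):
    # Parameters are scored in parallel (cf. the amended "processed in
    # parallel" limitation), then summed into one total for the conversation.
    with ThreadPoolExecutor() as ex:
        return sum(ex.map(score_parameter, params.items()))

def choose_action(total, threshold=0.0):
    # The total is compared against a threshold to pick an action,
    # e.g., keep the bot vs. escalate to a live agent.
    return "bot" if total >= threshold else "live_agent"
```

For example, `total_score({"sentiment": 1.0, "silence_duration": 2.0, "talk_over_duration": 0.5})` sums 2.0 - 1.0 - 0.5, and `choose_action` then maps that total to one of the two routing actions.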
While Bradley teaches that the subset of characterization parameters comprises audio characteristics and that audio characteristics may be a duration of conversation with the client, Bradley may not explicitly teach wherein the audio characteristics comprise one or more of silence duration within the electronic communication, or simultaneous speech duration within the conversation. However, Togawa teaches detecting audio characteristics wherein the audio characteristics comprise one or more of silence duration within the electronic communication, or simultaneous speech duration within the conversation (i.e. para. [0067], “the detecting unit 3 of FIG. 1 may detect…a speech overlapping period where the first speech period overlaps the second speech period…The detecting unit 3 may detect a first silent period contained in the first input signal in response to the first signal intensity, a second silent period contained in the second input signal, and a silent overlapping period where the first silent period overlaps the second silent period”, wherein a detecting unit may analyze a conversation and determine conversation timing data that includes silent periods of conversation and periods where users’ speech patterns overlap).

It would have been obvious to one of ordinary skill in the art before the effective filing date to add wherein the audio characteristics comprise one or more of silence duration within the electronic communication, or simultaneous speech duration within the conversation, to the conversation analysis and scoring against a threshold comparison in order to execute an action of a plurality of actions as taught by Bradley, with how a conversation may be analyzed and silence and simultaneous talk over time data may be detected, as taught by Togawa. One would have been motivated to combine the specific timing data detection of Togawa with the threshold scoring comparison of Bradley, as the combination provides a more detailed study of whether smooth communication is achieved in the conversation.

While Bradley-Togawa teach inputting associated audio characterization parameters into an MLM that scores each characterization, Bradley-Togawa may not explicitly teach wherein the plurality of associated characterization parameters are processed in parallel to generate the corresponding score for each characterization. However, Shen teaches wherein the plurality of associated characterization parameters (i.e. para. [0049], “language identifying model 180 includes a group of convolution layers 170 including a plurality of convolution blocks, and a fully connected layer 172 receiving an output from the group of convolution layers 170 and outputting probabilities", wherein the BRI for associated characterization parameters encompasses the calculated probabilities associated with determining an audio recognition likelihood score) are processed in parallel to generate the corresponding score for each characterization (i.e. para. [0063], “GPU 317 is capable of parallel processing and it can execute the speech recognition process, the automatic translation process and the speech synthesizing process for a large amount of speech data in parallel simultaneously or in a pipelined manner”, wherein associated audio speech probabilities may be processed in parallel simultaneously to generate a score for an audio segment).

It would have been obvious to one of ordinary skill in the art before the effective filing date to add wherein the plurality of associated characterization parameters are processed in parallel to generate the corresponding score for each characterization, to the conversation analysis and scoring against a threshold comparison in order to execute an action of a plurality of actions as taught by Bradley-Togawa, with how associated audio characteristics may be processed in parallel, as taught by Shen. One would have been motivated to combine the parallel processing of audio characteristics of Shen with the recognition scoring of Bradley-Togawa, as the combination reduces the time necessary to start the identifying process and the result can be obtained more quickly.

Claim 2: Bradley, Togawa, and Shen teach the system of claim 1. Bradley further teaches wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: determine that a subset of characterization parameters comprises timing data (i.e. para. [0125], “if a particular type of bot successfully handled an “order_status” intent to the satisfaction of a user, future “order_status” intents may also be transferred to a bot of the particular type”, wherein the BRI for timing data encompasses the timing status characterization of a topic in the conversation, which may be used to determine a course of action), wherein the timing data comprises a total duration of the conversation (i.e. para. [0027], The message can include … information about an associated user 110 (e.g., language spoken, duration of having interacted with client); i.e. para. [0125], The machine learning engine 835 may be configured to, in conjunction with the processor 810, feed the conversation, identified intent, and provided feedback into a database and analyze the data to draw inferences about how well a type of bot and/or live agent handled the conversation).
Togawa further teaches wherein timing data comprises a silence time, and a talk over time (i.e. para. [0067], “the detecting unit 3 of FIG. 1 may detect…a speech overlapping period where the first speech period overlaps the second speech period…The detecting unit 3 may detect a first silent period contained in the first input signal in response to the first signal intensity, a second silent period contained in the second input signal, and a silent overlapping period where the first silent period overlaps the second silent period”, wherein a detecting unit may analyze a conversation and determine conversation timing data that includes silent periods of conversation and periods where users speech patterns overlap). Claim 3: Bradley, Togawa, and Shen teach the system of claim 1. Bradley further teaches wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: determine that a first characterization parameter of the characterization parameters comprises string data (i.e. para. [0104], “communication server 710 can identify whether the one or more lines of text include an anchor. Examples of an anchor include a string of text associated with a polarity (e.g., sentiment or intent)”, wherein the BRI for string data encompasses a string of text data representing an anchor with a polarity); and input, into the second machine learning model, the first characterization parameter and a parameter type associated with the string data (i.e. para. [0104], “anchors can be dynamically determined using supervised machine learning techniques. For example, one or more clustering algorithms can be executed on stored messages to find patterns within the stored messages. The clustered messages can be further filtered and evaluated to determine the anchor”, wherein the second machine learning model for intent may input string data to determine a polarity of an anchor). Claim 4: Bradley, Togawa, and Shen teach the system of claim 1. 
Bradley further teaches wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: determine an associated score for each characterization that is within the matching set of characterizations and not with a subset of characterizations comprising the plurality of associated characterization parameters (i.e. para. [0108], “These characteristics may include, but are not limited to… user satisfaction rating or score”, wherein a user’s historical satisfaction score may be within a matching set of characterizations for a user being routed to a bot which resolves a user’s conflict and not within a subset of characterizations for routing the user to a human resolving the user’s conflict. The examiner notes that the plurality of associated characterization parameters comprise audio characteristics of the conversation. In an embodiment where audio characteristics are interpreted as sentiment parameters derived from voice calls via an audio communication, Bradley teaches an embodiment where sentiment analysis may be derived from text message conversations, which would be different from the plurality of associated characterization parameters comprising audio call characteristics. In such an embodiment an associated textually derived historical satisfaction score for an unhappiness characterization would be found as matching a playbook to switch the user’s operator and not within the plurality of associated characterization parameters comprising audio call characteristics); and determine the action of the plurality of actions based on each associated score (i.e. para. [0108], To identify which agent is best suited for responding to a user for a technical issue, the communication server 710 may use the characteristics of the messages received from the network device 705 and the technical issue expressed by the user of the network device 705 as input to a machine learning model). 
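The disputed Togawa limitation — silence duration and simultaneous-speech (talk-over) duration within a conversation — amounts to interval arithmetic over per-speaker speech segments. A minimal sketch, assuming each speaker's activity is given as (start, end) pairs in seconds; the function names and sample data are hypothetical:

```python
def _merge(intervals):
    """Merge overlapping (start, end) intervals into a sorted, disjoint list."""
    out = []
    for s, e in sorted(intervals):
        if out and s <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], e))
        else:
            out.append((s, e))
    return out

def _total(intervals):
    """Total length covered by a list of intervals."""
    return sum(e - s for s, e in intervals)

def talk_over(speaker_a, speaker_b):
    """Simultaneous-speech duration: time both speakers are active at once."""
    total = 0.0
    for s1, e1 in _merge(speaker_a):
        for s2, e2 in _merge(speaker_b):
            total += max(0.0, min(e1, e2) - max(s1, s2))
    return total

def silence(speaker_a, speaker_b, call_length):
    """Silence duration: time within the call when neither speaker is active."""
    return call_length - _total(_merge(speaker_a + speaker_b))
```

For a 10-second call where speaker A talks during (0, 4) and (6, 8) and speaker B during (3, 5), the talk-over is 1 second and the joint silence is 3 seconds.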
Claim 5: Claim 5 is the method claim reciting similar limitations to claim 1 and is rejected for similar reasons.

Claim 6: Claim 6 is the method claim reciting similar limitations to claim 1 and is rejected for similar reasons.

Claim 7: Claim 7 is the method claim reciting similar limitations to claim 2 and is rejected for similar reasons.

Claim 8: Bradley, Togawa, and Shen teach the method of claim 5. Bradley further teaches further comprising: determining that a first characterization parameter of a subset of characterization parameters comprises question answer data (i.e. para. [0072], “A message assessment engine 615 may assess the (e.g., extracted or received) message. The assessment can include identifying, for example, one or more categories or tags for the message… A topic can include, for example, a technical issue, a use question”, wherein the BRI for question answer data encompasses the user requesting an answer to a technical issue or use question); and inputting, into the second machine learning model, the first characterization parameter and a parameter type associated with the question answer data (i.e. para. [0104], “The characteristic can include, for example, the speed of typing, the number of special characters used in the message (e.g., exclamation points, question marks, and so on)”, wherein in the case where a characterization parameter of the subset of characterization parameters encompasses data indicating that the user is requesting an answer to a question, then data associated with question answer data would be input into the second machine learning model for sentiment or intent analysis to find patterns within the messages to determine a playbook course of action).

Claim 9: Claim 9 is the method claim reciting similar limitations to claim 4 and is rejected for similar reasons.

Claim 10: Bradley, Togawa, and Shen teach the method of claim 5.
Bradley further teaches further comprising: receiving, from an input device, a plurality of phrases, for a new characterization (i.e. para. [0105], “if the term “kind of” is near the anchor “don't like” (e.g., as in the sentence “I kind of don't like”), the term “kind of” may be identified as an amplifier term that indicates a medium intensity of the negative polarity”, wherein the algorithm based on supervised machine learning techniques may identify a new phrase such as “I kind of don’t like”, which is characterized with a negative polarity sentiment); and training the first machine learning model using the plurality of phrases and the new characterization to recognize the new characterization as associated with the plurality of phrases (i.e. para. [0108], “The machine learning model or artificial intelligence algorithm may be trained using feedback associated with previously conducted conversations between users and live agents. This feedback may be used to identify certain characteristics for each agent. These characteristics may include, but are not limited to,…responsiveness to particular sentiments (e.g., ability to reduce user frustration or anger, etc.)”, wherein a model may be trained to respond to a newly recognized characterization associated with phrases identified with the negative polarity sentiment of frustration). Claim 11: Bradley, Togawa, and Shen teach the method of claim 5. Bradley further teaches wherein inputting the electronic communication into the first machine learning model to obtain the plurality of characterizations associated with the electronic communication comprises: generating a transcription of the electronic communication (i.e. para. 
[0031], “Remote server 140 may select a particular text passage, recording or file based on, for example, an analysis of a received communication (e.g., a semantic or mapping analysis)”, wherein the BRI for a transcription encompasses a mapping); determining a type associated with the electronic communication (i.e. para. [0072], a category or tag can be determined, for example, based on a semantic analysis of a message (e.g., by identifying keywords, sentence structures, repeated words, punctuation characters and/or non-article words); user input (e.g., having selected one or more categories); and/or message-associated statistics (e.g., typing speed and/or response latency)); and retrieving a plurality of electronic communication parameters corresponding to the type associated with the electronic communication (i.e. para. [0087], message may include a general query. Client mapping engine 640 may, for example, perform a semantic analysis on the message, identify one or more keywords and identify one or more clients associated with the keyword(s)). Claim 12: Bradley, Togawa, and Shen teach the method of claim 11. Bradley further teaches wherein the plurality of electronic communication parameters comprises one or more of communication duration, communication sentiment, silence duration within the electronic communication, and simultaneous speech duration within the electronic communication (i.e. para. [0027], The message can include information about network device 105 (e.g., IP address, device type, and/or operating system), information about an associated user 110 (e.g., language spoken, duration of having interacted with client, skill level, sentiment, and/or topic preferences)) . Claim 13: Claim 13 is the medium claim reciting similar limitations to claim 1 and is rejected for similar reasons. Claim 14: Claim 14 is the medium claim reciting similar limitations to claim 1 and is rejected for similar reasons. 
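The claims 10 and 18 limitation — training the first machine learning model on a plurality of phrases labeled with a new characterization, then recognizing that characterization in later communications — can be illustrated with a toy keyword index. A production system would use a trained classifier; every name and example below is a hypothetical illustration:

```python
from collections import defaultdict

def train_characterizer(examples):
    """Build a word -> characterization-name index from (phrase, label) pairs,
    as in the claim-10 step of supplying phrases for a new characterization."""
    index = defaultdict(set)
    for phrase, label in examples:
        for word in phrase.lower().split():
            index[word].add(label)
    return index

def characterize(index, text):
    """Return every characterization whose training vocabulary overlaps the text."""
    hits = set()
    for word in text.lower().split():
        hits |= index.get(word, set())
    return hits
```

After training on phrases labeled "negative" and "order_status", a later message such as "what is my order status" is characterized as "order_status" because its words overlap that characterization's training phrases.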
Claim 15: Claim 15 is the medium claim reciting similar limitations to claim 2 and is rejected for similar reasons.

Claim 16: Claim 16 is the medium claim reciting similar limitations to claim 8 and is rejected for similar reasons.

Claim 17: Claim 17 is the medium claim reciting similar limitations to claim 4 and is rejected for similar reasons.

Claim 18: Claim 18 is the medium claim reciting similar limitations to claim 10 and is rejected for similar reasons.

Claim 19: Claim 19 is the medium claim reciting similar limitations to claim 11 and is rejected for similar reasons.

Claim 20: Claim 20 is the medium claim reciting similar limitations to claim 12 and is rejected for similar reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Patent Application Publication No. 20190164131 “Hossain” teaches in para. [0032], classification program 138 performs an analysis (step 216). In other words, classification program 138 analyzes each e-mail group to map the behaviors of the user to each group and to create a profile for each group. In an embodiment, classification program 138 uses sentiment analysis, natural language processing (NLP), keyword extraction, and a determination of e-mail attributes (i.e., a number of recipients, a number of attachments, a type of attachment, etc.) to analyze the e-mails in an e-mail group.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TAN, whose telephone number is (571) 272-7433. The examiner can normally be reached M-F 7:30-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.T./
Examiner, Art Unit 2145

/CESAR B PAULA/
Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Nov 24, 2021: Application Filed
Mar 18, 2025: Non-Final Rejection — §103
May 12, 2025: Applicant Interview (Telephonic)
May 12, 2025: Examiner Interview Summary
Jun 18, 2025: Response Filed
Sep 03, 2025: Final Rejection — §103
Dec 04, 2025: Request for Continued Examination
Dec 11, 2025: Response after Non-Final Action
Jan 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443336: INTERACTIVE USER INTERFACE FOR DYNAMICALLY UPDATING DATA AND DATA ANALYSIS AND QUERY PROCESSING (granted Oct 14, 2025; 2y 5m to grant)
Patent 12282863: METHOD AND SYSTEM OF USER IDENTIFICATION BY A SEQUENCE OF OPENED USER INTERFACE WINDOWS (granted Apr 22, 2025; 2y 5m to grant)
Patent 12182378: METHODS AND SYSTEMS FOR OBJECT SELECTION (granted Dec 31, 2024; 2y 5m to grant)
Patent 12111956: Methods and Systems for Access Controlled Spaces for Data Analytics and Visualization (granted Oct 08, 2024; 2y 5m to grant)
Patent 12032809: Computer System and Method for Creating, Assigning, and Interacting with Action Items Related to a Collaborative Task (granted Jul 09, 2024; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 31% (46% with interview, +15.8%)
Median Time to Grant: 4y 1m
PTA Risk: High
Based on 98 resolved cases by this examiner. Grant probability derived from career allow rate.
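The note above says grant probability is derived from the career allow rate. Assuming the interview lift is applied additively (the dashboard's exact formula is not stated), the displayed figures reproduce exactly:

```python
def grant_probability(granted, resolved, interview_lift=0.0):
    # Career allow rate as the base grant probability (source figures:
    # 30 granted of 98 resolved), optionally adjusted by the reported
    # interview lift. Additive adjustment is an assumption.
    return granted / resolved + interview_lift

base = grant_probability(30, 98)            # ≈ 0.306, displayed as 31%
with_iv = grant_probability(30, 98, 0.158)  # ≈ 0.464, displayed as 46%
```

Rounding both values to whole percentages yields the dashboard's 31% and 46%.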
