Prosecution Insights
Last updated: April 19, 2026
Application No. 17/538,271

COMMUNICATION FRAMEWORK FOR AUTOMATED CONTENT GENERATION AND ADAPTIVE DELIVERY

Status: Non-Final OA — §101, §102
Filed: Nov 30, 2021
Examiner: SHAH, PARAS D
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: International Business Machines Corporation
OA Round: 4 (Non-Final)
Grant Probability: 74% — Favorable
Expected OA Rounds: 4-5
Time to Grant: 3y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (474 granted / 645 resolved) — above average, +11.5% vs TC avg
Interview Lift: +31.1% — grant rate among resolved cases with an interview vs. without
Avg Prosecution: 3y 9m typical timeline; 24 applications currently pending
Total Applications: 669 across all art units (career history)
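The headline numbers above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic in Python; note that the with/without-interview split below is a hypothetical placeholder chosen only to be consistent with the reported totals, since the page reports just the resulting lift:

```python
# Figures reported above: 474 granted of 645 resolved cases.
granted, resolved = 474, 645
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 73.5%, displayed as 74%

# Interview lift = grant rate with an interview minus the rate without.
# The 115/116 vs 359/529 split is HYPOTHETICAL; it merely sums to the
# reported totals and approximately reproduces the +31.1% lift shown above.
lift = 115 / 116 - 359 / 529
print(f"Interview lift: {lift:+.1%}")          # about +31.3%
```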

Statute-Specific Performance

§101: 20.3% (-19.7% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
TC average is an estimate • Based on career data from 645 resolved cases
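Because each statute row pairs the examiner's rate with a delta against the Tech Center average, the baseline that the original chart's black line represented can be recovered by subtraction. A quick sketch using only the figures above:

```python
# (examiner rate %, delta vs. TC average %) for each statute, from above.
rows = {"§101": (20.3, -19.7), "§103": (44.9, +4.9),
        "§102": (13.8, -26.2), "§112": (10.5, -29.5)}
for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta  # delta is defined as examiner rate minus TC average
    print(f"{statute}: examiner {rate}% vs. TC average {tc_avg:.1f}%")
```

Every row recovers the same 40.0% baseline, which suggests the page applies a single TC-wide estimate rather than per-statute averages.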

Office Action

Non-Final Rejection — §101, §102 (mailed Feb 17, 2026)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is filed in response to the amendments to the claims filed on 10/31/2025.

Response to Arguments

Applicant's arguments filed 10/31/2025 with regard to the 35 USC 101 rejections have been fully considered but they are not persuasive. The applicant argues that claim 1 as filed/amended overcomes 35 USC 101. The examiner respectfully disagrees.

The applicant argues that "Humans cannot, by observation alone, process synchronized multimedia input data streams from microphones, cameras, and connected presentation systems; cannot execute machine-learning algorithms to formulate predictive models; and cannot automatically update such models based on digitally captured delivery and reaction data. These are inherently technological operations requiring computer processing of data in forms imperceptible to the human mind." The examiner interprets these steps as pre-solution activity used mainly for data retrieval. Humans can capture, by observation, an audience's reaction to a presentation. A human can observe how the audience is reacting, what types of questions are being asked, and how the cameras and microphones work, and can use this information to update the data and presentation styles for further presentations.

The applicant also argues that "The Examiner's assertion that a human 'can determine by observing the discussion/event what the topic/message ... is and what information needs to be presented' improperly oversimplifies the claim by ignoring its explicit requirement that the strategic message model be formulated by machine learning. A human mind cannot execute a supervised or unsupervised learning algorithm, perform vectorization of multimedia content, or update learned parameters from audience-reaction data in real time." However, none of these limitations are recited anywhere in the claims.

Additionally, the applicant argues that "The claim therefore applies any underlying idea in a particular technological context-multimedia data capture and adaptive machine learning-to achieve a tangible technological result: improved delivery of strategic messages through dynamic model retraining." No such retraining method is recited in the claim. As stated above, humans can capture, by observation, an audience's reaction to a presentation and use that information to update the data and presentation styles for further presentations. The claim is being broadly interpreted at a high level of generality. The claim fails to identify any specific way in which the delivery of the strategic message is improved.

Applicant's arguments with respect to claim 1 with regard to the 35 USC 102 rejections have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claim 27 is objected to because of the following informalities: Claim 27 recites "… generating a strategic message instance by use of the strategic message". It appears that claim 27 should recite "… generating a strategic message instance by use of the strategic message model". Appropriate correction is required.
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claims 1 and 27 relate to the statutory category of method/process. Claims 1 and 27 recite "obtaining input data stream capturing a live discussion by one or more strategists on topics of strategic messages to deliver to an audience, the input data stream comprising multimedia data generated by meeting data (input/output) devices located at the live discussion; formulating a strategic message model by machine learning; generating a strategic message instance by use of the strategic message model; monitoring a presentation of the strategic message instance by a presenter by detecting delivery issues; and updating the strategic message model based on a delivery content capturing the presentation and a reaction by the audience".

With regard to Claims 1 and 27:

"obtaining input data stream capturing a live discussion by one or more strategists on topics of strategic messages to deliver to an audience, the input data stream comprising multimedia data generated by meeting data (input/output) devices located at the live discussion" as drafted covers mental activity. A human is present and observing/monitoring a live presentation on a particular topic. The presentation can include PowerPoint slides, audio recordings, etc., delivered to the audience attending the presentation, who are using a computer or other devices to attend and interact with the presentation.

"formulating a strategic message model by machine learning" as drafted covers mental activity. A human can determine, by observing the presentation, what the topic/message of the presentation is and what information needs to be presented to the audience. The additional limitation of using machine learning to formulate a strategic message model does not provide an inventive concept. The strategic message model is described in paragraph [0047] of the as-filed specification as a generic machine learning model. It can be said that a human brain can also be a learning model that can be trained.

"generating a strategic message instance by use of the strategic message model" as drafted covers mental activity. A human can determine what topic/message is being presented and how to present the information.

"monitoring a presentation of the strategic message instance by a presenter by detecting delivery issues" as drafted covers mental activity. A human can, by observing the audience, determine whether the topic/message of the discussion is getting through to the audience based on the feedback and reactions of the audience.

"updating the strategic message model based on a delivery content capturing the presentation and a reaction by the audience" as drafted covers mental activity. A human can, by observing the audience's reaction or feedback, update/change what the topic/message is and/or update/change how the topic/message is delivered.

This judicial exception is not integrated into a practical application. The additional limitation of using machine learning to create the strategic message model does not provide an inventive concept.
The strategic message model is described in paragraph [0047] of the as-filed specification as a generic machine learning model. Accordingly, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using machine learning to create the strategic message model is noted. Mere instructions to apply a generic machine learning model cannot provide an inventive concept. The additional limitations in the claims noted above are directed towards insignificant pre-solution activity. The claims are not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 21, 22, 24, 25, and 27-33 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Seleskerov et al. (US 2022/0138470).

Regarding Claim 1, Seleskerov et al. discloses a computer implemented method comprising: obtaining input data stream capturing a live discussion by one or more strategists on topics of strategic messages to deliver to an audience, the input data stream comprising multimedia data generated by meeting data devices located at the live discussion (FIG. 3 is a diagram showing examples of data exchanged between the presentation and communications platform 110 and the client devices 105a, 105b, 105c, and 105d. As discussed in the preceding examples, the presentation and communications platform 110 may transmit one or more presentation media streams 305 to the each of the client devices 105 over the network 120. The one or more presentation media streams 305 may include one or more audio media streams, one or more video media streams, and/or other media streams) (page 8, paragraph [0064]); formulating a strategic message model by machine learning (The presentation coaching unit 235 may utilize a delivery attributes model 1170 to analyze audio, video, and presentation content with machine learning models trained to identify aspects of the presenter's presentation skills and the presentation content are good and those that may benefit from improvement) (page 4, paragraph [0038]); generating a strategic message instance by use of the strategic message model (The presentation content may include a set of slides, a document, or other content that may be discussed during presentation) (page 8, paragraph [0064]); monitoring a presentation of the strategic message instance by a presenter by detecting delivery issues (The speaker skills feedback unit 1630 may identify issues with the presenter's presentation style, such as the language usage, language patterns, monotone delivery, reading of slide content, emotional state of the presenter, eye contact and/or gaze direction of the presenter, body pose of the presenter, and/or other information about the presenter and/or the participants) (page 6, paragraph [0049]); and updating the strategic message model based on a delivery content capturing the presentation and a reaction by the audience (Returning to FIG. 2, the model updating unit 220 may be configured to update the slide attribute model 1180 and/or the delivery attributes model 1170 based on the participant reaction information determined by the stream processing unit 215. The slide attribute model 1180 and/or the delivery attributes model 1170 may analyze the online presentation, and the presentation designer unit 230 and the presentation coaching unit 235 may use the inferences output by the slide attribute model 1180 and/or the delivery attributes model 1170 to provide feedback to the presenter for improving the online presentation content and/or the presentation skills of the presenter. The model updating unit 220 may utilize the reaction data obtained from the participants of the online presentation to improve the recommendations provided by the slide attribute model 1180 and/or the delivery attributes model 1170) (page 6, paragraph [0052]).

Regarding Claim 21, Seleskerov et al. discloses the method, further comprising: processing the input data stream by preparing manageable data units classified according to media types (The frame and filtering preprocessing unit 410 may be configured to determine whether a particular media stream contains audio, video, or both at a particular time and to process the stream using to convert the media stream into an appropriate format to serve as an input to the machine learning models for analyzing that type of content) (page 9, paragraph [0071]), wherein the processing includes: extracting audio streams from video media (The frame and filter preprocessing unit 410 may be configured to perform feature extraction on the media streams and/or reaction data) (page 8, paragraph [0068]); and parsing video frames prior to formulating the strategic message model (Initially, the frame and filtering preprocessing unit 410 may process the stream vi to generate an input or inputs for models that process features from video content) (page 9, paragraph [0071]).
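Claim 21 as mapped above describes a conventional multimedia preprocessing pipeline: classify incoming units by media type, extract audio tracks from video, and parse video frames before the model is formulated. The following is a minimal sketch of that flow; every class and function name here is a hypothetical illustration, not code from the application or from the Seleskerov reference:

```python
from dataclasses import dataclass, field

@dataclass
class DataUnit:
    """A 'manageable data unit' classified by media type (hypothetical)."""
    media_type: str  # "audio", "video", "frame", or "text"
    payload: bytes
    timestamp: float
    metadata: dict = field(default_factory=dict)

def extract_audio_track(unit: DataUnit) -> DataUnit:
    # Placeholder: a real system would demux the audio from the container.
    return DataUnit("audio", unit.payload, unit.timestamp, {"source": "video"})

def parse_video_frames(unit: DataUnit) -> list[DataUnit]:
    # Placeholder: a real system would decode individual frames here.
    return [DataUnit("frame", unit.payload, unit.timestamp)]

def preprocess(stream: list[DataUnit]) -> list[DataUnit]:
    """Classify units; derive audio and frame units from video units."""
    prepared: list[DataUnit] = []
    for unit in stream:
        if unit.media_type == "video":
            prepared.append(extract_audio_track(unit))
            prepared.extend(parse_video_frames(unit))
        else:
            prepared.append(unit)
    return prepared
```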
Regarding Claim 22, Seleskerov et al. discloses the method, wherein the processing of multimedia data includes: identifying speakers and language within audio media (The language usage detection model 620 may be configured to analyze features extracted from video content of the presenter or a participant to identify language usage of that person and to output high-level features information that represents the language usage) (page 10, paragraph [0080]); transcribing the audio media data (A transcript 715 of the audio portion of the online presentation and/or communication session may be generated by the stream processing unit 215 by analyzing the spoken content provided by the presenter and the participants) (page 11, paragraph [0084]); and synchronizing text media with corresponding video and audio streams based on timestamps (Furthermore, the reactions information generated by the analyzer unit 415 by analyzing the audio content, video content, and/or multi-modal content captured by the client devices 105 of the participants may also include a timestamp indicating when each reaction occurred) (page 5, paragraph [0047]).

Regarding Claim 24, Seleskerov et al. discloses the method, wherein monitoring the presentation comprises: detecting delivery issues based on real-time capture of presentation conditions (For example, a participant may utter the word "what?" or utterance "huh?" during the presentation if they do not understand something that is being presented. The feedback and reporting unit 225 may be configured to maps this reaction to a "confused" reaction that may be sent to the client device 105a of the presenter to help the presenter to gain an understanding that at least some of the participants may be confused by a portion of the presentation) (page 10, paragraph [0080]); and responding in real time by modifying at least one of: a message format, timing, prioritization, or delivery channel (The presentation coaching unit 235 may provide suggestions for alternative language and/or language to be avoided during a presentation) (page 10, paragraph [0080]), wherein the modification is selected based on metadata of the presented content and a delivery profile associated with the session (With respect to the participants, the feedback and reporting unit 225 may be configured to identify certain language usage of a participant as being a reaction that may be sent to the client device 105a of the presenter to help the presenter to gain an understanding of the audience engagement in near real time during the presentation) (page 10, paragraph [0080]).
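Claim 22's final step, synchronizing text media with the corresponding video and audio streams based on timestamps, reduces to aligning each transcript segment with a point on the media timeline. A small illustration under the same hypothetical-structures caveat as the sketch above:

```python
import bisect

def synchronize(transcript: list[tuple[float, str]],
                frame_times: list[float]) -> list[tuple[int, str]]:
    """Attach each (timestamp, text) segment to the last frame at or before it."""
    aligned = []
    for ts, text in transcript:
        i = bisect.bisect_right(frame_times, ts) - 1  # last frame index <= ts
        aligned.append((max(i, 0), text))
    return aligned

print(synchronize([(0.4, "hello"), (2.1, "world")], [0.0, 1.0, 2.0]))
# -> [(0, 'hello'), (2, 'world')]
```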
Regarding Claim 25, Seleskerov et al. discloses the method, wherein the processing comprises: preparing manageable data units classified by media type (The frame and filtering preprocessing unit 410 may be configured to determine whether a particular media stream contains audio, video, or both at a particular time and to process the stream using to convert the media stream into an appropriate format to serve as an input to the machine learning models for analyzing that type of content) (page 9, paragraph [0071]); extracting audio streams (The frame and filter preprocessing unit 410 may be configured to perform feature extraction on the media streams and/or reaction data) (page 8, paragraph [0068]); parsing video frames (Initially, the frame and filtering preprocessing unit 410 may process the stream vi to generate an input or inputs for models that process features from video content) (page 9, paragraph [0071]); identifying speakers and language (The language usage detection model 620 may be configured to analyze features extracted from video content of the presenter or a participant to identify language usage of that person and to output high-level features information that represents the language usage) (page 10, paragraph [0080]); transcribing audio (A transcript 715 of the audio portion of the online presentation and/or communication session may be generated by the stream processing unit 215 by analyzing the spoken content provided by the presenter and the participants) (page 11, paragraph [0084]); synchronizing text media with corresponding streams (Furthermore, the reactions information generated by the analyzer unit 415 by analyzing the audio content, video content, and/or multi-modal content captured by the client devices 105 of the participants may also include a timestamp indicating when each reaction occurred) (page 5, paragraph [0047]); and generating metadata for each data unit (The one or more presentation media streams 305 may include one or more audio media streams, one or more video media streams, and/or other media streams. The one or more presentation media streams may include an audio component of the presentation where the presenter is discussing presentation content being shared with the participants) (page 8, paragraph [0064]); and wherein monitoring includes: detecting delivery issues (For example, a participant may utter the word "what?" or utterance "huh?" during the presentation if they do not understand something that is being presented. The feedback and reporting unit 225 may be configured to maps this reaction to a "confused" reaction that may be sent to the client device 105a of the presenter to help the presenter to gain an understanding that at least some of the participants may be confused by a portion of the presentation) (page 10, paragraph [0080]) and responding in real time by modifying a delivery characteristic selected based on the metadata (The presentation coaching unit 235 may provide suggestions for alternative language and/or language to be avoided during a presentation) (page 10, paragraph [0080]).

Regarding Claim 27, Seleskerov et al. discloses a computer implemented method comprising: obtaining input data stream capturing a live discussion by one or more strategists on topics of strategic messages to deliver to an audience, the input data stream comprising multimedia data generated by meeting data input/output devices located at the live discussion (FIG. 3 is a diagram showing examples of data exchanged between the presentation and communications platform 110 and the client devices 105a, 105b, 105c, and 105d. As discussed in the preceding examples, the presentation and communications platform 110 may transmit one or more presentation media streams 305 to the each of the client devices 105 over the network 120. The one or more presentation media streams 305 may include one or more audio media streams, one or more video media streams, and/or other media streams) (page 8, paragraph [0064]); formulating a strategic message model by machine learning (The presentation coaching unit 235 may utilize a delivery attributes model 1170 to analyze audio, video, and presentation content with machine learning models trained to identify aspects of the presenter's presentation skills and the presentation content are good and those that may benefit from improvement) (page 4, paragraph [0038]); generating a strategic message instance by use of the strategic message (model) (The presentation content may include a set of slides, a document, or other content that may be discussed during presentation) (page 8, paragraph [0064]); monitoring a presentation of the strategic message instance by a presenter by detecting delivery issues (The speaker skills feedback unit 1630 may identify issues with the presenter's presentation style, such as the language usage, language patterns, monotone delivery, reading of slide content, emotional state of the presenter, eye contact and/or gaze direction of the presenter, body pose of the presenter, and/or other information about the presenter and/or the participants) (page 6, paragraph [0049]); and updating the strategic message model based on a delivery content capturing the presentation and a reaction by the audience (Returning to FIG. 2, the model updating unit 220 may be configured to update the slide attribute model 1180 and/or the delivery attributes model 1170 based on the participant reaction information determined by the stream processing unit 215. The slide attribute model 1180 and/or the delivery attributes model 1170 may analyze the online presentation, and the presentation designer unit 230 and the presentation coaching unit 235 may use the inferences output by the slide attribute model 1180 and/or the delivery attributes model 1170 to provide feedback to the presenter for improving the online presentation content and/or the presentation skills of the presenter. The model updating unit 220 may utilize the reaction data obtained from the participants of the online presentation to improve the recommendations provided by the slide attribute model 1180 and/or the delivery attributes model 1170) (page 6, paragraph [0052]).
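Taken together, the five limitations of claims 1 and 27 form a capture → formulate → generate → monitor → update loop. The sketch below shows only that control flow, reusing the hypothetical `preprocess` helper from the earlier sketch; `StrategicMessageModel` and the callables passed in are stand-ins invented for illustration (the specification is said above to describe the model only generically), not an implementation from either document:

```python
class StrategicMessageModel:
    """Hypothetical stand-in; the claims recite only a generic ML model."""
    def fit(self, units): ...                      # "formulating ... by machine learning"
    def generate_instance(self) -> str: return ""  # "generating a ... instance"
    def update(self, delivery, reactions): ...     # "updating the ... model"

def run_session(input_stream, present, observe_audience):
    units = preprocess(input_stream)        # obtain/process the input data stream
    model = StrategicMessageModel()
    model.fit(units)                        # formulate the strategic message model
    instance = model.generate_instance()    # generate a strategic message instance
    delivery = present(instance)            # presenter delivers the instance
    issues, reactions = observe_audience(delivery)  # monitor; detect delivery issues
    model.update(delivery, reactions)       # update on delivery + audience reaction
    return model, issues
```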
Regarding Claim 28, Seleskerov et al. discloses the method, further comprising: processing the input data stream obtained from the meeting data devices to distinguish among different types of content contributed during the live discussion (The presentation and communications platform 110 implements an architecture for efficiently analyzing audio, video, and/or multimodal media streams and/or presentation content) (page 3, paragraph [0034]), and preparing structured representations of such content suitable for use by the machine learning that formulates the strategic message model (A technical benefit of this architecture is the media streams and/or presentation content may be analyzed to extract feature information for processing by the various models, and the high-level feature information output by the models may then be utilized by both the presentation coaching unit 235 and the presentation hosting unit 240) (page 3, paragraph [0035]).

Regarding Claim 29, Seleskerov et al. discloses the method, wherein the method includes processing the multimedia data, wherein the processing the multimedia data includes identifying speakers and linguistic content within audio data (The speaker skills feedback unit 1630 may identify issues with the presenter's presentation style, such as the language usage, language patterns, monotone delivery, reading of slide content, emotional state of the presenter, eye contact and/or gaze direction of the presenter, body pose of the presenter, and/or other information about the presenter and/or the participants) (page 6, paragraph [0049]), converting the identified speech into text (A transcript 715 of the audio portion of the online presentation and/or communication session may be generated by the stream processing unit 215 by analyzing the spoken content provided by the presenter and the participants) (page 11, paragraph [0084]), and providing the resulting text for use by the machine learning that formulates the strategic message model (The models may be configured to receive feature data extracted from the presentation media streams 305, the participant media streams 310, and/or the reactions data 315) (page 9, paragraph [0074]).

Regarding Claim 30, Seleskerov et al. discloses the method, further comprising: processing the multimedia data to detect spoken content and corresponding textual input (The one or more presentation media streams may include an audio component of the presentation where the presenter is discussing presentation content being shared with the participants. The presentation content may include a set of slides, a document, or other content that may be discussed during presentation) (page 8, paragraph [0064]), and generating text based data derived from audio content (A transcript 715 of the audio portion of the online presentation and/or communication session may be generated by the stream processing unit 215 by analyzing the spoken content provided by the presenter and the participants) (page 11, paragraph [0084]) for use in training the strategic message model (The models may be configured to receive feature data extracted from the presentation media streams 305, the participant media streams 310, and/or the reactions data 315) (page 9, paragraph [0074]).
Regarding Claim 31, Seleskerov et al. discloses the method, wherein the multimedia data includes audio and text data, and the processing comprises analyzing the audio and text data to extract linguistic, tonal, and contextual information used by the strategic message model in generating the strategic message instance (The presentation coaching unit 235 may provide feedback critiques on aspects of the presentation skills, such as but not limited to pacing, vocal pattern, volume, whether the presenter is speaking in monotone, and/or language usage) (page 4, paragraph [0038]).

Regarding Claim 32, Seleskerov et al. discloses the method, further comprising: processing the multimedia data including audio and text data (The one or more presentation media streams may include an audio component of the presentation where the presenter is discussing presentation content being shared with the participants. The presentation content may include a set of slides, a document, or other content that may be discussed during presentation) (page 8, paragraph [0064]) to identify patterns, relationships, or context within the captured content (The presentation coaching unit 235 may provide feedback critiques on aspects of the presentation skills, such as but not limited to pacing, vocal pattern, volume, whether the presenter is speaking in monotone, and/or language usage) (page 4, paragraph [0038]), and providing the processed information for application by the machine learning that formulates or updates the strategic message model (The presentation coaching unit 235 may utilize a delivery attributes model 1170 to analyze audio, video, and presentation content with machine learning models trained to identify aspects of the presenter's presentation skills and the presentation content are good and those that may benefit from improvement) (page 4, paragraph [0038]).

Regarding Claim 33, Seleskerov et al. discloses the method, further comprising: processing audio and text portions of the input data stream (The one or more presentation media streams may include an audio component of the presentation where the presenter is discussing presentation content being shared with the participants. The presentation content may include a set of slides, a document, or other content that may be discussed during presentation) (page 8, paragraph [0064]) to derive representative information characterizing the discussion (The presentation coaching unit 235 may provide feedback critiques on aspects of the presentation skills, such as but not limited to pacing, vocal pattern, volume, whether the presenter is speaking in monotone, and/or language usage) (page 4, paragraph [0038]), and using the derived information as part of the training data applied to the strategic message model (The presentation coaching unit 235 may utilize a delivery attributes model 1170 to analyze audio, video, and presentation content with machine learning models trained to identify aspects of the presenter's presentation skills and the presentation content are good and those that may benefit from improvement) (page 4, paragraph [0038]).

Allowable Subject Matter

Claims 15-20 are allowed. The following is a statement of reasons for the indication of allowable subject matter: Claim 15 teaches similar subject matter as the prior art of Seleskerov et al. (US 2022/0138470), Daredia et al. (US 2020/0403817), and Advani et al. (US 2018/0145840).
However, the prior art fails to teach "formulating a strategic message model by machine learning based on the manageable data units produced by the multimedia content processor, the strategic message model comprising topics and corresponding attributes associated with a topic profile, wherein the formulating includes applying cognitive analysis separately to each media type while maintaining synchronization and fidelity normalization across video resolution and audio volume" and "updating the strategic message model based on a delivery content capturing the presentation and a reaction by the audience wherein the updating includes adapting topic profiles based on audience engagement and feedback associated with the metadata and timestamps recorded during the session" as recited in claim 15. Claims 16-20 are allowed for being dependent on an allowable base claim.

Claims 23 and 26 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Cited Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Amiri et al. (US 2021/0400236) discloses aggregating audience member emotes in a large-scale electronic presentation.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SATWANT K SINGH, whose telephone number is (571) 272-7468. The examiner can normally be reached Monday through Friday, 9:00 AM to 6:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D Shah, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SATWANT K SINGH/
Primary Examiner, Art Unit 2653

Prosecution Timeline

Nov 30, 2021
Application Filed
Oct 24, 2023
Response after Non-Final Action
Dec 26, 2024
Non-Final Rejection — §101, §102
Jan 28, 2025
Examiner Interview Summary
Jan 28, 2025
Applicant Interview (Telephonic)
Jan 31, 2025
Response Filed
Apr 04, 2025
Final Rejection — §101, §102
Apr 28, 2025
Applicant Interview (Telephonic)
Apr 28, 2025
Examiner Interview Summary
Apr 30, 2025
Response after Non-Final Action
Jun 30, 2025
Request for Continued Examination
Jul 01, 2025
Response after Non-Final Action
Aug 22, 2025
Non-Final Rejection — §101, §102
Oct 27, 2025
Examiner Interview Summary
Oct 27, 2025
Applicant Interview (Telephonic)
Oct 31, 2025
Response Filed
Feb 17, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586591
SOUND SIGNAL DECODING METHOD, SOUND SIGNAL DECODER, PROGRAM, AND RECORDING MEDIUM
2y 5m to grant • Granted Mar 24, 2026
Patent 12579367
TWO-TOWER NEURAL NETWORK FOR CONTENT-AUDIENCE RELATIONSHIP PREDICTION
2y 5m to grant • Granted Mar 17, 2026
Patent 12579360
LEARNING SUPPORT APPARATUS FOR CREATING MULTIPLE-CHOICE QUIZ
2y 5m to grant • Granted Mar 17, 2026
Patent 12562173
WEARABLE DEVICE CONTROL BASED ON VOICE COMMAND OF VERIFIED USER
2y 5m to grant • Granted Feb 24, 2026
Patent 12559026
VEHICLE AND CONTROL METHOD THEREOF
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 74%
With Interview: 99% (+31.1%)
Median Time to Grant: 3y 9m
PTA Risk: High
Based on 645 resolved cases by this examiner. Grant probability derived from career allow rate.
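The with-interview projection is consistent with adding the reported interview lift to the base allow rate and capping the sum, though the page does not state its formula; the cap below is an assumption made for illustration:

```python
base = 474 / 645  # career allow rate, ~73.5% (displayed as 74% above)
lift = 0.311      # reported interview lift
with_interview = min(base + lift, 0.99)  # ASSUMED cap: 73.5% + 31.1% exceeds 100%
print(f"base {base:.1%}, with interview {with_interview:.0%}")  # -> 73.5%, 99%
```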
