Prosecution Insights
Last updated: April 19, 2026
Application No. 18/140,500

POETRY GENERATION

Final Rejection — §101, §103
Filed: Apr 27, 2023
Examiner: MCCORD, PAUL C
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: BEIJING SOGOU TECHNOLOGY DEVELOPMENT CO., LTD.
OA Round: 2 (Final)
Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 69% (393 granted / 569 resolved; +7.1% vs TC avg, above average)
Interview Lift: +26.6% across resolved cases with interview (a strong lift)
Avg Prosecution: 3y 5m typical timeline; 41 applications currently pending
Total Applications: 610 across all art units

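The headline figures above reduce to simple arithmetic on the career counts. A minimal Python sketch, assuming (as the 96% figure suggests) that the interview lift is applied as additive percentage points to the career allow rate:

granted, resolved = 393, 569                  # career totals shown above
allow_rate = granted / resolved               # 0.6907 -> the 69% career allow rate

tc_delta = 0.071                              # "+7.1% vs TC avg"
tc_avg_estimate = allow_rate - tc_delta       # implied Tech Center average, ~62%

interview_lift = 0.266                        # "+26.6% Interview Lift"
with_interview = allow_rate + interview_lift  # 0.9567 -> shown rounded as 96%

print(f"Career allow rate:  {allow_rate:.1%}")       # 69.1%
print(f"Implied TC average: {tc_avg_estimate:.1%}")  # 62.0%
print(f"With interview:     {with_interview:.1%}")   # 95.7%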

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 569 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 remain rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1 and 11 are directed to a system, method, etc. for generating poetry using an autoregressive attention model. The claims rely on well-understood, routine, and conventional structures such as a processor, memory, and data structures to instruct the system along methods by which previous words, characters, lines, etc. of poetry are generated by application of well-understood, routine, and conventional instructions such as software routines. The claims are considered a manner by which data resolves or generates more or additional data; in this case a data-driven or data-informed poetry candidate is iteratively and auto-regressively assembled or derived from a specific data input in combination with more iteratively arrived-at data. The claims are also considered a stand-in for human behavior, as the claimed steps are substantially similar to the manner in which a human being would arrive at a piece of poetry, such as by iterative consideration of previous words, characters, or lines of the piece. As such, the claims cannot be considered to integrate the judicial exceptions of an abstract idea (such as data per se or programs per se) or the judicial exception of human activity and/or mental processes (such as operations performed in the human mind, human activity, human behavior, etc.), as the claims do not include substantially more than the performance of such exceptions upon a computer claimed at a high level of generality and based on models intended to mimic or replicate human cognitive processes. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the amendments to the independent claims filed 11/10/25 do not include additional elements that are sufficient to amount to significantly more than the judicial exception, as the claimed subject matter is considered well-understood, routine, and conventional as detailed in the art rejection infra. The dependent claims address additional subject matter which does not remedy the deficiency, as the claimed functionality may be seen as a stand-in for human behavior such as a human generating a poem, asking for help in generating a poem, human application of agency in concert with assistive instructions, mathematical concepts, etc. As such, claims 2-10 and 12-20 do not remedy the deficiency and are similarly rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over the Love Letters algorithm of Christopher Strachey (algorithm available at least as early as 1952; documented and adapted in MUC letters by van Kemenade, available at least as early as 2016; copy thereof provided by Examiner; hereinafter LL); further in view of CN105955964: “A method and device for automatically generating poetry” (disclosed by Applicant in the IDS filed 4/28/23; Examiner has provided a machine translation of the patent from the EPO for improved citation in the form of the attached file CN105955964 supplementary translation ENGLISH, hereinafter Wang; to add context, Examiner has additionally provided a WIPO translation of the application in the form of the attached file CN105955964 Method and apparatus for automatically generating poem ENGLISH, hereinafter Wang_2); and further in view of Peng: 20220084510.

Regarding claim 1, LL teaches: A method for poetry generation, the method comprising: receiving generation information provided by a user that includes a word to be included in poetry to be generated, the generation information indicating a theme for the poetry generation and a user-defined position of the word in the poetry defined by the user (LL: § starting “The Manchester University Computer…”: details the manner in which a plurality of sentences are generated using a selection of a particular sentence type, wherein a particular word list is selected from for insertion of the selected word into particular parts of the selected sentence type, and wherein the user includes the word list(s) and other generation information including the position in the sentence in which to include a word selected from a particular list); and determining, by processing circuitry, at least a candidate piece of the poetry corresponding to the generation information according to a language model, the language model being configured to generate elements in the candidate piece of the poetry (LL: generally: the recitation comprises a succinct encapsulation of the Love Letters algorithm), the elements being determined from potential elements in a potential element list (LL: § starting “The Manchester University Computer…”: system selects elements from lists to fill in user-defined positions in one or more sentences), the elements corresponding to a plurality of potential elements, an element in the elements being a character or a word (id.: the lists are populated with potential elements by which the algorithm generatively constructs a plurality of poems, love letters, etc.).

LL lays the groundwork for algorithmic construction of a poem, message, etc. in the manner discussed supra, a well-known manner which has emerged in computing over time as a context-free grammar (please see additionally the Rita.js documentation provided, available at least as early as 1/19/2021; Applicant is invited to further explore the language tools taught and/or made obvious by Rita.js, available at “https://rednoise.org/rita2/”).

LL does not explicitly teach the recited determining, by processing circuitry, at least a candidate piece of the poetry corresponding to the generation information according to an autoregressive language model, the autoregressive language model being configured to generate elements in the candidate piece of the poetry in an autoregressive manner with a plurality of regression rounds, the elements being determined from a list ranked according to attention levels of the potential elements, the elements corresponding to a plurality of top ranking potential elements of the potential elements, an element in the elements being a character or a word, wherein: the autoregressive language model is configured to generate the poetry in a plurality of formats, the autoregressive language model including a plurality of processing layers connected sequentially, a processing layer in the plurality of processing layers is configured to determine the attention levels for the potential elements in the potential element list according to generated elements prior to a current regression round, and predict one or more additional elements for the current regression round using a neural network according to the attention levels.

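For readers unfamiliar with the primary reference: Strachey's Love Letters program is a template-fill generator. It selects a sentence skeleton, then fills each slot with a random pick from a curated word list. A minimal Python sketch of that scheme; the word lists and templates below are illustrative stand-ins, not Strachey's originals:

import random

# Illustrative word lists; Strachey's program drew from similar curated lists.
ADJECTIVES = ["beautiful", "tender", "burning", "sweet"]
NOUNS = ["heart", "desire", "longing", "devotion"]
VERBS = ["cherishes", "treasures", "adores", "craves"]

# Sentence skeletons with slots, standing in for Strachey's sentence types.
TEMPLATES = [
    "My {adj} {noun} {verb} your {adj2} {noun2}.",
    "You are my {adj} {noun}.",
]

def generate_line():
    template = random.choice(TEMPLATES)  # select a sentence type
    return template.format(              # fill each slot from its word list
        adj=random.choice(ADJECTIVES),
        adj2=random.choice(ADJECTIVES),
        noun=random.choice(NOUNS),
        noun2=random.choice(NOUNS),
        verb=random.choice(VERBS),
    )

print("\n".join(generate_line() for _ in range(5)))

The Rita.js tools the Examiner points to generalize this kind of template expansion into full context-free grammars.
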
In a related field of endeavor, Wang teaches: A method for poetry generation, the method comprising: receiving generation information indicative of a theme for the poetry generation (Wang: pp 1, § 1-4; pp 12-14; Fig 3: at step 301 a poem type, genre, keywords, etc. of a poem to be generated is obtained, such as from a user); and determining, by processing circuitry, at least a candidate piece of poetry (Wang: pp 1-4, 12-14; Fig 3: poetry generation by selection of candidate verses is taught as well understood, routine, and conventional, but insufficiently regressive; additionally a candidate piece of poetry can be selected from existing poems and iteratively improved by the user by practice of the disclosed method); the candidate piece of poetry corresponding to the generation information according to a regressive language model, the regressive language model being configured to generate elements in the candidate piece of poetry in a regressive manner with a plurality of regression rounds, the elements being determined from potential elements in a potential element list that are ranked according to attention levels of the potential elements, the elements corresponding to a plurality of top ranking potential elements of the potential elements (Wang: pp 1, 10: system obtains a list of word or character scores, said words and characters ranked based on attention scores for selection, output, etc.), an element in the elements being a character or a word (Wang: pp 1-4, 8-10; Fig 3: the steps of the taught improved method include step c, generating and/or selecting a verse, said verse corresponding to all previously generated verses, that is, a variable in the form of a poem, verse thereof, sentence therein, etc. is predictively determined based on the previous values thereof, and step d, including the generated verse in the set of all generated verses and repeating said steps until a poem is generated), wherein: the regressive language model is configured to generate poetry in a plurality of formats (Wang: pp 6 ¶ 3: user selection of a poem to be generated includes a poem format such as a seven character quatrain, seven character poem, etc.); the regressive language model including a plurality of processing layers connected sequentially (Wang: pp 9-12, 16, etc.; Fig 2: model includes a plurality of neural network layers in bi-directional LSTM(s) used as an encoder and decoder to generate a poem, or lines thereof, such as in an iterative word-wise, character-wise, etc. manner, such as at step S303, etc.); a processing layer in the plurality of processing layers (Wang: pp 10, 16, etc.: an RNN comprising layers operates to iteratively generate verses of poetry, such as using a plurality of module layers, such as plural LSTM modules, etc.) is configured to determine attention levels for potential elements in a potential element list according to generated elements prior to a current regression round (Wang: pp 10, 16, etc.: LSTM comprising layers calculates attention scores for iterative generation of poetry, lines, characters, etc. thereof), and predict one or more additional elements for the current regression round using a neural network according to the attention levels (Wang: pp 10, 16, etc.: each upcoming word is predictively generated based on the attention values determined).

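The attention-ranking limitation mapped onto Wang can be pictured as scoring each potential element against the elements generated so far and ranking the list by that score. A toy Python sketch under loud assumptions: random embeddings and plain dot-product attention stand in for Wang's trained bi-directional LSTM, and the vocabulary is invented:

import numpy as np

rng = np.random.default_rng(0)

# Toy potential element list; in Wang the scores come from a trained model.
vocab = ["moon", "river", "autumn", "wind", "plum", "snow"]
emb = {w: rng.normal(size=8) for w in vocab}

def attention_levels(generated):
    """Score each potential element against the elements generated so far."""
    context = np.mean([emb[w] for w in generated], axis=0)
    scores = np.array([emb[w] @ context for w in vocab])  # dot-product attention
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()                        # softmax over the list

levels = attention_levels(["moon", "river"])
ranked = sorted(zip(vocab, levels), key=lambda p: -p[1])  # rank by attention level
print(ranked[:3])                                         # top-ranking candidates
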
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to adapt the poetry composition templates available to a context-free grammar as taught or suggested by LL by incorporating the teachings of Wang to score, rank, etc. user- or system-proposed words, characters, sentences, etc. for at least the purpose of generating a poem, such as in concert with a context-free grammar and/or in keeping with a user’s desired formatting, lexical choices, sentence construction, etc.; one of ordinary skill in the art would have expected only predictable results therefrom.

LL in view of Wang does not explicitly discuss the taught system and method comprising an auto-regression. In a related field of endeavor, Peng teaches a system and method for generating an output text similar to a corpus, or texts therein, to predict a series of characters, words, tokens, etc. (Peng: Abstract; ¶ 23-25, 32, 51-53, etc.), the system comprising: receiving generation information indicative of a theme for the character, word, token, etc. generation (Peng: ¶ 5, 21-25, 30, 32, etc.: such as by training a model to generate task-specific responses based on training data comprising a task-specific dataset of seed utterances, dialog acts, etc.); and determining, by processing circuitry, at least a candidate character, word, token, etc. corresponding to the generation information (Peng: ¶ 51, 53, 57, 67, etc.: such as by generating multiple task-, domain-, etc. specific candidate utterances based on the training data); according to an autoregressive language model, the autoregressive language model being configured to generate elements in the candidate character, word, token, etc. in an autoregressive manner (Peng: ¶ 49-51, etc.: system operates to generate a next character, word, token, etc. in an autoregressive manner, such as by utilization of an auto-regressive language model comprising layers of multi-head self-attention) with a plurality of regression rounds, an element in the elements being a character or a word (Peng: ¶ 49-51, 53, 57, 67, etc.: such as by iteratively proceeding to generate a next character, word, token, etc. based on the generation of a current and previous character, word, token, etc.), wherein: the autoregressive language model is configured to generate a character, word, token, etc. in a plurality of domains (Peng: ¶ 35, 51, 54, etc.: generation based on a plurality of domain labels), the autoregressive language model including a plurality of processing layers connected sequentially, a processing layer in the plurality of processing layers (Peng: ¶ 18-20, 42, etc.: system operates using a plurality of connected layers) is configured to determine attention levels for potential elements (system determines an upcoming character, word, token, etc. in part based on determining attention-weighted sums of one or more of a character, word, token, etc. of a preceding layer) in a potential element list according to generated elements prior to a current regression round (Peng: ¶ 18-20, 42, etc.: system iteratively determines a set of characters, words, tokens, etc. in part based on iteratively determining an upcoming, next, etc. character, word, or token by determining attention-weighted sums of one or more of a character, word, token, etc. of a preceding layer), and predict one or more additional elements for the current regression round using a neural network according to the attention levels (Peng: ¶ 75-77: system is trained to provide a set of next tokens based on a current session, turn, etc. with the model).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to adapt the LL in view of Wang system and method to utilize a multi-layer, auto-regressive, self-attention transformer model as taught or suggested by Peng for at least the purpose of operating the LSTM of Wang in an autoregressive manner, such as by using a GPT-type architecture; one of ordinary skill in the art would have expected only predictable results therefrom without undue experimentation thereon.

Regarding claim 2, LL in view of Wang in view of Peng teaches or suggests: The method according to claim 1, wherein the generation information comprises: a beginning element for the poetry generation; and/or the theme for the poetry generation (Wang: pp 1, § 1-4; pp 12-14; Fig 3: at step 301 a poem type, genre, keywords, etc. comprises a theme of a poem). The claim is considered obvious over LL as modified by Wang and Peng, as addressed in the base claim, as it would have been obvious to apply the further teaching of LL, Wang, and/or Peng to the modified device of LL, Wang, and Peng; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 3, LL in view of Wang in view of Peng teaches or suggests: The method according to claim 1, wherein the autoregressive language model is trained (Peng: ¶ 51, 52: autoregressive model trained to generate responses using a corpus) based on a poetry corpus (Wang: pp 2: training samples comprising poems with respect to genre used to train an attention-based poetry model for poem generation), the poetry corpus comprises: one or more sample pieces of poetry; and/or one or more sample pieces of poetry with respective themes (Wang: pp 2: training samples comprise poems of known genre; genre is considered thematic as it denotes specific and distinctive qualities, characteristics, etc. of the poems therein) (Peng: ¶ 51, 53, 57, 67, etc.: model trained on domain-specific databases; a domain is similarly considered a theme as it bears particular characteristics). The claim is considered obvious over LL as modified by Wang and Peng, as addressed in the base claim, as it would have been obvious to apply the further teaching of LL, Wang, and/or Peng to the modified device of LL, Wang, and Peng; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 4, LL in view of Wang in view of Peng teaches or suggests: The method according to claim 1, wherein the autoregressive language model is configured to generate a plurality of candidate pieces of poetry conforming to the plurality of formats (Wang: pp 1, § 1-4; pp 12-14; Fig 3: dependent on user selection of a poem type, the system recursively generates one or more lines for one or more poems) (Peng: ¶ 51, 52: an autoregressive model trained to generate responses). The claim is considered obvious over LL as modified by Wang and Peng, as addressed in the base claim, as it would have been obvious to apply the further teaching of LL, Wang, and/or Peng to the modified device of LL, Wang, and Peng; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 5, LL in view of Wang in view of Peng teaches or suggests: The method according to claim 1, further comprising: providing at least a first option of a format parameter and a second option of the format parameter (Wang: p 6: a poem format determined from a plurality of first, second, etc. options in the form of genre, such as a five character quatrain, seven character quatrain, etc.); determining a target format according to a selection from at least the first option and the second option by a user (Wang: pp 1, § 1-4; pp 12-14; Fig 3: at step 301 a poem type, genre, keywords, etc. of a poem to be generated is obtained, such as from a user); and determining the at least the candidate piece of poetry in the target format based on the autoregressive language model (Wang: pp 1, § 1-4; pp 12-14; Fig 3: dependent on user selection of a poem type, the system recursively generates one or more lines for one or more poems) (Peng: ¶ 51, 52: an autoregressive model trained to generate responses). The claim is considered obvious over LL as modified by Wang and Peng, as addressed in the base claim, as it would have been obvious to apply the further teaching of LL, Wang, and/or Peng to the modified device of LL, Wang, and Peng; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 6, LL in view of Wang in view of Peng teaches or suggests: The method according to claim 1, wherein the determining the at least the candidate piece of poetry further comprises: determining a first candidate piece of poetry and a second candidate piece of poetry corresponding to the generation information according to the autoregressive language model, the first candidate piece of poetry being in a first format and the second candidate piece being in a second format (Wang: pp 1, 4, 12-14: system functions iteratively; that is, a user may select a first poem to be generated, such as in a particular first format, and a second poem to be generated, such as in a second format, and candidates generated thereby may be saved, transcribed, etc., such as by the system, a user thereof, etc.). The claim is considered obvious over LL as modified by Wang and Peng, as addressed in the base claim, as it would have been obvious to apply the further teaching of LL, Wang, and/or Peng to the modified device of LL, Wang, and Peng; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 7, LL in view of Wang in view of Peng teaches or suggests: The method according to claim 1, wherein the determining the at least the candidate piece of poetry comprises: determining first inputs for the current regression round according to the generated elements prior to the current regression round; and inputting the first inputs into the autoregressive language model to obtain a first prediction result of the current regression round from the autoregressive language model (Wang: pp 1-4, 8-10; Fig 3: the steps of the taught improved method include step c, generating and/or selecting a verse, said verse corresponding to all previously generated verses, that is, a variable in the form of a poem, verse thereof, sentence therein, etc. is predictively determined based on the previous values thereof, and step d, including the generated verse in the set of all generated verses and repeating said steps until a poem is generated; additionally the generated poem may be based off of poems, lines, characters, etc. thereof in the training corpora).

The claim is considered obvious over LL as modified by Wang and Peng, as addressed in the base claim, as it would have been obvious to apply the further teaching of LL, Wang, and/or Peng to the modified device of LL, Wang, and Peng; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 8, LL in view of Wang in view of Peng teaches or suggests: The method according to claim 7, wherein the determining the at least the candidate piece of poetry further comprises: adding the first prediction result of the current regression round with the first inputs to obtain second inputs for a next regression round after the current regression round; and inputting the second inputs into the autoregressive language model to obtain a second prediction result of the next regression round from the autoregressive language model (Wang: pp 14, 15; Fig 3: such as by iteratively progressing through steps S303-S305). The claim is considered obvious over LL as modified by Wang and Peng, as addressed in the base claim, as it would have been obvious to apply the further teaching of LL, Wang, and/or Peng to the modified device of LL, Wang, and Peng; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 9, LL in view of Wang in view of Peng teaches or suggests: The method according to claim 7, wherein the first prediction result comprises at least one predicted element with an attention level satisfying a preset condition. Examiner has taken official notice, which Applicant has failed to timely and specifically traverse and which is thus accepted as Admitted Prior Art (APA; please see MPEP 2144.03), that determining particular prediction results based on an attention value threshold would have comprised an obvious inclusion for at least the purpose of generating a more contextually, semantically, harmoniously, etc. appropriate result; one of ordinary skill in the art would have expected only predictable results therefrom. The claim is thus considered obvious over LL as modified by Wang and Peng, as addressed in the base claim, as it would have been obvious to apply the further teaching of LL, Wang, and/or Peng to the modified device of LL, Wang, and Peng; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 10, LL in view of Wang in view of Peng teaches or suggests: The method according to claim 7, further comprising: ranking the potential elements in an order according to the attention levels; and selecting one or more top ranking potential elements according to the order as the first prediction result. Examiner has taken official notice, which Applicant has failed to timely and specifically traverse and which is thus accepted as Admitted Prior Art (APA; please see MPEP 2144.03), that determining particular prediction results based on ranking and/or re-ranking a list of results, candidates, etc. based on an attention value would have comprised an obvious inclusion for at least the purpose of generating a more contextually, semantically, harmoniously, etc. appropriate result; one of ordinary skill in the art would have expected only predictable results therefrom. The claim is thus considered obvious over LL as modified by Wang and Peng, as addressed in the base claim, as it would have been obvious to apply the further teaching of LL, Wang, and/or Peng to the modified device of LL, Wang, and Peng; one of ordinary skill in the art would have expected only predictable results therefrom.

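Taken together, claims 7-10 recite a single decoding loop: form inputs from everything generated so far, score the potential elements by attention level, keep a top-ranking prediction, and fold it into the inputs for the next round. A hedged Python sketch of that loop; model.attention_levels is a hypothetical scorer returning a mapping of element to attention level and is not drawn from any cited reference:

import random

def generate_poem(model, seed, rounds=20, top_k=3):
    elements = list(seed)                          # first inputs: elements so far (claim 7)
    for _ in range(rounds):                        # one pass per regression round
        levels = model.attention_levels(elements)  # hypothetical scorer (not from the references)
        ranked = sorted(levels, key=levels.get, reverse=True)  # rank by attention level (claim 10)
        prediction = random.choice(ranked[:top_k])             # keep a top-ranking element
        elements.append(prediction)                # second inputs for the next round (claim 8)
    return elements
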
Regarding claim 11 – the claim is considered to recite substantially similar subject matter to that of claim 1 supra and is similarly rejected.
Regarding claim 12 – the claim is considered to recite substantially similar subject matter to that of claim 2 supra and is similarly rejected.
Regarding claim 13 – the claim is considered to recite substantially similar subject matter to that of claim 3 supra and is similarly rejected.
Regarding claim 14 – the claim is considered to recite substantially similar subject matter to that of claim 4 supra and is similarly rejected.
Regarding claim 15 – the claim is considered to recite substantially similar subject matter to that of claim 5 supra and is similarly rejected.
Regarding claim 16 – the claim is considered to recite substantially similar subject matter to that of claim 6 supra and is similarly rejected.
Regarding claim 17 – the claim is considered to recite substantially similar subject matter to that of claim 7 supra and is similarly rejected.
Regarding claim 18 – the claim is considered to recite substantially similar subject matter to that of claim 8 supra and is similarly rejected.
Regarding claim 19 – the claim is considered to recite substantially similar subject matter to that of claim 9 supra and is similarly rejected.
Regarding claim 20 – the claim is considered to recite substantially similar subject matter to that of claim 10 supra and is similarly rejected.

Response to Arguments

Applicant’s arguments in concert with amendments to the claims (see Remarks and Claims, filed 11/10/25) with respect to the rejection(s) of claim(s) 1-20 under 35 USC 103 over Wang and Peng have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of the Love Letters algorithm, Wang, and Peng.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL C MCCORD, whose telephone number is (571) 270-3701. The examiner can normally be reached 7:30-6:30 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, CAROLYN EDWARDS, can be reached at (571) 270-7136.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAUL C MCCORD/
Primary Examiner, Art Unit 2692

Prosecution Timeline

Apr 27, 2023: Application Filed
Sep 04, 2025: Non-Final Rejection — §101, §103
Oct 23, 2025: Applicant Interview (Telephonic)
Oct 23, 2025: Examiner Interview Summary
Nov 10, 2025: Response Filed
Dec 31, 2025: Final Rejection — §101, §103
Feb 02, 2026: Examiner Interview Summary
Feb 02, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603094
ADAPTIVE PROCESSING WITH MULTIPLE MEDIA PROCESSING NODES
2y 5m to grant • Granted Apr 14, 2026
Patent 12592238
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM
2y 5m to grant • Granted Mar 31, 2026
Patent 12593192
MEDIA PLAYBACK BASED ON SENSOR DATA
2y 5m to grant • Granted Mar 31, 2026
Patent 12572323
DYNAMIC AUDIO CONTENT GENERATION
2y 5m to grant • Granted Mar 10, 2026
Patent 12567003
TECHNOLOGIES FOR DECENTRALIZED FLEET ANALYTICS
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview (+26.6%): 96%
Median Time to Grant: 3y 5m
PTA Risk: Moderate
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
