Prosecution Insights
Last updated: April 19, 2026
Application No. 18/355,654

METHOD, ETC. FOR GENERATING TRAINED MODEL FOR PREDICTING ACTION TO BE SELECTED BY USER

Non-Final OA (§101, §102, §103, §112)
Filed: Jul 20, 2023
Examiner: WILLIAMS, ROSS A
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Cygames Inc.
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 11m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (408 granted / 657 resolved; -7.9% vs TC avg)
Interview Lift: +17.2% in resolved cases with interview (strong)
Avg Prosecution: 3y 11m (typical timeline; 56 currently pending)
Total Applications: 713 (career history, across all art units)

Statute-Specific Performance

§101: 22.2% (-17.8% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 20.2% (-19.8% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 657 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

This subject matter eligibility analysis follows the latest Patent Subject Matter Eligibility Guidance. Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.

Step 1: Claims 1-6 are drawn to a method, claim 7 is drawn to a non-transitory CRM, and claims 8 and 9 are drawn to a system. Thus, initially, under Step 1 of the analysis, it is noted that the claims are directed towards eligible categories of subject matter.

Step 2A, Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Claim 8 is exemplary because it requires substantially the same operative limitations as the remaining claims (reproduced below). The Examiner has underlined the claim limitations which recite the abstract idea, discussed in detail in the paragraphs that follow.

[Claim 8] A system for generating a trained model for predicting an action to be selected by a user in a game that proceeds in accordance with actions selected by the user, while updating game states, wherein: game state text and action text, which are text data expressed in a prescribed format, are generated from data of game states and actions included in history data concerning the game, and training data including pairs of game state text and action text corresponding to pairs of one game state and an action selected in the one game state are generated; and a trained model is generated on the basis of the generated training data.
The claims recite italicized limitations that fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG, namely, Mental Processes and Certain Methods of Organizing Human Activity. More specifically, under this grouping, the italicized limitations represent Mental Processes, such as predicting actions and evaluating possible actions, and Methods of Organizing Human Activity, such as user behavior prediction and/or decision making in games.

Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception? The claims do not recite additional limitations that represent an improvement to the functioning of a computer, or to any other technology or technical field (MPEP 2106.05(a)). Nor do they apply the exception using a particular machine (MPEP 2106.05(b)). Furthermore, they do not effect a transformation (MPEP 2106.05(c)). Rather, these additional limitations amount to an instruction to “apply” the judicial exception using a computer as a tool to perform the abstract idea. Therefore, since the additional limitations, individually or in combination, are indistinguishable from a computer used as a tool to perform the abstract idea, the analysis continues to Step 2B, below.

Step 2B: Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because they amount to conventional and routine computer implementation and mere instructions for implementing the abstract idea on generic computing devices. All claim elements, viewed individually and as a whole, are indistinguishable from conventional computing elements known in the art. Therefore, the claims fail to supply additional elements that yield significantly more than the underlying abstract idea. As the Alice court cautioned, citing Flook, patent eligibility cannot depend simply on the draftsman’s art.
Here, amending the claims with generic computing elements does not (in this Examiner’s opinion) confer eligibility. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Moreover, the claims do not recite improvements to another technology or technical field. Nor do the claims improve the functioning of the underlying computer itself; they merely recite generic computing elements. Furthermore, they do not effect a transformation of a particular article to a different state or thing: the underlying computing elements remain the same.

Concerning preemption, the Federal Circuit said in Ariosa Diagnostics, Inc. v. Sequenom, Inc. (Fed. Cir. June 12, 2015):

The Supreme Court has made clear that the principle of preemption is the basis for the judicial exceptions to patentability. Alice, 134 S. Ct. at 2354 (“We have described the concern that drives this exclusionary principle as one of pre-emption”). For this reason, questions on preemption are inherent in and resolved by the § 101 analysis. The concern is that “patent law not inhibit further discovery by improperly tying up the future use of these building blocks of human ingenuity.” Id. (internal quotations omitted). In other words, patent claims should not prevent the use of the basic building blocks of technology—abstract ideas, naturally occurring phenomena, and natural laws. While preemption may signal patent ineligible subject matter, the absence of complete preemption does not demonstrate patent eligibility.
In this case, Sequenom’s attempt to limit the breadth of the claims by showing alternative uses of cffDNA outside of the scope of the claims does not change the conclusion that the claims are directed to patent ineligible subject matter. Where a patent’s claims are deemed only to disclose patent ineligible subject matter under the Mayo framework, as they are in this case, preemption concerns are fully addressed and made moot. (Emphasis added.)

For these reasons, it appears that the claims are not patent-eligible under 35 USC § 101.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-7 and 9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 6 (and child claim 7) recites the limitation “the trained model” in line 14. There is insufficient antecedent basis for this limitation in the claim. The parent claim 1 recites at least two instances of “a trained model,” and claim 6 fails to differentiate which “trained model” is being referred to.

Claim 9 recites the limitation “the trained model” in line 14. There is insufficient antecedent basis for this limitation in the claim.
The parent claim 8 recites at least two instances of “a trained model,” and claim 9 fails to differentiate which “trained model” is being referred to.

The term “suitable” in claim 5 is a relative term which renders the claim indefinite. The term “suitable” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. What is suitable to one person or system may or may not be suitable to another person or system. The question that can be raised is: what makes text that is expressed by using grammar, syntax, and vocabulary suitable for mechanical conversion? Clarification is needed to fully understand the scope of the claim.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, and 8 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yao et al., “Keep Calm and Explore: Language Models for Action Generation in Text-Based Games,” 2020.
As per claim 1, Yao discloses: a step of generating game state text and action text, which are text data expressed in a prescribed format, from data of game states and actions included in history data concerning the game (Yao discloses the generation of game states and actions expressed in a prescribed format from history concerning a game) (Yao 3.2 and 4.1); and generating training data including pairs of game state text and action text corresponding to pairs of one game state and an action selected in the one game state (Yao discloses the training of “CALM,” wherein the top 30 actions are generated for every unique state of the game, i.e., pairs of actions) (Yao 4.2, “Generating Top Actions”); and a step of generating a trained model on the basis of the generated training data (Yao further discloses the generation of the trained model based upon the CALM-generated top 30 actions for each state) (Yao 5.3 Analysis, 5. CALM (random agent)).

As per claim 2, Yao discloses: wherein the step of generating training data includes generating, as game state text corresponding to one game state, a plurality of items of game state text having different orders of a plurality of text elements included in the game state text, and generating training data including pairs of each of the plurality of items of generated game state text and action text corresponding to an action selected in the one game state (Yao discloses the training of “CALM,” wherein the top 30 actions are generated for every unique state of the game, i.e., pairs of actions) (Yao 4.2, “Generating Top Actions”).

Claim 8 is anticipated by Yao based on the same analysis set forth for claim 1, which is similar in claim scope.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yao et al., “Keep Calm and Explore: Language Models for Action Generation in Text-Based Games,” 2020, in view of Devlin et al., “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” 2019.

As per claim 3, Yao fails to disclose: wherein the step of generating a trained model includes generating a trained model by training a pretrained natural language model with the generated training data, the pretrained natural language model having learned in advance grammatical structures and text-to-text relationships concerning a natural language. However, in a similar field of endeavor, Devlin discloses the generation of a trained model by training a pretrained natural language model that has learned in advance grammatical structures and text-to-text relationships, by utilizing masked LM and next sentence prediction (NSP) (Devlin page 4174, Task #1, Task #2). Devlin further states: “3.2 Fine-tuning BERT. Fine-tuning is straightforward since the self-attention mechanism in the Transformer allows BERT to model many downstream tasks—whether they involve single text or text pairs—by swapping out the appropriate inputs and outputs.
For applications involving text pairs, a common pattern is to independently encode text pairs before applying bidirectional cross attention, such as Parikh et al. (2016); Seo et al. (2017). BERT instead uses the self-attention mechanism to unify these two stages, as encoding a concatenated text pair with self-attention effectively includes bidirectional cross attention between two sentences. For each task, we simply plug in the task-specific inputs and outputs into BERT and fine-tune all the parameters end-to-end. At the input, sentence A and sentence B from pre-training are analogous to (1) sentence pairs in paraphrasing, (2) hypothesis-premise pairs in entailment, (3) question-passage pairs in question answering, and (4) a degenerate text-∅ pair in text classification or sequence tagging. At the output, the token representations are fed into an output layer for token-level tasks, such as sequence tagging or question answering, and the [CLS] representation is fed into an output layer for classification, such as entailment or sentiment analysis.” (Devlin page 4175)

It would have been obvious to one of ordinary skill in the art, at the time of filing, to modify Yao in view of Devlin to utilize a pretrained model that was trained to learn in advance grammar and text-to-text relationships associated with natural language. This would be beneficial, as it would enable the text-based game to more efficiently predict the next most correct action based upon the context of the game state and the natural language used.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Yao et al., “Keep Calm and Explore: Language Models for Action Generation in Text-Based Games,” 2020, in view of Kano et al. (US 2021/0279638).
As per claim 4, Yao discloses: wherein: the step of generating training data includes generating training data including first pairs and second pairs, the first pairs being pairs of game state text and action text corresponding to pairs of one game state and an action selected in the one game state, generated on the basis of data of game states and actions included in the history data, and the second pairs being pairs of the one game state text and action text corresponding to an action… (Yao discloses the generation of training data based upon a plurality of pairs of observable game states and actions; Yao discloses the training of “CALM,” wherein the top 30 actions are generated for every unique state of the game, i.e., pairs of actions) (Yao 4.2, “Generating Top Actions”).

Yao fails to disclose: …that is selected at random from actions selectable by a user and that is not included in the first pairs; and the step of generating a trained model includes generating a trained model by performing training with the first pairs as correct data and performing training with the second pairs as incorrect data.

However, in a similar field of endeavor wherein machine learning models are trained, Kano discloses a system to improve the accuracy of machine learning (Kano 0006). Kano states:

“[0022] In the present exemplary embodiment, a filter model is trained by using correct pairs of text and title, which are used as “positive examples”, and incorrect pairs, which are used as “negative examples”. Negative examples, which are incorrect pairs, are obtained by changing input-output pairs, for example, through random sampling. In the present exemplary embodiment, negative examples are generated by changing input-output pairs. In the present exemplary embodiment, the filter model 22 learns how appropriate pairs of text and summary are.
The difference between the present exemplary embodiment and the related art is that, while a classification model is used to increase the training data in the related art, the negative-example generating unit 30 generates the negative example 32 from the training data 26 in the present exemplary embodiment. As long as combinations between input and output are changed, the generation process performed by the negative-example generating unit 30 may be any process. Pairs of text and summary in the training data 26 may be subjected to random sampling to generate new pairs, thus generating the negative example 32.

[0043] The actual pairs of text and summary in the training data 26 are used as the positive example 28, and the pairs, which are obtained through random sampling, are used as the negative example 32. Thus, the filter model 22 is trained. After training, the filter model 22 makes discrimination again only on the positive example 28 in the training data 26, that is, on the training data 26 itself. A bottom n% of data in descending order of predicted probability is removed from the training data for the summary model 24, that is, the supervised data that is input to the summary model 24.” (Kano 0022, 0042-0043)

It would have been obvious to one of ordinary skill in the art, at the time of filing, to modify Yao in view of Kano to utilize negative sampling of randomly selected incorrect pairs to train the machine learning model. As Kano states, this would be beneficial as it would improve the accuracy of the machine learning model: “To improve the accuracy of machine learning, it is necessary to prepare, in advance, a sufficient amount of supervised data formed of correct input-output pairs (hereinafter referred to as ‘positive examples’).” (Kano 0006)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROSS A WILLIAMS, whose telephone number is (571) 272-5911.
The examiner can normally be reached Mon-Fri, 8am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kang Hu, can be reached at (571) 270-1344. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/RAW/
Examiner, Art Unit 3715
/KANG HU/
Supervisory Patent Examiner, Art Unit 3715
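The training-data scheme at the center of the §102 and §103 rejections (positive pairs of game-state text with the action actually selected in that state, plus, per claim 4 and Kano, negative pairs formed by randomly re-pairing states with unselected actions) can be sketched in a few lines. This is a minimal illustration of the general positive/negative-sampling idea only; the function name, the toy history format, and the labeling convention are assumptions, not the applicant's or the cited references' actual implementations.

```python
import random

def build_training_pairs(history, rng=None):
    """Build (state_text, action_text, label) examples from game history.

    Positive pairs (label 1) join each game state with the action actually
    selected in that state; negative pairs (label 0) re-pair states with
    randomly drawn actions that were NOT selected there, in the spirit of
    Kano-style negative-example generation by random sampling.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility
    all_actions = [action for _, action in history]

    positives = [(state, action, 1) for state, action in history]
    negatives = []
    for state, chosen in history:
        # Randomly sample an action different from the one actually chosen.
        wrong = rng.choice([a for a in all_actions if a != chosen])
        negatives.append((state, wrong, 0))
    return positives + negatives

# Toy history: (game-state text, selected-action text) pairs.
history = [
    ("You are in a dark room. A door is to the north.", "go north"),
    ("A troll blocks the bridge.", "attack troll"),
    ("You see a key on the table.", "take key"),
]
pairs = build_training_pairs(history)
```

With the three-entry toy history this yields six labeled examples, three positive and three negative; in claim 4's terms, the positives play the role of the "first pairs" (correct data) and the negatives the "second pairs" (incorrect data).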

Prosecution Timeline

Jul 20, 2023: Application Filed
Nov 29, 2025: Non-Final Rejection (§101, §102, §103)
Jan 20, 2026: Interview Requested
Feb 11, 2026: Examiner Interview Summary
Feb 11, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12481323: DISPLAY DEVICE (granted Nov 25, 2025; 2y 5m to grant)
Patent 12450978: COIN OPERATED ENTERTAINMENT SYSTEM (granted Oct 21, 2025; 2y 5m to grant)
Patent 12444274: VIRTUAL SPORTS BOOK SYSTEMS AND METHODS (granted Oct 14, 2025; 2y 5m to grant)
Patent 12383836: IMPORTING AGENT PERSONALIZATION DATA TO INSTANTIATE A PERSONALIZED AGENT IN A USER GAME SESSION (granted Aug 12, 2025; 2y 5m to grant)
Patent 12387550: PUSHBUTTON SWITCH, OPERATING UNIT, AND AMUSEMENT MACHINE (granted Aug 12, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview: 79% (+17.2%)
Median Time to Grant: 3y 11m
PTA Risk: Low
Based on 657 resolved cases by this examiner. Grant probability derived from career allow rate.
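The headline figures above appear to combine additively: the with-interview probability is the examiner's 62% career allow rate plus the +17.2 percentage-point interview lift, rounded to the nearest point. A quick check (variable names are illustrative, not from the dashboard's own model):

```python
career_allow_rate = 62.0   # examiner's career allow rate, in %
interview_lift = 17.2      # percentage-point lift from conducting an interview
with_interview = career_allow_rate + interview_lift  # 79.2
print(round(with_interview))  # prints 79
```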
