Prosecution Insights
Last updated: April 19, 2026
Application No. 18/480,469

REDUCING LATENCY IN GAME CHAT BY PREDICTING SENTENCE PARTS TO INPUT TO ML MODEL USING DIVISION OF CHAT BETWEEN IN-GAME AND SOCIAL

Final Rejection — §103
Filed: Oct 03, 2023
Examiner: LIM, SENG HENG
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Sony Interactive Entertainment Inc.
OA Round: 2 (Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 66% — above average (627 granted / 949 resolved; -3.9% vs TC avg)
Interview Lift: +28.7% — strong lift among resolved cases with interview
Typical Timeline: 3y 0m avg prosecution; 51 currently pending
Career History: 1000 total applications across all art units

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§103: 39.0% (-1.0% vs TC avg)
§102: 27.2% (-12.8% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 949 resolved cases
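The headline figures in the examiner panel follow directly from the raw counts shown above. A minimal sketch of that arithmetic (using only the 627 granted / 949 resolved counts from this page):

```python
# Reproduce the dashboard's headline allow rate from its own raw counts.
granted, resolved = 627, 949

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 66.1%, displayed rounded as 66%
```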

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Arguments

Applicant's arguments filed 11/6/25 have been fully considered but they are not persuasive. Applicant argues that none of the references discloses the contextual information comprising motion vectors. Examiner respectfully disagrees. Please review the updated rejection below addressing the newly added limitation.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5-7, 9, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0139383 A1 to NVIDIA CORPORATION (hereinafter "NVIDIA"), in view of US 2020/0012718 A1 to INTERNATIONAL BUSINESS MACHINES CORPORATION (hereinafter "IBM'718").
As per claim 1, NVIDIA discloses an apparatus comprising: at least one processor assembly configured to (at least one processor executing instructions; Paragraph [0022]): determine contextual information related to a computer game while the computer game is being played by a first computer gamer and a second computer gamer, the contextual information comprising motion vectors (i.e. processes data streams from game streaming applications involving multiple participants/players; determines context from ongoing gameplay via audio transcripts, video analysis (e.g., computer vision on frames), and metadata; applicable to multiplayer game streams where chat occurs between gamers. Contextual determination includes video stream analysis via computer vision, which processes frame data to detect gestures or actions; motion in video inherently involves vector-based representations in computer vision algorithms for tracking movement or state changes in gameplay; Paragraphs [0024]-[0026], [0032]-[0034], [0042], [0044]-[0045], [0057]); based on the contextual information, determine whether chat that is input by the first computer gamer to the second computer gamer is related to the computer game or is not related to the computer game (determine whether chat that is input by a first user (first computer gamer) to a second user (second computer gamer) is related to a topic of discussion or content (computer game) being streamed/played by the users or is irrelevant to the topic/content; Figure 2, Paragraphs [0024, 0030, 0038]); responsive to determining that the chat is related to the computer game, select a first machine learning "ML" model (responsive to determining that the chat is related to a topic, select a topic neural network (first machine learning ML model); Paragraphs [0015, 0026-0027, 0038]); and determine that the chat is not related to the computer game (determining that the chat comments are irrelevant to a topic; Paragraphs [0015, 0026-0027, 0031-0032, 0038]).
NVIDIA fails to disclose select a first machine learning "ML" model to predict completion of a sentence input by the first computer gamer; and responsive to determining that the chat is not related to the computer game, select a second ML model to predict completion of the sentence. IBM'718 discloses select a first machine learning "ML" model to predict completion of a sentence input by the first computer gamer (select a relation-based template autocompletion model (first machine learning ML model) to predict the desired word (completion) of a sentence input by the first user; Figure 4, Paragraphs [0028, 0037-0038, 0041]); and responsive to determining that the chat is not related to the computer game, select a second ML model to predict completion of the sentence (responsive to a different determined relationship, select a different relation-based template autocompletion model (second ML model) from multiple models to predict the desired word of the sentence; Paragraphs [0027-0028, 0038, 0041, 0054]).

It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the method/system of NVIDIA to include select a first machine learning "ML" model to predict completion of a sentence input by the first computer gamer; responsive to determining that the chat is not related to the computer game, select a second ML model to predict completion of the sentence, as taught by IBM'718, in order to improve the user experience by predicting words faster based on the user and reduce the amount of typing for the user.

As per claim 5, NVIDIA in view of IBM'718 discloses the apparatus of Claim 1. NVIDIA discloses wherein the first ML model is trained on data from plural players of the computer game (wherein the first topic neural network is trained on data from comments or chats between the participants on a topic of discussion or content being streamed/played by the users; Paragraphs [0028, 0038]).
As per claim 6, NVIDIA in view of IBM'718 discloses the apparatus of Claim 1. NVIDIA discloses wherein the second ML model is trained on chat data from plural computer games (wherein a topic neural network of the plurality of topic neural networks is trained on different topics/contexts or may be associated with similar topics with different scopes; Paragraphs [0016, 0027-0028]).

As per claim 7, NVIDIA in view of IBM'718 discloses the apparatus of Claim 1. NVIDIA discloses wherein the processor assembly is configured to execute a third ML model to determine whether the chat is related to the computer game or is not related to the computer game (wherein the processor is configured to select a neural network of a plurality of topic neural networks (third ML model) to determine whether the comment is of relevance to the topic of discussion; Figure 2, Paragraphs [0022, 0024, 0027, 0029-0030, 0038]).

As per claim 9, NVIDIA discloses an apparatus comprising: at least one computer medium that is not a transitory signal and that comprises instructions executable by at least one processor assembly to (at least one computer-storage media that includes both volatile and nonvolatile media that comprises computer readable instructions by at least one processor; Paragraphs [0022, 0061]): determine contextual information related to a computer game while the computer game is being played by a first computer gamer and a second computer gamer, the contextual information comprising motion vectors (i.e. processes data streams from game streaming applications involving multiple participants/players; determines context from ongoing gameplay via audio transcripts, video analysis (e.g., computer vision on frames), and metadata; applicable to multiplayer game streams where chat occurs between gamers.
Contextual determination includes video stream analysis via computer vision, which processes frame data to detect gestures or actions; motion in video inherently involves vector-based representations in computer vision algorithms for tracking movement or state changes in gameplay; Paragraphs [0024]-[0026], [0032]-[0034], [0042], [0044]-[0045], [0057]); execute a first machine learning "ML" model, using the contextual information as input (employ machine learning (first machine learning ML model); Paragraph [0015]); based on first output from the first ML model, select a second ML model to process chat related to at least one computer game (responsive to determining that the chat is related to a topic, select a topic neural network (first machine learning ML model); Paragraphs [0015-0017, 0026-0027, 0038]).

NVIDIA fails to disclose based on second output from the first ML model, select a third ML model to process chat not related to the computer game. IBM'718 discloses based on second output from the first ML model, select a third ML model to process chat not related to the computer game (responsive to a different determined relationship, select a different relation-based template autocompletion model (second ML model) from multiple models to predict the desired word of the sentence; Paragraphs [0027-0028, 0038, 0041, 0054]).

It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the apparatus of NVIDIA to include based on second output from the first ML model, select a third ML model to process chat not related to the computer game as taught by IBM'718, in order to improve the user experience by predicting words faster based on the user and reduce the amount of typing for the user.

As per claim 13, NVIDIA in view of IBM'718 discloses the apparatus of Claim 9.
NVIDIA discloses wherein the second ML model is trained on data from plural players of the computer game (wherein a topic neural network of a plurality of topic neural networks (second ML model) is trained on data from comments or chats between the participants on a topic of discussion or content being streamed/played by the users; Paragraphs [0028, 0038]).

As per claim 14, NVIDIA in view of IBM'718 discloses the apparatus of Claim 9. NVIDIA discloses wherein the third ML model is trained on chat data from plural computer games (wherein a topic neural network of the plurality of topic neural networks is trained on different topics/contexts or may be associated with similar topics with different scopes; Paragraphs [0016, 0027-0028]).

Claims 2, 3, 10, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over NVIDIA in view of IBM'718, and further in view of US 2022/0147983 A1 to CITIBANK, N.A. (hereinafter "CITIBANK").

As per claim 2, NVIDIA in view of IBM'718 discloses the apparatus of Claim 1. NVIDIA discloses the first and second ML models (a plurality of topic neural networks; Paragraphs [0028, 0038]). NVIDIA fails to disclose wherein the first and second ML models reduce latency. CITIBANK discloses wherein the model reduces latency (wherein the standard data model reduces data latency; Paragraph [0090]). It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the apparatus of NVIDIA to include wherein the model reduces latency as taught by CITIBANK, in order to increase operational efficiency by removing duplicative processes to reduce processing time.

As per claim 3, NVIDIA in view of IBM'718 discloses the apparatus of Claim 1 and the method of claim 16, respectively. NVIDIA discloses the first and second ML models (a plurality of topic neural networks; Paragraphs [0028, 0038]). NVIDIA fails to disclose wherein the first and second ML models reduce latency in translating the chat.
CITIBANK discloses wherein the model reduces latency in translating the chat (wherein the standard data model reduces data latency in translating messages (chat); Paragraph [0090]). It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the apparatus of NVIDIA to include wherein the model reduces latency in translating the chat as taught by CITIBANK, in order to increase operational efficiency by removing duplicative processes to reduce processing time.

As per claim 10, NVIDIA in view of IBM'718 discloses the apparatus of Claim 9. NVIDIA discloses the third and second ML models (a plurality of topic neural networks; Paragraphs [0028, 0038]). NVIDIA fails to disclose wherein the third and second ML models reduce latency. CITIBANK discloses wherein the model reduces latency (wherein the standard data model reduces data latency; Paragraph [0090]). It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the apparatus of NVIDIA to include wherein the model reduces latency as taught by CITIBANK, in order to increase operational efficiency by removing duplicative processes to reduce processing time.

As per claim 11, NVIDIA in view of IBM'718 discloses the apparatus of Claim 9. NVIDIA discloses the third and second ML models (a plurality of topic neural networks; Paragraphs [0028, 0038]). NVIDIA fails to disclose wherein the model reduces latency in translating the chat. CITIBANK discloses wherein the model reduces latency in translating the chat (wherein the standard data model reduces data latency in translating messages; Paragraph [0090]). It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the apparatus of NVIDIA to include wherein the model reduces latency in translating the chat as taught by CITIBANK, in order to increase operational efficiency by removing duplicative processes to reduce processing time.
Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over NVIDIA in view of IBM'718, and further in view of US 2022/0092272 A1 to INTERNATIONAL BUSINESS MACHINES CORPORATION (hereinafter "IBM'272").

As per claims 4 and 12, NVIDIA in view of IBM'718 discloses the apparatus of Claim 1 and the apparatus of claim 9, respectively. NVIDIA fails to disclose wherein the processor assembly is configured to translate the chat from a first human language to a second human language. IBM'272 discloses wherein the processor assembly is configured to translate the chat from a first human language to a second human language (wherein the processor is configured to translate the native language (human language) of a message to a target language (second human language); Paragraphs [0020, 0040]). It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the apparatus of NVIDIA in view of IBM'718 to include wherein the processor assembly is configured to translate the chat from a first human language to a second human language as taught by IBM'272, in order to improve user experience by allowing communication between different native languages to be translated faster.

Claims 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over NVIDIA in view of IBM'718, and further in view of US 2022/0044676 A1 to BANK OF AMERICA CORPORATION (hereinafter "BOA").

As per claim 8, NVIDIA in view of IBM'718 discloses the apparatus of Claim 7. NVIDIA fails to disclose wherein the third ML model is trained on data comprising chat data and at least one of: voice intonation data, and facial expression data.
BOA discloses wherein the third ML model is trained on data comprising chat data and at least one of: voice intonation data, and facial expression data (wherein the machine learning algorithm of the multiple machine learning algorithms (third ML model) is trained on conversational input (chat data) and at least one of: intonation in the audio or audiovisual (voice intonation data), and facial expressions of the user (facial expression data); Paragraphs [0031, 0034, 0035]). It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the apparatus of NVIDIA in view of IBM'718 to include wherein the third ML model is trained on data comprising chat data and at least one of: voice intonation data, and facial expression data as taught by BOA, in order to improve the accuracy of the algorithm by using conversational inputs for analysis and determining the user intent over time.

As per claim 15, NVIDIA in view of IBM'718 discloses the apparatus of Claim 9. NVIDIA fails to disclose wherein the first ML model is trained on data comprising chat data and at least one of: voice intonation data, and facial expression data. BOA discloses wherein the first ML model is trained on data comprising chat data and at least one of: voice intonation data, and facial expression data (wherein the machine learning algorithm is trained on conversational input and at least one of: intonation in the audio or audiovisual, and facial expressions of the user; Paragraphs [0031, 0034, 0035]). It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the apparatus of NVIDIA in view of IBM'718 to include wherein the first ML model is trained on data comprising chat data and at least one of: voice intonation data, and facial expression data as taught by BOA, in order to improve the accuracy of the algorithm by using conversational inputs for analysis and determining the user intent over time.
Claims 16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over NVIDIA in view of IBM'718 and IBM'272.

As per claim 16, NVIDIA discloses a method, comprising: determine contextual information related to a computer game while the computer game is being played by a first computer gamer and a second computer gamer, the contextual information comprising motion vectors (i.e. processes data streams from game streaming applications involving multiple participants/players; determines context from ongoing gameplay via audio transcripts, video analysis (e.g., computer vision on frames), and metadata; applicable to multiplayer game streams where chat occurs between gamers. Contextual determination includes video stream analysis via computer vision, which processes frame data to detect gestures or actions; motion in video inherently involves vector-based representations in computer vision algorithms for tracking movement or state changes in gameplay; Paragraphs [0024]-[0026], [0032]-[0034], [0042], [0044]-[0045], [0057]); based on the contextual information, determine whether chat that is input by the first computer gamer in a first language to the second computer gamer is related to the computer game or is not related to the computer game (Figure 2, Paragraphs [0024, 0030, 0038]); and using a first machine learning "ML" model responsive to the chat being related to a computer game being played by the gamers (responsive to determining that the chat is related to a topic, select a topic neural network (first machine learning ML model); Paragraphs [0015, 0026-0027, 0038]).

NVIDIA fails to disclose completing a sentence of chat input by a first computer gamer in a first language to translate the chat for a second computer gamer in a second language, completing the sentence of chat input by the first computer gamer in the first language to translate the chat for the second computer gamer in the second language using a second ML model responsive to the chat not being
related to the computer game being played by the gamers. IBM'718 discloses completing a sentence of chat input by a first computer gamer (select a relation-based template autocompletion model (first machine learning ML model) to predict the desired word (completion) of a sentence input by the first user; Figure 4, Paragraphs [0028, 0037-0038, 0041]) and completing the sentence of chat input by the first computer gamer (select a relation-based template autocompletion model (first machine learning ML model) to predict the desired word (completion) of a sentence input by the first user; Figure 4, Paragraphs [0028, 0037-0038, 0041]) using a second ML model responsive to the chat not being related to the computer game being played by the gamers (responsive to a different determined relationship, select a different relation-based template autocompletion model (second ML model) from multiple models to predict the desired word of the sentence; Paragraphs [0027-0028, 0038, 0041, 0054]).

It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the method of NVIDIA to include select a first machine learning "ML" model to predict completion of a sentence input by the first computer gamer; and select a second ML model to predict completion of the sentence as taught by IBM'718, in order to improve the user experience by predicting words faster based on the user and reduce the amount of typing for the user.

IBM'272 discloses chat input of a first computer gamer in a first language to translate the chat for a second computer gamer in a second language (wherein the processor is configured to translate the native language (human language) of a message to a second user target language (second human language); Paragraphs [0020, 0040]) and
chat input by the first computer gamer in the first language to translate the chat for the second computer gamer in the second language (wherein the processor is configured to translate the native language (human language) of a message to a second user target language (second human language); Paragraphs [0020, 0040]). It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the method of NVIDIA in view of IBM'718 to include wherein the processor assembly is configured to translate the chat from a first human language to a second human language as taught by IBM'272, in order to improve user experience by allowing communication between different native languages to be translated faster.

As per claim 18, NVIDIA in view of IBM'718 and IBM'272 discloses the method of claim 16. NVIDIA fails to disclose wherein the processor assembly is configured to translate the chat from a first human language to a second human language. IBM'272 discloses wherein the processor assembly is configured to translate the chat from a first human language to a second human language (wherein the processor is configured to translate the native language (human language) of a message to a target language (second human language); Paragraphs [0020, 0040]). It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the method of NVIDIA in view of IBM'718 to include wherein the processor assembly is configured to translate the chat from a first human language to a second human language as taught by IBM'272, in order to improve user experience by allowing communication between different native languages to be translated faster.

As per claim 19, NVIDIA in view of IBM'718 and IBM'272 discloses the method of claim 16.
NVIDIA discloses wherein the first ML model is trained on data from plural players of the computer game (wherein the first topic neural network is trained on data from comments or chats between the participants on a topic of discussion or content being streamed/played by the users; Paragraphs [0028, 0038]).

As per claim 20, NVIDIA in view of IBM'718 and IBM'272 discloses the apparatus of Claim 1. NVIDIA discloses wherein the second ML model is trained on chat data from plural computer games (wherein a topic neural network of the plurality of topic neural networks is trained on different topics/contexts or may be associated with similar topics with different scopes; Paragraphs [0016, 0027-0028]).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over NVIDIA in view of IBM'718 and IBM'272, and further in view of CITIBANK.

As per claim 17, NVIDIA in view of IBM'718 and IBM'272 discloses the method of claim 16. NVIDIA discloses the first and second ML models (a plurality of topic neural networks; Paragraphs [0028, 0038]). NVIDIA fails to disclose wherein the first and second ML models reduce latency in translating the chat. CITIBANK discloses wherein the model reduces latency in translating the chat (wherein the standard data model reduces data latency in translating messages; Paragraph [0090]). It would have been obvious to one of ordinary skill in the art, prior to the relevant date, to modify the method of NVIDIA to include wherein the model reduces latency in translating the chat as taught by CITIBANK, in order to increase operational efficiency by removing duplicative processes to reduce processing time.

Filing of New or Amended Claims

The examiner has the initial burden of presenting evidence or reasoning to explain why persons skilled in the art would not recognize in the original disclosure a description of the invention defined by the claims.
See Wertheim, 541 F.2d at 263, 191 USPQ at 97 ("[T]he PTO has the initial burden of presenting evidence or reasons why persons skilled in the art would not recognize in the disclosure a description of the invention defined by the claims."). However, when filing an amendment an applicant should show support in the original disclosure for new or amended claims. See MPEP § 714.02 and § 2163.06 ("Applicant should specifically point out the support for any amendments made to the disclosure."). Please see MPEP 2163 (II) 3. (b).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Correspondence

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SENG H LIM whose telephone number is (571) 270-3301. The examiner can normally be reached Monday-Friday (9-5). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David L. Lewis, can be reached at (571) 272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Seng H Lim/
Primary Examiner, Art Unit 3715
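Setting the legal dispute aside, the claim-1 logic the rejection maps across NVIDIA and IBM'718 is a simple routing algorithm: classify a chat message as game-related or not, then send the partial sentence to a context-specific completion model. The sketch below illustrates that routing only; every name, and the keyword classifier standing in for the claimed "third ML model," is a hypothetical assumption, not taken from either reference or from the application's actual claims.

```python
from typing import Callable

def make_router(
    game_model: Callable[[str], str],    # stand-in for the "first ML model" (game chat)
    social_model: Callable[[str], str],  # stand-in for the "second ML model" (social chat)
    game_terms: set[str],                # toy proxy for the contextual classifier
) -> Callable[[str], str]:
    """Return a completion function that routes chat to one of two models."""
    def complete(partial_sentence: str) -> str:
        words = set(partial_sentence.lower().split())
        # Classification step, reduced here to keyword overlap for illustration.
        is_game_related = bool(words & game_terms)
        model = game_model if is_game_related else social_model
        return model(partial_sentence)
    return complete

# Toy usage: each "model" just appends a canned completion.
router = make_router(
    game_model=lambda s: s + " the boss",
    social_model=lambda s: s + " dinner",
    game_terms={"respawn", "boss", "loot", "heal"},
)
print(router("heal me before"))   # game-related path
print(router("let's grab some"))  # social path
```

The point of the two-model split, as the application's title suggests, is that a smaller context-matched model can be invoked earlier on a partial sentence than one general-purpose model, reducing perceived latency.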

Prosecution Timeline

Oct 03, 2023 — Application Filed
Aug 08, 2025 — Non-Final Rejection (§103)
Nov 06, 2025 — Response Filed
Nov 21, 2025 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589296 — METHODS, SYSTEMS, AND DEVICES FOR DYNAMICALLY APPLYING EQUALIZER PROFILES
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12569751 — Somatosensory Interaction Method and Electronic Device
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12558622 — INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12551804 — METHOD FOR PROVIDING INTERACTIVE GAME
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12548406 — GAMING SYSTEMS AND METHODS USING DYNAMIC GAMING INTERFACES
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 95% (+28.7%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 949 resolved cases by this examiner. Grant probability derived from career allow rate.
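The "with interview" projection appears to reconcile with the other figures on this page if the interview lift is read as an additive percentage-point adjustment to the base grant probability (an assumption about the tool's math, not documented by it):

```python
# Assumed reconciliation: base grant probability plus interview lift,
# both in percentage points, rounded for display.
base_probability = 66.0   # career allow rate, %
interview_lift = 28.7     # percentage-point lift with interview
with_interview = base_probability + interview_lift
print(round(with_interview))  # 95
```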
