Prosecution Insights
Last updated: April 19, 2026
Application No. 17/932,681

TRANSLATING WEB CONTENT USING ACCESSIBILITY INFORMATION

Status: Final Rejection (§101, §103)
Filed: Sep 16, 2022
Examiner: MASTERS, KRISTEN MICHELLE
Art Unit: 2659
Tech Center: 2600 (Communications)
Assignee: International Business Machines Corporation
OA Round: 4 (Final)

Grant Probability: 62% (Moderate)
OA Rounds: 5-6
To Grant: 3y 2m
With Interview: 87%
Examiner Intelligence

Career Allow Rate: 62% of resolved cases (25 granted / 40 resolved; +0.5% vs TC avg)
Interview Lift: +24.7% allowance rate for resolved cases with an interview vs. without (a strong, roughly +25% lift)
Typical Timeline: 3y 2m average prosecution; 36 applications currently pending
Career History: 76 total applications across all art units
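The interview-lift figure above is simply the difference in allowance rate between resolved cases that had an examiner interview and those that did not. A minimal sketch of that computation, using hypothetical case counts (the report does not disclose the per-case split, so the numbers below are illustrative only):

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Fraction of resolved applications that were granted."""
    return granted / resolved

def interview_lift(gw: int, rw: int, gwo: int, rwo: int) -> float:
    """Percentage-point gap in allowance rate: with-interview minus without-interview."""
    return 100 * (allowance_rate(gw, rw) - allowance_rate(gwo, rwo))

# Hypothetical split of the examiner's 40 resolved cases (illustrative only):
# 15 cases with an interview (12 granted), 25 without (13 granted).
lift = interview_lift(gw=12, rw=15, gwo=13, rwo=25)
print(f"+{lift:.1f} percentage points")  # 80.0% vs 52.0% -> +28.0
```

The actual +24.7% figure would come from the examiner's real with/without split, which this report summarizes but does not itemize.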

Statute-Specific Performance

§101: 35.2% (-4.8% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Tech Center averages are estimates. Figures based on career data from 40 resolved cases.
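The per-statute deltas can be cross-checked against the stated rates: subtracting each delta from the examiner's rate should recover the Tech Center average estimate the report compares against. A quick sketch of that arithmetic on the figures shown above:

```python
# Per-statute rates and deltas vs the Tech Center average, as displayed above.
stats = {
    "101": (35.2, -4.8),
    "103": (46.9, +6.9),
    "102": (8.0, -32.0),
    "112": (7.1, -32.9),
}

# Implied Tech Center average = examiner rate minus delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)
# All four statutes back out to the same 40.0% baseline, consistent with a single
# Tech Center average estimate being used as the comparison line.
```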

Office Action

Rejections under §101 and §103
Detailed Action

This communication is in response to the Arguments and Amendments filed on 11/25/2025. Claims 1-3 and 5-18 are pending and have been examined. Claim 4 is cancelled. Claims 1-3 and 5-18 are rejected. Claims 1, 7 and 13 are independent.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 9/8/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Arguments and Amendments

Applicant has amended the independent claims to include "creating, for a target language," "identifying, by a web application module of a translation context engine, a software application for translation; extracting, by an extract module of a translation context engine, an original program integrated information (PII) string having accessibility information associated therewith, the original PII written in a first language; identifying, by the extract module, the associated with the original PII string" "of the web application" "creating, by a context dataset module of the translation context engine and using a screen reader, a natural language description of the original PII string based on the identified accessibility information; collecting, by the context dataset module, semantic role information" "from the natural language description" "creating, by the context dataset module," "semantic role information, the translation context dataset created in the target language; generating, by a translation pair module of the translation engine," "semantic role information; receiving, by a submit-to-translator module of an index builder of the translation context engine" "pair, the translated PII string being in the target language; and storing, by an index module of the index builder," "for the target language."

Regarding the Interview on 12/3/2025, Applicant notes that Examiner Masters noted additional consideration may be necessary in view of the amendments made. Examiner notes no agreement was reached regarding the proposed amendments.

Regarding the Rejections under 35 U.S.C. 101, Applicant notes that the current Office Action rejects claims 1-3 and 5-18 on the grounds of 35 USC 101, non-patent-eligible subject matter. Specifically, the claims as drafted are said to cover mental activity or a human process, which is an abstract idea. Applicant amends the claims herein with reference to Figures 1 and 2, specifically identifying the translation context engine and the components in electronic communication therein to identify structural requirements in the claimed steps. The claimed operations are computer-based operations that cannot be performed in the human mind.

Examiner notes the independent claims recite a sequence of data transformations and computations: "dataset manipulations", creating "translation pairs"; these can reasonably be characterized as information processing/mental-like steps (conceptual transformations of linguistic information). Absent claim detail tying the operations to specific technical mechanisms that go beyond mere data processing, these limitations are susceptible to classification as mental/data-manipulation concepts. On the present claim wording, the limitations are largely functional and outcome-oriented ("dataset manipulations", creating "translation pairs") without concrete computational detail or a recitation of how the arrangements materially improve the functioning of the computer system itself (e.g., speed/latency reductions, memory or computational efficiency, novel data representations that reduce error by a measurable metric, or specific unconventional network architectures constrained in a way that produces the improvement).
Applicant's arguments with respect to claims 1-3 and 5-18 have been considered but are moot because the new ground of rejection does not rely on the primary reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Hence, new grounds for rejection have been made over Myers (US 11636252 B1) in view of Rosart (US 9547643 B2).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3 and 5-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent Claim 1 recites:

"1. (Currently Amended) A computer-implemented method for creating, for a target language, a translation index for user interface elements of software applications, the method comprising:
identifying, by a web application module of a translation context engine, a software application for translation; [a human can identify an application for translation using visual processes]
extracting, by an extract module of a translation context engine, an original program integrated information (PII) string having accessibility information associated therewith, the original PII written in a first language; [a human can extract a PII string using pen and paper]
identifying, by the extract module, associated with the original PII string in user-interface source code of the web application, [a human can identify a PII string using pen and paper]
creating, by a context dataset module of the translation context engine and using a screen reader, a natural language description of the original PII string based on the identified accessibility information; [a human can create a natural language description of the PII string using natural human language processing abilities in the mind and pen and paper]
collecting, by the context dataset module, semantic role information from the natural language description; [a human can collect semantic role information using natural human language processing abilities in the mind and pen and paper]
creating, by the context dataset module, a translation context dataset for the original PII string including the semantic role information, the translation context dataset created in the target language; [a human can create a translation context dataset using natural human language processing abilities in the mind and pen and paper]
generating, by a translation pair module of the translation engine, a translation pair including the original PII string and the semantic role information; [a human can generate a translation pair using natural human language processing abilities in the mind and pen and paper]
receiving, by a submit-to-translator module of an index builder of the translation context engine, a translated PII string based on the translation pair, the translated PII string being in the target language; and [a human can receive a translated string using pen and paper and human vision]
storing, by an index module of the index builder, the translation pair and translated PII string in a translation index for the target language." [a human can store a translation pair using pen and paper]

Regarding Independent Claim 7: claim 7 is a method claim with limitations similar to those of claim 1 and is rejected under the same rationale. Regarding Independent Claim 13: claim 13 is a system claim with limitations similar to those of claim 1 and is rejected under the same rationale.

This judicial exception is not integrated into a practical application. In particular, claim 13 recites additional elements of a "processor" and "storage medium" as per the independent claims.
For example, paragraph [0099] of the as-filed specification describes the use of devices with significant data processing and/or machine-readable-instruction reading capabilities including, but not limited to: desktop computers, mainframe computers, laptop computers, field-programmable gate array (FPGA) based devices, smart phones, personal digital assistants (PDAs), body-mounted or inserted computers, embedded device style computers, and application-specific integrated circuit (ASIC) based devices. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computing device such as a processor and a memory is noted as a generic computer. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations in the claims noted above are directed towards insignificant solution activity. The claims are not patent eligible.

With respect to Claim 2, the claim relates to submitting, to a translator module, the translation pair for translation of the original PII string. This relates to a human submitting a document containing a translation pair to a translator, or verbally reciting the pair to the translator. No additional limitations are present.

With respect to Claims 3, 10, and 16, the claims relate to translating the software application from a first language to a target language, including the translated PII string. This relates to a human using natural language understanding to translate from a first language to a target language. No additional limitations are present.

With respect to Claims 5, 11, and 17, the claims relate to the translation index being a two-stage index with a primary index being the original PII string and a secondary index being the translation context dataset. This relates to a human using natural language understanding to translate from a first language to a target language using a translation index. No additional limitations are present.

With respect to Claims 6, 12, and 18, the claims relate to scanning the software application/webpage for translation. This relates to a human using logic and reasoning to choose an appropriate application for translation. The claims also relate to scanning the web pages of the software application for the original program integrated information (PII) string. This relates to a human using logic and reasoning and natural language understanding to scan the web pages for the string. No additional limitations are present.

With respect to Claims 8 and 14, the claims relate to the original program integrated information (PII) string being extracted while displaying the web application. This relates to a human using logic and reasoning to extract information from a web application. The claims also relate to determining the accurate translation occurring in real time upon request while displaying the web application. This relates to a human using natural language understanding to translate in real time. No additional limitations are present.

With respect to Claims 9 and 15, the claims relate to identifying the matching semantic role information, including calculating semantic similarity of the matching semantic information to the translation context information. This relates to a human identifying a matching dataset using natural language understanding and pattern recognition. The claims relate to determining a match to the translation semantic information by the semantic similarity meeting a threshold level of similarity; this relates to a human determining a match using natural language understanding and pattern recognition. The claims relate to the translation index mapping the matching semantic information to a feature space that is sensitive to linguistic semantics. This relates to a human using natural language understanding to match context information to a feature space that is sensitive to linguistic semantics. No additional limitations are present.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 7-8, 10-11, 13-14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Myers (US 11636252 B1) in view of Rosart (US 9547643 B2).

Regarding Claim 1, Myers teaches 1.
(Currently Amended) A computer-implemented method for creating, for a target language, a translation index for user interface elements of software applications, the method comprising: (see Myers (5:17-44) "(25) …The textual dictation can be incorporated in a textual document, such as a markup language document, and read out to a user. When read out to the user, the textual dictation verbalizes the speech received from the user who recorded the audio verbalizing the selected portion. This textual dictation is stored in the accessibility system 10 in association with the selected portion of the training markup language document. This association creates a pair of a training markup language document portion and a corresponding dictation. Additional pairs can be similarly generated by receiving inputs from the same or other agents using respective agent client devices 11.")

identifying, by a web application module of a translation context engine, a software application for translation; (see Myers (3:45-67) "(19) In the example shown in FIG. 1, a user using the member-related client device 12 can establish a communication session with an agent associated with the agent client device 11 via a website associated with the agent client device 11. The agent can be a human agent or an automated agent, e.g., on behalf of an organization. The automated agent can be associated with a medical group that includes the member. The automated agent can be an interactive voice response (IVR), a virtual online assistant, or a chatbot provided on a website. During a communication session between the user and the agent, the customer service server system 2 identifies the member using initial context data (e.g., the phone number the member is calling from, the website login information inputted, automatic number identification (ANI), etc.) and retrieves the data on the member (e.g., member account information, name, address, insurance information, information on spouse and dependents, etc.) to be presented on a webpage on the member-related client device 12. Specifically, the agent client device 11 instructs the website transcription server system 2 to generate a markup language document based on the retrieved data to provide the markup language document for presentation to a user on the member-related client device 12.")

extracting, by an extract module of a translation context engine, an original program integrated information (PII) string having accessibility information associated therewith, the original PII written in a first language; (see Myers (5:17-44) "(25) The website transcription server system 2 trains a machine learning technique implemented by the accessibility system 10 to generate dictations for certain portions of a markup language document. For example, the accessibility system 10 presents a training markup language document to a user on an agent client device 11. Input is received from the user on the agent client device 11 that selects a portion of the training markup language document to dictate. In response to receiving the input, the portion of the markup language document is extracted from the training markup language document. Input is received from the user of the agent client device 11 that includes a recording of audio verbalizing the selected portion. The recorded audio is provided to an offline analysis server 17 to convert the recorded audio to a textual dictation. The textual dictation can be incorporated in a textual document, such as a markup language document, and read out to a user. When read out to the user, the textual dictation verbalizes the speech received from the user who recorded the audio verbalizing the selected portion. This textual dictation is stored in the accessibility system 10 in association with the selected portion of the training markup language document. This association creates a pair of a training markup language document portion and a corresponding dictation. Additional pairs can be similarly generated by receiving inputs from the same or other agents using respective agent client devices 11.")

identifying, by the extract module, associated with the original PII string in user-interface source code of the web application, (see Myers (11:53-12:3) "(61) At operation 618, an updated user interface is provided back to the user. The user at the member-related client device 12 receives the randomly selected markup language document (e.g., the document with the machine learning model based revision or the document to which a typical text-to-speech engine has been applied). (62) At operation 619, an indication is saved as to whether or not a task was successfully completed in the updated user interface. The screen reader survey uses accessibility system 10 to determine whether tasks were completely or partially completed on the webpage by the user of the member-related client device 12. The screen reader survey collects results from a plurality of users indicating the task completion probability for the markup language documents with the machine learning model based dictation and indicating the task completion probability for the markup language documents with the typical text-to-speech engine applied.")

creating, by a context dataset module of the translation context engine and using a screen reader, a natural language description of the original PII string based on the identified accessibility information; (see Myers (12:37-53) "(66) At operation 732, the raw markup language document of the webpage (e.g., the HTML) is provided to a trained machine learning model (e.g., an RNN). For example, the markup language document corresponding to the webpage shown in FIG. 9 is provided to the accessibility system 10 (particularly to the trained model 360) in response to receiving the request for the visual accessibility version of the webpage. The trained model 360 processes sections of the markup language document to estimate new markup language for one or more portions. For example, the trained model 360 may have been previously trained to generate markup language (e.g., an aria label with a dictation) for an ordered list of items. The new markup language document corresponding to the webpage shown in FIG. 9 may include an ordered list of items 910. In this case, the trained model 360 generates a new markup language that estimates a dictation for the ordered list of items 910.")

collecting, by the context dataset module, semantic role information from the natural language description based on accessible rich internet applications (ARIA) roles; (see Myers (12:37-53) "(66) At operation 732, the raw markup language document of the webpage (e.g., the HTML) is provided to a trained machine learning model (e.g., an RNN). For example, the markup language document corresponding to the webpage shown in FIG. 9 is provided to the accessibility system 10 (particularly to the trained model 360) in response to receiving the request for the visual accessibility version of the webpage. The trained model 360 processes sections of the markup language document to estimate new markup language for one or more portions. For example, the trained model 360 may have been previously trained to generate markup language (e.g., an aria label with a dictation) for an ordered list of items. The new markup language document corresponding to the webpage shown in FIG. 9 may include an ordered list of items 910. In this case, the trained model 360 generates a new markup language that estimates a dictation for the ordered list of items 910.")

creating, by the context dataset module, a translation context dataset for the original PII string including the semantic role information, the translation context dataset created in the target language; (see Myers "(33) Training data 320 includes constraints 326 which may define the constraints of a given markup language document, such as a website or webpage. The paired training data sets 322 may include sets of input-output pairs, such as a pairs of a plurality of training markup language document (or portions of the markup language documents) and corresponding training dictations. Some components of training input 310 may be stored separately at a different off-site facility or facilities than other components.")

generating, by a translation pair module of the translation engine, a translation pair including the original PII string and the semantic role information; (see Myers (5:44-60) "(26) The accessibility system 10 processes batches of pairs of training markup language document portion and dictations (ground-truth dictations) to train a neural network, such as Long-Short Term Memory Neural Networks (LSTM). For example, as explained in more detail in connection with FIG. 2, the neural network estimates an estimated dictation for a given training markup language document portion. The neural network compares the estimated dictation with the corresponding ground-truth dictation to generate an error. Using a loss function and based on the error, the neural network is updated and applied to another set of training markup language document portion and ground truth dictation. The neural network parameters are again adjusted and when the loss function satisfies a stopping criterion, the neural network is trained and utilized by a member-related client device 12 to generate a dictation for a given markup language document.")

Myers does not specifically teach receiving, by a submit-to-translator module of an index builder of the translation context engine, a translated PII string based on the translation pair, the translated PII string being in the target language; and storing, by an index module of the index builder, the translation pair and translated PII string in a translation index for the target language. However, Rosart does teach this limitation (see Rosart (5:21-35) "(27) FIG. 6 is a flow diagram of a method 600 for receiving and processing requests to translate resources. A location of the resource is received in the form of, e.g., a URL or other identifier, and a translation language pair (step 602). The location is accessed and a, e.g., web page or document, at that address is retrieved. The text is translated from a specified first language to a specified second language, and the page structure reformatted, if necessary (step 604). As part of the reformatting process, predetermined translated text structures (e.g., words, sentences, paragraphs, tabular columns/rows) are delimited by span tags with the title attribute set to the untranslated version of that sentence to induce a behavior in the client viewing the translated location. The translated text and/or reformatted resource is communicated to the requestor (step 606).") (see Rosart (5:35-56) "(28) FIG. 7 illustrates an example of the user interface 100 where text 114 is entered into the text box 102 to be translated from a first language to a second language, as specified in the drop-down box 104. As shown in FIG. 8, the user can specify the particular to/from translation language pair by selecting the down arrow 116 (e.g., from a computer mouse or other input device) and supported translation language pairs are shown in the list 118. (Examiner interprets index as "list".) After selecting a translation language pair (e.g., German to English), the request is submitted using the translate button 106. As shown in FIG. 9, while the request is processed, the user can receive feedback in the form of a status box 120. As shown in FIG. 10, translated text 122 can be provided in a pane next to the original text 114 for rapid viewing. The translated text 122 can be provided to the user interface 100 using Asynchronous JavaScript and XML (AJAX), where small amounts of data are exchanged with a server, so that the user interface 100 does not have to be reloaded each time the user makes a request using the translate button 106. In other implementations, the original and translated text can be displayed top-to-bottom.")

Myers in view of Rosart are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Myers to incorporate the receiving, by a submit-to-translator module of an index builder of the translation context engine, a translated PII string based on the translation pair, the translated PII string being in the target language; and storing, by an index module of the index builder, the translation pair and translated PII string in a translation index for the target language, of Rosart. Doing so allows for user interaction with the user interface, as recognized by Rosart (6:62-64).

Regarding Independent Claim 7: claim 7 is a method claim with limitations similar to those of claim 1 and is rejected under the same rationale.
Additionally, Myers teaches A computer-implemented method for translating program integrated information (PII) strings of user interface source code in web applications to a target language, the method comprising: (see Myers (12:37-53) “(66) At operation 732, the raw markup language document of the webpage (e.g., the HTML) is provided to a trained machine learning model (e.g., an RNN). For example, the markup language document corresponding to the webpage shown in FIG. 9 is provided to the accessibility system 10 (particularly to the trained model 360) in response to receiving the request for the visual accessibility version of the webpage. The trained model 360 processes sections of the markup language document to estimate new markup language for one or more portions. For example, the trained model 360 may have been previously trained to generate markup language (e.g., an aria label with a dictation) for an ordered list of items. The new markup language document corresponding to the webpage shown in FIG. 9 may include an ordered list of items 910. In this case, the trained model 360 generates a new markup language that estimates a dictation for the ordered list of items 910.”) Regarding Independent Claim 13 claim 13 is a System claim with limitations similar to that of claim 1 and is rejected under the same rationale. Additionally Myers teaches A computer system for translating program integrated information (PII) strings of user interface source code in web applications, to a target language, (see Myers (12:37-53) “(66) At operation 732, the raw markup language document of the webpage (e.g., the HTML) is provided to a trained machine learning model (e.g., an RNN). For example, the markup language document corresponding to the webpage shown in FIG. 9 is provided to the accessibility system 10 (particularly to the trained model 360) in response to receiving the request for the visual accessibility version of the webpage. 
The trained model 360 processes sections of the markup language document to estimate new markup language for one or more portions. For example, the trained model 360 may have been previously trained to generate markup language (e.g., an aria label with a dictation) for an ordered list of items. The new markup language document corresponding to the webpage shown in FIG. 9 may include an ordered list of items 910. In this case, the trained model 360 generates a new markup language that estimates a dictation for the ordered list of items 910.”) the computer system comprising: a processor set; and a computer readable storage medium; wherein: the processor set is structured, located, connected, and/or programmed to run program instructions stored on the computer readable storage medium; and the program instructions which, when executed by the processor set, (see Myers (13:10-24) “(70) FIG. 10 is a flowchart illustrating example operations of the visually accessible website system in performing process 1000, according to example embodiments. The process 1000 may be embodied in computer-readable instructions for execution by one or more processors such that the operations of the process 1000 may be performed in part or in whole by the functional components of the system 1; accordingly, the process 1000 is described below by way of example with reference thereto. However, in other embodiments, at least some of the operations of the process 1000 may be deployed on various other hardware configurations. Some or all of the operations of process 1000 can be in parallel, out of order, or entirely omitted.”) As to Claim 2, Myers in view of Rosart teach The method of claim 1, (see Claim 1). Furthermore, Rosart teaches further comprising: submitting, to a translator, by the submit-to-translator module, the translation pair for translation of the original PII string. (see Rosart (5:21-35) “(27) FIG. 
6 is a flow diagram of a method 600 for receiving and processing requests to translate resources A location of the resource is received from in the form of, e.g., a URL or other identifier, and a translation language pair (step 602). The location is accessed and a, e.g., web page or document, at that address is retrieved. The text is translated from a specified first language to a specified second language, and the page structure reformatted, if necessary (step 604). As part of the reformatting process, predetermined translated text structures (e.g., words, sentences, paragraphs, tabular columns/rows) are delimited by span tags with the title attribute set to the untranslated version of that sentence to induce a behavior in the client viewing the translated location. The translated text and/or reformatted resource is communicated to the requestor (step 606).”)(See Rosart (5:35-56) “(28) FIG. 7 illustrates an example of the user interface 100 where text 114 is entered into the text box 102 to be translated from a first language to a second language, as specified in the drop-down box 104. As shown in FIG. 8. the user can specify the particular to/from translation language pair by selecting the down arrow 116 (e.g., from a computer mouse or other input device) and supported translation language pairs are shown in the list 118. (examiner interprets index as “list”) After selecting a translation language pair (e.g., German to English), the request is submitted using the translate button 106. As shown in FIG. 9, while the request is processed, the user can receive feedback in the form of a status box 120. As shown in FIG. 10, translated text 122 can be provided in a pane next to the original text 114 for rapid viewing. 
The translated text 122 can be provided to the user interface 100 using Asynchronous JavaScript and XML (AJAX), where small amounts of data are exchanged with a server, so that the user interface 100 does not have to be reloaded each time the user makes a request using the translate button 106. In other implementations, the original and translated text can be displayed top-to-bottom.”) Myers in view of Rosart are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Myers and Rosart to incorporate the submitting, to a translator, by the submit-to-translator module, the translation pair for translation of the original PII string of Rosart. Doing so allows for user interaction with the user interface, as recognized by Rosart (6:62-64).

As to Claims 3, 10, and 16, Myers in view of Rosart teach The method of claim 1, (see Claim 1) The method of claim 7, (see Claim 7) and The computer system of claim 13, (see Claim 13). Furthermore, Myers teaches further comprising: translating the software application from the first language to the target language, including the translated PII string. (see Myers (5:17-44) “(25) …The textual dictation can be incorporated in a textual document, such as a markup language document, and read out to a user. When read out to the user, the textual dictation verbalizes the speech received from the user who recorded the audio verbalizing the selected portion. This textual dictation is stored in the accessibility system 10 in association with the selected portion of the training markup language document. This association creates a pair of a training markup language document portion and a corresponding dictation. 
Additional pairs can be similarly generated by receiving inputs from the same or other agents using respective agent client devices 11.”)

As to Claims 5, 11, and 17, Myers in view of Rosart teach The method of claim 1, (see Claim 1) The method of claim 7, (see Claim 7) and The computer system of claim 13, (see Claim 13). Furthermore, Myers teaches wherein the translation index is a two-stage index with a primary index being the original PII string and a secondary index being the translation context dataset. (see Myers (5:17-44) “(25) …The textual dictation can be incorporated in a textual document, such as a markup language document, and read out to a user. When read out to the user, the textual dictation verbalizes the speech received from the user who recorded the audio verbalizing the selected portion. This textual dictation is stored in the accessibility system 10 in association with the selected portion of the training markup language document. This association creates a pair of a training markup language document portion and a corresponding dictation. Additional pairs can be similarly generated by receiving inputs from the same or other agents using respective agent client devices 11.”)

As to Claims 8 and 14, Myers in view of Rosart teach The method of claim 7, (see Claim 7) and The computer system of claim 13, (see Claim 13). 
Furthermore, Myers teaches wherein the original PII string is extracted while displaying the web application; (see Myers (22:34-50) “…generating a first of the dictations corresponding to a first of the plurality of training markup language documents by: displaying a training webpage based on the first training markup language document; receiving input selecting a portion of the displayed training webpage; identifying a portion of the first training markup language document corresponding to the selected portion; recording speech that reads out the selected portion of the displayed training webpage; transcribing the recorded speech to generate the first dictation corresponding to the first training markup language document; and associating the first dictation with the identified portion of the first training markup language document..”) wherein the original (PII) string is extracted and determining the accurate translation occurs in real time upon request (see Myers 11:59-12:20) “(62) At operation 619, an indication is saved as to whether or not a task was successfully completed in the updated user interface. The screen reader survey uses accessibility system 10 to determine whether tasks were completely or partially completed on the webpage by the user of the member-related client device 12. The screen reader survey collects results from a plurality of users indicating the task completion probability for the markup language documents with the machine learning model based dictation and indicating the task completion probability for the markup language documents with the typical text-to-speech engine applied. (63) In some embodiments, the accessibility system 10 applies a weight to each task of a given webpage and a weight to each survey and accumulates the weights together. The weights in total should equal ‘1’. More complex tasks may be associated with greater value weights than less complex tasks. 
As tasks get completed, the tasks are combined with their corresponding weights to store an aggregated score for the task completion of the given webpage. In some cases, the survey scores are normalized across multiple webpages. An overall score is computed as a function of the weight per task multiplied by whether the task was successfully completed and added with the weight of the survey and the normalized score of the survey (e.g., success_total = (weight_task * success_task) + (weight_survey * normalizedscore_survey)). In some cases, data for which the score is below a level of task completion and customer satisfaction are filtered out of the data used to train the machine learning model.”)

Claims 6, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Myers (US Patent Number US 11636252 B1) in view of Rosart (US Patent Number US 9547643 B2), and further in view of Zhang (US Patent Application Publication Number US 2006/0174196 A1).

As to Claims 6, 12, and 18, Myers in view of Rosart teach The method of claim 1, (see Claim 1) The method of claim 7, (see Claim 7), and The computer system of claim 13, (see Claim 13). Myers in view of Rosart do not specifically teach further comprising: scanning, by a scan module of the translation context engine, web pages of the software application for the original program integrated information (PII) string. However, Zhang does teach this limitation (see Zhang [0056] Note that in one embodiment, this single translation, after being saved to the repository 335 (FIG. 3A) is automatically used in any web page that contains the same string, which eliminates the need for the translator to redundantly translate the same string multiple times. In such an embodiment, when a PERI page is first selected by a human translator, that page is automatically scanned by editor tool 334 for any text strings that have entries in translation repository 335 and if so these translations are substituted instead of the text strings in the window 351. 
Note that window 351 (FIGS. 3I and 3J) displays the web page being translated, as rendered by a browser. Moreover, editor tool 334 returns to act 342 (FIG. 3H) to receive additional translations from the human translator. At any time, e.g. after all strings in the currently-displayed web page have been translated, the human translator may choose to stop translating and close the web page, and load another web page.”) Myers in view of Rosart and Zhang are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Myers and Rosart to incorporate identifying the software application for translation; and scanning the web pages of the software application for the original program integrated information (PII) string of Zhang. Doing so allows easy customization, as all translatable information is identified and isolated into the repository, as recognized by Zhang [0001].

Claims 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Myers (US Patent Number US 11636252 B1) in view of Rosart (US Patent Number US 9547643 B2) and further in view of Horai (US Patent Number US 10409623 B2).

As to Claims 9 and 15, Myers in view of Rosart teach The method of claim 7, (see Claim 7) and The computer system of claim 13, (see Claim 13). Furthermore, Myers teaches wherein identifying the matching semantic role information includes: calculating semantic similarity of the matching semantic role information of the translation context dataset to the collected semantic role information; (see Myers (12:54-13:5) “(67) At operation 733, a determination is made as to whether a confidence of the machine learning model is sufficient to make a prediction. If so, the process proceeds to operation 735. If not, the process proceeds to operation 734. 
For example, the trained model 360 may compute a score with a confidence in the estimated dictation for the ordered list of items 910. The trained model 360 computes another score for an estimated dictation of a second portion of the webpage shown in FIG. 9. In some circumstances, if the score is greater than a given threshold, the new markup language document replaces the raw markup language document to provide a dictation for the portion with the score that is greater than the threshold. If the score is less than the given threshold but greater than a second threshold, a typical text-to-speech engine is used to transcribe the markup language document instead of using the estimated dictation of the trained model 360. If the score is less than the second threshold, the raw markup language document is not modified.”) Myers in view of Rosart do not specifically teach and determining a match to the translation collected semantic role information by the semantic similarity meeting a threshold level of similarity; wherein: the translation index maps the matching semantic role information to a feature space that is sensitive to linguistic semantics. However, Horai does teach this limitation (see Horai (11:29-12:5) “(52) To compare the selected text to the recognized strings stored in the database, the context access module can apply any of a number of string comparison algorithms. For example, because an OCR module can introduce errors in recognized strings, the string comparison algorithm can be implemented so as to execute approximate, or “fuzzy”, matches. As an example, the context data can be implemented using a database that supports full-text searching operations with query operators that can be applied to strings. An approximate matching mechanism can be implemented with query operators that find one or more substrings within a string. 
For example, a “near” operator can identify strings in which two substrings are near each other: the operation “TermA Near/N TermB” means that TermA and TermB are less than N+1 words apart from each other. As another example, a “match” operation can identify strings including a substring: the operation [Match “Tokyo”] can retrieve records having the string “Tokyo Station” or “Center of Tokyo”. Some systems use a wildcard operator to provide a similar result. As an example, an SQLite database is a relational database that supports full-text search with a “Near” operator. In this example, an initial query on the context data, given the selected text, can retrieve a set of candidate entries. For example, if the selected text is a single word or other string, a “match” query can be applied to the database to retrieve all entries that begin with or that contain the word or string. If the selected text includes multiple words or strings, a “near” query can be built from the words of the selected text and applied to the database to retrieve all entries that contain the words in approximately the same order. Additionally, the candidate entries can be limited by the number of characters in the recognized string as compared to the number of characters in the selected text. For each of the candidate entries, a similarity or distance metric between the recognized string of the entry and the selected text is calculated. A variety of other similarity or distance metrics can be used. For example, any edit distance metric can be used. An example distance metric that can be used is a Levenshtein distance metric. 
Approximate matches having a measure of distance or similarity over a given threshold can be selected and sorted based on this measure by the context access module, and presented in sorted order by the translation editing tool.”) Myers in view of Rosart and Horai are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Myers and Rosart, in which identifying the matching context dataset includes calculating semantic similarity of the matching context information to the translation context information, to incorporate determining a match to the translation context information by the semantic similarity meeting a threshold level of similarity, wherein the translation index maps the matching context information to a feature space that is sensitive to linguistic semantics, as taught by Horai. Doing so allows approximate matches to be selected and sorted by distance or similarity and presented in sorted order by the translation editing tool, as recognized by Horai (11:67-12:5).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KRISTEN MICHELLE MASTERS whose telephone number is (703)756-1274. The examiner can normally be reached M-F 8:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KRISTEN MICHELLE MASTERS/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659
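The threshold-based approximate matching that the cited Horai passage describes (an edit-distance metric such as Levenshtein distance, with matches whose similarity clears a threshold selected and sorted best-first) can be sketched as follows. This is an illustrative reconstruction for readers unfamiliar with the technique, not part of the office action record; the function names and the 0.8 threshold are assumptions.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,               # deletion
                cur[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),  # substitution (free if chars match)
            ))
        prev = cur
    return prev[-1]


def match_candidates(selected: str, entries: list[str],
                     threshold: float = 0.8) -> list[str]:
    """Keep entries whose normalized similarity to `selected` meets the
    threshold, sorted best match first, as the Horai passage describes."""
    scored = []
    for entry in entries:
        dist = levenshtein(selected, entry)
        # Normalize distance into a 0..1 similarity by the longer string.
        sim = 1.0 - dist / max(len(selected), len(entry), 1)
        if sim >= threshold:
            scored.append((sim, entry))
    return [entry for _, entry in sorted(scored, reverse=True)]
```

For example, `match_candidates("Tokyo Station", ["Tokyo Staton", "Center of Tokyo", "Kyoto Station"])` keeps only the near-duplicate `"Tokyo Staton"`; the other candidates fall below the similarity threshold.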

Prosecution Timeline

Sep 16, 2022
Application Filed
Oct 05, 2024
Non-Final Rejection — §101, §103
Jan 03, 2025
Response Filed
Mar 19, 2025
Final Rejection — §101, §103
May 21, 2025
Response after Non-Final Action
Jun 26, 2025
Request for Continued Examination
Jun 27, 2025
Response after Non-Final Action
Aug 22, 2025
Non-Final Rejection — §101, §103
Nov 06, 2025
Interview Requested
Nov 17, 2025
Applicant Interview (Telephonic)
Nov 25, 2025
Response Filed
Nov 29, 2025
Examiner Interview Summary
Mar 16, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592219
Hearing Device User Communicating With a Wireless Communication Device
2y 5m to grant Granted Mar 31, 2026
Patent 12548569
METHOD AND SYSTEM OF DETECTING AND IMPROVING REAL-TIME MISPRONUNCIATION OF WORDS
2y 5m to grant Granted Feb 10, 2026
Patent 12548564
SYSTEM AND METHOD FOR CONTROLLING A PLURALITY OF DEVICES
2y 5m to grant Granted Feb 10, 2026
Patent 12547894
ENTROPY-BASED ANTI-MODELING FOR MACHINE LEARNING APPLICATIONS
2y 5m to grant Granted Feb 10, 2026
Patent 12547840
MULTI-STAGE PROCESSING FOR LARGE LANGUAGE MODEL TO ANSWER MATH QUESTIONS MORE ACCURATELY
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
62%
Grant Probability
87%
With Interview (+24.7%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
