Prosecution Insights
Last updated: April 19, 2026
Application No. 18/523,497

Sign Language Translation Method And System Thereof

Non-Final OA: §101 §103 §112
Filed
Nov 29, 2023
Examiner
ALAM, MIRZA F
Art Unit
2688
Tech Center
2600 — Communications
Assignee
Deafinitely Communication Ltd.
OA Round
1 (Non-Final)
74%
Grant Probability
Favorable
1-2
OA Rounds
2y 6m
To Grant
99%
With Interview

Examiner Intelligence

Grants 74% — above average
74%
Career Allow Rate
742 granted / 1004 resolved
+11.9% vs TC avg
Strong +34% interview lift
Without
With
+34.3%
Interview Lift
resolved cases with interview
Typical timeline
2y 6m
Avg Prosecution
27 currently pending
Career history
1031
Total Applications
across all art units

Statute-Specific Performance

§101
5.1%
-34.9% vs TC avg
§103
58.3%
+18.3% vs TC avg
§102
2.7%
-37.3% vs TC avg
§112
14.2%
-25.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 1004 resolved cases

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This communication is a first office action, non-final rejection on the merits. Claims 1-17, as originally filed, are currently pending and have been considered below.

Priority

2. As required by M.P.E.P. 201.14(c), acknowledgement is made of applicant's claim for priority based on applications filed on May 31, 2022 (PCT/IL22/50577) and June 01, 2021 (IL 283626).

Information Disclosure Statement

3. The information disclosure statement (IDS) submitted on 12/13/2023 has been considered. The submission is in compliance with the provisions of 37 CFR 1.97. Form PTO-1449 is signed and attached hereto.

Claim Objections

4. The following claims are objected to because of the following informalities: Claims 4-12 recite "a method according to claim ". For clarity and consistency, it is suggested to change this to "the method according to claim ,". Appropriate corrections are required.

Claim Interpretation

5. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

6.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "learning module", "translation module", "display module", "means for receiving", and "means for seamlessly integrating", in claims 6 and 13.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

7. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea).

Claim 1:

Step 1: Statutory Category? Yes. The claim is a method claim.

2A - Prong 1: Judicial Exception Recited? Yes. The claim recites the limitations of integrating a sign language translation system, receiving a text input, performing analysis, and generating a sequence of sign language signs based on analysis of the text input. These limitations, as drafted, recite a method that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting "integrating a sign language translation system," nothing in the claim element precludes the steps from practically being performed in the mind. For example, but for the "integrating a sign language translation system in chat application" language, the claim encompasses a user simply performing the integration and analysis of a sign language translation system in his/her mind. The mere nominal recitation of integrating a sign language translation system, performing analysis, and generating a sequence does not take the claim limitation out of the mental processes grouping. Thus, the claim recites a mental process.

2A - Prong 2: Integrated into a Practical Application? No.
The claim recites two additional elements: integrating the generated sign language signs into the chat application, and displaying them in real-time. The integrating and performing-analysis steps are recited at a high level of generality (i.e., as a general means of gathering data for use in the analysis step), and amount to mere data gathering or manipulation, which is a form of insignificant extra-solution activity. The limitation that the generated sign language signs integrated into the chat application are displayed in real-time is also recited at a high level of generality, and merely automates the integrating and performing step. Each of the additional limitations is no more than mere instructions to apply the exception using a generic computer component (the real-time display of the generated sign language signs in the chat application). The combination of these additional elements is no more than mere instructions to apply the exception using a generic computer component (the integrating of the generated sign language signs). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claim is directed to the abstract idea.

2B: Claim provides an Inventive Concept? No. As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B.
Here, the integrating and performing step was considered to be extra-solution activity in Step 2A, and thus it is re-evaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The background of the example does not provide any indication that the integrating and performing analysis is anything other than generic, and the Symantec, TLI, and OIP Techs. court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection or determination of such data is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Accordingly, a conclusion that the determining step is well-understood, routine, conventional activity is supported under Berkheimer. For these reasons, there is no inventive concept in the claim, and thus it is ineligible.

Claim 2: Similar analysis applied as to searching complete phrases and analyzing text input by searching root word databases and searching morphemes.
Claim 3: Similar analysis applied as to utilizing statistical tools to rate phrases.
Claim 4: Similar analysis applied as to hierarchically analyzing and providing sign language translation through individual words, root words, and composing morphemes and letters.
Claim 5: Similar analysis applied as for claim 1.
Claim 6: Similar analysis applied as to a learning module to receive captured videos of a person expressing sign language.
Claim 7: Similar analysis applied as for images captured.
Claim 8: Similar analysis applied as for textual meaning for phrases, words, morphemes.
Claim 9: Similar analysis applied as for sign language translation within an Internet browser, webpage, or website, and displaying the translation.
Claim 10: Similar analysis applied as for translation from text to sign language.
Claim 11: Similar analysis applied as for QR codes linked to sign language.
Claim 12: Similar analysis applied as for QR codes assigned to packaging.
Claims 13-17: Similar analysis applied as in claims 1-12.

Claim Rejections - 35 USC § 112

8. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Regarding claims 4, 9, and 12, the phrase "to be" renders the claims indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention.

Claim Rejections - 35 USC § 103

9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

10. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S.
1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103(a) are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

11. Claims 1-17 are rejected under 35 U.S.C. 103(a) as being unpatentable over KELLY (US 20230343011 A1) (hereinafter KELLY) in view of Bruner (US 20140046661 A1) (hereinafter Bruner).

Regarding claim 1, KELLY discloses a computer-implemented method for integrating a sign language translation system into a chat application (FIG. 9, computer system implementing various aspects, para 40, FIG. 5 sign language translation of input 102, computer-generated three-dimensional environment presenting a sign language, including individual signs and movement between individual signs, human-like character, para 70, system implemented with chat screen, presenting input languages and output languages), comprising: a) receiving a text input for translation within the chat application (Abstract, receives input language data and translates the input language data into sign language grammar, para 21, system received and converted data of spoken or written language into sign language grammar, phonetic representations corresponding to sign language); b) performing analysis of the text input (para 54, search can be conducted using K Nearest-Neighbors (KNN) analysis with a Dynamic Time Warping (DTW) distance metric, translate the sign language gloss to the target language); c) generating an inclusive sequence of sign language signs based on the analysis of the text input (para 26, system 100 to produce translation based rules specific to a type of sign language, para 74, chat bot processes the output language and generates a response, the
external chat bot can send an input language); and d) wherein the sign language signs are displayed to represent the translation of the text input in real-time (para 26, based on synchronous grammar models generate related strings using parsing algorithms for synchronous, para 40, presenting a sign language, including individual signs and movement between individual signs, para 21, translating between spoken or written language into sign language in real-time).

KELLY specifically fails to disclose integrating the generated sign language signs into the chat application. In analogous art, Bruner discloses integrating the generated sign language signs into the chat application (para 113, integrated program components (e.g., plug-ins), and the like, para 126, translation platform database 519 into component, para 140, translation information into sign language).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of systems and methods for translation between a spoken or written language and a sign language, and the presentation of such translations, disclosed by KELLY to identify one or more sign language (SL) identifiers corresponding to the one or more extracted speech elements, wherein the sign language identifiers directly correspond to a synonym of the speech elements, as taught by Bruner, to provide translation platforms configured to process various types of information or identify speech and textual elements as inputs to yield sequences of sign language identifiers, video clips, avatar instructions or the like corresponding to the information content of those inputs. [Bruner, paragraph 021].
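The examiner's mapping of claim 1, step b) cites KELLY's search using K Nearest-Neighbors (KNN) with a Dynamic Time Warping (DTW) distance metric (para 54). As context only, a minimal textbook sketch of such a DTW distance is shown below; the feature sequences and function name are hypothetical, and this is not KELLY's actual implementation.

```python
# Illustrative sketch only: a classic Dynamic Time Warping (DTW) distance,
# the metric KELLY para 54 cites for KNN search over sign sequences.
# Scalar feature sequences and names here are hypothetical stand-ins.

def dtw_distance(a, b):
    """O(len(a)*len(b)) DTW over two scalar feature sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical sequences align at zero cost; a time-stretched copy does too.
print(dtw_distance([1, 2, 3], [1, 2, 3]))        # 0.0
print(dtw_distance([1, 2, 3], [1, 1, 2, 2, 3]))  # 0.0 (stretched copy)
```

A KNN classifier over such distances would simply return the k stored sign sequences with the smallest DTW distance to the query, which is why DTW (rather than a fixed-length metric) suits signs performed at varying speeds.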
Regarding claim 2, KELLY discloses the method of claim 1, wherein the analysis of the text input is performed by a hierarchical analysis, including: a) searching complete phrases of the text input in a phrases database to find corresponding sign language translations, thereby providing a fast and accurate translation of text phrases while considering the context (para 70, detecting output language, searching for appropriate responses to the output language, and producing an input language response for translation into avatar sign language presentation); and b) analyzing remaining portions of the text input by searching individual words in a words database, searching root words in a root words database, and searching morphemes and letters in corresponding databases until a sign language translation is found for the entire text input (Abstract, sign language grammar from a sign language databases and generates coordinates from the phonetic representations, para 02, systems, and methods for translation between a spoken or written language and a sign language, and the presentation of such translations).

Regarding claim 3, KELLY discloses the method of claim 1, further comprising utilizing statistical tools to rate phrases and words correspondingly to their usage frequency in translation searches, thereby improving the speed of translation results (para 06, FIG. 2 translating source text in an input language to sign language grammar, para 18, sign language translation of predetermined text created and produces phrases that have been recorded).
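Claims 2 and 4-5 recite a hierarchical analysis: try the phrases database first, then fall back to individual words, root words, morphemes, and finally letters. The fallback logic can be sketched as below; all databases, the toy stemmer, and the sign identifiers are hypothetical illustrations, not the applicant's actual data or code.

```python
# Hypothetical sketch of the hierarchical lookup recited in claims 2 and 5:
# phrase database first, then word, root-word, and letter (fingerspelling)
# fallbacks. Every database and identifier here is a toy stand-in.

PHRASES = {"good morning": ["SIGN_GOOD_MORNING"]}
WORDS   = {"good": ["SIGN_GOOD"], "morning": ["SIGN_MORNING"]}
ROOTS   = {"sign": ["SIGN_ROOT_SIGN"]}  # e.g. "signing" -> root "sign"
LETTERS = {c: [f"FINGERSPELL_{c.upper()}"] for c in "abcdefghijklmnopqrstuvwxyz"}

def translate(text):
    text = text.lower().strip()
    # Level 1: complete phrase, preserving context
    if text in PHRASES:
        return PHRASES[text]
    signs = []
    for word in text.split():
        # Level 2: individual word
        if word in WORDS:
            signs += WORDS[word]
        # Level 3: root word (toy stemmer: strip a trailing "ing")
        elif word.endswith("ing") and word[:-3] in ROOTS:
            signs += ROOTS[word[:-3]]
        # Level 4/5: morphemes and letters (fingerspelling fallback)
        else:
            for c in word:
                signs += LETTERS.get(c, [])
    return signs

print(translate("good morning"))  # ['SIGN_GOOD_MORNING']
print(translate("signing"))       # ['SIGN_ROOT_SIGN']
print(translate("hi"))            # ['FINGERSPELL_H', 'FINGERSPELL_I']
```

The point of the hierarchy, as claimed, is that a whole-phrase hit preserves context, while the letter-level fallback guarantees some translation always exists for the entire input.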
Regarding claim 4, KELLY discloses a method according to claim 1, wherein the translation comprising receiving a text to be translated, and hierarchically analyzing and providing corresponding sign language translation for one or more sections of said text from the phrase level through individual words, root words, and the composing morphemes and letters thereof, while considering the context of the text (para 54, search can be conducted using K Nearest-Neighbors (KNN) analysis with a Dynamic Time Warping, translate the sign language gloss to the target language).

Regarding claim 5, KELLY discloses a method according to claim 4, wherein the hierarchical analysis and translation (para 60, Persistence layer 718 can include a data access layer for controlling access to the various databases) comprises: a) providing a central computer adapted with suitable processing, memory, and storage hardware for executing instruction sets of a dedicated sign language translation agent adapted to receive text and to perform hierarchical analysis of said received text (para 65, classify sequences of signs using individual signs.
Specifically, the sliding-window nature of convolutional layers can be exploited, para 66, translates a string of signs into the target language, a Large Language Model, para 83, Computer system 900 include one or more secondary storage devices or memory 910), storing one or more translation databases for text phrases, individual words, root words, morphemes and letters, and communicating with one or more sign-language translation client applications, wherein each of said one or more translation databases comprises text sections and corresponding sign language translation thereof (Abstract, retrieves phonetic representations that correspond to the sign language grammar from a sign language databases, para 31, Mouth morphemes and other non-manual markers, such as facial expressions, describe non-hand related aspects of body configuration); b) processing the received text to detect text phrases (para 18, produces natural signing, for phrases that have been recorded); c) searching in a phrases database within said one or more translation databases for the detected text phrases and corresponding sign language translation thereof (para 54, searching conducted using K Nearest-Neighbors (KNN) analysis with a Dynamic Time Warping, processing module 706 can then translate the sign language gloss to the target language); d) processing the remaining untranslated portions of the received text following step c, to detect words (para 20, systems for translating between a sign language and a spoken or written language, and presenting such translating, per-word basis unable to present translating include higher-level linguistic features unique to sign language); e) searching in a words database within said one or more translation databases for the detected words and corresponding sign language translation thereof (para 24, translator 104 process input 102 to algorithms that includes computational operations, para 26, translator 104 perform a tree transduction on lemmatized scheme);
f) repeating steps d and e with root words, morphemes, and letters until obtaining sign language translation for the entire received text (para 53, compute a bounding box of dominant and non-dominant hands. This can be computed by iterating all bounding boxes, finding the ones closest); and g) returning the obtained sign language translation for displaying on a user device (para 55, processing module 706 displayed on an output device 708 and sign language information include a target language translation).

Regarding claim 6, KELLY discloses a method according to claim 5, further comprising providing a learning module configured to receive captured stills and video images of a person expressing sign language and store the captured images in conjunction with their corresponding textual meaning (para 62, multi-stage machine learning pipeline to detect a sign language presentation from imagery or video and to translate sign language input into a target language, para 67, interface 800 include avatar 118, video feed, chat screen 804, and highlighted border 806).

Regarding claim 7, KELLY discloses a method according to claim 6, wherein the images are captured by a camera of the user device (para 69, camera interface with a computer system that detect sign language and translate sign language).

Regarding claim 8, KELLY discloses a method according to claim 6, wherein the corresponding textual meaning is one or more of the following: phrases, words, root words, morphemes, and letters (para 31, Mouth morpheme and other non-manual markers, such as facial expressions, describe non-hand related aspects of body configuration, para 32, phonetic representation for each of the individual words of grammar 106 to output phonetics 110, para 44, lemmatization scheme groups words based variant forms of word).
Regarding claim 9, KELLY discloses a method according to claim 5, wherein the sign-language translation client application is embedded within an Internet browser, a webpage, or a website, thereby enabling selected text on a webpage to be translated into the sign language, providing the selected text to the central computer via said sign-language translation client application and displaying the text's translation in the sign language via said Internet browser on the user device (para 27, communications channels include any combination of Local Area Networks, Wide Area Networks, the Internet, and other suitable communication means. Grammar translator 104 and phonetic extractor 108 can be run in the same program, para 86, interface 924 enabling system 900 to communicate and interact with any combination of remote devices).

Regarding claim 10, KELLY discloses a method according to claim 1, wherein the translation can be from text to sign language or vice-versa (FIG. 2 user interface used translating source text in an input language to sign language, FIG. 5 avatar presenting a sign language translation of source text in an input language).

Regarding claim 11, KELLY discloses a method according to claim 5, further comprising providing QR code(s) that link to sign language translation of text/speech (para 54, sign processed using encoder-decoder framework on a Recurrent Neural Network (RNN), para 76, system use machine-readable code (e.g., QR code) unique to signing user 802).

Regarding claim 12, KELLY discloses a method according to claim 11, wherein the QR codes are adapted to be assigned on pharmaceutical packaging, prescription, and labels to communicate better to hearing-impaired people (para 76, system use machine-readable code (e.g., QR code) unique to signing user 802).
Bruner teaches translating various types of information and/or identifying speech and/or textual elements, and providing hearing impaired individuals with an alternative and possibly more effective accessibility option [021], with information received as closed captions, HTML, e-mail, text messages, e-books, written text, audio, video, images, scans, Quick Response (QR) codes, bar codes, and sign language [148].

Regarding claim 13, KELLY discloses a chat application with integrated sign language translation capabilities comprising: a) a user interface for displaying chat messages and receiving user inputs within the chat application (FIG. 9, computer system implementing various aspects, para 40, FIG. 5 sign language translation of input 102, computer-generated three-dimensional environment presenting a sign language, including individual signs and movement between individual signs, human-like character, para 70, system implemented with chat screen, presenting input languages and output languages); b) text input receiving means for receiving a text input from a user within the chat application (para 21, system received and converted data of spoken or written language into sign language grammar, phonetic representations corresponding to sign language); c) sign language translation module configured to perform analysis of the text input (Abstract, receives input language data and translates the input language data into sign language grammar, para 21, system received and converted data of spoken or written language into sign language grammar, phonetic representations corresponding to sign language); d) a search engine for searching the databases to find sign language translations for the text input (para 54, search can be conducted using K Nearest-Neighbors (KNN) analysis with a Dynamic Time Warping (DTW) distance metric, translate the sign language gloss to the target language); e) a display module for displaying the sign language translations in real-time within the chat
application, wherein the sign language translations correspond to the text input (para 55, processing module 706 displayed on an output device 708 and sign language information include a target language translation, para 70, system implemented with chat screen, presenting input languages and output languages); and f) wherein the sign language translations are displayed alongside the corresponding chat messages to enable real-time communication between users in text and sign language formats (para 26, based on synchronous grammar models generate related strings using parsing algorithms for synchronous, para 40, presenting a sign language, including individual signs and movement between individual signs, para 21, translating between spoken or written language into sign language in real-time).

KELLY specifically fails to disclose integration means for seamlessly integrating the sign language translations into the chat application. In analogous art, Bruner discloses integration means for seamlessly integrating the sign language translations into the chat application (para 113, integrated program components (e.g., plug-ins), and the like, para 126, translation platform database 519 into component, para 140, translation information into sign language).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of systems and methods for translation between a spoken or written language and a sign language, and the presentation of such translations, disclosed by KELLY to identify one or more sign language (SL) identifiers corresponding to the one or more extracted speech elements, wherein the sign language identifiers directly correspond to a synonym of the speech elements, as taught by Bruner, to provide translation platforms configured to process various types of information or identify speech and textual elements as inputs to yield sequences of sign language identifiers, video clips, avatar instructions or the like corresponding to the information content of those inputs. [Bruner, paragraph 021].

Regarding claim 14, KELLY discloses the chat application of claim 13, further comprising statistical tools for rating phrases and words based on usage frequency in translation searches (para 06, FIG. 2 translating source text in an input language to sign language grammar, para 18, sign language translation of predetermined text created and produces phrases that have been recorded).
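Claims 3 and 14 recite statistical tools that rate phrases and words by their usage frequency in translation searches so that frequent lookups resolve faster. One plausible way to realize that, sketched here with hypothetical names and toy data (not the applicant's implementation), is a hit counter over the phrase database:

```python
# Hypothetical sketch of the usage-frequency rating in claims 3 and 14:
# count how often each phrase is looked up, so frequent phrases can be
# ranked (and probed) first. All names and data are illustrative only.
from collections import Counter

class RatedPhraseIndex:
    def __init__(self, phrases):
        self.db = dict(phrases)   # phrase -> sign sequence
        self.hits = Counter()     # usage frequency per phrase

    def lookup(self, phrase):
        if phrase in self.db:
            self.hits[phrase] += 1
            return self.db[phrase]
        return None               # caller falls back to word/letter level

    def ranked(self):
        """Phrases ordered by usage frequency, most-used first."""
        return [p for p, _ in self.hits.most_common()]

idx = RatedPhraseIndex({"thank you": ["SIGN_THANK_YOU"],
                        "hello": ["SIGN_HELLO"]})
for _ in range(3):
    idx.lookup("thank you")
idx.lookup("hello")
print(idx.ranked())  # ['thank you', 'hello']
```

Ordering the candidate phrases by such a rating before searching is what would "improve the speed of translation results" in the sense recited by claim 3.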
Regarding claim 15, KELLY discloses the chat application of claim 13, wherein the analysis of the text input comprises: a) a phrases database storing multiple phrases and their corresponding sign language translations (Abstract, retrieves phonetic representations correspond to sign language grammar from sign language databases, para 31, Mouth morphemes and other non-manual markers, such as facial expressions, describe non-hand related aspects of body configuration); b) a words database storing individual words and their corresponding sign language translations (para 66, translates a string of signs into the target language, a Large Language Model); c) a root words database storing root words and their corresponding sign language translations (para 24, translator 104 process input 102 to algorithms that includes computational operations, para 26, translator 104 perform a tree transduction on lemmatized scheme); and d) morphemes and letters databases storing smaller structural units and their corresponding sign language translations (para 31, Mouth morphemes and other non-manual markers, such as facial expressions, describe non-hand related aspects of body configuration).

Regarding claim 16, KELLY discloses a sign language translation system (para 60, Persistence layer 718 can include a data access layer for controlling access to the various databases), comprising: a) a central computer adapted with processing, memory, storage, and communication hardware, said central computer is configured to execute instruction sets of a dedicated sign language translation agent (para 65, classify sequences of signs using individual signs.
Specifically, the sliding-window nature of convolutional layers can be exploited, para 66, translates a string of signs into the target language, a Large Language Model, para 83, Computer system 900 include one or more secondary storage devices or memory 910), to store one or more translation databases thereof, and to communicate with one or more sign-language translation client applications running in user devices (Abstract, etrieves phonetic representations that correspond to the sign language grammar from a sign language databases, para 31, Mouth morphemes and other non-manual markers, such as facial expressions, describe non-hand related aspects of body configuration), wherein each of said one or more translation databases comprises common text sections and sign language translation thereof (Abstract, retrieves phonetic representations that correspond to the sign language grammar from a sign language databases); and b) one or more user devices adapted to run a sign-language translation client application, which is configured to utilize said one or more user devices' input, communication (para 24, translator 104 process input 102 to algorithms that includes computational operations, para 26, translator 104 perform a tree transduction on lemmatized scheme, and to display said corresponding sign language translation to users, characterizing with that said sign language translation agent and said one or more translation databases are adapted to analyze and translate said input text hierarchically and while considering the context of the text (para 55, processing module 706 displayed on an output device 708 and sign language information include a target language translation). 
KELLY specifically fails to disclose display hardware to receive input text for translation, to submit said input text to said central computer, and to receive corresponding sign language translation of said input text from said central computer.

In analogous art, Bruner discloses display hardware to receive input text for translation, to submit said input text to said central computer, and to receive corresponding sign language translation of said input text from said central computer (para 04, one or more sign language identifiers reproduced on a display of a displaying device; para 33, translation servers 190, translated SL (e.g., video clips), code or instructions (e.g., to display an avatar), translated words, phrases, numbers, etc., and present translated information either locally (e.g., on a display or through an audio system of the translation platform) and/or in a remote device (e.g., a television, computer, tablet, or the like); para 126, translation platform database 519; para 140, translation information into sign language).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the systems and methods for translation between a spoken or written language and a sign language, and the presentation of such translations, disclosed by KELLY to identify one or more sign language (SL) identifiers corresponding to the one or more extracted speech elements, wherein the sign language identifiers directly correspond to a synonym of the speech elements, as taught by Bruner, in order to provide translation platforms configured to process various types of information or identify speech and textual elements as inputs and yield sequences of sign language identifiers, video clips, avatar instructions, or the like corresponding to the information content of those inputs [Bruner, paragraph 021].
Regarding claim 17, KELLY discloses the system according to claim 16, further comprising QR codes that link to sign language translation of text/speech (para 54, signs processed using an encoder-decoder framework on a Recurrent Neural Network (RNN); para 76, system uses a machine-readable code (e.g., QR code) unique to the signing user).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mirza Alam, whose telephone number is (469) 295-9286. The examiner can be reached Monday-Thursday, 7:30 AM-6:00 PM (EST). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steven Lim, can be reached at 571-270-1210. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MIRZA F ALAM/
Primary Examiner, Art Unit 2688

Prosecution Timeline

Nov 29, 2023
Application Filed
Dec 06, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602986
METHODS FOR SETTING ADDRESSES IN A BUILDING MANAGEMENT SYSTEM AND INSTALLATION TOOL FOR SUCH A SYSTEM
2y 5m to grant Granted Apr 14, 2026
Patent 12602982
Network Edge Detection and Notification of Gas Pressure Situation
2y 5m to grant Granted Apr 14, 2026
Patent 12602975
GATE APPARATUS, CONTROL METHOD FOR GATE APPARATUS, PROGRAM, AND GATE SYSTEM
2y 5m to grant Granted Apr 14, 2026
Patent 12592708
SAR ANALOG-TO-DIGITAL CONVERTER WITH HIGH-ORDER NOISE-SHAPING CHARACTERISTICS
2y 5m to grant Granted Mar 31, 2026
Patent 12587035
Device for Displaying in Response to a Sensed Motion
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+34.3%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 1004 resolved cases by this examiner. Grant probability derived from career allow rate.
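The headline figures above are mutually consistent under one simple reading: 74% is the raw career allow rate (742 granted of 1004 resolved, per the examiner profile), and the 99% with-interview figure follows if the +34.3% interview lift is applied as a relative multiplier. That reading is an assumption about this dashboard's methodology, not a documented formula:

```python
# Reproduce the dashboard's headline figures from the underlying counts.
granted, resolved = 742, 1004        # examiner's career totals (from profile)
allow_rate = granted / resolved      # ~0.739, displayed as 74%

interview_lift = 0.343               # "+34.3% interview lift", read as relative
with_interview = allow_rate * (1 + interview_lift)   # ~0.992, displayed as 99%

print(round(allow_rate * 100), round(with_interview * 100))
```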
