Prosecution Insights
Last updated: April 19, 2026
Application No. 18/616,955

METHOD FOR TRANSLATING INPUT TO SIGN LANGUAGE AND SYSTEM THEREFOR

Non-Final OA — §101, §102
Filed
Mar 26, 2024
Examiner
BLAISE, MALINA D
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
VISUAL S L LTD
OA Round
1 (Non-Final)
Grant Probability: 57% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 97%

Examiner Intelligence

Grants 57% of resolved cases

Career Allow Rate: 57% (364 granted / 635 resolved; -12.7% vs TC avg)
Interview Lift: +39.6% among resolved cases with an interview (strong, ~+40%)
Typical Timeline: 3y 3m avg prosecution; 38 currently pending
Career History: 673 total applications across all art units
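The headline figures above are simple arithmetic on the examiner's record. A quick sanity check (assuming, as the panel implies, that the interview lift is added directly to the career allow rate):

```python
# Sanity-check the Examiner Intelligence figures above.
# The counts (364 granted / 635 resolved) and the +39.6% lift come
# from the panel; treating the lift as a simple additive bump on the
# career allow rate is an assumption, not a stated methodology.

granted, resolved = 364, 635
career_allow_rate = granted / resolved                # 364/635 ≈ 0.573
with_interview = career_allow_rate + 0.396            # ≈ 0.969

print(f"Career allow rate: {career_allow_rate:.0%}")  # 57%
print(f"With interview:    {with_interview:.0%}")     # 97%
```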

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)

Tech Center averages are estimates • Based on career data from 635 resolved cases
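The four deltas above all point at a single baseline. Solving rate − delta for each statute (an inference from the panel's numbers, not a figure stated on the page):

```python
# Recover the implied Tech Center average from each statute's rate
# and its delta vs the TC average. Figures are taken from the panel
# above; the single-baseline reading is an inference.

rates  = {"§101": 24.4, "§103": 41.7, "§102": 14.0, "§112": 8.9}
deltas = {"§101": -15.6, "§103": 1.7, "§102": -26.0, "§112": -31.1}

implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)  # every statute implies the same 40.0% baseline
```

That all four deltas resolve to exactly 40.0% suggests the comparison uses one TC-wide figure rather than per-statute averages.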

Office Action

Rejections: §101, §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites a computerized method for translating input to sign language. The limitation of obtaining an input; converting the input to representation in a designated sign language, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “computerized,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “computerized” language, “obtaining” in the context of this claim encompasses the user mentally seeing language and mentally thinking about how it would translate to sign language. Similarly, the limitations of processing, generating, extracting, obtaining, and providing are processes that, under their broadest reasonable interpretation, cover performance of the limitation in the mind. The same interpretation is applied to the remaining steps in claim 1. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application.
In particular, the claim only recites one additional element – “computerized.” The “computerized” language is recited at a high level of generality (i.e., as a generic processor implementing a step) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “computerized” amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible. Similar reasoning is applied to claims 2-24.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US Publication No. 2023/0343011 A1 to Kelly et al. (hereinafter “Kelly”).
Concerning claim 1, Kelly discloses a computerized method for translating input to sign language (Abstract), the method comprising: obtaining an input; converting the input to representation in a designated sign language (Fig. 6, paragraphs [0041], [0042] – input is received and converted to sign language), by: processing the input, based at least on contextual data associated with the input (Fig. 6, paragraphs [0043]-[0045] – the input is processed based on contextual data), to: generate a gloss sequence comprising one or more glosses (Fig. 7A, paragraphs [0042]-[0044], [0054] – word-for-word representation of a signed sentence is arranged in the grammatical order of the sign language rather than the spoken language); and extract visual generation guidance (paragraphs [0046]-[0049] – visual generation guidance is extracted); obtaining visual data corresponding to the gloss sequence (paragraphs [0042]-[0049], [0054] – visual data relating to the gloss sequence is obtained); and generating representation based on the gloss sequence, the visual data and the visual generation guidance (paragraphs [0042]-[0049], [0054] – visual data relating to the gloss sequence is generated and provided); and providing the representation (Fig. 8, paragraphs [0042]-[0049], [0054] – visual data relating to the gloss sequence is obtained).

Concerning claim 2, Kelly discloses further comprising: configuring an avatar to perform the representation; rendering the avatar (Fig. 8, paragraphs [0042]-[0049], [0054] – avatar performs sign language).

Concerning claim 3, Kelly discloses wherein obtaining an input comprises receiving the input through an API platform (paragraphs [0034], [0047], [0070] – input is received through API).

Concerning claim 4, Kelly discloses wherein translating the input to the sign language is performed in real time (paragraph [0053] – translating occurs in real time).
Concerning claim 5, Kelly discloses wherein processing the input further comprises: extracting contextual data from the input; and processing the input based at least on the extracted contextual data (Fig. 6, paragraphs [0043]-[0045] – the input is processed and extracted based on contextual data).

Concerning claim 6, Kelly discloses wherein converting the input further comprises: prior to processing the input, restructuring or reducing the form of the input (Fig. 7A, paragraphs [0042]-[0044], [0054] – input is restructured).

Concerning claim 7, Kelly discloses wherein processing the input to generate the gloss sequence further comprises: generating an initial gloss sequence comprising one or more initial glosses based on the input; and modifying the generated initial gloss sequence based on the contextual data, to generate a modified gloss sequence (Fig. 7A, paragraphs [0042]-[0044], [0054] – generating and modifying the gloss sequence based on contextual data).

Concerning claim 8, Kelly discloses wherein modifying the generated initial gloss sequence further comprises: replacing at least one of the initial glosses with a different gloss (Fig. 7A, paragraphs [0042]-[0044], [0054] – replacing the gloss sequence based on contextual data).

Concerning claim 9, Kelly discloses wherein replacing at least one of the initial glosses further comprises: for at least one of the initial glosses, generating a new gloss based on the initial gloss; and replacing one of the initial glosses with the new gloss (Fig. 7A, paragraphs [0042]-[0044], [0054] – replacing the gloss sequence based on contextual data).

Concerning claim 10, Kelly discloses wherein the new gloss represents finger spelling of a word (Fig. 7A, paragraphs [0042]-[0044], [0054] – the gloss sequence represents finger spelling).
Concerning claim 11, Kelly discloses further comprising: determining, based on the contextual data, that a particular finger spelling gloss surpasses a particular one of the initial glosses; and replacing the particular initial gloss with the particular finger spelling gloss (Fig. 7A, paragraphs [0042]-[0044], [0054] – replacing the gloss sequence with finger spelling gloss).

Concerning claim 12, Kelly discloses wherein processing the input to generate a gloss sequence comprises applying on the input at least one technique selected from a group comprising: N-grams to gloss, synonyms, finger spelling, homograph disambiguation, Temporal Aspect Modifiers, and number classifications (Fig. 7A, paragraphs [0042]-[0044], [0054], [0092] – input technique includes finger spelling gloss).

Concerning claim 13, Kelly discloses wherein processing the input to generate a gloss sequence comprises applying a classifiers matching technique (paragraphs [0042]-[0044], [0054] – classifiers matching is applied).

Concerning claim 14, Kelly discloses wherein processing the input to generate a gloss sequence comprises applying an emotional technique (paragraphs [0031], [0042]-[0044], [0054] – facial expressions are used to generate gloss).

Concerning claim 15, Kelly discloses wherein obtaining the visual data further comprises: for at least a first gloss in the gloss sequence, obtaining visual data of an optimal presentation from among a plurality of visual presentations available for the first gloss (paragraphs [0042]-[0049], [0054] – visual data relating to the gloss sequence is obtained).

Concerning claim 16, Kelly discloses wherein the plurality of visual presentations is generated using one or more techniques selected from a group comprising: Mono-Cam computer vision (CV), Multi-Cam CV, Motion Capture (MoCap) and manual generation (paragraphs [0031], [0042]-[0049] – motion capture is used).
Concerning claim 17, Kelly discloses wherein the visual generation guidance pertains to presentation of the gloss sequence (Fig. 8, paragraphs [0042]-[0049], [0054] – visual data relating to the gloss sequence is obtained).

Concerning claim 18, Kelly discloses wherein the visual generation guidance includes at least two layers of guidance, wherein each layer comprises guidance pertaining to a separate aspect of animation of the gloss sequence (Fig. 7B, paragraphs [0056], [0060], [0065] – multiple layers are used for the visual generation).

Concerning claim 19, Kelly discloses wherein at least one of the layers is associated with implementation priority over another layer (Fig. 7B, paragraphs [0056], [0060], [0065] – multiple layers are implemented using priority).

Concerning claim 20, Kelly discloses wherein at least one of the layers pertains to one aspect selected from a group comprising: transitions between at least two of the glosses, emotions, animated Indexing, grammatical structure, Contextual Non-Manual Markers (NMM), classifiers, and avatar humanization (Figs. 7 and 8, paragraphs [0040]-[0049], [0054], [0056], [0060], [0065] – layers include avatar humanization).

Concerning claims 21 and 22, see the rejection of claim 1.

Concerning claim 23, Kelly discloses a personal assistant system, comprising: a reception unit for receiving an input from a user; one or more processors configured to: process the input using language learning models (LLMs) to generate processed data; translate the processed data to sign language utilizing the method of claim 1 to generate the representation; and an output interface for providing the representation (Fig. 7A, paragraphs [0028], [0034], [0042]-[0047], [0054], [0070] – see the rejection of claim 1 – the input is processed using LLMs to generate processed data, which is translated to sign language and visually output).
Concerning claim 24, Kelly discloses a personal assistant system, comprising: a reception unit configured to receive an input from an external system, the reception unit comprising one or more processors configured to process the input using language learning models (LLMs) to generate processed data; and one or more processors configured to translate the processed data to sign language utilizing the method of claim 1 to generate the representation; and an output interface for providing the representation (Fig. 7A, paragraphs [0028], [0034], [0042]-[0047], [0054], [0070] – see the rejection of claim 1 – the input is processed using LLMs to generate processed data, which is translated to sign language and visually output).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MALINA D BLAISE whose telephone number is (571)270-3398. The examiner can normally be reached Mon. - Thurs. 7:00 am - 5:00 pm (PT). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xuan Thai, can be reached at 571-272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MALINA D. BLAISE/
Primary Examiner, Art Unit 3715
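As mapped in the §102 rejection, claim 1 describes a pipeline: obtain an input, process it against contextual data to produce a gloss sequence plus visual generation guidance, fetch visual data for each gloss, and combine all three into the representation. A minimal structural sketch of that flow follows; every name, type, and substitution rule here is hypothetical, and this is neither the applicant's implementation nor Kelly's:

```python
from dataclasses import dataclass

# Hypothetical sketch of the claim-1 flow. Gloss generation, guidance
# extraction, and visual lookup are stubbed with toy logic.

@dataclass
class Guidance:
    layers: list  # e.g. transitions, emotions, NMM; higher priority first

def process(text: str, context: dict) -> tuple:
    """Generate a gloss sequence and extract visual generation guidance."""
    glosses = text.upper().split()  # stand-in for real gloss generation
    # Contextual modification (claims 7-11): swap a gloss for finger spelling
    glosses = [f"FS:{g}" if g in context.get("finger_spell", ()) else g
               for g in glosses]
    guidance = Guidance(layers=["transitions", "emotions"])
    return glosses, guidance

def obtain_visual_data(glosses: list) -> dict:
    """Look up visual data (e.g. MoCap clips) for each gloss."""
    return {g: f"clip/{g}" for g in glosses}

def translate(text: str, context: dict) -> dict:
    """Combine gloss sequence, visual data, and guidance into a representation."""
    glosses, guidance = process(text, context)
    visual = obtain_visual_data(glosses)
    return {"glosses": glosses, "visual": visual, "layers": guidance.layers}

rep = translate("hello world", {"finger_spell": {"WORLD"}})
print(rep["glosses"])  # ['HELLO', 'FS:WORLD']
```

The finger-spelling substitution stands in for the contextual gloss modification of claims 7-11; a real system would draw glosses and clips from sign-language corpora rather than uppercasing words.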

Prosecution Timeline

Mar 26, 2024
Application Filed
Jan 28, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582920
TOY
2y 5m to grant Granted Mar 24, 2026
Patent 12573269
INFORMATION PROCESSOR AND GAME CONTROL METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12558613
Control Method and Electronic Device
2y 5m to grant Granted Feb 24, 2026
Patent 12551792
SYSTEMS AND METHODS FOR GAMIFICATION IN A METAVERSE
2y 5m to grant Granted Feb 17, 2026
Patent 12544665
COMPUTER SYSTEM, GAME SYSTEM, AND REPLACEMENT PLAY EXECUTION CONTROL METHOD
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 57%
With Interview: 97% (+39.6%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 635 resolved cases by this examiner. Grant probability derived from career allow rate.
