Prosecution Insights
Last updated: April 19, 2026
Application No. 18/627,344

SCALABLE HANDWRITING, AND SYSTEMS AND METHODS OF USE THEREOF

Status: Non-Final OA (§103)
Filed: Apr 04, 2024
Examiner: PARCHER, DANIEL W
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Meta Platforms Technologies, LLC
OA Round: 1 (Non-Final)
Grant Probability: 61% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% — grants 61% of resolved cases (160 granted / 264 resolved; +5.6% vs TC avg)
Interview Lift: strong, +59.4% across resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 35 applications currently pending
Career History: 299 total applications across all art units
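
As a quick sanity check, the headline figures on this card reduce to simple arithmetic. A minimal Python sketch, using only the counts shown above:

```python
# Reproduce the examiner-card figures from the raw counts above.
granted, resolved, total_apps = 160, 264, 299

career_allow_rate = granted / resolved   # 160/264 = 0.6061, i.e. the "61%" figure
pending = total_apps - resolved          # 299 - 264 = 35 currently pending

print(f"career allow rate: {career_allow_rate:.1%}")  # 60.6% (rounds to 61%)
print(f"currently pending: {pending}")                # 35
```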

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 55.6% (+15.6% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 16.9% (-23.1% vs TC avg)

Comparisons are against an estimated Tech Center average. Based on career data from 264 resolved cases.
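
The "vs TC avg" deltas imply the underlying Tech Center baseline. Reading each delta as examiner rate minus TC average (an assumption about how the tool computes it, not a documented formula), all four statutes back out to the same 40.0% estimate:

```python
# Back-solve the implied Tech Center average from each (rate, delta) pair,
# assuming delta = examiner_rate - tc_average.
rates  = {"§101": 4.8,  "§103": 55.6, "§102": 17.5, "§112": 16.9}
deltas = {"§101": -35.2, "§103": 15.6, "§102": -22.5, "§112": -23.1}

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}
```

That all four back out to 40.0 is consistent with the page using a single TC-wide average estimate rather than per-statute baselines.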

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Use of the word “means” (or “step for”) in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function. Absence of the word “means” (or “step for”) in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material or acts to perform that function. Claim elements in this application that use the word “means” (or “step for”) are presumed to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Similarly, claim elements that do not use the word “means” (or “step for”) are presumed not to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “computing device” in claim 1. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Prior Art

Listed herein below are the prior art references relied upon in this Office Action:

Berenzweig et al. (US Patent Application Publication 2020/0097082), referred to as Berenzweig herein.
Rubin et al. (US Patent Application Publication 2021/0064132), referred to as Rubin herein.
Hartz (US Patent Application Publication 2023/0120309), referred to as Hartz herein.
Kim et al. (“3D Space Handwriting Recognition with Ligature Model”, UCS 2006, LNCS 4239, pp. 41-56, 2006), referred to as Kim herein.
Xin et al. (“From 2D to 3D: Facilitating Single-Finger Mid-Air Typing on QWERTY Keyboards with Probabilistic Touch Modeling”, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 7, Issue 1 (March 2023)), referred to as Xin herein.

Examiner’s Note

Strikethrough notation in the pending claims has been added by the Examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-12 and 15-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Berenzweig in view of Rubin in further view of Hartz.

Regarding claim 1, Berenzweig discloses a non-transitory, computer-readable storage medium including instructions that, when executed by a computing device communicatively coupled with a wrist-wearable device, cause the wrist-wearable device to perform (Berenzweig, Figs. 1 and 9A-9C with ¶0119-¶0125, ¶0149, and ¶0170-¶0173 – wrist-worn sensor system communicatively coupled with a virtual headset. ¶0189 – computer memory and processor executing stored instructions. This element is interpreted under 35 U.S.C. 112(f) as the head-mounted display described in Applicant’s Specification ¶0159): detecting, by a wearable device worn by a user, a text-symbolic hand gesture performed by the user (Berenzweig, ¶0031-¶0032, ¶0170 – neuromuscular signals are detected by the wearable device. ¶0111, ¶0173 – input signals include text input including handwritten text, as well as entry via a virtual keyboard, each including detected mid-air gestures); in response to detecting the text-symbolic hand gesture, causing a display communicatively coupled with the wearable device to present (i) and (ii) a predicted user input based on the character (Berenzweig, ¶0163 – displaying a list of suggested predicted words or phrases for the text input in response to identified gestures); detecting, by the wearable device worn by the user, a subsequent input performed by the user; in response to a determination that the subsequent input selects the predicted user input (Berenzweig, ¶0163 – the user can select a predicted word from the list).

However, Berenzweig appears not to expressly disclose the limitations in strikethrough above. However, in the same field of endeavor, Rubin discloses detecting EMG signals from a wrist-wearable device (Rubin, Abstract, ¶0045-¶0046, ¶0052, Fig. 11 with ¶0100-¶0101) communicatively coupled with an augmented reality display (Rubin, ¶0043), including providing instructions that cause the wearable device to initiate sending of the predicted user input to another electronic device (Rubin, Figs. 4-5 with ¶0066-¶0067 – selecting predicted text results in inputting the text to a chat or email. ¶0181 – command to send a message).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the predicted user input of Berenzweig to include providing the input to a message composer and sending the message to a recipient electronic device based on the teachings of Rubin. The motivation for doing so would have been to provide users greater flexibility and convenience (Rubin, ¶0042) in interfacing with messaging applications.

However, Berenzweig as modified appears not to expressly disclose displaying a representation of the application-specific action. However, in the same field of endeavor, Hartz discloses text input prediction (Hartz, Abstract), including present a representation of an application-specific action associated with a character identified from the input text; and in response to a determination that the subsequent input selects the representation of the application-specific action associated with the character, providing instructions that cause the to initiate performance of the application-specific action (Hartz, Figs. 4-5 with ¶0036-¶0047 – partial input of “ca” results in results including camera, calendar, and message applications predictions. Selection of the suggestion results in commanding the application to perform the action).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the predicted user input commands of Berenzweig to include providing a representation of the application-specific action based on the teachings of Hartz. The motivation for doing so would have been to extend predictive functions beyond text terms to deep behaviors or tendencies of users in accessing applications and associated functions (Hartz, ¶0017-¶0018).

Regarding claim 2, Berenzweig as modified discloses the elements of claim 1 above, and further discloses wherein initiating sending of the predicted user input to another electronic device includes one or more of: causing an application on the wearable device to send the predicted user input to the other electronic device, causing a portion of a user interface presented by the wearable device to be populated with the predicted user input for review before sending (Rubin, Figs. 4-5 with ¶0066-¶0067 – selecting predicted text results in inputting the text to a chat or email), causing the predicted user input to be shared with another electronic device distinct from the wearable device (Rubin, ¶0181 – command to send a message. Hartz, Fig. 4 with ¶0036, ¶0046-¶0048, ¶0054 – sending an email to a contact corresponding to the text input).

Regarding claim 3, Berenzweig as modified discloses the elements of claim 1 above, and further discloses wherein initiating performance of the application-specific action includes one or more of: causing an application on the wearable device to open (Hartz, Fig. 4-6 with ¶0036 – suggestions for opening camera, calendar, or call applications), causing a portion of a user interface of an application to be presented on the wearable device to be populated with one or more user inputs (Rubin, Figs. 4-5 with ¶0066-¶0067 – selecting predicted text results in inputting the text to a chat or email), causing an application on the wearable device to perform an application specific operation (Berenzweig, ¶0152 – application-defined gestures including gestures mapped to commands or user-defined. Rubin, Figs. 4-5 with ¶0066-¶0067 – selecting predicted text results in inputting the text to a chat or email. Hartz, Fig. 4 with ¶0036, ¶0046-¶0048, ¶0054 – sending an email to a contact corresponding to the text input, Fig. 4-6 with ¶0036 – suggestions for opening camera, calendar, or call applications), and causing an application to initiate an interaction with a predefined contact associated with the character (Hartz, Fig. 4 with ¶0036, ¶0046-¶0048, ¶0054 – sending an email to a contact corresponding to the text input. Fig. 6 with ¶0038 – call Avis. ¶0019 – contacts identified for voice call functionality, texting, social networking, email).

Regarding claim 4, Berenzweig as modified discloses the elements of claim 1 above, and further discloses wherein the text-symbolic hand gesture is a first text-symbolic hand gesture and the character is a first character, and the instructions, when executed by the computing device, cause the wrist-wearable device to perform: detecting, by the wearable device worn by the user, a second text-symbolic hand gesture performed by the user; in response to detecting the second text-symbolic hand gesture and in accordance with a determination that a second character identified from the second text-symbolic hand gesture is a linking character that connects to the first character to form a set of characters, causing the display communicatively coupled with the wearable device to present (i) a representation of an application-specific action associated with the set of characters and (ii) an updated predicted user input based on the set of characters (Berenzweig, ¶0174-¶0176 – gestures for continuation/input of a word, editing a word or character, and inserting a space to indicate that characters are not linked to enable input of a new word (set of linked characters). Hartz, Fig. 4-6 with ¶0036 – updating suggestions for opening camera, calendar, or call applications as additional characters, including linked characters, are input).

Regarding claim 5, Berenzweig as modified discloses the elements of claim 4 above, and further discloses wherein the text-symbolic hand gesture is a first text-symbolic hand gesture and the character is a first character, and the instructions, when executed by the computing device, cause the wrist-wearable device to perform: detecting, by the wearable device worn by the user, a third text-symbolic hand gesture performed by the user; in response to detecting the third text-symbolic hand gesture and in accordance with a determination that a third character identified from the second text-symbolic hand gesture is a non-linking character that is not connected to the first character such that the first character forms a first set of characters and the third character forms a second set of characters, causing the display communicatively coupled with the wearable device to present (i) a representation of an application-specific action associated with the first and second set of characters and (ii) an updated predicted user input based on the first and second set of characters (Berenzweig, ¶0174-¶0176 – gestures for continuation/input of a word, editing a word or character, and inserting a space to indicate that characters are not linked to enable input of a new word (set of linked characters). Hartz, Fig. 4-6 with ¶0036 – updating suggestions for opening camera, calendar, or call applications as additional characters, including non-linked characters, are input).
Regarding claim 6, Berenzweig as modified discloses the elements of claim 4 above, and further discloses when executed by the computing device, cause the wrist-wearable device to perform: detecting, by the wearable device worn by the user, another input performed by the user; in response to a determination that the other input selects one or more sets of characters, providing instructions that cause the wearable device to perform an operation associated with the one or more sets of characters (Berenzweig, ¶0174-¶0176 – gestures for editing or deleting a word or character).

Regarding claim 7, Berenzweig as modified discloses the elements of claim 6 above, and further discloses wherein the operation associated with the respective set of characters includes one or more of: causing an application on the wearable device to open (Hartz, Fig. 4-6 with ¶0016, ¶0036-¶0037 – suggestions for opening camera, calendar, or call applications as text is input in real time), causing a portion of a user interface presented on the wearable device to be populated with a predetermined input (Berenzweig, ¶0152 – application-defined gestures including gestures mapped to commands or user-defined. Rubin, Figs. 4-5 with ¶0066-¶0067 – selecting predicted text results in inputting the text to a chat or email), and causing the wearable device to initiate an interaction with a predefined contact associated with the character (Hartz, Fig. 4 with ¶0016, ¶0036, ¶0046-¶0048, ¶0054 – sending an email to a contact corresponding to the text input and updated as text is input. Fig. 6 with ¶0038 – call Avis. ¶0019 – contacts identified for voice call functionality, texting, social networking, email as text is input).

Regarding claim 8, Berenzweig as modified discloses the elements of claim 1 above, and further discloses wherein the text-symbolic hand gesture is a swipe typing gesture (Berenzweig, ¶0155 – virtual swipe gesture on a swipe keyboard).

Regarding claim 9, Berenzweig as modified discloses the elements of claim 1 above, and further discloses wherein the text-symbolic hand gesture is a surface typing gesture (Berenzweig, ¶0155 – virtual swipe gesture on a swipe keyboard).

Regarding claim 10, Berenzweig as modified discloses the elements of claim 1 above, and further discloses wherein the text-symbolic hand gesture is a handwriting gesture (Berenzweig, ¶0155 – virtual swipe gesture on a swipe keyboard. ¶0173 – handwriting gesture text input).

Regarding claim 11, Berenzweig as modified discloses the elements of claim 10 above, and further discloses wherein the handwriting gesture is user specific shorthand and the determination that the text-symbolic hand gesture is associated with the character includes interpreting the user specific shorthand based on historic user data (Hartz, Abstract, ¶0002, ¶0017, ¶0021, ¶0031 – user-specific analysis of past behavior, tendencies, and selections is used to tailor predictions for user input from partial text entries (shorthand)).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the predicted user input commands of Berenzweig to include providing user-specific gesture results based on past behavior based on the teachings of Hartz. The motivation for doing so would have been to prevent the user from having to enter additional text or from doing additional searching (Hartz, ¶0021).
Regarding claim 12, Berenzweig as modified discloses the elements of claim 10 above, and further discloses wherein the handwriting gesture is a learned gesture generated by the wearable device and historic user data, wherein the learned gesture is specific to the user (Berenzweig, ¶0152-¶0154 – gestures may be defined by a user, and include writing actions. Rubin, ¶0124-¶0125 – personalized data collected from additional training sessions for generating a personalized inference model).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the training of Berenzweig as modified to include device and user-specific training based on the teachings of Rubin. The motivation for doing so would have been to improve accuracy for detecting user character inputs (Rubin, ¶0140).

Regarding claim 15, Berenzweig as modified discloses the elements of claim 11 above, and further discloses wherein the text-symbolic hand gesture is a visually imperceptible hand gesture (Berenzweig, ¶0152 – covert gestures imperceptible to another person).

Regarding claim 16, Berenzweig as modified discloses the elements of claim 1 above, and further discloses wherein the determination that the text-symbolic hand gesture is associated with a character includes: comparing image and/or sensor data captured by the wearable device with stored character data including predefined character data, user specific character data, user-device specific character data, and selecting, based on comparison of the image and/or sensor data with the stored character data, the character (Berenzweig, ¶0139-¶0140 – classifier trained on recorded sensor signals. ¶0130 – recorded sensor signals are input to the trained inference model for gesture recognition. Rubin, Fig. 9 with ¶0046, ¶0089 – statistical model is trained based on sensor data stored and used in identification. Sensor signals are input to the inference model, which compares the sensor signals to a set of gestures and outputs a likelihood of match to the stored gestures. ¶0124-¶0125 – personalized data collected from additional training sessions for generating a personalized inference model. ¶0136-¶0156 – training data specifically for neuromuscular sensor wearable device).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the training of Berenzweig as modified to include device and user-specific training based on the teachings of Rubin. The motivation for doing so would have been to improve accuracy for detecting user character inputs (Rubin, ¶0140).

Regarding claim 17, Berenzweig discloses a method, comprising: detecting, by a wearable device worn by a user, a text-symbolic hand gesture performed by the user (Berenzweig, Figs. 1 and 9A-9C with ¶0119-¶0125, ¶0149, and ¶0170-¶0173 – wrist-worn sensor system communicatively coupled with a virtual headset. ¶0031-¶0032, ¶0170 – neuromuscular signals are detected by the wearable device. ¶0111, ¶0173 – input signals include text input including handwritten text, as well as entry via a virtual keyboard, each including detected mid-air gestures); in response to detecting the text-symbolic hand gesture, causing a display communicatively coupled with the wearable device to present (i) and (ii) a predicted user input based on the character (Berenzweig, ¶0163 – displaying a list of suggested predicted words or phrases for the text input in response to identified gestures); detecting, by the wearable device worn by the user, a subsequent input performed by the user; in response to a determination that the subsequent input selects the predicted user input (Berenzweig, ¶0163 – the user can select a predicted word from the list).

However, Berenzweig appears not to expressly disclose the limitations in strikethrough above. However, in the same field of endeavor, Rubin discloses detecting EMG signals from a wrist-wearable device (Rubin, Abstract, ¶0045-¶0046, ¶0052, Fig. 11 with ¶0100-¶0101) communicatively coupled with an augmented reality display (Rubin, ¶0043), including providing instructions that cause the wearable device to initiate sending of the predicted user input to another electronic device (Rubin, Figs. 4-5 with ¶0066-¶0067 – selecting predicted text results in inputting the text to a chat or email. ¶0181 – command to send a message).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the predicted user input of Berenzweig to include providing the input to a message composer and sending the message based on the teachings of Rubin. The motivation for doing so would have been to provide users greater flexibility and convenience (Rubin, ¶0042) in interfacing with messaging applications.

However, Berenzweig as modified appears not to expressly disclose displaying a representation of the application-specific action. However, in the same field of endeavor, Hartz discloses text input prediction (Hartz, Abstract), including present a representation of an application-specific action associated with a character identified from the input text; and in response to a determination that the subsequent input selects the representation of the application-specific action associated with the character, providing instructions that cause the to initiate performance of the application-specific action (Hartz, Figs. 4-5 with ¶0036-¶0047 – partial input of “ca” results in results including camera, calendar, and message applications predictions. Selection of the suggestion results in commanding the application to perform the action).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the predicted user input commands of Berenzweig to include providing a representation of the application-specific action based on the teachings of Hartz. The motivation for doing so would have been to extend predictive functions beyond text terms to deep behaviors or tendencies of users in accessing applications and associated functions (Hartz, ¶0017-¶0018).
Regarding claim 18, Berenzweig as modified discloses the elements of claim 17 above, and further discloses wherein initiating sending of the predicted user input to another electronic device includes one or more of: causing an application on the wearable device to send the predicted user input to the other electronic device, causing a portion of a user interface presented by the wearable device to be populated with the predicted user input for review before sending (Rubin, Figs. 4-5 with ¶0066-¶0067 – selecting predicted text results in inputting the text to a chat or email), causing the predicted user input to be shared with another electronic device distinct from the wearable device (Rubin, ¶0181 – command to send a message. Hartz, Fig. 4 with ¶0036, ¶0046-¶0048, ¶0054 – sending an email to a contact corresponding to the text input).

Regarding claim 19, Berenzweig discloses a wrist-wearable device, comprising: one or more sensors; one or more processors; and detect, by a wearable device worn by a user, a text-symbolic hand gesture performed by the user (Berenzweig, ¶0031-¶0032, ¶0170 – neuromuscular signals are detected by the wearable device. ¶0111, ¶0173 – input signals include text input including handwritten text, as well as entry via a virtual keyboard, each including detected mid-air gestures); in response to detecting the text-symbolic hand gesture, cause a display communicatively coupled with the wearable device to present (i) and (ii) a predicted user input based on the character (Berenzweig, ¶0163 – displaying a list of suggested predicted words or phrases for the text input in response to identified gestures); detect, by the wearable device worn by the user, a subsequent input performed by the user; in response to a determination that the subsequent input selects the predicted user input (Berenzweig, ¶0163 – the user can select a predicted word from the list).

However, Berenzweig appears not to expressly disclose the limitations in strikethrough above. However, in the same field of endeavor, Rubin discloses detecting EMG signals from a wrist-wearable device (Rubin, Abstract, ¶0045-¶0046, ¶0052, Fig. 11 with ¶0100-¶0101) communicatively coupled with an augmented reality display (Rubin, ¶0043), including a wrist-wearable device, comprising: one or more sensors; one or more processors; and memory (Rubin, ¶0221 – wearable device including processor and memory executing instructions), providing instructions that cause the wearable device to initiate sending of the predicted user input to another electronic device (Rubin, Figs. 4-5 with ¶0066-¶0067 – selecting predicted text results in inputting the text to a chat or email. ¶0181 – command to send a message).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the predicted user input of Berenzweig to include a memory and providing the input to a message composer and sending the message based on the teachings of Rubin. The motivation for doing so would have been to enable use in generally available products such as smartwatches, and to provide users greater flexibility and convenience (Rubin, ¶0042) in interfacing with messaging applications.

However, Berenzweig as modified appears not to expressly disclose displaying a representation of the application-specific action. However, in the same field of endeavor, Hartz discloses text input prediction (Hartz, Abstract), including present a representation of an application-specific action associated with a character identified from the input text; and in response to a determination that the subsequent input selects the representation of the application-specific action associated with the character, providing instructions that cause the to initiate performance of the application-specific action (Hartz, Figs. 4-5 with ¶0036-¶0047 – partial input of “ca” results in results including camera, calendar, and message applications predictions. Selection of the suggestion results in commanding the application to perform the action).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the predicted user input commands of Berenzweig to include providing a representation of the application-specific action based on the teachings of Hartz. The motivation for doing so would have been to extend predictive functions beyond text terms to deep behaviors or tendencies of users in accessing applications and associated functions (Hartz, ¶0017-¶0018).

Regarding claim 20, Berenzweig as modified discloses the elements of claim 19 above, and further discloses wherein initiating sending of the predicted user input to another electronic device includes one or more of: causing an application on the wearable device to send the predicted user input to the other electronic device, causing a portion of a user interface presented by the wearable device to be populated with the predicted user input for review before sending (Rubin, Figs. 4-5 with ¶0066-¶0067 – selecting predicted text results in inputting the text to a chat or email), causing the predicted user input to be shared with another electronic device distinct from the wearable device (Rubin, ¶0181 – command to send a message. Hartz, Fig. 4 with ¶0036, ¶0046-¶0048, ¶0054 – sending an email to a contact corresponding to the text input).

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Berenzweig in view of Rubin in further view of Hartz in further view of Kim.

Regarding claim 13, Berenzweig as modified discloses the elements of claim 1 above. However, Berenzweig appears not to expressly disclose wherein the character determined based on the text-symbolic hand gesture has a character error rate less than 10 percent. However, in the same field of endeavor, Kim discloses a 3D handwriting input model (Kim, Abstract), including wherein the character determined based on the text-symbolic hand gesture has a character error rate less than 10 percent (Kim, Pages 50-53 with Figs. 12 and 15 – digit and alphabet character recognition error rate is less than 10 percent).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the gesture recognition of Berenzweig to realize a character error rate less than 10 percent based on the teachings of Kim. The motivation for doing so would have been to improve writing speed and efficiency and to reduce frustration for users.

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Berenzweig in view of Rubin in further view of Hartz in further view of Xin.

Regarding claim 14, Berenzweig as modified discloses the elements of claim 1 above.
However, Berenzweig appears not to expressly disclose wherein the character and/or predicted user input are determined such that the rate at which the user generates one or more words is at least 20 words per minute. However, in the same field of endeavor, Xin discloses a VR 3D text input model with predicted text (Xin, Abstract with Fig. 10), including wherein the character and/or predicted user input are determined such that the rate at which the user generates one or more words is at least 20 words per minute (Xin, Abstract – over 20 WPM input speed. See also Pages 17-18, 23 with Fig. 11 (a) and (b)).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the gesture recognition of Berenzweig to realize a character input speed over 20 wpm based on the teachings of Xin. The motivation for doing so would have been to improve usability in real world tasks (Xin, Abstract) and improve the user’s confidence with the interface, improving the experience and overall efficiency (Xin, Pages 21-22).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. References are at least relevant as indicated in the corresponding summary.

Brand (US Patent Application Publication 2011/0254765) – learning user-specific handwriting.
Montaldi et al. (US Patent Application Publication 2018/0314343) – augmented reality auto-completion.
Cundall (US Patent Application Publication 2023/0144975) – auto-completion of computer recognized commands/shortcuts for performing specific functions in a messaging application.
Kim et al. (US Patent Application Publication 2023/0049881) – auto-completion of user-specific shortcut commands.
Yang (US Patent Application Publication 2022/0413625) – auto-completion of computer recognized commands/shortcuts for performing specific functions in a messaging application.
Ramaro et al. (US Patent Application Publication 2018/0329592) – auto-completion of both regular text and application commands/shortcuts.
Huang et al. (US Patent Application Publication 2020/0201443) – user-specific in-the-air finger handwriting detection for user-specific learned inputs.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL W PARCHER whose telephone number is (303) 297-4281. The examiner can normally be reached Monday - Friday, 9:00am - 5:00pm, Mountain Time.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Bashore, can be reached at (571) 272-4088 (Eastern Time). The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL W PARCHER/
Primary Examiner, Art Unit 2174
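
For readers mapping the rejection to the claimed system, a minimal, hypothetical sketch of the claim-1 flow as the Office Action characterizes it across the references: a gesture is detected, a character is identified, the display offers both a predicted input and an application-specific action, and the user's subsequent selection is dispatched. Every name and piece of logic below is an illustrative assumption, not any party's actual implementation.

```python
# Hypothetical sketch of the claim-1 flow as characterized in the rejection.
# All function names and logic are illustrative; none comes from the
# references or the application itself.
from dataclasses import dataclass

@dataclass
class Suggestion:
    kind: str      # "text" = predicted user input; "action" = app-specific action
    payload: str

def identify_character(sensor_window: list[float]) -> str:
    """Stand-in for a trained inference model over wearable sensor signals
    (the role the rejection assigns to Berenzweig/Rubin)."""
    return "a"  # pretend the model recognized the letter "a"

def build_suggestions(buffer: str) -> list[Suggestion]:
    """Partial text like "ca" yields predicted words plus application actions
    such as opening the camera or calendar (the role assigned to Hartz)."""
    out = [Suggestion("text", buffer + "lendar")]
    if buffer.startswith("ca"):
        out += [Suggestion("action", "open:camera"),
                Suggestion("action", "open:calendar")]
    return out

def dispatch(choice: Suggestion) -> None:
    """Selected text is sent to another device (the role assigned to Rubin);
    a selected action is performed (the role assigned to Hartz)."""
    if choice.kind == "text":
        print(f"send to paired device: {choice.payload!r}")
    else:
        print(f"perform action: {choice.payload!r}")

buffer = "c" + identify_character([0.2, 0.7, 0.1])  # two recognized gestures -> "ca"
options = build_suggestions(buffer)
dispatch(options[1])  # the user's subsequent input selects the camera action
```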

Prosecution Timeline

Apr 04, 2024 — Application Filed
Mar 10, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596464 — ELECTRONIC APPARATUS AND METHOD FOR PROVIDING USER INTERFACE THEREOF (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591347 — USER INTERFACES FOR INDICATING STATUS OF A TRACKED ENTITY (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591607 — AUTOMATED CONTENT CREATION AND CONTENT SERVICES FOR COLLABORATION PLATFORMS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12578977 — OMNI-CHANNEL MICRO FRONTEND CONTROL PLANE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12541378 — SYSTEMS AND METHODS FOR GENERATING AND PROVIDING A DYNAMIC USER INTERFACE (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 61%
With Interview: 99% (+59.4%)
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 264 resolved cases by this examiner. Grant probability derived from career allow rate.
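
One plausible reconstruction of how these projection figures fit together, assuming the interview lift is the percentage-point gap between with-interview and without-interview allow rates (the tool's exact method is not stated on this page):

```python
# Assumed relationship, not the tool's documented formula: the +59.4% lift
# read as the percentage-point gap between with- and without-interview rates.
with_interview = 0.99   # projected grant probability after an interview
lift = 0.594            # reported interview lift

implied_without = with_interview - lift  # ~0.396 without an interview
print(f"implied without-interview rate: {implied_without:.1%}")  # 39.6%
# The 61% blended career allow rate then sits between these two figures.
```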
