DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/13/2026 has been entered.
Response to Amendment
The rejection under 35 U.S.C. §112(b) of claim 18 (and dependent claims 19-21) regarding “the gesture” is maintained.
The rejections under 35 U.S.C. §112(a) of claims 1-4, 6-7, 11, 13, 15, 17-21, 26-29, 32, 36, 38, 40, 42 are withdrawn in view of the amendments to the independent claims.
Examiner acknowledges that the amendments to the claims received on 1/13/2026 have been entered, and that no new matter has been added.
Response to Arguments
Argument 1: Applicant argues on page 9 in the filing on 1/13/2026 that the cited prior art does not teach “generating a blended list of words… prior to receiving a gesture in an application,” and “increasing a probability of one or more words in the blended list that were associated with the HPL prior to the blend; suggesting a word from the blended list of words that corresponds to the received gesture based on the increased probability,” in claim 1, because:
“Wahlen lacks a proactive HPL generation phase that applies NLP to external categories to pre-populate a blended list prior to a gesture being received. Wahlen merely reacts to what was rejected, whereas the amended claim proactively shifts word probabilities based on a deep understanding of the broader current context via application of NLP techniques;” and
“Wahlen's "context" is limited to the immediate character buffer and historical overrides. In contrast, the amended claim generates a blended list prior to the gesture by pulling from external categories, e.g., social media posts and historical chats between users.”
Response to Argument 1: Respectfully, Wahlen and Skarbovsky teach the above.
Regarding (1): Wahlen Fig. 5-7 and Col 8 lines 1-7 teach two gestures: one gesture in element 602 in Fig. 6(b), and a second gesture in element 606 in Fig. 6(d). Prior to the second gesture of Fig. 6(d), an HPL is generated from the words the user types (“grrr”) in Fig. 6(c). The words the user types (“grrr”) appear in the list in Fig. 6(d). The list includes the word “grrr” before the second gesture. This fulfills the claimed limitation “prior to receiving a gesture.” Skarbovsky 0018, 0042 teaches using an NLP technique of parsing to parse names, titles, and vocabulary from documents from primary and secondary categories. This fulfills the claimed natural language processing from a plurality of categories.
Whether Wahlen lacks a proactive HPL generation phase is not relevant, because the claims only require “generating a blended list of words… prior to receiving a gesture,” and Wahlen’s list is updated with “grrr” before the gesture of Fig. 6(d).
Regarding the rest of the details, Wahlen teaches increasing a probability of a word in Wahlen Col 8 lines 5-27, Fig. 7(b), with “new leaf node 770 is assigned the average probability (0.25) of all of the other leaf nodes, and the remaining probabilities are… 0.1875 each.” Wahlen teaches suggesting a word based on the increased probability in Wahlen Col 4 lines 36-38, Fig. 5a-5e, 6a-6d and Col 5 lines 43-55, where the highest probability words are displayed.
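For illustration only, the probability update in the passage cited above (a new leaf node assigned the average probability of the existing leaves, with the remaining probability mass redistributed evenly among them) can be sketched as follows. This sketch is not part of the record, and the function and variable names are hypothetical:

```python
# Illustrative sketch (hypothetical names): the probability update Wahlen
# describes when a new leaf node is inserted into the network.
def insert_leaf(probabilities, new_word):
    """Assign the new word the average probability of the existing leaves,
    then redistribute the remaining probability mass evenly among them."""
    total = sum(probabilities.values())
    avg = total / len(probabilities)            # 0.25 when four leaves hold 0.25 each
    share = (total - avg) / len(probabilities)  # 0.1875 remaining per prior leaf
    updated = {word: share for word in probabilities}
    updated[new_word] = avg
    return updated

leaves = {"great": 0.25, "grand": 0.25, "grant": 0.25, "gross": 0.25}
updated = insert_leaf(leaves, "grrr")  # "grrr" -> 0.25; each prior leaf -> 0.1875
```

With four leaves at 0.25, the new word receives 0.25 and each prior leaf falls to 0.1875, matching the figures quoted from Wahlen Col 8 lines 5-27.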
Regarding (2): the claims do not recite that the context must be from external sources, such as social media posts and historical chats between users. The claims merely require contextual data from categories. Skarbovsky 0018, 0042 teaches using an NLP technique of parsing to parse names, titles, and vocabulary from documents from primary and secondary categories. This fulfills the claimed natural language processing of contextual data from a plurality of categories. See rejection below for more details.
Argument 2: Applicant argues on page 10 that Bellegarda
“does not describe, alone, or in combination, "generating a blended list of words ... prior to receiving a gesture in an application;" and "increasing a probability of one or more words in the blended list that were associated with the HPL prior to the blend; suggesting a word from the blended list of words that corresponds to the received gesture based on the increased probability."
It also does not, as described above, address spatial input noise unique to gesture-typing trajectories,” in claim 1.
Response to Argument 2: Respectfully, Wahlen and Skarbovsky teach the above.
Regarding (1): Wahlen and Skarbovsky teach (1). This is described in Response to Argument 1.
Regarding (2): The Applicant argues that the prior art does not address spatial input noise unique to gesture-typing trajectories. The Applicant appears to argue that the prior art does not solve the problem of mistypes or typographical errors. However, the claims do not require addressing spatial input noise unique to gesture-typing trajectories. These terms are not used in any of the claims. See rejection below for more details.
This meets the claim limitations as currently claimed, and Applicant's Arguments 1 and 2 filed on 1/13/2026 are moot in view of new grounds of rejection necessitated by the applicant’s amendment. Applicant’s remaining statements regarding the remaining independent and dependent claims are moot or not persuasive for the reasons stated above.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-4, 7, 11, 13, 15, 17-21, 26-29, 32, 36, 38, 40, and 42-43 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “high priority list of words (HPL).” The term “high priority” in HPL is a relative term. For example, the same words may be high priority to some people, but not high priority to other people. How high is “high priority”? The term “high priority list of words (HPL)” is not defined in the specification. The limitation “obtained from a context-specific database” describes how the HPL is created, but does not define how high the “high priority” in an HPL is. Clarification is required. Claims 2, 3, 7, 18, 26, 27, 28, 32, and 43 also recite “HPL,” and are rejected for the same reason.
Claim 1 recites “in response to… the gesture… increasing a probability… prior to the blend.” The “increasing” limitation occurs prior to the blend, but after “in response to the gesture.” But the blend occurs first, because the blend occurs in the first limitation, which is located in line 2: “generating a blended list of words…” How can a probability be increased prior to the blend, if the “in response to” step occurs after the blend? As written, this limitation cannot be performed.
For example, according to claim 1:
Step A – generate blended list
Step B – in response to a gesture, perform step C
Step C – increase a probability… prior to blend (which is Step A)
Step C cannot be performed, because it cannot be performed prior to Step A, yet also after Step B. For the purposes of examination, the Examiner interprets this to mean “increasing a probability… at any time.” Claims 18, 26, and 43 recite similar limitations.
Claim 1 recites “increasing a probability of one or more words in the blended list that were associated.” The limitation “were associated” is in past tense. This limitation references an association that has occurred in the past. However, the examiner does not see any previous association occurring in claim 1.
The Examiner notes the limitation “prior to the blend,” but the blend occurs in the first limitation. There is no action of “association” that occurs “prior to the blend” (i.e., before the first limitation); indeed, no action at all occurs before the first limitation. Nor is there an action of “association” elsewhere in claim 1. Claims 18, 26, and 43 recite similar limitations.
Claim 18 recites the limitation "the gesture" in line 8. However, there is “a current gesture” in line 2, and “a gesture previous to the current gesture” in line 4. “The gesture” in line 8 could refer to either “a current gesture” or “a gesture previous to the current gesture.” It is unclear to which gesture “the gesture” of line 8 refers. Examiner interprets “the gesture” of line 8 to be “the current gesture.” There is another instance of “the gesture” in line 2, which may also need to be amended if “the gesture” in line 8 is amended, for the sake of consistency.
Claims 2-4, 7, 11, 13, 15, 17, and 43 depend on independent claim 1, and inherit the indefinite/lack of clarity issues of independent claim 1.
Claims 19-21 depend on independent claim 18, and inherit the indefinite/lack of clarity issues of independent claim 18.
Claims 27-29, 32, 36, 38, 40, and 42 depend on independent claim 26, and inherit the indefinite/lack of clarity issues of independent claim 26.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 7, 11, 13, 17-20, 26-29, 32, 36, 38, and 42-43 are rejected under 35 U.S.C. 103 as being unpatentable over Wahlen, Patent Number US 8712931 B1 (hereinafter “Wahlen”), in view of Skarbovsky et al., Patent Application Publication number US 20180143970 A1 (hereinafter “Skarbovsky”).
Claim 1: Wahlen teaches “A method comprising:
generating a blended list of words (Wahlen Fig. 6a-6d. Fig. 6d shows a list of words for the “gr” input gesture), wherein the blended list of words is a blend of a high-priority list of words (HPL) obtained from context-specific database (Figs. 5 and 6c-6d show that “Oh grrr,” as a high priority list, has been previously used in this conversation history context. It is noted that “a high priority list of words (HPL)” is not clearly defined in the specification. The closest to a definition the Examiner could find in the instant specification is “Once additional words from context specific databases are obtained, the control circuitries 220 and/or 228 may add those words to a high priority list (HPL) [instant spec 0103].” However, this still does not define what a “high priority list (HPL)” actually is. Therefore, the broadest reasonable interpretation of “high priority list” includes text used in the current conversation history) (i.e. New leaf node 770 has been added to incorporate the user text input ("gar" [grrr]) associated with the user override [Wahlen Col 8 lines 5-27, Fig. 7(b)] note: Fig. 7b shows a networked database of words. The words in this database are context-specific (see “grrr” element 770 of Fig. 7b) to the current conversation)…, and a lexicon (“Oh grrr” has been previously used in this conversation application. If words are suggested, they are pulled from a lexicon. “Oh grrr” is suggested from the application. Wahlen Fig. 6a-6d shows a blended list of multiple words for the “gr” input gesture. The list of words includes “grrr” based on historical context of the application, and “great, grand, grant, gross” from the application’s lexicon), prior to receiving a gesture in an application (i.e. FIG. 7(b) illustrates the Bayesian network after it has evolved… to update the input repository based on user text input associated with a user override, as shown by example in different contexts in FIGS. 5(c) and 6(c). New leaf node 770 has been added to incorporate the user text input ("gar" [sic grrr]) associated with the user override [Wahlen Col 8 lines 1-7, Fig. 5a-7b] note: the word “grrr” is updated into the repository of Fig. 7b after the user override input of Fig. 5(c) and 6(c). This is before the second/current gesture in Fig. 5(d) and 6(d). Note2: this is for the “gr” input gesture); and
in response to receiving the gesture in the application, wherein the gesture corresponds to a navigation of a path on a displayed digital keyboard of the application (i.e. The device 100 includes an example "virtual" keyboard 118 that is displayed on display screen 102, which is able to sense user input entered on keyboard 118 by any conventional key selection technique, including at least touching, tapping, swiping across, or otherwise selecting keys on keyboard 118 [Wahlen Col 3 lines 23-27, Fig. 6a-6d] note: Fig. 6d shows a list of words for the “gr” input gesture, which can be a swipe input gesture from “g” to “r” on a virtual keyboard):
increasing a probability of one or more words in the blended list that were associated with the HPL prior to the blend (i.e. New leaf node 770 has been added to incorporate the user text input ("gar" [grrr]) associated with the user override… the new leaf node is given greater weight than the average probability… new leaf node 770 is assigned the average probability (0.25) of all of the other leaf nodes, and the remaining probabilities are distributed amongst leaf nodes 730, 740, 750, and 760 (0.1875 each) [Wahlen Col 8 lines 5-27, Fig. 7(b)] note: Fig. 7b shows a networked database of words. The words in this database are context-specific (see “grrr” element 770 of Fig. 7b) to the current conversation. Regarding “prior to the blend,” see section 112);
suggesting a word from the blended list of words that corresponds to the received gesture based on the increased probability; and
displaying the suggested word on a display of the electronic device (i.e. the set of one or more input predictions may include only the top two (or three or four, etc.) possible results with the highest probability [Wahlen Col 4 lines 36-38, Fig. 5a-5e, 6a-6d]… enables embodiments to "learn" from the user's input and acceptance of suggested input or entry of override triggers… additional nodes may be added to the network, and a prior probability distribution of the probabilistic model may be updated [Wahlen Col 5 lines 43-55] note: highest ranked words are presented, then the system learns from overrides, which adds the new words to the probability model, then the probability model is updated, then the highest ranked word(s) are presented from a new list of words that includes the new word) (i.e. this time around, since the device "learned" from the user's prior override of the device's suggested input, as shown in FIG. 5(e), the device did not present a different suggestion [“Oh great”]. On subsequent user input including a similar string of characters, the device then could present an accurate second suggested input ("Oh grrr") after learning that particular input string [Wahlen Col 6 lines 65 - Col 7 line 24, Fig. 5a-5e]).”
Wahlen is silent regarding a database “determined by applying a natural language processing (NLP) technique to contextual data from a plurality of categories.”
Skarbovsky teaches “generating a blended list of words, wherein the blended list of words is a blend of a high-priority list of words (HPL) obtained from context-specific database determined by applying a natural language processing (NLP) technique to contextual data from a plurality of categories (i.e. adds new terms into the contextual dictionary 130 based on primary and supplemental contextual data… Primary contextual data includes the parsed names and vocabulary from documents associated with the event in a productivity database 180… Supplemental contextual data includes the parsed names and vocabulary discovered in the productivity database 180 [Skarbovsky 0018]… supplemental contextual information are given lower weights or less effect on existing weights of entries in the contextual dictionary 130 than contextual information [Skarbovsky 0042] note: primary and supplementary contextual data correlates to a plurality of categories. Note2: parsing is a natural language processing technique)…
increasing a probability of one or more words in the blended list that were associated with the HPL prior to the blend (i.e. a discovered term (from the contextual information or the supplemental contextual information) is assigned an initial weight that the user may adjust so that a given term will be chosen more or less frequently [Skarbovsky 0045, Fig. 2B]… contextual information or supplemental contextual information that are to be included in the contextual dictionary 130 [Skarbovsky 0047] note: each term can be increased in probability using weighting controls 235 of Fig. 2B. Note: the terms are increased in weighting as the terms are divided into two lists, which are “to be” included in the contextual dictionary. In other words, the increase in probability occurs before the lists are blended into a dictionary);”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Wahlen to include the feature of having the ability to parse categorized documents as disclosed by Skarbovsky.
One would have been motivated to do so, before the effective filing date of the invention, because having multiple sources/types of data increases the amount of data available to work with, which tailors the lexicon to a specific user/need, which increases the accuracy of the suggested words.
Claim 2: Wahlen and Skarbovsky teach all the limitations of claim 1, above. Skarbovsky teaches “further comprising: determining a high-level context based on applying the NLP technique (i.e. a meeting event may be mined to discover its attendees, a title and description, and documents attached to that meeting event. These data are parsed to derive contextual information about the event [Skarbovsky 0026] note: parsing is an NLP technique);
accessing the context specific database based on the determined high-level context to obtain context specific words from the context specific database (i.e. the names of the attendees and terms parsed from the title description and attached documents are added to the contextual dictionary 130 [Skarbovsky 0026] note: accessing by adding words to the contextual dictionary/database. The contextual dictionary/database obtains the words that are added);
generating the HPL with words from the context specific database and words from the lexicon (i.e. contextual dictionary 130 is augmented from a base state (e.g., a standard dictionary, a prior-created contextual dictionary 130) to include terminology discovered via context mining [Skarbovsky 0026] note: HPL (augmented contextual dictionary) includes words from context and words from base state lexicon); and
blending the HPL with the lexicon to generate the blended list of words (i.e. contextual dictionary 130 is augmented from a base state (e.g., a standard dictionary, a prior-created contextual dictionary 130) to include terminology discovered via context mining [Skarbovsky 0026] note: HPL (augmented contextual dictionary) includes words from context and words from base state lexicon).”
One would have been motivated to combine Wahlen and Skarbovsky, before the effective filing date of the invention, because having multiple sources/types of data increases the amount of data available to work with, which tailors the lexicon to a specific user/need, which increases the accuracy of the suggested words.
Claim 3: Wahlen and Skarbovsky teach all the limitations of claim 1, above. Wahlen teaches “wherein suggesting the word from the blended list of words further comprises:
increasing a probability of those words in the blended list that are associated with the HPL that correspond to the received gesture (i.e. the probability assigned to the new leaf node is given greater weight than the average probability assigned to the leaf nodes prior to insertion of the new leaf node [Wahlen Col 8 lines 11-13, Fig. 7a-7b] note: Fig. 7a-7b shows the word “grrr,” that corresponds to the “gr” input gesture, with a probability of .25, which is increased relative to the words “great” and “gross.” The probability of the words “great” and “gross” fell from .25 to .1875);
rank ordering the words in the blended list based on the increased probability; and
suggesting the word based on the rank ordering (i.e. the set of one or more input predictions may include only the top two (or three or four, etc.) possible results with the highest probability [Wahlen Col 4 lines 36-38]).”
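For illustration only, the sequence mapped above for claim 3 (increasing the probability of HPL-associated words in the blended list, rank ordering, and suggesting the top-ranked word or words) can be sketched as follows. This sketch is not part of the record; the boost factor and the function and variable names are hypothetical:

```python
# Illustrative sketch (hypothetical names): boost HPL-associated words in a
# blended list, rank order by probability, and suggest the top-ranked words.
def suggest(blended, hpl, boost=2.0, top_n=2):
    # Increase the probability of words associated with the HPL.
    weighted = {w: p * (boost if w in hpl else 1.0) for w, p in blended.items()}
    # Rank order by (boosted) probability and return the top-n suggestions,
    # analogous to Wahlen presenting only the top two or three results.
    ranked = sorted(weighted, key=weighted.get, reverse=True)
    return ranked[:top_n]

blended = {"great": 0.1875, "grand": 0.1875, "grant": 0.1875, "gross": 0.1875, "grrr": 0.25}
print(suggest(blended, hpl={"grrr"}))  # -> ['grrr', 'great']
```

Here the HPL-associated word “grrr” is boosted above the lexicon words and therefore heads the suggestion list, consistent with Wahlen Fig. 7a-7b and Col 4 lines 36-38.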
Claim 4: Wahlen and Skarbovsky teach all the limitations of claim 3, above. Wahlen teaches “wherein the suggested word is a word that is ranked with the highest probability (i.e. the set of one or more input predictions may include only the top two (or three or four, etc.) possible results with the highest probability [Wahlen Col 4 lines 36-38] note: top two includes the top ranked word).”
Claim 7: Wahlen and Skarbovsky teach all the limitations of claim 1, above. Wahlen teaches “wherein the HPL includes words that are selected from any one of
a) words that were used above a threshold frequency in prior communications that occurred using the application of the electronic device,
b) words that were used in prior communications between a same sender and recipient of the communication (Wahlen Fig. 6a-6d shows word “grrr” used in a prior communication message between a same sender and recipient), and
c) words that were previously used in a current communication, and words that were used in prior communications by a user for communications relating to a same context.”
Claim 11: Wahlen and Skarbovsky teach all the limitations of claim 1, above. Wahlen teaches “wherein the application is a mobile messaging application, and the communication is a message composed by a sender intended for a recipient (i.e. FIGS. 6(a)-(d) illustrate a series of screenshots of another example text messaging session [Wahlen Col 7 lines 25-26, Fig. 6a-6d]).”
Claim 13: Wahlen and Skarbovsky teach all the limitations of claim 1, above. Wahlen teaches “wherein the gesture is received in response to a user touching a finger on the displayed digital keyboard of the application and moving the finger along a path on the displayed keyboard, wherein the keyboard includes icons or letters of an alphabet (i.e. the user has entered input 602 ("Oh gr") [Wahlen Col 7 lines 33-34, Fig 6a-6d]… The device 100 includes an example "virtual" keyboard 118 that is displayed on display screen 102, which is able to sense user input entered on keyboard 118 by any conventional key selection technique, including at least touching, tapping, swiping across, or otherwise selecting keys on keyboard 118 [Wahlen Col 3 lines 23-27]).”
Claim 17: Wahlen and Skarbovsky teach all the limitations of claim 1, above. Wahlen teaches “further comprising generating a new blended list of words for each new gesture received (i.e. the user input ("JLo") [Wahlen Col 8 lines 38-39, Fig. 8a-] note: the input gesture for “JLo” is a new gesture. “JLo” suggests a new list of words, such as Jello).”
Claim 18: Wahlen teaches “A method comprising:
receiving a current gesture in an application (i.e. the user enters several characters 500 ("Oh gr") in text entry area 124 [Wahlen Col 6 lines 65 - Col 7 line 24, Fig. 5a-5e]), wherein the gesture is related to trajectory of a hand movement on a displayed digital keyboard related to the application (i.e. The device 100 includes an example "virtual" keyboard 118 that is displayed on display screen 102, which is able to sense user input entered on keyboard 118 by any conventional key selection technique, including at least touching, tapping, swiping across, or otherwise selecting keys on keyboard 118 [Wahlen Col 3 lines 23-27, Fig. 5a-5e] note: “gr” text input, which can be a swipe input gesture from “g” to “r” on a virtual keyboard);
determining that all words suggested in a gesture previous to the current gesture were rejected (i.e. the user enters several characters 500 ("Oh gr") in text entry area 124… the suggested input ("Oh great") 502. However, because that is not what the user intended, in FIG. 5(c), the user uses the backspace key (non-character user input 504) to delete the last three characters of the suggested input to get back to the previous state of the user input 506 as illustrated in FIG. 5(d), thereby indicating an override trigger [Wahlen Col 6 lines 65 - Col 7 line 24, Fig. 5a-5e]); and
in response to the determination, determining whether a similarity index is above a threshold (i.e. In FIG. 5(d), the user input 506 was the same as was previously replaced [Wahlen Col 7 lines 15-16, Fig. 5a-5e] note: the user input has been determined to be the same, which is a similarity of 100%, which is greater than a threshold of 0%);
generating a blended list of words for the current gesture (Wahlen Fig. 5a-5e. Fig. 5e shows a list of one word “grrr” for the “gr” input gesture. Wahlen Fig. 6a-6d shows a blended list of multiple words for the “gr” input gesture. The list of words includes “grrr” based on historical context of the application, and “great, grand, grant, gross” from the application’s lexicon) prior to receiving the gesture (i.e. FIG. 7(b) illustrates the Bayesian network after it has evolved (such as is described with respect to FIG. 3, at step 370) to update the input repository based on user text input associated with a user override, as shown by example in different contexts in FIGS. 5(c) and 6(c). New leaf node 770 has been added to incorporate the user text input ("gar" [sic grrr]) associated with the user override [Wahlen Col 8 lines 1-7] note: the word “grrr” is updated into the repository of Fig. 7b after the user override input of Fig. 5(c) and 6(c). This is before the second/current gesture in Fig. 5(d) and 6(d). Note2: this is for the “gr” input gesture), wherein the blended list of words is a blend of a high-priority list of words (HPL) obtained from context-specific database (Fig. 5 shows that “Oh grrr,” as a high priority list, has been previously used in this conversation history context. It is noted that “a high priority list of words (HPL)” is not clearly defined in the specification. The closest to a definition the Examiner could find in the instant specification is “Once additional words from context specific databases are obtained, the control circuitries 220 and/or 228 may add those words to a high priority list (HPL) [instant spec 0103].” However, this still does not define what a “high priority list (HPL)” actually is. Therefore, the broadest reasonable interpretation of “high priority list” includes text used in the current conversation history) (i.e. New leaf node 770 has been added to incorporate the user text input ("gar" [grrr]) associated with the user override [Wahlen Col 8 lines 5-27, Fig. 7(b)] note: Fig. 7b shows a networked database of words. The words in this database are context-specific (see “grrr” element 770 of Fig. 7b) to the current conversation)…, and a lexicon associated with an application (“Oh grrr” has been previously used in this conversation application. If words are suggested, they are pulled from a lexicon. “Oh grrr” is suggested from the application. Wahlen Fig. 6a-6d shows a blended list of multiple words for the “gr” input gesture. The list of words includes “grrr” based on historical context of the application, and “great, grand, grant, gross” from the application’s lexicon), wherein the generated blended list of words includes an increased probability of one or more words that were associated with the HPL (i.e. New leaf node 770 has been added to incorporate the user text input ("gar" [grrr]) associated with the user override… the new leaf node is given greater weight than the average probability… new leaf node 770 is assigned the average probability (0.25) of all of the other leaf nodes, and the remaining probabilities are distributed amongst leaf nodes 730, 740, 750, and 760 (0.1875 each) [Wahlen Col 8 lines 5-27, Fig. 7(b)] note: Fig. 7b shows a networked database of words. The words in this database are context-specific (see “grrr” element 770 of Fig. 7b) to the current conversation)… and does not include the words rejected in the gesture previous to the current gesture if the similarity index is above the threshold (i.e. the user input 500 ("Oh gr") is automatically replaced by the suggested input ("Oh great") 502. However, because that is not what the user intended… thereby indicating an override trigger. The user then enters input 508 ("Oh grrr")… In FIG. 5(d), the user input 506 was the same as was previously replaced.
However, this time around, since the device "learned" from the user's prior override of the device's suggested input, as shown in FIG. 5(e), the device did not present a different suggestion. On subsequent user input including a similar string of characters, the device then could present an accurate second suggested input ("Oh grrr") after learning that particular input string [Wahlen Col 7 lines 6-24, Fig. 5a-5e]); and
presenting a highest rank word or words from the blended list of words (i.e. the set of one or more input predictions may include only the top two (or three or four, etc.) possible results with the highest probability [Wahlen Col 4 lines 36-38, Fig. 5a-5e, 6a-6d]… enables embodiments to "learn" from the user's input and acceptance of suggested input or entry of override triggers… additional nodes may be added to the network, and a prior probability distribution of the probabilistic model may be updated [Wahlen Col 5 lines 43-55] note: highest ranked words are presented, then the system learns from overrides, which adds the new words to the probability model, then the probability model is updated, then the highest ranked word(s) are presented from a new list of words that includes the new word) (i.e. this time around, since the device "learned" from the user's prior override of the device's suggested input, as shown in FIG. 5(e), the device did not present a different suggestion [“Oh great”]. On subsequent user input including a similar string of characters, the device then could present an accurate second suggested input ("Oh grrr") after learning that particular input string [Wahlen Col 6 lines 65 - Col 7 line 24, Fig. 5a-5e]).”
Wahlen is silent regarding a database “determined by applying a natural language processing (NLP) technique to contextual data from a plurality of categories,” and “prior to the blend.”
Skarbovsky teaches “generating a blended list of words… wherein the blended list of words is a blend of a high-priority list of words (HPL) obtained from context-specific database determined by applying a natural language processing (NLP) technique to contextual data from a plurality of categories (i.e. adds new terms into the contextual dictionary 130 based on primary and supplemental contextual data… Primary contextual data includes the parsed names and vocabulary from documents associated with the event in a productivity database 180… Supplemental contextual data includes the parsed names and vocabulary discovered in the productivity database 180 [Skarbovsky 0018]… supplemental contextual information are given lower weights or less effect on existing weights of entries in the contextual dictionary 130 than contextual information [Skarbovsky 0042] note: primary and supplementary contextual data correlates to a plurality of categories. Note2: parsing is a natural language processing technique)… wherein the generated blended list of words includes an increased probability of one or more words that were associated with the HPL prior to the blend (i.e. a discovered term (from the contextual information or the supplemental contextual information) is assigned an initial weight that the user may adjust so that a given term will be chosen more or less frequently [Skarbovsky 0045, Fig. 2B]… contextual information or supplemental contextual information that are to be included in the contextual dictionary 130 [Skarbovsky 0047] note: each term can be increased in probability using weighting controls 235 of Fig. 2B. Note: the terms are increased in weighting as the terms are divided into two lists, which are “to be” included in the contextual dictionary. In other words, the increase in probability occurs before the lists are blended into a dictionary).”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Wahlen to include the feature of having the ability to parse categorized documents as disclosed by Skarbovsky, and adjust the probability of words as disclosed by Skarbovsky.
One would have been motivated to do so, before the effective filing date of the invention, because it provides the benefit of having multiple sources/types of data, which increases the available data, tailors the lexicon to a specific user/need, and thereby increases the accuracy of the suggested words. It also provides the benefit of flexibly adjusting the probability of a suggested word when the suggested words are not accurate enough (improving accuracy) or when the conversation is about a specific topic (improving accuracy based on context).
Claim 19: Wahlen and Skarbovsky teach all the limitations of claim 18, above. Wahlen teaches “wherein the similarity index relates to a similarity between the current and previous gestures (i.e. In FIG. 5(d), the user input 506 was the same as was previously replaced [Wahlen Col 7 lines 15-16, Fig. 5a-5e] note: the user input has been determined to be the same, which is a similarity of 100%, which is greater than a threshold of 0%).”
Claim 20: Wahlen and Skarbovsky teach all the limitations of claim 18, above. Wahlen teaches “further comprising, triggering determining of the similarity index in response to determining that the word suggested in the gesture previous to the current gesture was rejected (i.e. in FIG. 5(c), the user uses the backspace key… thereby indicating an override trigger… In FIG. 5(d), the user input 506 was the same as was previously replaced [Wahlen Col 7 lines 15-16, Fig. 5a-5e] note: initially suggested word is rejected, then the very next step is determining whether an input is the same).”
Claim 26: Wahlen and Skarbovsky teach a system comprising: communications circuitry configured to access a displayed digital keyboard of an application (i.e. The device 100 includes an example "virtual" keyboard 118 that is displayed on display screen 102, which is able to sense user input entered on keyboard 118 by any conventional key selection technique, including at least touching, tapping, swiping across, or otherwise selecting keys on keyboard 118 [Wahlen Col 3 lines 23-27]); and control circuitry configured to (i.e. at least one processor 1002 for executing instructions [Wahlen Col 9 line 52]) perform operations corresponding to the method of claim 1; therefore, it is rejected under the same rationale.
Claim 27: Claim 27 is similar in content and in scope to claim 2, thus it is rejected under the same rationale.
Claim 28: Claim 28 is similar in content and in scope to claim 3, thus it is rejected under the same rationale.
Claim 29: Claim 29 is similar in content and in scope to claim 4, thus it is rejected under the same rationale.
Claim 32: Wahlen and Skarbovsky teach the system of claim 26, above. Wahlen and Skarbovsky teach “wherein the HPL includes words that are selected from any one of
a) words that were used above a threshold frequency in prior communications that occurred using the application of the electronic device,
b) words that were used in prior communications between a same sender and recipient of the communication (Wahlen Fig. 6a-6d shows word “grrr” used in a prior communication message between a same sender and recipient),
c) words that were previously used in a current communication, and words that were used in prior communications by a user for communications relating to a same context, and
d) words that are context specific (Wahlen Fig. 6a-6d shows word “grrr”, which is specific to the context of the current conversation).”
Claim 36: Claim 36 is similar in content and in scope to claim 11, thus it is rejected under the same rationale.
Claim 38: Claim 38 is similar in content and in scope to claim 13, thus it is rejected under the same rationale.
Claim 42: Claim 42 is similar in content and in scope to claim 17, thus it is rejected under the same rationale.
Claim 43: Wahlen and Skarbovsky teach all the limitations of claim 1, above. Skarbovsky teaches “further comprising, the increasing of the probability of one or more words in the blended list that were associated with the HPL prior to the blend (i.e. a discovered term (from the contextual information or the supplemental contextual information) is assigned an initial weight that the user may adjust so that a given term will be chosen more or less frequently [Skarbovsky 0045, Fig. 2B]… contextual information or supplemental contextual information that are to be included in the contextual dictionary 130 [Skarbovsky 0047] note: each term can be increased in probability using weighting controls 235 of Fig. 2B. Note: the terms are increased in weighting as the terms are divided into two lists, which are “to be” included in the contextual dictionary. In other words, the increase in probability occurs before the lists are blended into a dictionary) is performed to favor the HPL over the lexicon (Examiner interprets this limitation to be intended use).”
One would have been motivated to combine Wahlen and Skarbovsky, before the effective filing date of the invention, because it provides the benefit of flexibly adjusting the probability of a suggested word when the suggested words are not accurate enough (improving accuracy) or when the conversation is about a specific topic (improving accuracy based on context).
Claims 15 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Wahlen, in view of Skarbovsky, in view of McKenzie, Patent Application Publication number US 20170322623 A1 (hereinafter “McKenzie”).
Claim 15: Wahlen and Skarbovsky teach all the limitations of claim 1, above. Wahlen and Skarbovsky are silent regarding “wherein the gesture is received in response to a user gaze directed at displayed letters of the alphabet of the displayed digital keyboard of the application.”
McKenzie teaches “wherein the gesture is received in response to a user gaze directed at displayed letters of the alphabet of the displayed digital keyboard of the application (i.e. a virtual keyboard, detecting an initial eye gaze directed at an initial key of a plurality of keys of the virtual keyboard, receiving an initial input at a controller, initializing an eye gaze input mode in response to the initial input from the controller and the detection of the initial eye gaze, receiving at least one character input in response to at least one of a detected eye gaze input or a detected manipulation of the controller, and displaying, in the virtual reality environment, a character entry in response to the received at least one character input [McKenzie 0004]).”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Wahlen and Skarbovsky to include the above feature, as disclosed by McKenzie.
One would have been motivated to do so, before the effective filing date of the invention because it provides the benefit “in which this type of gaze input may be coupled with one or more user inputs via a handheld electronic device, or controller, and in particular, a touch sensitive surface of such a controller, to facilitate text entry inputs, and improve input accuracy [McKenzie 0016].”
Claim 40: Claim 40 is similar in content and in scope to claim 15, thus it is rejected under the same rationale.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Wahlen, in view of Skarbovsky, in view of Maggio, Patent Application Publication number US 20230372829 A1 (hereinafter “Maggio”), in view of Kosakowski, Patent Application Publication number US 20090162818 A1 (hereinafter “Kosakowski”).
Claim 21: Wahlen and Skarbovsky teach all the limitations of claim 18, above.
Wahlen and Skarbovsky are silent regarding “further comprising, determining that one or more… words exceeds a language proficiency level of a user, and removing the one or more… words from the blended list based on the determination.”
Maggio teaches “further comprising, determining that one or more… words exceeds a language proficiency level of a user, and removing the one or more… words from the blended list based on the determination (i.e. As a player increases in proficiency, reaching progressively higher ranks, the lexicon of words may increase, and in embodiments, higher ranking players may enable play against peers of similar skill levels, such that lower tier players do not become frustrated with having to solve words with which lower tier players may be frequently unfamiliar [Maggio 0102] note: making a determination to increase a player’s rank, or not, is a determination of language proficiency level. Determining to stay on a lower proficiency level eliminates the words in the higher proficiency levels. Maggio picks and chooses which words/lexicon to serve to the user; it solves the same problem, thus Maggio is analogous art).”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Wahlen and Skarbovsky to include the above feature, as disclosed by Maggio.
One would have been motivated to do so, before the effective filing date of the invention because it provides the benefit to filter out unwanted words, which decreases clutter and reduces user error.
Wahlen and Skarbovsky and Maggio are silent regarding “highest ranked” words.
Kosakowski teaches “further comprising, determining that one or more highest ranked words exceeds a language proficiency level of a user, and removing the one or more highest rank words from the blended list based on the determination (i.e. associate a rank with each word in said learning content, to determine a rank based on said at least one initial language skill parameter, to remove each word with a rank lower than said rank [Kosakowski 0021] note: while claim 21 recites removing higher ranked words, removing lower ranked words is conceptually the same function).”
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Wahlen and Skarbovsky and Maggio to include the above feature, as disclosed by Kosakowski.
One would have been motivated to do so, before the effective filing date of the invention because it provides the benefit to more accurately filter out unwanted words, which decreases clutter and reduces user error.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lewis (US 20180365232 A1), listed on form PTO-892, is related to selecting lexicons, specifically in a translation application.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL SHEN whose telephone number is (469)295-9169 and email address is samuel.shen@uspto.gov. The examiner can normally be reached Monday-Thursday, 7:00 am - 5:00 pm CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fred Ehichioya can be reached on (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.S./Examiner, Art Unit 2179
/IRETE F EHICHIOYA/Supervisory Patent Examiner, Art Unit 2179