Prosecution Insights
Last updated: April 19, 2026
Application No. 18/178,922

THOUGHT INFERENCE SYSTEM, INFERENCE MODEL GENERATION SYSTEM, THOUGHT INFERENCE DEVICE, INFERENCE MODEL GENERATION METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM

Non-Final OA: §102, §103, §112
Filed: Mar 06, 2023
Examiner: TAN, DAVID H
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: National University Corporation Ehime University
OA Round: 1 (Non-Final)
Grant Probability: 31% (At Risk)
Projected OA Rounds: 1-2
Projected Time to Grant: 4y 1m
Grant Probability With Interview: 46%

Examiner Intelligence

Career Allow Rate: 31% (30 granted / 98 resolved; -24.4% vs TC avg)
Interview Lift: +15.8% allowance lift on resolved cases with interview
Typical Timeline: 4y 1m average prosecution; 41 applications currently pending
Career History: 139 total applications across all art units

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 63.5% (+23.5% vs TC avg)
§102: 19.8% (-20.2% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)
Tech Center averages are estimates. Figures based on career data from 98 resolved cases.
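The headline examiner figures above can be re-derived from the raw counts. A minimal sketch of that arithmetic (the implied TC-average and without-interview rates are back-calculated from the displayed deltas; they are not values stated by the source):

```python
# Re-derive the dashboard's headline statistics from the raw counts shown above.
granted, resolved = 30, 98

career_allow_rate = granted / resolved          # 0.3061... -> displayed as 31%

# "-24.4% vs TC avg" implies a Tech Center average near 55%.
implied_tc_avg = career_allow_rate + 0.244      # ~0.550

# A 46% with-interview rate minus the "+15.8% interview lift" implies
# roughly a 30% allowance rate for resolved cases without an interview.
implied_without_interview = 0.46 - 0.158        # 0.302

print(f"{career_allow_rate:.1%}, {implied_tc_avg:.1%}, {implied_without_interview:.1%}")
```

The small gap between the 31% career rate and the ~30% without-interview subset is expected, since the lift is measured on the subset of resolved cases that had an interview.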

Office Action

Statutes cited: §102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/06/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C.
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a data set acquisition module configured to acquire a plurality of data sets…”, “an inference model generation module configured to generate, for each of a plurality of combinations of a time frame and a location, an inference model…”, “an inference module configured to infer a second thought…”, and “an output module configured to output the second thought” in claim 1. Claims 2, 14, 15, and 16 recite similar limitations. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C.
112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim limitations “a data set acquisition module configured to acquire a plurality of data sets…”, “an inference model generation module configured to generate, for each of a plurality of combinations of a time frame and a location, an inference model…”, “an inference module configured to infer a second thought…”, and “an output module configured to output the second thought” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. At most, the specification filed 03/06/2023 states in para. [0020] that “The ROM 22 or the auxiliary storage device 23 stores, therein, an operating system and computer programs such as an inference service program 2P. The inference service program 2P is a program for implementing the functions of a learning module 201, an inference model storage module 202, an inference module 203, and so on, which are illustrated in Fig. 3. Examples of the auxiliary storage device 23 include a hard disk and a solid-state drive (SSD).” Applicant's specification merely states that the modules may be programs or may include hardware; as such, under BRI, the specification is devoid of any structure that explicitly performs the functions recited in the claims.
Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim(s) 1-3, 11, & 14-16 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Patent Application Publication No. 20210124422 “Forsland”. Claim 1: Forsland teaches a thought inference system comprising: a data set acquisition module configured to acquire a plurality of data sets (i.e. para. [0041], The sensory devices 110, 120, 130, 150 and 160, are computing devices (see FIG. 3) that are used by users to translate a user's gesture to an audible, speech command. The sensory devices 110, 120, 130, 150 and 160 sense and receive gesture inputs by the respective user on a sensor interface), each of the data sets indicating a first condition for a case where a first physical reaction is seen in a person with speech difficulties (i.e. para. [0039], This system or device will benefit individuals with communication disabilities. In particular, it will benefit nonverbal individuals, allowing them to express their thoughts in the form of spoken language) and a first thought of the person with speech difficulties for a case where the first physical reaction is seen (i.e. para. 
[0057-0063], “the process proceeds to 420 where the user inputs a gesture… Alternatively, the gesture can include a swipe in a certain direction, such as swipe up, swipe down, swipe southeast, and such. In addition, the gesture can include a letter, or a shape, or an arbitrary design. The user can also input a series of gestures. The sensors on the sensory device 410 capture the gestures… After the raw data has been interpreted, the cloud system would compare the raw data inputted to a database or library of previously saved gestures stored on the cloud system. The database or library would include previously saved gestures with corresponding communication commands associated with each previously saved gesture… After the communication command has been identified, the cloud system 440 then transmits the communication command at 470 over the network to the sensory device 410. The sensory device 410 then generates the speech command 475”, wherein the BRI for a thought encompasses a user’s intended command and the BRI for a physical reaction encompasses a gesture, and wherein a sensor may acquire gesture data sets that correspond to conditional commands, such as speech generation commands, associated with a matching physical gesture); an inference model generation module (i.e. para. [0090], In the framework, input from the sensors (e.g., due to input received by the sensors) are received by or as an input gesture. In the framework, context awareness is used to interpret or determine the user gesture or intent from the inputs received. In the framework machine learning is used to interpret or determine the user gesture or intent from the inputs received) configured to generate, for each of a plurality of combinations of a time frame and a location (i.e. para. [0063], Contextual data may include contact lists, location, time, urgency metadata), an inference model by machine learning (i.e. para.
[0090], in the framework machine learning is used to interpret or determine the user gesture or intent from the inputs received) in which the first condition indicated in the data set acquired in a time frame and a location of the subject combination (i.e. para. [0059], “the cloud system may analyze the raw data of the gesture inputted by determining the pattern, such as the direction of the gesture, or by determining the time spent in one location, such as how long the user pressed down on the sensory device”, wherein it is further noted in para. [0069], that the definition of “gesture” is as a time based input with a beginning/middle/end) is used as an explanatory variable and the first thought indicated in the data set is used as an objective variable (i.e. para. [0090], “the left intention signals being combined with context awareness metadata to enrich the data in order to determine the logic of the output and action”, wherein the first gesture has combination of time frame and location data that is used to determine a corresponding thought action that may be the user’s objective when inputting the gesture); an inference module configured to infer a second thought for a case where a second reaction is seen in the person with speech difficulties by inputting input data indicating a second condition for a case where the second reaction is seen to the inference model that is generated, among the plurality of combinations, for a combination of a time frame and a location in which the second reaction is seen (i.e. para. [0074], “the sensory device 810 shows a user's pre-configured gestures that are stored in a user's account… For example, a single tap 845, may translate into the words, “Thinking of You.” A double tap may translate into the words, “How are you?””, wherein the BRI for a second thought would be a second action with corresponding combination of gestures with contextual time frame and location data); and an output module (i.e. para. 
[0041], The sensory devices 110, 120, 130, 150 and 160 also generate an audio or visual output which translates the gesture into a communication command) configured to output the second thought (i.e. para. [0089], “The device may geolocation data that indicates the user is away from home; tag the communication with appended contextual information; and its output and action logic tells the system to send a text message to the caregiver with the user's location in a human-understandable grammatically correct phrase “Help, I'm in Oak Park” including the user's Sender ID/Profile and coordinates pinned on a map”, wherein it is noted that each user may customize their gesture library, in which case a double tap gesture with contextual data may correspond to a text message for help). Claim 2: Forsland teaches an inference model generation system comprising: a data set acquisition module configured to acquire a plurality of data sets (i.e. para. [0041], The sensory devices 110, 120, 130, 150 and 160, are computing devices (see FIG. 3) that are used by users to translate a user's gesture to an audible, speech command. The sensory devices 110, 120, 130, 150 and 160 sense and receive gesture inputs by the respective user on a sensor interface), each of the data sets indicating a condition for a case where a physical reaction is seen in a person with speech difficulties (i.e. para. [0039], This system or device will benefit individuals with communication disabilities. In particular, it will benefit nonverbal individuals, allowing them to express their thoughts in the form of spoken language) and a thought of the person with speech difficulties for a case where the physical reaction is seen (i.e. para. [0057-0063], “the process proceeds to 420 where the user inputs a gesture… Alternatively, the gesture can include a swipe in a certain direction, such as swipe up, swipe down, swipe southeast, and such. 
In addition, the gesture can include a letter, or a shape, or an arbitrary design. The user can also input a series of gestures. The sensors on the sensory device 410 capture the gestures… After the raw data has been interpreted, the cloud system would compare the raw data inputted to a database or library of previously saved gestures stored on the cloud system. The database or library would include previously saved gestures with corresponding communication commands associated with each previously saved gesture… After the communication command has been identified, the cloud system 440 then transmits the communication command at 470 over the network to the sensory device 410. The sensory device 410 then generates the speech command 475”, wherein the BRI for a thought encompasses a user’s intended command and the BRI for a physical reaction encompasses a gesture, and wherein a sensor may acquire gesture data sets that correspond to conditional commands, such as speech generation commands, associated with a matching physical gesture); and an inference model generation module (i.e. para. [0090], In the framework, input from the sensors (e.g., due to input received by the sensors) are received by or as an input gesture. In the framework, context awareness is used to interpret or determine the user gesture or intent from the inputs received. In the framework machine learning is used to interpret or determine the user gesture or intent from the inputs received) configured to generate, for each of a plurality of combinations of a time frame and a location (i.e. para. [0063], Contextual data may include contact lists, location, time, urgency metadata), an inference model by machine learning (i.e. para. [0090], in the framework machine learning is used to interpret or determine the user gesture or intent from the inputs received) in which the condition indicated in the data set acquired in a time frame and a location of the subject combination is used (i.e. para.
[0059], “the cloud system may analyze the raw data of the gesture inputted by determining the pattern, such as the direction of the gesture, or by determining the time spent in one location, such as how long the user pressed down on the sensory device”, wherein it is further noted in para. [0069], that the definition of “gesture” is as a time based input with a beginning/middle/end) as an explanatory variable and the thought indicated in the data set is used as an objective variable (i.e. para. [0090], “the left intention signals being combined with context awareness metadata to enrich the data in order to determine the logic of the output and action”, wherein the first gesture has combination of time frame and location data that is used to determine a corresponding thought action that may be the user’s objective when inputting the gesture). Claim 3: Forsland teaches the inference model generation system according to claim 2, wherein the data set acquisition module acquires, as the data set, data indicating biometric information of the person with speech difficulties or a state around the person with speech difficulties (i.e. para. [0039], Gesture, as used throughout this patent, may be defined as a ‘time-based’ analog input to a digital interface, and may include, but not be limited to, time-domain (TD) biometric data from a sensor, motion tracking data from a sensor or camera, direct selection data from a touch sensor, orientation data from a location sensor, and may include the combination of time-based data from multiple sensors) . Claim 11: Forsland teaches the inference model generation system according to claim 2, wherein a plurality of thought options is prepared in advance according to each of the plurality of combinations (i.e. para. 
[0059], The database or library would include previously saved gestures with corresponding communication commands associated with each previously saved gesture), and the data set acquisition module acquires, as the data set for each of the plurality of combinations, data that indicates, as the thought, a thought option selected based on the reaction from among the plurality of thought options according to the subject combination by a person who cares for the person with speech difficulties (i.e. para. [0060], The cloud system 440 determines if there is a gesture match at 435 between the inputted gesture and the stored preconfigured gestures. To determine if there is a gesture match, the cloud system would analyze the inputted gesture, and the raw data associated with the inputted gesture, and lookup the preconfigured gestures stored in the database. If the inputted gesture exists in the database, then the database will retrieve that record stored in the database. The record in the database will include the communication command associated with the inputted gesture). Claim 14: Forsland teaches a thought inference device comprising: an inference module configured to infer a second thought for a case where a second reaction is seen by inputting input data indicating a second condition for a case where the second reaction is seen to an inference model generated for a time frame and a location in which the second reaction is seen by the inference model generation system according to claim 2 (i.e. para. [0074], “the sensory device 810 shows a user's pre-configured gestures that are stored in a user's account… For example, a single tap 845, may translate into the words, “Thinking of You.” A double tap may translate into the words, “How are you?””, wherein the BRI for a second thought would be a second action with corresponding combination of gestures with contextual time frame and location data); and an output module (i.e. para. 
[0041], The sensory devices 110, 120, 130, 150 and 160 also generate an audio or visual output which translates the gesture into a communication command) configured to output the second thought (i.e. para. [0089], “The device may geolocation data that indicates the user is away from home; tag the communication with appended contextual information; and its output and action logic tells the system to send a text message to the caregiver with the user's location in a human-understandable grammatically correct phrase “Help, I'm in Oak Park” including the user's Sender ID/Profile and coordinates pinned on a map”, wherein it is noted that each user may customize their gesture library, in which case a double tap gesture with contextual data may correspond to a text message for help). Claim 15: Claim 15 is the method claim reciting similar limitations to claim 2 and is rejected for similar reasons. Claim 16: Claim 16 is the non-transitory computer readable storage medium claim reciting similar limitations to claim 2 and is rejected for similar reasons. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 4 & 12-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20210124422 “Forsland” in light of U.S. Patent Application Publication No. 20140244266 “Brown”. Claim 4: Forsland teaches the inference model generation system according to claim 3, wherein the data set acquisition module acquires, as the data set, data (i.e. para. [0063], Contextual data may include contact lists, location, time, urgency metadata). While Forsland teaches acquiring contextual data around a user, Forsland may not explicitly teach that the contextual data acquired includes data indicating weather. However, Brown teaches acquiring data around a person, wherein the data may indicate weather (i.e. para. [0042-0055], context of a conversation may comprise any type of information that aids in understanding the meaning of a query and/or in formulating a response… a location of the user (e.g., a geolocation of the user associated with the device through which the user provides the query, location based on network information, address of the user, etc.)… information derived from the user's location (e.g., current, forecasted, or past weather at the location).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add data indicating weather to the contextual data that influences an inference in Forsland, given that contextual data may be location-based weather data, as taught by Brown. One would have been motivated to combine Forsland with Brown and would have had a reasonable expectation of success, as the combination provides a more robust aid in understanding the meaning of a query and/or in formulating a response. Claim 12: Forsland teaches the inference model generation system according to claim 11, wherein the condition includes a condition regarding a motion of the person with speech difficulties (i.e. para. [0102], Based on this central ideal position, the user interface is adjusted to conform to the user's range of motion limitations), a condition regarding at least any one of an environment around the person with speech difficulties (i.e. para. [0087], Above is a context and awareness block which receives and processes metadata inputs from sensors such as biometrics; environment), (i.e. para. [0101], the user can move the direction of their face towards an item grossly moving a cursor towards the intended object, then may use eye movement to fine tune their cursor control). Forsland may not explicitly teach wherein condition information includes weather in a location where the person with speech difficulties is present. However, Brown teaches including information about weather in a location where the person with speech difficulties is present (i.e. para. [0042-0055], context of a conversation may comprise any type of information that aids in understanding the meaning of a query and/or in formulating a response… a location of the user (e.g., a geolocation of the user associated with the device through which the user provides the query, location based on network information, address of the user, etc.)…
information derived from the user's location (e.g., current, forecasted, or past weather at the location). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add weather in a location where the person with speech difficulties is present to the conditional data that influences an inference in Forsland, given that condition data may be location-based weather data, as taught by Brown. One would have been motivated to combine Forsland with Brown and would have had a reasonable expectation of success, as the combination provides a more robust aid in understanding the meaning of a query and/or in formulating a response. Claim 13: Forsland teaches the inference model generation system according to claim 11, wherein the condition includes conditions regarding a motion of the person with speech difficulties (i.e. para. [0102], Based on this central ideal position, the user interface is adjusted to conform to the user's range of motion limitations), an environment around the person with speech difficulties (i.e. para. [0087], Above is a context and awareness block which receives and processes metadata inputs from sensors such as biometrics; environment), (i.e. para. [0101], the user can move the direction of their face towards an item grossly moving a cursor towards the intended object, then may use eye movement to fine tune their cursor control). Forsland may not explicitly teach wherein condition information includes weather in a location where the person with speech difficulties is present. However, Brown teaches including information about weather in a location where the person with speech difficulties is present (i.e. para. [0042-0055], context of a conversation may comprise any type of information that aids in understanding the meaning of a query and/or in formulating a response.
a location of the user (e.g., a geolocation of the user associated with the device through which the user provides the query, location based on network information, address of the user, etc.)… information derived from the user's location (e.g., current, forecasted, or past weather at the location). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add weather in a location where the person with speech difficulties is present to the conditional data that influences an inference in Forsland, given that condition data may be location-based weather data, as taught by Brown. One would have been motivated to combine Forsland with Brown and would have had a reasonable expectation of success, as the combination provides a more robust aid in understanding the meaning of a query and/or in formulating a response. Claim(s) 5 & 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20210124422 “Forsland” in light of U.S. Patent Application Publication No. 20130218812 “Weiss”. Claim 5: Forsland teaches the inference model generation system according to claim 3, wherein the data set acquisition module acquires, as the data set, data indicating information on each of a plurality of items (i.e. para. [0060], To determine if there is a gesture match, the cloud system would analyze the inputted gesture, and the raw data associated with the inputted gesture, and lookup the preconfigured gestures stored in the database. If the inputted gesture exists in the database, then the database will retrieve that record stored in the database), and the inference model generation module generates the inference model by using a weight coefficient set (i.e. para. [0092], The system may include embedded software that handles the digitization and post-processing of the signals.
Post-processing may include but not be limited to various models of compression, feature analysis, classification, metadata tagging, categorization. The system may handle preprocessing, digital conversion, and post-processing using a variety of methods, ranging from statistical to machine learning). While Forsland teaches generating a machine learning model for feature analysis, classification, metadata tagging, and categorization of gesture data signals, Forsland may not explicitly teach wherein the model is generated using a weight coefficient set. However, Weiss teaches generating an inference model by using a weight coefficient set for each of the plurality of items (i.e. para. [0020, 0031], “A classifier can include a decision tree or other static or dynamic classifier and can be conditioned by a Markov model... The classifier can be trained based on received training data including sensed data from a particular device and an indication of one or more known states corresponding to the sensed data. A plurality of different classifiers respectively weighted towards particular physical activities can be trained for predicting user state when the monitored user is scheduled to be performing the particular physical activities”, wherein the data set acquired for a user may include weighted classifiers as part of the model generated and trained to infer a state of an observed user). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add generating an inference model by using a weight coefficient set for each of the plurality of items, to the inference model of Forsland, given how weights may be used in a classification model, as taught by Weiss. One would have been motivated to combine Forsland with Weiss and would have had a reasonable expectation of success, as the combination provides additional data that can improve the probability of detecting a targeted behavior.
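For background on the technique the examiner quotes from Weiss, the passage describes an ensemble of classifiers, each weighted toward a particular physical activity, whose weighted scores are combined to infer a user's state. The following is a minimal, hypothetical sketch of that arrangement; all class names, weights, and feature values are illustrative assumptions and appear nowhere in the cited references:

```python
# Hypothetical sketch of the weighted-classifier scheme Weiss describes in
# para. [0020, 0031]: several classifiers, each weighted toward a particular
# physical activity, contribute weighted scores toward the inferred user state.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class WeightedClassifier:
    # classify maps a feature vector to {state: score}
    classify: Callable[[List[float]], Dict[str, float]]
    # weight coefficient emphasizing the activity this classifier targets
    weight: float

def infer_state(classifiers: List[WeightedClassifier],
                features: List[float]) -> str:
    """Sum the weight-scaled scores from every classifier and return the
    best-scoring state."""
    totals: Dict[str, float] = {}
    for c in classifiers:
        for state, score in c.classify(features).items():
            totals[state] = totals.get(state, 0.0) + c.weight * score
    return max(totals, key=totals.get)

# Toy usage: one classifier weighted toward "walking", one toward "resting".
walking = WeightedClassifier(
    classify=lambda f: {"walking": f[0], "resting": 1.0 - f[0]}, weight=2.0)
resting = WeightedClassifier(
    classify=lambda f: {"walking": 1.0 - f[1], "resting": f[1]}, weight=1.0)

print(infer_state([walking, resting], [0.9, 0.2]))  # prints "walking"
```

In this sketch, `weight` plays the role of the claimed weight coefficient set per item; in the system Weiss describes, the classifiers would be trained on sensed data rather than hard-coded.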
Claim 8: Forsland teaches the inference model generation system according to claim 3, wherein the data set acquisition module acquires, as the data set, data indicating information on each of a plurality of items (i.e. para. [0041], The sensory devices 110, 120, 130, 150 and 160, are computing devices (see FIG. 3) that are used by users to translate a user's gesture to an audible, speech command. The sensory devices 110, 120, 130, 150 and 160 sense and receive gesture inputs by the respective user on a sensor interface). Forsland may not explicitly teach wherein a weight coefficient for each of the plurality of items is determined according to each of the plurality of combinations, and the inference model generation module generates the inference model for each of the plurality of combinations by using the weight coefficient according to the subject combination. However, Weiss teaches wherein a weight coefficient for each of the plurality of items is determined according to each of the plurality of combinations, and the inference model generation module generates the inference model for each of the plurality of combinations by using the weight coefficient according to the subject combination (i.e. para. [0020, 0031], “A classifier can include a decision tree or other static or dynamic classifier and can be conditioned by a Markov model... The classifier can be trained based on received training data including sensed data from a particular device and an indication of one or more known states corresponding to the sensed data. A plurality of different classifiers respectively weighted towards particular physical activities can be trained for predicting user state when the monitored user is scheduled to be performing the particular physical activities”, wherein a combination of different weights and classifiers may be used to generate and train the Markov classification model).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add generating an inference model by using a weight coefficient set for each of the plurality of items, to the inference model of Forsland, given how weights may be used in combination with observed data in a classification model, as taught by Weiss. One would have been motivated to combine Forsland with Weiss and would have had a reasonable expectation of success, as the combination provides additional data that can improve the probability of detecting a targeted behavior. Claims 6-7 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20210124422 “Forsland” in light of U.S. Patent Application Publication No. 20130218812 “Weiss”, as applied to Claims 5 & 8 above, and further in light of U.S. Patent Application Publication No. 20140244266 “Brown”. Claim 6: Forsland and Weiss teach the inference model generation system according to claim 5. Forsland further teaches wherein the condition includes a condition regarding a motion of the person with speech difficulties (i.e. para. [0102], Based on this central ideal position, the user interface is adjusted to conform to the user's range of motion limitations) and a condition regarding at least any one of an environment around the person with speech difficulties (i.e. para. [0087], Above is a context and awareness block which receives and processes metadata inputs from sensors such as biometrics; environment) and a body of the person with speech difficulties (i.e. para. [0101], the user can move the direction of their face towards an item grossly moving a cursor towards the intended object, then may use eye movement to fine tune their cursor control). Forsland and Weiss may not explicitly teach wherein condition information includes weather in a location where the person with speech difficulties is present.
However, Brown teaches including information about weather in a location where the person with speech difficulties is present (i.e. para. [0042-0055], context of a conversation may comprise any type of information that aids in understanding the meaning of a query and/or in formulating a response ... a location of the user (e.g., a geolocation of the user associated with the device through which the user provides the query, location based on network information, address of the user, etc.) ... information derived from the user's location (e.g., current, forecasted, or past weather at the location)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add weather in a location where the person with speech difficulties is present to the conditional data that influences an inference of Forsland-Weiss, given that condition data may be location-based weather data, as taught by Brown. One would have been motivated to combine Forsland-Weiss with Brown and would have had a reasonable expectation of success, as the combination provides a more robust aid in understanding the meaning of a query and/or in formulating a response. Claim 7: Forsland and Weiss teach the inference model generation system according to claim 5. Forsland further teaches wherein the condition includes conditions regarding a motion of the person with speech difficulties (i.e. para. [0102], Based on this central ideal position, the user interface is adjusted to conform to the user's range of motion limitations), an environment around the person with speech difficulties (i.e. para. [0087], Above is a context and awareness block which receives and processes metadata inputs from sensors such as biometrics; environment), and a body of the person with speech difficulties (i.e. para. [0101], the user can move the direction of their face towards an item grossly moving a cursor towards the intended object, then may use eye movement to fine tune their cursor control).
Forsland and Weiss may not explicitly teach wherein condition information includes weather in a location where the person with speech difficulties is present. However, Brown teaches including information about weather in a location where the person with speech difficulties is present (i.e. para. [0042-0055], context of a conversation may comprise any type of information that aids in understanding the meaning of a query and/or in formulating a response ... a location of the user (e.g., a geolocation of the user associated with the device through which the user provides the query, location based on network information, address of the user, etc.) ... information derived from the user's location (e.g., current, forecasted, or past weather at the location)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add weather in a location where the person with speech difficulties is present to the conditional data that influences an inference of Forsland-Weiss, given that condition data may be location-based weather data, as taught by Brown. One would have been motivated to combine Forsland-Weiss with Brown and would have had a reasonable expectation of success, as the combination provides a more robust aid in understanding the meaning of a query and/or in formulating a response. Claim 9: Forsland and Weiss teach the inference model generation system according to claim 8. Forsland further teaches wherein the condition includes a condition regarding a motion of the person with speech difficulties (i.e. para. [0102], Based on this central ideal position, the user interface is adjusted to conform to the user's range of motion limitations) and a condition regarding at least any one of an environment around the person with speech difficulties (i.e. para. [0087], Above is a context and awareness block which receives and processes metadata inputs from sensors such as biometrics; environment) and a body of the person with speech difficulties (i.e. para.
[0101], the user can move the direction of their face towards an item grossly moving a cursor towards the intended object, then may use eye movement to fine tune their cursor control). Forsland and Weiss may not explicitly teach wherein condition information includes weather in a location where the person with speech difficulties is present. However, Brown teaches including information about weather in a location where the person with speech difficulties is present (i.e. para. [0042-0055], context of a conversation may comprise any type of information that aids in understanding the meaning of a query and/or in formulating a response ... a location of the user (e.g., a geolocation of the user associated with the device through which the user provides the query, location based on network information, address of the user, etc.) ... information derived from the user's location (e.g., current, forecasted, or past weather at the location)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add weather in a location where the person with speech difficulties is present to the conditional data that influences an inference of Forsland-Weiss, given that condition data may be location-based weather data, as taught by Brown. One would have been motivated to combine Forsland-Weiss with Brown and would have had a reasonable expectation of success, as the combination provides a more robust aid in understanding the meaning of a query and/or in formulating a response. Claim 10: Forsland, Weiss, and Brown teach the inference model generation system according to claim 8. Forsland further teaches wherein the condition includes conditions regarding a motion of the person with speech difficulties (i.e. para. [0102], Based on this central ideal position, the user interface is adjusted to conform to the user's range of motion limitations), an environment around the person with speech difficulties (i.e. para.
[0087], Above is a context and awareness block which receives and processes metadata inputs from sensors such as biometrics; environment). Brown further teaches wherein the condition includes conditions regarding weather in a location where the person with speech difficulties is present (i.e. para. [0042-0055], context of a conversation may comprise any type of information that aids in understanding the meaning of a query and/or in formulating a response ... a location of the user (e.g., a geolocation of the user associated with the device through which the user provides the query, location based on network information, address of the user, etc.) ... information derived from the user's location (e.g., current, forecasted, or past weather at the location)). Forsland further teaches wherein the condition includes conditions regarding a body of the person with speech difficulties (i.e. para. [0101], the user can move the direction of their face towards an item grossly moving a cursor towards the intended object, then may use eye movement to fine tune their cursor control). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Patent Application Publication No. 20180153430 “Ang” teaches in para. [0043], Here we describe, among other things, technologies that can detect tissue electrical signals (in particular, nerve electrical signals) non-invasively, accurately, and precisely, and use information about the electrical signals for a wide variety of purposes including diagnosis, therapy, control, inference of gestures or other indications of intent, bio-feedback, interaction, analysis. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TAN, whose telephone number is (571) 272-7433. The examiner can normally be reached M-F 7:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /D.T./ Examiner, Art Unit 2145 /CESAR B PAULA/ Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Mar 06, 2023
Application Filed
Oct 16, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443336
INTERACTIVE USER INTERFACE FOR DYNAMICALLY UPDATING DATA AND DATA ANALYSIS AND QUERY PROCESSING
2y 5m to grant Granted Oct 14, 2025
Patent 12282863
METHOD AND SYSTEM OF USER IDENTIFICATION BY A SEQUENCE OF OPENED USER INTERFACE WINDOWS
2y 5m to grant Granted Apr 22, 2025
Patent 12182378
METHODS AND SYSTEMS FOR OBJECT SELECTION
2y 5m to grant Granted Dec 31, 2024
Patent 12111956
Methods and Systems for Access Controlled Spaces for Data Analytics and Visualization
2y 5m to grant Granted Oct 08, 2024
Patent 12032809
Computer System and Method for Creating, Assigning, and Interacting with Action Items Related to a Collaborative Task
2y 5m to grant Granted Jul 09, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
31%
Grant Probability
46%
With Interview (+15.8%)
4y 1m
Median Time to Grant
Low
PTA Risk
Based on 98 resolved cases by this examiner. Grant probability derived from career allow rate.
