Prosecution Insights
Last updated: April 19, 2026
Application No. 18/530,722

Task-Based Distributional Semantic Model or Embeddings for Inferring Intent Similarity

Non-Final OA: §101, §103
Filed: Dec 06, 2023
Examiner: HEADLY, MELISSA A
Art Unit: 2197
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Hamilton Sundstrand Space Systems International Inc.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (306 granted / 408 resolved; +20.0% vs TC avg, above average)
Interview Lift: +40.4% for resolved cases with interview (strong)
Avg Prosecution: 3y 6m typical timeline (24 currently pending)
Total Applications: 432 across all art units (career history)

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 58.1% (+18.1% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)
Based on career data from 408 resolved cases; Tech Center averages are estimates.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner Notes

Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): “Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.” Please note that the above statement can only be submitted via Central Fax, regular postal mail, or EFS-Web (PTO/SB/439).
Claim Objections

Claim 3 is objected to because of the following informalities: the phrase “wherein the computing system compares the similarity value to a threshold value and a failure to achieve the intended target goal based on the comparison” should be corrected to “wherein the computing system compares the similarity value to a threshold value and determines a failure to achieve the intended target goal based on the comparison.” Claim 9 is objected to because of the following informalities: the phrase “The method of claim 1” should be corrected to “The method of claim 9”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. During examination, the claims must be interpreted as broadly as their terms reasonably allow. In re American Academy of Science Tech Center, 367 F.3d 1359, 1369, 70 U.S.P.Q.2d 1827, 1834 (Fed. Cir. 2004). Independent claim 9 recites a “system,” which is not comprehensively defined by the specification. The broadest reasonable interpretation of a claim drawn to a system covers software per se in view of the ordinary and customary meaning of “system,” particularly when the specification is silent. Software per se is not a “process,” a “machine,” a “manufacture,” or a “composition of matter” as defined in 35 U.S.C. § 101. Examiner suggests adding a recitation of a “processor” and a “memory.”

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
In adhering to the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG), Step 1 determines whether or not the claims fall within a statutory class. Herein, the claims fall within the statutory classes of process, machine, or manufacture. Hence, the claims qualify as potentially eligible subject matter under 35 U.S.C. § 101. With Step 1 satisfied by a statutory category, the analysis proceeds to Step 2A.

Step 2A is a two-prong inquiry. Prong 1 considers whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon). In this case, independent claim 1 recites mental processes as applied to human activity (i.e., concepts performed in the human mind but for the recitation of generic computing components). For example, the following claimed steps are functions that can be reasonably carried out in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and/or opinion:

a. a sensor configured to monitor tasks included in a course of action (CoA) performed by a human operator in an environment; and

b. wherein the computing system inputs the monitored tasks determined by the sensor into the trained task-based distributional semantic model to determine a deviation between the reference tasks and the monitored tasks.

But for the recitation of generic computing components, steps a and b can be completed in the human mind with the aid of pen and paper through observation, evaluation, judgment, and/or opinion. “As the Federal Circuit has explained, ‘[c]ourts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind.’ Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015).” (MPEP 2106.04(a)(2)(III)). Steps a and b describe collecting and analyzing information at a high level of generality.
According to the MPEP, these steps are examples of mental processes; thus it is reasonable to identify these limitations as reciting mental processes. (MPEP 2106.04(a)(2)(III)(A): claims do recite a mental process when they contain limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016).) Since the claims recite a judicial exception, the analysis flows to Prong 2.

Prong 2 considers whether the judicial exception is integrated into a practical application. In this case, the judicial exception is not integrated into a practical application because the claim language merely describes steps of collecting data, a field of use/technological environment, and using a computer as a tool to apply the abstract idea, and fails to describe an improvement to the functioning of a computer or other technical field. The additional elements recited in the claim do not integrate the judicial exception into a practical application for the following reasons:

The additional elements of “a sensor,” “a computing system in signal communication with the sensor,” “a database,” and a “trained task-based distributional semantic model” are recited at a high level of generality and amount to using a generic computing component as a tool to apply the abstract idea (MPEP § 2106.05(f)). These additional elements also appear to be an attempt to generally link the use of the judicial exception to a particular technological environment or field of use.
(MPEP 2106.05(h)); the additional element of “a computing system in signal communication with the sensor, the computing system including a database storing a plurality of reference CoAs defined by reference tasks having an intended target goal” amounts to insignificant extra-solution data gathering/data transmission activity (MPEP § 2106.05(g)); and the additional element of “storing a trained task-based distributional semantic model configured to determine an intent similarity of the operator performing tasks included in the CoA during real-time” amounts to insignificant extra-solution data gathering/data transmission activity (MPEP § 2106.05(g)).

Therefore, the abstract idea has not been integrated into a practical application, and the claims are directed to the abstract idea. Since the claims are directed to the determined judicial exception, the analysis flows to Step 2B. Therein, the elements and combinations of elements in the claims are examined to determine whether the claims as a whole amount to significantly more than the judicial exception. In this case, the additional elements identified at Step 2A Prong 2, individually or in an ordered combination, also do not amount to significantly more than the abstract idea for the same reasons as given in Step 2A Prong 2 and further because:

The claimed “sensor,” “computing system in signal communication with the sensor,” and “database” are generically recited as mere instructions to implement an abstract idea on a computer. Thus, these steps do not add significantly more to the respective limitations. Taken as an ordered combination, the aforementioned limitations are directed to limitations referenced in Alice Corp. (also called the Mayo test) that are not enough to qualify as significantly more when recited in a claim with an abstract idea.
(MPEP § 2106.05(I)(A)): “Limitations that the courts have found not to be enough to qualify as ‘significantly more’ when recited in a claim with a judicial exception include: i. … mere instructions to implement an abstract idea on a computer.”

The additional element of “storing a trained task-based distributional semantic model configured to determine an intent similarity of the operator performing tasks included in the CoA during real-time” amounts to well-understood, routine, and conventional functions because the step is claimed in a merely generic manner (e.g., at a high level of generality) and as insignificant extra-solution activity. “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network.” (MPEP 2106.05(d)(II)).

“A claim directed to a judicial exception cannot be made eligible ‘simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use.’ Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.” (MPEP 2106.05(h)). Employing generic computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not add significantly more, similar to how limiting the abstract idea in Flook to the petrochemical and oil-refining industries was insufficient. (MPEP 2106.05(h)).
Viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself (note MPEP 2106.05(a)). Since there are no elements or ordered combination of elements that amount to significantly more than the judicial exception, the claims are not eligible subject matter under 35 U.S.C. § 101.

Regarding claim 2, the step of “wherein the computing system performs a cosine similarity analysis to produce a similarity value indicating a level of the deviation” is ineligible under Prong 1 because it recites mathematical concepts. The additional element of a “CoA monitoring system” merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea under Prong 2. Therefore, these additional elements do not integrate the judicial exception into a practical application. (MPEP 2106.05(f)). Under Step 2B, since these additional elements merely recite generic computer components to carry out the abstract idea, they do not amount to significantly more than the judicial exception.

Regarding claim 3, the step of “wherein the computing system compares the similarity value to a threshold value and a failure to achieve the intended target goal based on the comparison” is ineligible under Prong 1 since these steps can be reasonably carried out in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and/or opinion but for the recitation of generic computing components. For example, a person can think and evaluate to perform a comparison and to determine a failure based on the comparison.
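For orientation, the cosine-similarity and threshold limitations the examiner characterizes as mathematical concepts and mental processes can be sketched in a few lines. This is a minimal illustration of the math as recited in the claims, not the applicant's implementation; the function names, vector values, and the threshold of 0.8 are assumptions chosen for the example.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two task vectors (1.0 = same direction, 0.0 = orthogonal)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def deviates(monitored_vec: list[float], reference_vec: list[float],
             threshold: float = 0.8) -> tuple[bool, float]:
    """Claims 3-4 as recited: a failure to achieve the intended target goal
    is determined when the similarity value is less than the threshold value."""
    similarity = cosine_similarity(monitored_vec, reference_vec)
    return similarity < threshold, similarity
```

For instance, `deviates([1.0, 0.0], [0.0, 1.0])` returns `(True, 0.0)`: orthogonal task vectors fall below the (assumed) threshold, which is the comparison the examiner says can be performed mentally.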
The additional element of a “CoA monitoring system” merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea under Prong 2. Therefore, these additional elements do not integrate the judicial exception into a practical application. (MPEP 2106.05(f)). Under Step 2B, since these additional elements merely recite generic computer components to carry out the abstract idea, they do not amount to significantly more than the judicial exception.

Regarding claim 4, the step of “wherein the computing system determines the failure to achieve the intended target goal in response to the similarity value being less than the threshold value” recites an additional mental process under Prong 1 since this step can be reasonably carried out in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and/or opinion but for the recitation of generic computing components. For example, a person can think and evaluate to determine a failure to achieve an intended target goal in response to a similarity value not meeting a threshold. The additional element of a “CoA monitoring system” merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea under Prong 2. Therefore, these additional elements do not integrate the judicial exception into a practical application. (MPEP 2106.05(f)). Under Step 2B, since these additional elements merely recite generic computer components to carry out the abstract idea, they do not amount to significantly more than the judicial exception.
Regarding claim 5, the steps of “wherein the cosine similarity analysis includes assigning a reference vector to each reference task included in the reference CoA, assigning a vector to each monitored task performed by the operator, and determining a distance between the vector of a monitored task and the reference vector of the reference task” are ineligible under Prong 1 because they recite mathematical concepts. The additional element of a “CoA monitoring system” merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea under Prong 2. Therefore, these additional elements do not integrate the judicial exception into a practical application. (MPEP 2106.05(f)). Under Step 2B, since these additional elements merely recite generic computer components to carry out the abstract idea, they do not amount to significantly more than the judicial exception.

Regarding claim 6, the additional element of “wherein the computing system generates an alert in response to determining the failure to achieve the intended target goal” recites an additional mental process under Prong 1 since this step can be reasonably carried out in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and/or opinion but for the recitation of generic computing components. For example, a person can think and evaluate to generate an alert in response to determining a failure to achieve an intended result. The additional element of a “CoA monitoring system” merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea under Prong 2. Therefore, these additional elements do not integrate the judicial exception into a practical application. (MPEP 2106.05(f)).
Under Step 2B, since these additional elements merely recite generic computer components to carry out the abstract idea, they do not amount to significantly more than the judicial exception.

Regarding claim 7, the additional element of “wherein the alert includes instructions on how to correct the deviation” is analyzed as a mental process with the “alert” step of claim 6 and is ineligible for the same reasons. The “alert” step can be reasonably carried out in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and/or opinion but for the recitation of generic computing components. For example, a person can think and evaluate to generate an alert in response to determining a failure to achieve an intended result.

Claims 8-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. In adhering to the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG), Step 1 determines whether or not the claims fall within a statutory class. Herein, the claims fall within the statutory classes of process, machine, or manufacture. Hence, the claims qualify as potentially eligible subject matter under 35 U.S.C. § 101. With Step 1 satisfied by a statutory category, the analysis proceeds to Step 2A.

Step 2A is a two-prong inquiry. Prong 1 considers whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon). In this case, independent claim 8 recites a mental process as applied to human activity (i.e., concepts performed in the human mind but for the recitation of generic computing components).
For example, the following claimed step is a function that can be reasonably carried out in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and/or opinion: monitoring, via a sensor, tasks included in a course of action (CoA) performed by a human operator in an environment. But for the recitation of generic computing components, step a can be completed in the human mind with the aid of pen and paper through observation, evaluation, judgment, and/or opinion. “As the Federal Circuit has explained, ‘[c]ourts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind.’ Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015).” (MPEP 2106.04(a)(2)(III)).

Step a describes collecting and analyzing information at a high level of generality. According to the MPEP, this step is an example of a mental process; thus it is reasonable to identify this limitation as reciting a mental process. (MPEP 2106.04(a)(2)(III)(A): claims do recite a mental process when they contain limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016).) Since the claims recite a judicial exception, the analysis flows to Prong 2. Prong 2 considers whether the judicial exception is integrated into a practical application.
In this case, the judicial exception is not integrated into a practical application because the claim language merely describes steps of collecting data, a field of use/technological environment, and using a computer as a tool to apply the abstract idea, and fails to describe an improvement to the functioning of a computer or other technical field. The additional elements recited in the claim do not integrate the judicial exception into a practical application for the following reasons:

The additional elements of “a database,” “a computing system,” and “a trained task-based distributional semantic model” are recited at a high level of generality and amount to using a generic computing component as a tool to apply the abstract idea (MPEP § 2106.05(f)). These additional elements also appear to be an attempt to generally link the use of the judicial exception to a particular technological environment or field of use. (MPEP 2106.05(h)).

The additional elements of: “storing, in a database, a plurality of reference CoAs defined by reference tasks having an intended target goal;” “storing, in a computing system, a trained task-based distributional semantic model configured to determine an intent similarity of the operator performing tasks included in the CoA during real-time;” “outputting the monitored tasks from the sensor to the computing system;” and “inputting the monitored tasks into the trained task-based distributional semantic model to determine a deviation between the reference tasks and the monitored tasks” amount to insignificant extra-solution data gathering/data transmission activity (MPEP § 2106.05(g)).

Therefore, the abstract idea has not been integrated into a practical application, and the claims are directed to the abstract idea. Since the claims are directed to the determined judicial exception, the analysis flows to Step 2B.
Therein, the elements and combinations of elements in the claims are examined to determine whether the claims as a whole amount to significantly more than the judicial exception. In this case, the additional elements identified at Step 2A Prong 2, individually or in an ordered combination, also do not amount to significantly more than the abstract idea for the same reasons as given in Step 2A Prong 2 and further because:

The claimed “sensor,” “computing system,” “database,” and “task-based distributional semantic model” are generically recited as mere instructions to implement an abstract idea on a computer. Thus, these steps do not add significantly more to the respective limitations. Taken as an ordered combination, the aforementioned limitations are directed to limitations referenced in Alice Corp. (also called the Mayo test) that are not enough to qualify as significantly more when recited in a claim with an abstract idea. (MPEP § 2106.05(I)(A)): “Limitations that the courts have found not to be enough to qualify as ‘significantly more’ when recited in a claim with a judicial exception include: i. … mere instructions to implement an abstract idea on a computer.”

The additional elements of: “storing, in a database, a plurality of reference CoAs defined by reference tasks having an intended target goal;” “storing, in a computing system, a trained task-based distributional semantic model configured to determine an intent similarity of the operator performing tasks included in the CoA during real-time;” “outputting the monitored tasks from the sensor to the computing system;” and “inputting the monitored tasks into the trained task-based distributional semantic model to determine a deviation between the reference tasks and the monitored tasks” amount to well-understood, routine, and conventional functions because these steps are claimed in a merely generic manner (e.g., at a high level of generality) and as insignificant extra-solution activity.
“The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network.” (MPEP 2106.05(d)(II)). “A claim directed to a judicial exception cannot be made eligible ‘simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use.’ Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.” (MPEP 2106.05(h)). Employing generic computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not add significantly more, similar to how limiting the abstract idea in Flook to the petrochemical and oil-refining industries was insufficient. (MPEP 2106.05(h)).

Viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself (note MPEP 2106.05(a)). Since there are no elements or ordered combination of elements that amount to significantly more than the judicial exception, the claims are not eligible subject matter under 35 U.S.C. § 101.

Regarding claim 9, the step of “performing a cosine similarity analysis to produce a similarity value indicating a level of the deviation” is ineligible under Prong 1 because it recites mathematical concepts.
Regarding claim 10, the steps of “comparing the similarity value to a threshold value; and determining a failure to achieve the intended target goal based on the comparison” are ineligible under Prong 1 since these steps can be reasonably carried out in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and/or opinion but for the recitation of generic computing components. For example, a person can think and evaluate to perform a comparison and to determine a failure based on the comparison.

Regarding claim 11, the step of “determining the failure to achieve the intended target goal in response to the similarity value being less than the threshold value” recites an additional mental process under Prong 1 since this step can be reasonably carried out in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and/or opinion but for the recitation of generic computing components. For example, a person can think and evaluate to determine a failure to achieve an intended target goal in response to a similarity value not meeting a threshold.

Regarding claim 12, the steps of “wherein the cosine similarity analysis includes assigning a reference vector to each reference task included in the reference CoA, assigning a vector to each monitored task performed by the operator, and determining a distance between the vector of a monitored task and the reference vector of the reference task” are ineligible under Prong 1 because they recite mathematical concepts.

Regarding claim 13, the additional element of “generating an alert in response to determining the failure to achieve the intended target goal” recites an additional mental process under Prong 1 since this step can be reasonably carried out in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and/or opinion but for the recitation of generic computing components.
For example, a person can think and evaluate to generate an alert in response to determining a failure to achieve an intended result.

Regarding claim 14, the additional element of “wherein the alert includes instructions on how to correct the deviation” is analyzed as a mental process with the “alert” step of claim 6 and is ineligible for the same reasons. The “alert” step can be reasonably carried out in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and/or opinion but for the recitation of generic computing components. For example, a person can think and evaluate to generate an alert in response to determining a failure to achieve an intended result.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Buras et al. (US 2021/0327303 A1) in view of Cavallo et al. (US 10586532 B1).
As per claim 1, Buras teaches the invention substantially as claimed including a course of action (CoA) monitoring system ([0013], the present invention comprises a medical guidance system (100) for providing real-time, three-dimensional (3D) augmented reality (AR) feedback guidance in the use of a medical equipment system (200)) comprising: a sensor configured to monitor tasks included in a course of action (CoA) performed by a human operator in an environment ([0013], a three-dimensional guidance system (3DGS) (400) that is capable of sensing real-time user positioning data relating to one or more of the movement, position, and orientation of at least a portion of the medical equipment system (200) during said medical procedure performed by the user; and [0089], The position and rotation of passive markers in the real world may be measured by the depth cameras in relation to a volume within the user's environment (e.g., an operating room volume), which may be captured by both the depth cameras and color cameras. In other embodiments, one or more sensors configured to receive electromagnetic wavelength bands other than color and infrared, or larger than and possibly encompassing one or more of color and infrared, may be used); a computing system ([0105], computer 700 is provided that includes the software interfaces as well as various other computer functionalities (e.g., computational elements, memory, processors, input/output elements, timers, etc.)) in signal communication with the sensor ([0105], software interfaces between the various components of the system 100 are included to allow the system components 200, 300, etc. to function together. 
A computer 700 is provided that includes the software interfaces as well as various other computer functionalities (e.g., computational elements, memory, processors, input/output elements, timers, etc.)), the computing system including a database storing a plurality of reference CoAs defined by reference tasks ([0013], the medical guidance system comprising: ... a library (500); and [0094], library 500 includes detailed information on the medical equipment system 200, which may include instructions (written, auditory, and/or visually) for performing one or more medical procedures using the medical equipment system) having an intended target goal ([0014], medical equipment interface receives data from the medical equipment system during a medical procedure performed by a user to achieve a medical procedure outcome), and storing a trained task-based [distributional semantic] model ([0016], storing the machine learning model for the neural network; and [0150], The ML model developed by the DLM platform is the structure of the actual neural network that will be used in evaluating images captured by a novice user 50), wherein the computing system inputs the monitored tasks determined by the sensor into the trained task-based [distributional semantic] model ([0013], a three-dimensional guidance system (3DGS) (400) that is capable of sensing real-time user positioning data relating to one or more of the movement, position, and orientation of at least a portion of the medical equipment system (200) during said medical procedure performed by the user; [0014], the MLM (600) comprising a position-based feedback module comprising a first module for receiving and analyzing real-time user positioning data; a second module for comparing the user positioning data to the stored reference positioning data, and a third module for generating real-time position-based 3D AR feedback based on the output of the second module, and providing said real-time position-based 3D AR feedback to 
the user via the ARUI; and [0150], The ML model developed by the DLM platform is the structure of the actual neural network that will be used in evaluating images captured by a novice user 50; Examiner Note: Buras’ model is used to evaluate monitored tasks performed by a user: [0011], systems of the present disclosure provide machine learning guidance to a medical device user; and [0018], Generally, “machine learning” utilizes analytical models that use neural networks, math equations (e.g., statistics), science, etc., to find patterns or other information without explicitly being programmed to do so) to determine a deviation between the reference tasks and the monitored tasks ([0104], MLM 600 may in some instances provide both “coarse” and “fine” feedback to the novice user to help achieve a procedural outcome similar to that of a reference outcome (e.g., obtained from a proficient user); [0129], The MLM 600 generates position-based feedback by comparing the actual movements of a novice user 50 (e.g., using positioning data received from the 3DGS 400 tracking the movement of the ultrasound probe 215) to reference data for the same task; and [0134], if the novice user fails to properly adjust the angle of an ultrasound probe at a specific point in a medical procedure, the MLM 600 and/or computer 700 may generate a video for display to the user that is limited to the portion of the procedure that the user is performing incorrectly).

Buras fails to specifically teach a trained task-based distributional semantic model configured to determine an intent similarity of the operator performing tasks included in the CoA during real-time.
However, Cavallo teaches a trained task-based distributional semantic model (Column 4, Lines 54-57, present application presents an elegant and innovative method for supporting free user responses using modern distributed sentence representation and information extraction techniques; and Column 5, Lines 55-54, the dialogue system embeds the input text to generate a vector representation for the input text. The vector representations are generated based on machine learning models that have been trained on training data) configured to determine an intent similarity of the operator performing tasks included in the CoA during real-time (Column 12, Lines 42-45, To formalise the goal of detecting multiple answers expressed in a single input, known more generally as multiple intent recognition, we first define the inferred clauses of a sentence; and Column 12, Lines 51-56, Once an utterance is decomposed into its inferred clauses, the semantic similarity is calculated between each of those clauses and the answers to the question, in exactly the same way as described with regard to FIG. 3 for the entire utterance. This is generally equivalent to repeating the method of FIG. 3 for each inferred clause). Buras and Cavallo are analogous because they are each related to inferring user intent using trained models. Buras teaches a method of inferring user intent from user behaviors and interaction with various systems.
([0014], a three-dimensional guidance system (3DGS) (400) that senses real-time user positioning data relating to one or more of the movement, position, and orientation of at least a portion of the medical equipment system (200) within a volume of a user's environment during a medical procedure performed by the user; a library (500) containing 1) stored reference positioning data relating to one or more of the movement, position, and orientation of at least a portion of the medical equipment system (200) during a reference medical procedure and 2) stored reference outcome data relating to an outcome of a reference performance of the reference medical procedure; and a machine learning module (MLM) (600) for providing at least one of 1) position-based 3D AR feedback to the user based on the sensed user positioning data and 2) outcome-based 3D AR feedback to the user based on the medical procedure outcome). Cavallo teaches a method of determining a user’s intent using similarity analysis and a semantic machine learning model (Column 2, Lines 13-21, there is provided a computer-implemented natural language processing method comprising: (a) receiving a user input comprising a set of one or more words; (b) retrieving a predefined set of potential inputs, each potential input comprising one or more words; (c) determining, for each of the potential inputs, a similarity score indicating similarity between the respective potential input and the user input). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that based on the combination, the guidance system of Buras would be modified to include Cavallo’s semantic-based similarity analysis resulting in a system that compares user behavior including semantic data to reference data in order to perform a similarity analysis. Therefore, it would have been obvious to combine the teachings of Buras and Cavallo. 
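For context on the combined teaching, a distributional semantic comparison of task descriptions can be sketched in a few lines. This is only an illustrative approximation, not the applicant's or the references' implementation: a deployed system would use a trained sentence-embedding model, whereas the sketch below substitutes simple term-frequency vectors, and the task strings and vocabulary are invented for illustration.

```python
# Minimal sketch: represent each task description as a term-frequency
# vector over a shared vocabulary, then compare the monitored task to
# the reference task with cosine similarity. A trained distributional
# semantic model would replace embed() in practice.
from collections import Counter
import math

def embed(text, vocab):
    """Map a task description to a term-frequency vector over vocab."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical reference and monitored task descriptions.
reference = "adjust probe angle toward carotid artery"
monitored = "adjust probe angle toward vein"
vocab = sorted(set(reference.lower().split()) | set(monitored.lower().split()))
similarity = cosine(embed(reference, vocab), embed(monitored, vocab))
```

Cosine similarity is used rather than raw overlap counts because it normalizes for description length, which is the property the cited Cavallo passages rely on when comparing utterances of different sizes.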
As per claim 2, Cavallo teaches, wherein the computing system performs a cosine similarity analysis to produce a similarity value indicating a level of the deviation (Column 9, Line 66-Column 10, Line 2, the semantic textual similarity (STS) can be computed by embedding the utterance and all the answer texts and calculating an appropriate vector-based similarity function, such as cosine similarity).

As per claim 3, Cavallo teaches, wherein the computing system compares the similarity value to a threshold value (Column 7, Lines 45-48, A match is determined if the similarity of the recognised text to one of the expected inputs (e.g. based on semantic textual similarity) is greater than a predetermined threshold; and Column 17, Lines 47-54, the dialogue module 515 is configured to determine similarity values with respect to an input phrase relative to each of the predefined phrases for the current state of the system (the current position within a predefined dialogue flow). The system is then able to determine the most similar predefined phrase and then respond with the corresponding predefined response that is associated with that predefined phrase) and a failure to achieve the intended target goal based on the comparison (Column 3, Lines 27-30, in response to none of the plurality of potential inputs having a further similarity value that exceeds the predetermined threshold, issuing a request to the user to repeat their input; and Column 7, Lines 60-64, it is always possible that the user says something totally different from any of the expected answers. In this case, none of the available answers will match the user utterance (none will have a similarity value with the utterance that exceeds a required similarity threshold)).
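The threshold logic recited in claims 2-4 reduces to a small comparison routine. The following is a minimal sketch under assumed inputs: the vectors and the 0.8 threshold are hypothetical values chosen for illustration, not values from the application or the cited references.

```python
# Sketch of the claimed comparison: compute a similarity value, then
# declare failure to achieve the intended target goal when the value
# falls below a predetermined threshold (claim 4's condition).
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def check_goal(reference_vec, monitored_vec, threshold=0.8):
    """Return (similarity, failed): failed is True when the deviation
    is large enough that similarity drops below the threshold."""
    sim = cosine_similarity(reference_vec, monitored_vec)
    return sim, sim < threshold

# Hypothetical reference-task and monitored-task vectors.
sim, failed = check_goal([1.0, 0.0, 1.0], [1.0, 1.0, 0.0])
# sim is 0.5, which is below the 0.8 threshold, so failed is True
```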
As per claim 4, Cavallo teaches, wherein the computing system determines the failure to achieve the intended target goal in response to the similarity value being less than the threshold value (Column 3, Lines 27-30, in response to none of the plurality of potential inputs having a further similarity value that exceeds the predetermined threshold, issuing a request to the user to repeat their input; and Column 7, Lines 60-64, it is always possible that the user says something totally different from any of the expected answers. In this case, none of the available answers will match the user utterance (none will have a similarity value with the utterance that exceeds a required similarity threshold)).

As per claim 5, Cavallo teaches, wherein the cosine similarity analysis (Column 9, Line 66-Column 10, Line 2, the semantic textual similarity (STS) can be computed by embedding the utterance and all the answer texts and calculating an appropriate vector-based similarity function, such as cosine similarity) includes assigning a reference vector to each reference task included in the reference CoA (Column 17, Lines 54-55, predefined phrases may be stored as sets of embedding vectors), assigning a vector to each monitored task performed by the operator (Column 5, Line 61-Column 6, Line 3, dialogue system utilises the vector representations to determine the meaning of the input text. This is achieved by comparing the vector representation of the input text (which will be referred to hereinafter as the input vector representation) to other vector representations stored in a database. The other vector representations represent predefined inputs that the user may utilise (e.g. predefined answers to a question posed by the dialogue system).
The dialogue system determines whether the input vector representation is similar to other vector representations within the database), and determining a distance between the vector of a monitored task and the reference vector of the reference task (Column 15, Lines 48-50, Embodiments make use of distance based methods on phonetic representations of text as an additional measure of textual similarity).

As per claim 6, Buras teaches, wherein the computing system generates an alert in response to determining the failure to achieve the intended target goal ([0134], if the novice user fails to properly adjust the angle of an ultrasound probe at a specific point in a medical procedure, the MLM 600 and/or computer 700 may generate a video for display to the user that is limited to the portion of the procedure that the user is performing incorrectly).

As per claim 7, Buras teaches, wherein the alert includes instructions on how to correct the deviation ([0134], the novice user's performance may be tracked over time to determine areas in which the novice user repeatedly fails to implement previously provided feedback. In such cases, training exercises may be generated for the novice user focusing on the specific motions or portions of the medical procedure that the novice user has failed to correct, to assist the novice user to achieve improved results; and [0136], the MLM 600 may provide further or additional instructions to the user in real-time by comparing the user's response to a previous real-time feedback guidance instruction to refine or further correct the novice user's performance of the procedure).

As per claim 8, Buras teaches the invention substantially as claimed including a method of monitoring a course of action (CoA), the method comprising: storing, in a database, a plurality of reference CoAs defined by reference tasks having an intended target goal ([0013], the medical guidance system comprising: ...
a library (500); and [0094], library 500 includes detailed information on the medical equipment system 200, which may include instructions (written, auditory, and/or visually) for performing one or more medical procedures using the medical equipment system); storing, in a computing system, a trained task-based [distributional semantic] model ([0016], storing the machine learning model for the neural network; and [0150], The ML model developed by the DLM platform is the structure of the actual neural network that will be used in evaluating images captured by a novice user 50); monitoring, via a sensor, tasks included in a course of action (CoA) performed by a human operator in an environment ([0013], a three-dimensional guidance system (3DGS) (400) that is capable of sensing real-time user positioning data relating to one or more of the movement, position, and orientation of at least a portion of the medical equipment system (200) during said medical procedure performed by the user; [0014], the MLM (600) comprising a position-based feedback module comprising a first module for receiving and analyzing real-time user positioning data; a second module for comparing the user positioning data to the stored reference positioning data, and a third module for generating real-time position-based 3D AR feedback based on the output of the second module, and providing said real-time position-based 3D AR feedback to the user via the ARUI; and [0150], The ML model developed by the DLM platform is the structure of the actual neural network that will be used in evaluating images captured by a novice user 50; Examiner Note: Buras’ model is used to evaluate monitored tasks performed by a user: [0011], systems of the present disclosure provide machine learning guidance to a medical device user; and [0018], Generally, “machine learning” utilizes analytical models that use neural networks, math equations (e.g., statistics), science, etc., to find patterns or other information without 
explicitly being programmed to do so); outputting the monitored tasks from the sensor to the computing system ([0014], the MLM (600) comprising a position-based feedback module comprising a first module for receiving and analyzing real-time user positioning data; a second module for comparing the user positioning data to the stored reference positioning data, and a third module for generating real-time position-based 3D AR feedback based on the output of the second module, and providing said real-time position-based 3D AR feedback to the user via the ARUI); inputting the monitored tasks into the trained task-based [distributional semantic] model ([0015], receiving data from a medical equipment system during a medical procedure performed by a user of the medical equipment to achieve a medical procedure outcome; sensing real-time user positioning data relating to one or more of the movement, position, and orientation of at least a portion of the medical equipment system within a volume of the user's environment during the medical procedure performed by the user...comparing at least one of 1) the sensed real-time user positioning data to the retrieved reference positioning data, and 2) the data received from the medical equipment system during a medical procedure performed by the user to the retrieved reference outcome data; and [0095], MLM 600 is capable of comparing data of a novice user's performance of a procedure or task to that of a reference performance (e.g., by a proficient user). 
MLM 600 may receive real-time data relating to one or both of 1) the movement, position or orientation (“positioning data”) of a portion of the medical equipment 200 during the novice user's performance of a desired medical task (e.g., the motion, position and orientation of an ultrasound probe as manipulated by a novice user to examine a patient's carotid artery), and 2) data received from the medical equipment 200 relating to an outcome of the medical procedure (“outcome data”)) to determine a deviation between the reference tasks and the monitored tasks ([0104], MLM 600 may in some instances provide both “coarse” and “fine” feedback to the novice user to help achieve a procedural outcome similar to that of a reference outcome (e.g., obtained from a proficient user); [0129], The MLM 600 generates position-based feedback by comparing the actual movements of a novice user 50 (e.g., using positioning data received from the 3DGS 400 tracking the movement of the ultrasound probe 215) to reference data for the same task; and [0134], if the novice user fails to properly adjust the angle of an ultrasound probe at a specific point in a medical procedure, the MLM 600 and/or computer 700 may generate a video for display to the user that is limited to the portion of the procedure that the user is performing incorrectly).

Buras fails to specifically teach a trained task-based distributional semantic model configured to determine an intent similarity of the operator performing tasks included in the CoA during real-time. However, Cavallo teaches, a trained task-based distributional semantic model (Column 4, Lines 54-57, present application presents an elegant and innovative method for supporting free user responses using modern distributed sentence representation and information extraction techniques; and Column 5, Lines 55-54, the dialogue system embeds the input text to generate a vector representation for the input text.
The vector representations are generated based on machine learning models that have been trained on training data) configured to determine an intent similarity of the operator performing tasks included in the CoA during real-time (Column 12, Lines 42-45, To formalise the goal of detecting multiple answers expressed in a single input, known more generally as multiple intent recognition, we first define the inferred clauses of a sentence; and Column 12, Lines 51-56, Once an utterance is decomposed into its inferred clauses, the semantic similarity is calculated between each of those clauses and the answers to the question, in exactly the same way as described with regard to FIG. 3 for the entire utterance. This is generally equivalent to repeating the method of FIG. 3 for each inferred clause).

As per claim 9, Cavallo teaches, further comprising performing a cosine similarity analysis to produce a similarity value indicating a level of the deviation (Column 9, Line 66-Column 10, Line 2, the semantic textual similarity (STS) can be computed by embedding the utterance and all the answer texts and calculating an appropriate vector-based similarity function, such as cosine similarity).

As per claim 10, Cavallo teaches, further comprising: comparing the similarity value to a threshold value (Column 7, Lines 45-48, A match is determined if the similarity of the recognised text to one of the expected inputs (e.g. based on semantic textual similarity) is greater than a predetermined threshold; and Column 17, Lines 47-54, the dialogue module 515 is configured to determine similarity values with respect to an input phrase relative to each of the predefined phrases for the current state of the system (the current position within a predefined dialogue flow).
The system is then able to determine the most similar predefined phrase and then respond with the corresponding predefined response that is associated with that predefined phrase); and determining a failure to achieve the intended target goal based on the comparison (Column 3, Lines 27-30, in response to none of the plurality of potential inputs having a further similarity value that exceeds the predetermined threshold, issuing a request to the user to repeat their input; and Column 7, Lines 60-64, it is always possible that the user says something totally different from any of the expected answers. In this case, none of the available answers will match the user utterance (none will have a similarity value with the utterance that exceeds a required similarity threshold)).

As per claim 11, Cavallo teaches, further comprising determining the failure to achieve the intended target goal in response to the similarity value being less than the threshold value (Column 3, Lines 27-30, in response to none of the plurality of potential inputs having a further similarity value that exceeds the predetermined threshold, issuing a request to the user to repeat their input; and Column 7, Lines 60-64, it is always possible that the user says something totally different from any of the expected answers. In this case, none of the available answers will match the user utterance (none will have a similarity value with the utterance that exceeds a required similarity threshold)).
As per claim 12, Cavallo teaches, wherein the cosine similarity analysis (Column 9, Line 66-Column 10, Line 2, the semantic textual similarity (STS) can be computed by embedding the utterance and all the answer texts and calculating an appropriate vector-based similarity function, such as cosine similarity) includes assigning a reference vector to each reference task included in the reference CoA (Column 17, Lines 54-55, predefined phrases may be stored as sets of embedding vectors), assigning a vector to each monitored task performed by the operator (Column 5, Line 61-Column 6, Line 3, dialogue system utilises the vector representations to determine the meaning of the input text. This is achieved by comparing the vector representation of the input text (which will be referred to hereinafter as the input vector representation) to other vector representations stored in a database. The other vector representations represent predefined inputs that the user may utilise (e.g. predefined answers to a question posed by the dialogue system). The dialogue system determines whether the input vector representation is similar to other vector representations within the database), and determining a distance between the vector of a monitored task and the reference vector of the reference task (Column 15, Lines 48-50, Embodiments make use of distance based methods on phonetic representations of text as an additional measure of textual similarity).

As per claim 13, Buras teaches, further comprising generating an alert in response to determining the failure to achieve the intended target goal ([0134], if the novice user fails to properly adjust the angle of an ultrasound probe at a specific point in a medical procedure, the MLM 600 and/or computer 700 may generate a video for display to the user that is limited to the portion of the procedure that the user is performing incorrectly).
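Claims 5 and 12 frame the deviation as a per-task vector distance rather than a single overall similarity score. A minimal sketch of that per-task bookkeeping, assuming hypothetical task names and two-dimensional vectors (real embeddings produced by a trained model would be much higher-dimensional):

```python
# Sketch of claims 5 and 12: assign a vector to each reference task
# and each monitored task, then report the deviation of each monitored
# task as the Euclidean distance to its reference vector. Cosine
# similarity (claim 2) could be substituted as the comparison function.
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Hypothetical reference CoA and monitored CoA, one vector per task.
reference_tasks = {"position_probe": [0.9, 0.1], "adjust_angle": [0.2, 0.8]}
monitored_tasks = {"position_probe": [0.9, 0.1], "adjust_angle": [0.7, 0.3]}

deviations = {
    task: euclidean(vec, reference_tasks[task])
    for task, vec in monitored_tasks.items()
}
# position_probe matches its reference exactly; adjust_angle deviates
```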
As per claim 14, Buras teaches, wherein the alert includes instructions on how to correct the deviation ([0134], the novice user's performance may be tracked over time to determine areas in which the novice user repeatedly fails to implement previously provided feedback. In such cases, training exercises may be generated for the novice user focusing on the specific motions or portions of the medical procedure that the novice user has failed to correct, to assist the novice user to achieve improved results; and [0136], the MLM 600 may provide further or additional instructions to the user in real-time by comparing the user's response to a previous real-time feedback guidance instruction to refine or further correct the novice user's performance of the procedure).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is as follows:

Oberoi et al. (US 2023/0004727 A1) - Teaches a task prediction system based on a user’s intent ([0030], task-action prediction engine 110 includes an intent prediction machine learning model 140 that operates to analyze the task and determine an intent of the task; [0038], A determination may be made that a task should have a particular intent based on a threshold similarity (e.g., an intent score) between the task features of the task and the intent features of historical intents. For example, the intent prediction machine learning model 140 may identify an intent for a task and generate an intent score that indicates a quantified likelihood that the task corresponds to a particular intent in the set of predefined intent-task-categories; and [0047], task-action prediction engine 110 includes intent prediction machine learning model 140 that uses task features and intent-task-category features to determine an intent of the task.
The task features may refer to specific characteristics of a task that are determined through natural language processing of task application data 112A and data analytics service data 112B)); and

Pappu et al. (Predicting Tasks in Goal-Oriented Spoken Dialog Systems using Semantic Knowledge Bases, 2013, Proceedings of the SIGDIAL 2013 Conference, Pgs. 242-250) - Teaches using semantic machine learning models to identify tasks that are related to user goals (Abstract, Goal-oriented dialog agents are expected to recognize user-intentions from an utterance and execute appropriate tasks; and Pg. 243, we have framed the task prediction problem as a classification problem. We use the user’s utterances to extract lexical semantic features and classify it into being one of the many tasks the system was designed to perform...semantic distance/similarity between concepts in the knowledge base is incorporated into the model using a kernel).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MELISSA A HEADLY whose telephone number is (571)272-1972. The examiner can normally be reached Monday-Friday, 9:00 am-5:30 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MELISSA A HEADLY/
Examiner, Art Unit 2197

Prosecution Timeline

Dec 06, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602242
OPTIMIZED SYSTEM DESIGN FOR DEPLOYING AND MANAGING CONTAINERIZED WORKLOADS AT SCALE
2y 5m to grant Granted Apr 14, 2026
Patent 12591447
HARDWARE APPARATUS FOR ISOLATED VIRTUAL ENVIRONMENTS
2y 5m to grant Granted Mar 31, 2026
Patent 12585554
SERVER GROUP SELECTION SYSTEM, SERVER GROUP SELECTION METHOD, AND PROGRAM
2y 5m to grant Granted Mar 24, 2026
Patent 12578989
EFFICIENT INITIATION OF AUTOMATED PROCESSES
2y 5m to grant Granted Mar 17, 2026
Patent 12578984
Virtual Machine Register in a Computer Processor
2y 5m to grant Granted Mar 17, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+40.4%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 408 resolved cases by this examiner. Grant probability derived from career allow rate.
