Prosecution Insights
Last updated: April 19, 2026
Application No. 18/088,586

COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA

Status: Non-Final OA (§103)
Filed: Dec 25, 2022
Examiner: OGUNBIYI, OLUWADAMILOL M
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: UNLIKELY ARTIFICIAL INTELLIGENCE LIMITED
OA Round: 5 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 12m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 78% (above average; +15.6% vs TC avg; 236 granted / 304 resolved)
Interview Lift: +18.6% (strong; resolved cases with interview)
Typical Timeline: 2y 12m avg prosecution; 31 currently pending
Career History: 335 total applications across all art units

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 304 resolved cases.

Office Action

§103
DETAILED ACTION

Claims 1 – 10, 12 – 27 and 29 – 31 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01 September 2025 has been entered.

Response to Amendment

With regard to the Final Office Action of 02 April 2025, the Applicant filed a response on 01 September 2025. Claim 31 has been amended to include limitations similar to the other independent claims. The claim will be addressed as currently presented in the following section.

Response to Arguments

The Applicant argues against the Examiner's 35 U.S.C. 103 rejection of the independent claims, particularly that 'A prima facie case of obviousness has not been established' (Remarks: page 16 par 2), and that the reference of Kramer in particular fails to teach the claimed invention, and instead teaches the opposite of what is provided by the claimed invention. The Applicant states (Remarks: page 17 par 4) that the Kramer reference instead focuses on defining permitted, or allowed, actions, rather than defining prohibited actions.
The Applicant further argues (Remarks: page 18 par 2) that the prior art must be considered in its entirety, including disclosures that teach away from the claims: 'a prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention.' To this, the Examiner notes that the prior art's mere disclosure of an alternative does not constitute a teaching away from the contents of the instant claims, in that the relevant section of the applied prior art is suitable to teach the corresponding section of the claimed invention, and that relevant section also does not 'criticize, discredit, or otherwise discourage the solution claimed.' (MPEP 2141.02 VI.) The Applicant does not seem to focus on the Boulos et al. reference, which is actually applied to teach the claimed limitations of interest, only generally making reference to it in combination with the other applied references (Remarks: page 17 par 3; page 19 par 1). For the claimed limitations regarding 'one or more tenets defining prohibited actions,' the Examiner indicates that the reference of Boulos et al. suitably teaches this limitation as well, rendering it obvious over the prior art of record. The Examiner will maintain the previously provided rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 4 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Kramer (U.S. 9,355,358 B1) in view of Sidorkin et al. (US 2021/0081814 A1: hereafter — Sidorkin) further in view of Boulos et al. (US 2020/0388403 A1: hereafter — Boulos).

For claim 1, Kramer discloses a computer implemented method for the automated analysis or use of data, comprising the steps of: (a) storing in a non-transitory computer-readable medium a structured, machine-readable representation of data that conforms to a machine-readable language (Kramer: Col 6 lines 3-6 — user-provided information can be stored in a data repository (indicating machine-readable representation of data); Col 6 lines 10-12 — profile information stored in a repository on a server (indicating that profile information stored on a repository would be in a machine-readable language)); wherein the data includes first data relating to personal information, of a plurality of persons, defining one or more of the following attributes of each person of the plurality of persons: sex, age, information relevant to dating or match-making, information relevant to identifying business connections; information relevant to identifying friends (Kramer: Col 5 lines 25-30 — collecting user information including age (which includes information of other users as well); Col 9 lines 8-19 — collecting dating information; Col 10
lines 16-20 — collecting a user's gender (sex) information in a user input section that includes form fields for the purpose of collecting user information (machine-readable representation of data)); (b) automatically processing the structured, machine-readable representation of data to provide a compatibility match between persons (Kramer: Col 5 lines 25-30 — performing matching analysis on user information in order to arrive at a compatibility value; Col 14 lines 36-42 — performing compatibility matches between users).

The reference of Kramer fails to teach the further limitations of this claim, for which Sidorkin is now introduced to teach as: in which the method includes automatically selecting, deciding on or executing actions, and in which the structured machine-readable representation of data includes second data including one or more tenets [[defining prohibited actions]] (Sidorkin: [0014] — the determination of multiple different possible actions that match the higher order action parameters to expand the number of actions that are considered as matches to increase a likelihood of finding the best action satisfying the higher order action parameters; FIG. 4B Steps 414→416→420 — choosing from multiple actions, the action that best matches with a criterion, with step 412 being to collect actions that match with parameters such as a certain constraint (the constraint taken here as the claimed rule or tenet); [0002] — considering all possible outcomes that can be taken, given a set of actions or rules (the rules inherently being in a machine-readable representation which the system understands)); and the method further includes the steps of (i) analysing a potential action to determine whether executing the potential action would optimize or otherwise affect achievement or realization of those tenets (Sidorkin: [0002] — considering a set of possible actions to be taken given a set of rules; FIG. 4B Steps 414→416→420 — choosing from multiple actions, the action that best matches with a criterion, with step 412 being to collect actions that match with parameters such as a certain constraint (the constraint taken here as the claimed rule or tenet)); (ii) automatically selecting, deciding on or executing actions only if the actions optimize or otherwise positively affect the achievement or realization of those tenets (Sidorkin: FIG. 4B Steps 416 and 418 — an automatic selection of the best action; [0049] — performing the action matching the necessary parameters to produce the desired result; [0002] — considering a set of possible actions to be taken given a set of rules); in which a neural architecture is used to generate at least some of the structured, machine-readable representation of data that conforms to the machine-readable language (Sidorkin: [0032] — '[e]ach neural network may be trained using backward propagation to adjust weights and biases at nodes in a hidden layer to produce the computed output', teaching the use of a neural architecture to process and produce output in a form that the system is able to comprehend, the machine-readable language).

The reference of Kramer provides teaching for an automated analysis method involving the obtaining of user data that conforms to a machine-readable language in a machine-readable representation. This differs from the claimed invention in that the claimed invention further provides teaching for a machine-readable representation of data including available tenets, providing that a potential action is analysed to determine if performing the potential action would optimise the achievement of the tenets, and then performing the potential action. This is not new to the art, as the reference of Sidorkin is seen to teach above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Sidorkin, which provides the selection of an action if it is determined to optimise the realisation of a tenet, with the technique provided by Kramer, which provides an automated analysis method obtaining user data and conforming to a machine-readable language in a machine-readable representation, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of achieving a compatibility match between persons, based on the determination that the matching of both persons has been analysed to optimise certain pre-set conditions (tenets). See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

The combination of Kramer in view of Sidorkin fails to teach the further limitation of this claim, for which Boulos is now introduced to teach as: in which the method includes automatically selecting, deciding on or executing actions, and in which the structured machine-readable representation of data includes second data including one or more tenets defining prohibited actions (Boulos: [0184] — the presence of safety rules that prohibit the performance of certain actions). The combination of Kramer in view of Sidorkin provides teaching for a machine-readable representation of data including available tenets that provide that a potential action is analysed to determine if performing the potential action would optimise the achievement of the tenets, to then perform the action. It differs from the claimed invention in that the claimed invention further provides teaching for the tenets being defined by prohibited actions. This isn't new to the art as the reference of Boulos is seen to teach this as provided above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Boulos, which provides tenets defined by prohibited actions, with the teaching of the combination of Kramer in view of Sidorkin, which provides the selection of an action if it is determined to optimise the realisation of a tenet/rule, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of ensuring that only certain actions are permitted, disallowing some actions that could go against certain safety issues from being performed. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

For claim 3, claim 1 is incorporated and the combination of Kramer in view of Sidorkin further in view of Boulos discloses the method in which the personal information comprises one or more of: information coming from conversations in natural language, information coming from user responses, information coming from the output of a machine learning model, information coming from reasoning, information coming from learning (Kramer: Col 10 lines 40-54 — a user can enter natural language text in a text field as a user response to "My Single Friend's Details").

For claim 4, claim 1 is incorporated and the combination of Kramer in view of Sidorkin further in view of Boulos discloses the method in which the structured representation of data further includes a representation of a spoken, written or GUI instruction provided by a human to a human/machine interface (Kramer: Col 10 lines 40-54 — a user can enter textual (written) instructions as a user description into a text box, which the system is able to process and scan for certain keywords (indicating structured data representation)).
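To make the disputed limitation concrete, the following is a minimal, illustrative-only sketch of the claim 1 mechanism at issue: candidate actions are executed only if they positively affect at least one "tenet" (objective), and never if they realise a tenet defining a prohibited action. All names and the match-making scenario below are assumptions for illustration, not taken from the claims or the cited references.

```python
# Illustrative sketch only; names are hypothetical, not from the claims or prior art.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tenet:
    name: str
    prohibited: bool = False                        # True: tenet defines a prohibited action
    score: Callable[[str], float] = lambda a: 0.0   # how much an action advances this tenet

def select_actions(candidates, tenets):
    """Keep actions that are not prohibited and advance at least one objective tenet."""
    selected = []
    for action in candidates:
        if any(t.prohibited and t.score(action) > 0 for t in tenets):
            continue                                # would realise a prohibited action: skip
        if any(not t.prohibited and t.score(action) > 0 for t in tenets):
            selected.append(action)                 # positively affects an objective tenet
    return selected

# Hypothetical tenets for a match-making setting:
tenets = [
    Tenet("suggest_compatible_matches",
          score=lambda a: 1.0 if a == "suggest_match" else 0.0),
    Tenet("no_unsolicited_contact", prohibited=True,
          score=lambda a: 1.0 if a == "cold_message" else 0.0),
]
print(select_actions(["suggest_match", "cold_message", "idle"], tenets))
# -> ['suggest_match']
```

On this reading, the Sidorkin citations map to the objective-tenet filter and the Boulos citation to the prohibited-tenet filter, which is the split the Applicant is contesting.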
As for claim 29, system claim 29 and method claim 1 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Kramer at Col 1 lines 66-67 provides that this is a computer-implemented method, and at Col 6 lines 32-33, there is a server capable of reading on the computer-based system, suitable to read upon the limitations of this claim. Accordingly, claim 29 is similarly rejected under the same rationale as applied above with respect to method claim 1. Claims 2, 5, 6, 7, 8, 9, 10, 30 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Kramer (U.S. 9,355,358 B1) in view of Sidorkin (US 2021/0081814 A1) further in view of Boulos (US 2020/0388403 A1) as applied to claim 1, and further in view of London (US 2015/0142704 A1). For claim 2, claim 1 is incorporated but the combination of Kramer in view of Sidorkin further in view of Boulos fails to disclose the limitations of this claim, for which London is now introduced to teach as: the method in which the structured, machine-readable representation of data that conforms to a machine-readable language comprises semantic nodes and passages (London: [0058], [0077] — a knowledge tree graph (taken as machine-readable data representation) containing nodes and edges); and in which a semantic node represents an entity and is itself represented by an identifier (London: FIG. 3A — a knowledge tree graph ontology chart having semantic nodes that are represented by identifiers); and a passage is either (i) a semantic node or (ii) a combination of semantic nodes (London: FIG. 
3A — shows a combination of semantic nodes); and where machine-readable meaning comes from the choice of semantic nodes and the way they are combined and ordered as passages (London: [0165] — having edges between related nodes that are associated by context and traversing through the nodes depends on a conceptual relationship (the connection between two related nodes being the passage)). The combination of Kramer in view of Sidorkin further in view of Boulos provides teaching for obtaining machine-readable representation of input information. It differs from the claimed invention in that the claimed invention further provides teaching that the machine-readable data representation conforms to a machine-readable language that comprises semantic nodes and passages. This isn’t new to the art as the reference of London is seen to teach above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of London which provides teaching that the machine-readable data representation conforms to a machine-readable language that comprises semantic nodes and passages, with the teaching of the combination of Kramer in view of Sidorkin further in view of Boulos which provides teaching for obtaining machine-readable representation of input information, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of being able to define the relationship between an input concept or entity and other related entities through the use of a knowledge tree graph. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007). 
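The claim 2 vocabulary of "semantic nodes and passages" can be sketched as a small data structure: a node is an entity represented by an identifier, and a passage is a node or an ordered combination of nodes whose meaning depends on choice and order. The identifiers and example entities below are assumptions for illustration, not drawn from the application or from London.

```python
# Illustrative-only sketch; identifiers and entities are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticNode:
    identifier: str   # machine-readable identifier, e.g. "person.alice"
    entity: str       # the entity the node represents

alice = SemanticNode("person.alice", "Alice")
likes = SemanticNode("relation.likes", "likes")
bob = SemanticNode("person.bob", "Bob")

# A passage: an ordered combination of semantic nodes ("Alice likes Bob").
passage = (alice, likes, bob)
# Reordering the same nodes yields a different passage with a different meaning.
print(passage != (bob, likes, alice))
# -> True
```

This is the sense in which machine-readable meaning "comes from the choice of semantic nodes and the way they are combined and ordered," which the rejection reads onto London's knowledge tree graph of nodes and edges.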
For claim 5, claim 1 is incorporated and as applied to claim 2 above, the combination of Kramer in view of Sidorkin further in view of Boulos and further in view of London discloses the method including the step of personal information being automatically translated into the machine readable language by a machine learning system that generates the semantic nodes or passages that represent the personal information (London: [0110] — a caller provides a name and other identifying information (personal information); FIG. 3A, [0155] — a knowledge tree graph as a machine readable language generating semantic nodes that represent user speech input). The same motivation applied to the incorporation of the London reference into the teaching of Kramer in claim 2 is applicable here still. For claim 6, claim 1 is incorporated and the combination of Kramer in view of Sidorkin further in view of Boulos and further in view of London provides teaching for receiving personal information expressed in a natural language form (Kramer: Col 7 lines 14-16 — storing user information in electronic form as text (the text here is known to be in natural language form); Col 10 lines 40-49 — a user can freely input textual information (natural language form)). As applied to claim 2 above, the combination of Kramer in view of Sidorkin further in view of Boulos and further in view of London discloses the method including the step of automatically translating personal information expressed in a natural language into the machine-readable language, and in which the structure of the sequence of words is compared with known machine-readable language structures in the memory to identify similarities (London: FIG. 
3A, [0155]-[0157] — a knowledge tree graph receiving user [speech] natural language input as a machine readable language, wherein the input words are compared to statistical conversational flows in order to decide on the patterns which the conversations would take; [0183] — matching speech input with known language constructs). The same motivation applied to the incorporation of the London reference into the teaching of the combination of Kramer in view of Sidorkin as applied to claim 2 is applicable here still. For claim 7, claim 1 is incorporated and as applied to claim 2 above, the combination of Kramer in view of Sidorkin further in view of Boulos and further in view of London discloses the method including the step of automatically translating personal information into the machine-readable language by referencing a store of previously identified correct translations between the natural language and the machine-readable language (London: FIG. 3A, [0155]-[0157] — a knowledge tree graph receiving user [speech] natural language input as a machine readable language, wherein the input words are translated into the knowledge tree graph form (which is a machine-readable language); [0183] — matching speech input with known language constructs (indicating the reference to a store of previously identified correct translations)). The same motivation applied to the incorporation of the London reference into the teaching of the combination of Kramer in view of Sidorkin in claim 2 is applicable here still. 
For claim 8, claim 1 is incorporated and as applied to claim 2 above, the combination of Kramer in view of Sidorkin further in view of Boulos and further in view of London discloses the method including the step of automatically translating personal information into the machine-readable language is achieved by utilising a pipeline of functions which transform the word or sequence of words into a series of intermediate forms (London: [0197] — a series of processes (or pipeline functions) which take the user’s inputs through natural language processing, lexical to ontology mapping, and updating the knowledge tree graph’s probabilities, are steps all performed on the input speech, which generate intermediate forms for attaining the machine-readable language; [0198] — filling roles in the knowledge tree graph (as the machine-readable language) to obtain fillers which then get mapped to specific roles (the mapping being a function that leads to the machine-readable language)). The same motivation applied to the incorporation of the London reference into the teaching of the combination of Kramer in view of Sidorkin as applied to claim 2 is applicable here still. For claim 9, claim 5 is incorporated and the combination of Kramer in view of Sidorkin further in view of Boulos and further in view of London discloses the method in which the machine learning system is a neural network system, such as a deep learning system (London: [0124] — a deep learning neural network). 
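The claim 8 limitation of translation "by utilising a pipeline of functions which transform the word or sequence of words into a series of intermediate forms" can be sketched as below. The three toy stages (tokenise, tag, map to node identifiers) are illustrative assumptions only, not the stages described in London.

```python
# Hedged sketch of a translation pipeline; stage names and lexicon are hypothetical.
def tokenise(text):
    return text.lower().split()

def tag(tokens):
    relations = {"likes", "knows"}           # toy lexicon for the tagging stage
    return [(t, "REL" if t in relations else "ENT") for t in tokens]

def to_identifiers(tagged):
    # Map each tagged token to an identifier in the machine-readable language.
    return [("relation." if kind == "REL" else "entity.") + tok
            for tok, kind in tagged]

def translate(text, pipeline=(tokenise, tag, to_identifiers)):
    form = text
    for stage in pipeline:                   # each stage yields an intermediate form
        form = stage(form)
    return form

print(translate("Alice likes Bob"))
# -> ['entity.alice', 'relation.likes', 'entity.bob']
```

Each stage's output is one of the claimed "intermediate forms"; the final output is the structured representation in the machine-readable language.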
For claim 10, claim 5 is incorporated and the combination of Kramer in view of Sidorkin further in view of Boulos and further in view of London discloses the method in which the machine learning system has been trained on training data comprising natural language and a corresponding structured machine-readable representation, such as a machine-readable language comprising semantic nodes and passages (London: [0199] — role-filling in the knowledge tree graph (the machine-readable language) involving annotated training data with hundreds or thousands of role mappings; FIG. 3A shows such a mapping of input speech into machine-readable language (a knowledge tree graph) comprising such nodes and passages).

For claim 30, Kramer discloses a computer implemented method for the automated analysis or use of data, comprising the steps of: (a) storing in a non-transitory computer-readable medium a structured, machine-readable representation of data that conforms to a machine-readable language (Kramer: Col 6 lines 3-6 — user-provided information can be stored in a data repository (indicating machine-readable representation of data); Col 6 lines 10-12 — profile information stored in a repository on a server (indicating that profile information stored on a repository would be in a machine-readable language)); wherein the data includes first data relating to personal information, of a plurality of persons, defining one or more of the following attributes of each person of the plurality of persons: sex, age, information relevant to dating or match-making, information relevant to identifying business connections; information relevant to identifying friends (Kramer: Col 5 lines 25-30 — collecting user information including age (which includes information of other users as well); Col 9 lines 8-19 — collecting dating information; Col 10 lines 16-20 — collecting a user's gender (sex) information in a user input section that includes form fields for the purpose of collecting user
information (machine-readable representation of data)); (b) automatically processing the structured, machine-readable representation of data to provide a compatibility match between persons (Kramer: Col 5 lines 25-30 — performing matching analysis on user information in order to arrive at a compatibility value; Col 14 lines 36-42 — performing compatibility matches between users). The reference of Kramer provides teaching for an automated analysis method involving the obtaining of user data that conforms to a machine-readable language in a machine-readable representation. The teaching of Kramer differs from that of the claimed invention in that the claimed invention further provides teaching for further machine-readable representation of data including tenets or rules defining certain objectives, such that a potential action is analysed to determine if performing the potential action would optimise the achievement of the rules, and then performing the potential action. This isn’t new to the art as the reference of Sidorkin is now introduced to teach this as: in which the method includes automatically selecting, deciding on or executing actions, and in which the structured machine-readable representation of data includes second data including one or more tenets [[defining prohibited actions]] (Sidorkin: [0014] — the determination of multiple different possible actions that match the higher order action parameters to expand the number of actions that are considered as matches to increase a likelihood of finding the best action satisfying the higher order action parameters; FIG. 
4B Steps 414→416→420 — choosing from multiple actions, the action that best matches with a criterion, with step 412 being to collect actions that match with parameters such as a certain constraint (the constraint taken here as the claimed rule or tenet); [0002] — considering all possible outcomes that can be taken, given a set of actions or rules (the rules inherently being in a machine-readable representation which the system understands)); and the method further includes the steps of (i) analysing a potential action to determine whether executing the potential action would optimize or otherwise affect achievement or realization of those tenets (Sidorkin: [0002] — considering a set of possible actions to be taken given a set of rules; FIG. 4B Steps 414→416→420 — choosing from multiple actions, the action that best matches with a criterion, with step 412 being to collect actions that match with parameters such as a certain constraint (the constraint taken here as the claimed rule or tenet)); (ii) automatically selecting, deciding on or executing actions only if the actions optimize or otherwise positively affect the achievement or realization of those tenets (Sidorkin: FIG. 4B Steps 416 and 418 — an automatic selection of the best action; [0049] — performing the action matching the necessary parameters to produce the desired result; [0002] — considering a set of possible actions to be taken given a set of rules). The same motivation applied to claim 1 for incorporating the reference of Sidorkin is applicable here still. 
The combination of Kramer in view of Sidorkin fails to provide teaching for the further limitation of this claim, for which the reference of Boulos is now introduced to teach as: in which the method includes automatically selecting, deciding on or executing actions, and in which the structured machine-readable representation of data includes second data including one or more tenets defining prohibited actions (Boulos: [0184] — the presence of safety rules that prohibit the performance of certain actions). The same motivation applied to claim 1 for incorporating the reference of Boulos is applicable here still. The combination of Kramer in view of Sidorkin further in view of Boulos fails to disclose the further limitation of this claim, for which the reference of London is now introduced to teach as: in which the structured, machine-readable representation of data that conforms to the machine-readable language comprises semantic nodes and passages (London: [0058], [0077] — a knowledge tree graph (taken as machine-readable data representation) containing nodes and edges); and in which a semantic node represents an entity and is itself represented by an identifier (London: FIG. 3A — a knowledge tree graph ontology chart having semantic nodes that are represented by identifiers); and a passage is either (I) a semantic node or (II) a combination of semantic nodes (London: FIG. 
3A — shows a combination of semantic nodes); and where machine-readable meaning comes from the choice of semantic nodes and the way they are combined and ordered as passages (London: [0165] — having edges between related nodes that are associated by context and traversing through the nodes depends on a conceptual relationship (the connection between two related nodes being the passage)), the method including the step of personal information being automatically translated into the structured, machine-readable representation of data that conforms to the machine-readable language by a machine learning system that generates the semantic nodes or passages that represent the personal information (London: [0110] — a caller provides a name and other identifying information (personal information); FIG. 3A, [0155] — a knowledge tree graph as a machine readable language generating semantic nodes that represent user speech input). The same motivation applied to claim 2 for incorporating the reference of London is applicable here still. 
For claim 31, Kramer discloses a computer-based system configured to analyse data, the system being configured to: (a) store in a non-transitory computer-readable medium a structured, machine-readable representation of data that conforms to a machine-readable language (Kramer: Col 6 lines 3-6 — user-provided information can be stored in a data repository (indicating machine-readable representation of data); Col 6 lines 10-12 — profile information stored in a repository on a server (indicating that profile information stored on a repository would be in a machine-readable language)); wherein the data includes first data relating to personal information, of a plurality of persons, defining one or more of the following attributes of each person of the plurality of persons: sex, age, information relevant to dating or match-making, information relevant to identifying business connections; information relevant to identifying friends (Kramer: Col 5 lines 25-30 — collecting user information including age (which includes information of other users as well); Col 9 lines 8-19 — collecting dating information; Col 10 lines 16-20 — collecting a user's gender (sex) information in a user input section that includes form fields for the purpose of collecting user information (machine-readable representation of data)); (b) automatically process the structured, machine-readable representation of data to provide a compatibility match between persons (Kramer: Col 5 lines 25-30 — performing matching analysis on user information in order to arrive at a compatibility value; Col 14 lines 36-42 — performing compatibility matches between users). The reference of Kramer provides teaching for an automated analysis method involving the obtaining of user data that conforms to a machine-readable language in a machine-readable representation. 
The teaching of Kramer differs from that of the claimed invention in that the claimed invention further provides teaching for further machine-readable representation of data including tenets defining prohibited actions, such that a potential action is analysed to determine if performing the potential action would optimise the achievement of the rules, and then performing the potential action. This isn't new to the art as the reference of Sidorkin is now introduced to teach this as: in which the system is configured to automatically select, decide on or execute actions, and in which the structured machine-readable representation of data includes second data including one or more tenets [[defining prohibited actions]] (Sidorkin: [0014] — the determination of multiple different possible actions that match the higher order action parameters to expand the number of actions that are considered as matches to increase a likelihood of finding the best action satisfying the higher order action parameters; FIG. 4B Steps 414→416→420 — choosing from multiple actions, the action that best matches with a criterion, with step 412 being to collect actions that match with parameters such as a certain constraint (the constraint taken here as the claimed rule or tenet); [0002] — considering all possible outcomes that can be taken, given a set of actions or rules (the rules inherently being in a machine-readable representation which the system understands)); in which the system is configured to (i) analyse a potential action to determine whether executing the potential action would optimize or otherwise affect achievement or realization of those tenets (Sidorkin: [0002] — considering a set of possible actions to be taken given a set of rules; FIG. 
4B Steps 414→416→420 — choosing from multiple actions, the action that best matches with a criterion, with step 412 being to collect actions that match with parameters such as a certain constraint (the constraint taken here as the claimed rule or tenet)); (ii) automatically select, decide on or execute actions only if the actions optimize or otherwise positively affect the achievement or realization of those tenets (Sidorkin: FIG. 4B Step 416 and 418 — an automatic selection of the best action; [0049] — performing the action matching the necessary parameters to produce the desired result; [0002] — considering a set of possible actions to be taken given a set of rules). The same motivation applied to claim 1 for incorporating the reference of Sidorkin is applicable here still. The combination of Kramer in view of Sidorkin fails to provide teaching for the further limitations, and the reference of Boulos is introduced to teach this as in which the system is configured to automatically select, decide on or execute actions, and in which the structured machine-readable representation of data includes second data including one or more tenets defining prohibited actions (Boulos: [0184] — the presence of safety rules that prohibit the performance of certain actions). The same motivation applied to claim 1 for incorporating the reference of Boulos is applicable here still. The combination of Kramer in view of Sidorkin further in view of Boulos fails to provide teaching for the further limitations, and the reference of London is introduced to teach this as: in which the structured, machine-readable representation of data that conforms to the machine-readable language comprises semantic nodes and passages (London: [0058], [0077] — a knowledge tree graph (taken as machine-readable data representation) containing nodes and edges); and in which a semantic node represents an entity and is itself represented by an identifier (London: FIG.
3A — a knowledge tree graph ontology chart having semantic nodes that are represented by identifiers); and a passage is either (I) a semantic node or (II) a combination of semantic nodes (London: FIG. 3A — shows a combination of semantic nodes); and where machine-readable meaning comes from the choice of semantic nodes and the way they are combined and ordered as passages (London: [0165] — having edges between related nodes that are associated by context and traversing through the nodes depends on a conceptual relationship (the connection between two related nodes being the passage)), wherein the machine learning system is configured to automatically translate personal information into the structured, machine-readable representation of data that conforms to the machine-readable language, to generate the semantic nodes or passages that represent the personal information (London: [0110] — a caller provides a name and other identifying information (personal information); FIG. 3A, [0155] — a knowledge tree graph as a machine readable language generating semantic nodes that represent user speech input). The same motivation applied to claim 2 for incorporating the reference of London is applicable here still. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kramer (U.S. 9,355,358 B1) in view of Sidorkin (US 2021/0081814 A1) further in view of Boulos (US 2020/0388403 A1) as applied to claim 1, and further in view of Menick et al. (US 2021/0256375 A1: hereafter — Menick). For claim 12, claim 1 is incorporated but the combination of Kramer in view of Sidorkin further in view of Boulos fails to teach the limitations of this claim, for which the reference of Menick is now introduced to teach as the method in which the neural architecture utilises recurrent neural networks or LSTMs or attention mechanisms or transformers (Menick: [0027] — applying a recurrent neural network to process user input as well as for natural language understanding). 
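For context on the recurrent processing Menick is cited for, the defining feature is a hidden state updated token by token, so earlier words influence the interpretation of later ones. The sketch below uses invented scalar weights and scalar "embeddings" purely to show that recurrence; nothing in it is taken from the Menick reference:

```python
# Minimal sketch of an Elman-style recurrent step: the hidden state h is
# folded forward over the input sequence, one token at a time. Weights
# and inputs are toy scalars, invented for illustration.
import math


def rnn_step(x, h, w_xh, w_hh):
    """One recurrence: h' = tanh(w_xh*x + w_hh*h)."""
    return math.tanh(w_xh * x + w_hh * h)


def encode(tokens, w_xh=0.5, w_hh=0.8):
    h = 0.0
    for x in tokens:        # scalar stand-ins for token embeddings
        h = rnn_step(x, h, w_xh, w_hh)
    return h                # final state summarises the whole sequence
```

Because the state carries history, reordering the input changes the final state, which is why such architectures are used for understanding sequential human input.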
The combination of Kramer in view of Sidorkin further in view of Boulos provides teaching for the presence of a neural architecture being used to generate at least some of the structured machine-readable representation of data. It differs from the claimed invention in that the claimed invention now further provides teaching for the neural architecture using recurrent neural networks or long short-term memories (LSTMs) or attention mechanisms or transformers. This is however not new to the art as the reference of Menick is presented to teach above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Menick which applies a recurrent neural network (as a neural architecture), with the teaching of the combination of Kramer in view of Sidorkin further in view of Boulos which provides teaching for the presence of a neural architecture being used to generate at least some of the structured machine-readable representation of data, to thereby come up with the claimed invention. Combining both prior art elements would have been obvious to try, given the ease of use of recurrent neural networks for machine understanding of human input. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007). Claims 13 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kramer (U.S. 9,355,358 B1) in view of Sidorkin (US 2021/0081814 A1) further in view of Boulos (US 2020/0388403 A1) as applied to claim 1, and further in view of PEITZ et al. (US 2019/0303442 A1: hereafter — Peitz).
For claim 13, claim 1 is incorporated but the combination of Kramer in view of Sidorkin further in view of Boulos fails to disclose the limitations of this claim, for which the reference of Peitz is now introduced to teach as the method in which a passage of natural language is passed through a sequence-to-sequence neural architecture trained on training data comprising natural language and a corresponding structured representation that encodes meaning (Peitz: [0266] — an encoder-decoder network with multiple RNN layers having LSTM hidden units (indicating a sequence-to-sequence neural architecture); [0041] — being able to determine a user’s intent based on the speech input (the intent being an indication of the meaning behind the user’s input)). The combination of Kramer in view of Sidorkin further in view of Boulos provides teaching for the use of a neural architecture to generate a machine-readable language. It differs from the claimed invention in that the claimed invention now further provides teaching for passing a natural language input through a sequence-to-sequence neural architecture. This is however not new to the art as the reference of Peitz is presented to teach above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Peitz, which provides passing a natural language input through a sequence-to-sequence neural architecture, with the teaching of the combination of Kramer in view of Sidorkin further in view of Boulos which provides for the use of a neural architecture to generate a machine-readable language, to thereby come up with the claimed invention.
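Structurally, the sequence-to-sequence pipeline at issue in claim 13 takes natural-language tokens in at an encoder and emits a structured representation of meaning from a decoder. The sketch below shows only that data flow; the "model" is a stand-in lookup table, not a trained network, and the intent labels are invented for illustration:

```python
# Structural sketch of an encoder-decoder (seq2seq) data flow: tokens are
# folded into a context, and the decoder maps the context to a structured
# meaning representation. The lookup-table "decoder" is a toy stand-in.
def encode(tokens):
    """Toy encoder: fold the token sequence into one context value."""
    context = 0
    for t in tokens:
        context = (context * 31 + hash(t)) % 10_000
    return context


def decode(context, table):
    """Toy decoder: map the context to a structured (intent) form."""
    return table.get(context, {"intent": "unknown"})


tokens = "book a table".split()
# In a real system this mapping is learned from (utterance, structure) pairs.
table = {encode(tokens): {"intent": "reserve", "object": "table"}}
parsed = decode(encode(tokens), table)
```

The training data described in the claim (natural language paired with a structured representation encoding its meaning) corresponds to the (utterance, structure) pairs that would replace the hand-built table here.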
The combination of both prior art elements would have provided the predictable result of reducing the likelihood of question-answer models needing disambiguation, due to the provision of dedicated responses to queries taking different forms (such as the different query forms of Peitz: FIG. 9B). See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007). For claim 16, claim 1 is incorporated but the combination of Kramer in view of Sidorkin further in view of Boulos fails to disclose the limitation of this claim, for which Peitz is now introduced to teach as the method in which the machine-readable language uses a single syntactical item, such as parentheses or brackets, to disambiguate the meaning of structured representations of data (Peitz: FIG. 9B — using braces (curly brackets) to show the meanings of structured data representations, such as in Section 902 with {“eine Katze”} being a disambiguation for {“Cat”} and the braces being used by the system to disambiguate the meaning). The same motivation applied to the incorporation of the Peitz reference into the teaching of the combination of Kramer in view of Sidorkin further in view of Boulos as applied to claim 13 is applicable here still. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Kramer (U.S. 9,355,358 B1) in view of Sidorkin (US 2021/0081814 A1) further in view of Boulos (US 2020/0388403 A1) as applied to claim 1, and further in view of Fedus, William, Barret Zoph, and Noam Shazeer. “SWITCH TRANSFORMERS: SCALING TO TRILLION PARAMETER MODELS WITH SIMPLE AND EFFICIENT SPARSITY.” arXiv preprint arXiv:2101.03961, 2021 (hereafter — Fedus).
For claim 14, claim 1 is incorporated but the combination of Kramer in view of Sidorkin further in view of Boulos fails to disclose the limitation of this claim, for which the reference of Fedus is now introduced to teach as the method in which the neural architecture is a switch transformer feed forward neural network system (Fedus: page 7 par 1 — switch transformer models with experts at feedforward layers). The combination of Kramer in view of Sidorkin further in view of Boulos provides teaching for the use of a neural architecture to generate a machine-readable language, but differs from the claimed invention in that the claimed invention further provides that the neural architecture is a switch transformer feedforward neural network. This isn’t new to the art as the reference of Fedus shows above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Fedus which provides that the neural architecture is a switch transformer feedforward neural network, with the teaching of the combination of Kramer in view of Sidorkin further in view of Boulos which provides teaching for the use of a neural architecture to generate a machine-readable language, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result that the use of a switch transformer enables training large models with relatively small amounts of data (Fedus: Abstract; page 4, Section 2, “SWITCH TRANSFORMER”). See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Kramer (U.S. 9,355,358 B1) in view of Sidorkin (US 2021/0081814 A1) further in view of Boulos (US 2020/0388403 A1) as applied to claim 1, and further in view of Guo et al. (US 2020/0311146 A1: hereafter — Guo).
For claim 15, claim 1 is incorporated but the combination of Kramer in view of Sidorkin further in view of Boulos fails to disclose the limitations of this claim, for which Guo is now introduced to teach as the method in which the neural architecture comprises an encoder and decoder and beam searching is used during decoding of the semantic representations from the decoder to remove invalid semantic representations (Guo: [0027] — ‘[a]n artificial neural approach for use with a search system for generating related search queries for source search queries’ (to indicate a neural architecture for generating a machine-readable language); [0034] — a neural encoder-decoder approach; [0116] — a beam search technique used in the decoder, able to keep the top most relevant extensions and discard the other generated extensions (removing invalid semantic representations)). The combination of Kramer in view of Sidorkin further in view of Boulos provides teaching for the generation of machine-readable language. This combination however differs from the claimed invention in that the claimed invention further provides teaching for a beam search technique at a decoder to remove invalid semantic representations. This isn’t new to the art as the reference of Guo is seen to teach above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Guo which provides a beam search technique at a decoder to remove invalid semantic representations, with the teaching of the combination of Kramer in view of Sidorkin further in view of Boulos which provides the generation of machine-readable language, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of reducing memory requirements of the computing system by discarding invalid representations, while improving on the generation of encoder-decoder pairs.
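The beam-search pruning Guo is cited for (keeping only the top-k partial decodings at each step and discarding the rest) can be sketched as follows. The scoring function and candidate expansion are invented for this illustration; only the keep-top-k mechanism reflects the cited technique:

```python
# Sketch of beam-search pruning at a decoder: at every step only the k
# highest-scoring partial sequences survive, so low-scoring (invalid)
# candidates drop out early. Scores and expansion are toy values.
import heapq


def beam_search(start, expand, steps, k=2):
    """Keep the k best partial sequences at every decoding step."""
    beam = [(0.0, [start])]
    for _ in range(steps):
        candidates = []
        for score, seq in beam:
            for tok, logp in expand(seq):
                candidates.append((score + logp, seq + [tok]))
        beam = heapq.nlargest(k, candidates)   # prune to top-k
    return max(beam)[1]                        # best full sequence


# Toy expansion: "a" is always the likelier continuation than "b".
expand = lambda seq: [("a", -0.1), ("b", -2.0)]
best = beam_search("<s>", expand, steps=3, k=2)
```

Discarding all but k candidates per step is also what bounds the decoder's memory use, which is the predictable result the rejection relies on.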
See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007). Claims 17, 18 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Kramer (U.S. 9,355,358 B1) in view of Sidorkin (US 2021/0081814 A1) further in view of Boulos (US 2020/0388403 A1) as applied to claim 1, and further in view of Bennett (US 2004/0117189 A1). For claim 17, claim 1 is incorporated but the combination of Kramer in view of Sidorkin further in view of Boulos fails to disclose the limitation of this claim, for which the reference of Bennett is now introduced to teach as the method in which the machine-readable language uses a shared syntax across factual statements, queries and reasoning (Bennett: [0318] — having syntactic types to address parts-of-speech taggers (indicating shared syntax for queries that adhere to certain parts-of-speech tags); [0333] — a system provides answers to a query by following the sentence syntax “The answer to your question … is as follows: …” (indicating a shared syntax across reasonings); [0331] — the system here is used to address frequently asked questions (an indication that the system is used to address factual statements, thereby teaching of shared syntax across factual statements as well according to [0333])). The combination of Kramer in view of Sidorkin further in view of Boulos provides teaching for the generation of machine-readable language. This combination however differs from the claimed invention in that the claimed invention further provides that the machine-readable language uses a shared syntax across factual statements, queries and reasoning. This isn’t new to the art as the reference of Bennett is seen to teach above. 
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Bennett which provides that the machine-readable language uses a shared syntax across factual statements, queries and reasoning, with the teaching of Kramer in view of Sidorkin further in view of Boulos which provides the generation of machine-readable language, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of ensuring similarity and order in the presentation of results to a user, as well as requiring input queries from a user to abide by certain syntax rules so that the system is able to process queries more effectively. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007). For claim 18, claim 1 is incorporated and as applied to claim 17 above, the combination of Kramer in view of Sidorkin further in view of Boulos and further in view of Bennett discloses the method in which the machine-readable language uses nesting of nodes and passages, as a substantially unambiguous syntax (Bennett: [0170] — a recognition network consisting of a set of nodes connected by arcs (nested nodes and passages), taking different paths and performing pruning to select the best path to traverse (teaching of following the most-probable unambiguous syntax)). The same motivation applied to the incorporation of the Bennett reference into the teaching of the combination of Kramer in view of Sidorkin in claim 17 is applicable here still.
For claim 23, claim 1 is incorporated and as applied to claim 17 above, the combination of Kramer in view of Sidorkin further in view of Boulos and further in view of Bennett discloses the method which includes the step of learning new information and representing the new information in a structured, machine-readable representation of data that conforms to the machine-readable language (Bennett: [0170]-[0172] — receiving an input and translating into a query language expression (a machine-readable language understood by the system); [0272] — a training system for learning new information and organising the information in a machine-readable format). The same motivation applied to the incorporation of the Bennett reference into the teaching of the combination of Kramer in view of Sidorkin in claim 17 is applicable here still. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Kramer (U.S. 9,355,358 B1) in view of Sidorkin (US 2021/0081814 A1) further in view of Boulos (US 2020/0388403 A1) as applied to claim 1, and further in view of MASAI (US 2021/0011673 A1). For claim 19, claim 1 is incorporated but the combination of Kramer in view of Sidorkin further in view of Boulos fails to disclose the limitation of this claim, for which the reference of Masai is now introduced to teach as the method in which the machi

Prosecution Timeline

Dec 25, 2022
Application Filed
May 19, 2023
Non-Final Rejection — §103
Aug 16, 2023
Response Filed
Sep 05, 2023
Final Rejection — §103
Nov 13, 2023
Response after Non-Final Action
Jul 09, 2024
Request for Continued Examination
Jul 30, 2024
Response after Non-Final Action
Aug 05, 2024
Non-Final Rejection — §103
Dec 09, 2024
Response Filed
Mar 27, 2025
Final Rejection — §103
Sep 01, 2025
Request for Continued Examination
Sep 03, 2025
Response after Non-Final Action
Oct 17, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579979
NAMING DEVICES VIA VOICE COMMANDS
2y 5m to grant Granted Mar 17, 2026
Patent 12537007
METHOD FOR DETECTING AIRCRAFT AIR CONFLICT BASED ON SEMANTIC PARSING OF CONTROL SPEECH
2y 5m to grant Granted Jan 27, 2026
Patent 12508086
SYSTEM AND METHOD FOR VOICE-CONTROL OF OPERATING ROOM EQUIPMENT
2y 5m to grant Granted Dec 30, 2025
Patent 12499885
VOICE-BASED PARAMETER ASSIGNMENT FOR VOICE-CAPTURING DEVICES
2y 5m to grant Granted Dec 16, 2025
Patent 12469510
TRANSFORMING SPEECH SIGNALS TO ATTENUATE SPEECH OF COMPETING INDIVIDUALS AND OTHER NOISE
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
78%
Grant Probability
96%
With Interview (+18.6%)
2y 12m
Median Time to Grant
High
PTA Risk
Based on 304 resolved cases by this examiner. Grant probability derived from career allow rate.
