Prosecution Insights
Last updated: April 19, 2026
Application No. 18/088,590

COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA

Final Rejection — §103
Filed: Dec 25, 2022
Examiner: SPOONER, LAMONT M
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: UNLIKELY ARTIFICIAL INTELLIGENCE LIMITED
OA Round: 4 (Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 4m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 74% (445 granted / 603 resolved; +11.8% vs TC avg, above average)
Interview Lift: +11.8% (moderate) in resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 22 applications currently pending
Career History: 625 total applications across all art units
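The headline rates above are simple ratios over the examiner's resolved docket. A minimal sketch of how they could be reproduced, using only the numbers shown on this page (the percentage-point reading of "vs TC avg" is an assumption, since the page does not define it):

```python
# Career allow rate from the examiner's resolved docket.
granted, resolved = 445, 603
allow_rate = granted / resolved

# The page reports the rate relative to the Tech Center average;
# read here as a simple percentage-point delta (an assumption).
delta_vs_tc = 0.118
implied_tc_avg = allow_rate - delta_vs_tc

print(f"Career allow rate: {allow_rate:.1%}")    # ~73.8%, displayed as 74%
print(f"Implied TC average: {implied_tc_avg:.1%}")
```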

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Tech Center averages are estimates, based on career data from 603 resolved cases.
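One detail worth noting in the table above: if the "vs TC avg" figures are read as percentage-point deltas (an assumption, since the page does not define them), every statute backs out to the same Tech Center baseline. A quick sketch:

```python
# Per-statute examiner rates and their reported deltas vs the TC average.
stats = {"§101": (9.8, -30.2), "§103": (50.1, 10.1),
         "§102": (19.7, -20.3), "§112": (12.6, -27.4)}

for statute, (rate, delta) in stats.items():
    implied_avg = rate - delta  # back out the TC baseline
    print(f"{statute}: examiner {rate}% vs implied TC avg {implied_avg:.1f}%")
# Each statute implies a TC average of ~40%, suggesting a single baseline.
```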

Office Action

§103
DETAILED ACTION

Introduction

This office action is in response to applicant's amendments filed 10/8/2025. Claims 1-9, 11-13, 23, 24, 26 and 30-32 are currently pending and have been examined. Applicant's IDS have been considered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on applications filed in the United Kingdom on 8/24/2020, 9/21/2020 and 12/18/2020.

Response to Arguments

Applicant's arguments filed 3/17/2025 have been fully considered but they are not persuasive regarding amended claim 1. More specifically, applicant argues: "Hence because Lin discloses rules which are generated or extracted, Lin discloses rules which are changeable, and this is the opposite to the amended Claim 1 limitations of 'the structured representation of data includes one or more non-changeable tenets, non-changeable statements or other non-changeable rules defining objectives or motives'. And because Lin discloses rules which are generated or extracted, and because Lin discloses feedback may allow for the performance of the system to improve with continued use, Lin further suggests rules which are changeable, and this is the opposite to the amended Claim 1 limitations of 'the structured representation of data includes one or more non-changeable tenets, non-changeable statements or other non-changeable rules defining objectives or motives'. Therefore amended Claim 1 is not obvious over Hewavitharana in view of Lin and Prospero."

However, the Examiner notes that applicant's arguments do not address the specific non-changeable tenets, non-changeable statements or other non-changeable rules which, as the Examiner notes, are taught by Prospero.
Therefore, as Prospero teaches non-changeable tenets, non-changeable statements or other non-changeable rules, as well as changeable elements, it is clear from the combination of references, which is clearly motivated, that the non-changeable features provide a static, coherent set of standards that define objectives, roles and responsibilities (see the rejection below). Applicant's arguments above are still directed to the elements taught by Lin and do not address the newly cited sections, teachings and evidence brought forth by Prospero. Therefore, the applicant's corresponding arguments are deemed non-persuasive.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9, 11, 23, 24, 26 and 29-32 are rejected under 35 U.S.C. 103 as being unpatentable over Hewavitharana et al. (Hewavitharana, US 2018/0068031) in view of Lin et al.
(Lin, US 2021/0279621), and further in view of Prospero et al. (Chatbots as Assistants: An Architectural Framework).

As per claim 1, Hewavitharana teaches a computer implemented method for the automated analysis or use of data, comprising the steps of: (a) storing or accessing in a memory a structured, machine-readable representation of data that conforms to a machine-readable processable language (paragraphs [0029, 0050, 0075, 0076, 0080, 0081, 0092, 0126, 0127]-his inventory, database/cloud storing data, and corresponding knowledge graph), [in which the structured machine-readable representation of data includes reasoning passages, wherein the reasoning passages are represented in the processable language to represent semantics of reasoning steps]; wherein the data includes product data, the product data including product descriptions and including one or more of (paragraphs [0040, 0041, 0075, 0076]-his product inventory database, item attributes, marketplace, eBay inventory, and the like): user product requests, a user's previous product-related search, product data included in a user's social media history or a user's shopping history (ibid-see his item attributes, including historical interaction data, historical marketplace interactions); (b) automatically processing the structured machine-readable representation of data, [including processing at least some of the reasoning passages represented in the processable language to represent semantics of reasoning steps, to reason], to determine which products best match a user's product requests or a user's previous search, social media or shopping history (ibid-paragraph [0086]-the above representation of data is used to determine the most relevant results/products that best match the user's query/request); (c) automatically selecting, deciding on or executing actions, and in which the structured representation of data includes one or more [non-changeable] tenets, [non-changeable] statements or other [non-changeable] rules defining objectives or motives, also represented using the structured representation of data (paragraphs [0039, 0040, 0080-0082]-his AI action decision framework, see also claims 7, 8, 9, see his AI generated knowledge graph, statements as generated, his user learned history for product searching and purchasing, learned and represented in the database, see Figs. 1, 2, AI framework, and above corresponding representation of data discussion); (d) analysing a potential action to determine whether executing an action would optimize or otherwise affect achievement or realization of those [non-changeable] tenets, [non-changeable] statements or other [non-changeable] rules (ibid-see his AI, thus learning for optimization of each next action, to achieve type/parameters as statements and rulesets, for completion of the intent); and (e) automatically selecting, deciding on or executing actions only if they optimize or otherwise positively affect achievement or realization of those [non-changeable] tenets, [non-changeable] statements or other [non-changeable] rules (ibid-his AI automated decision for selecting the particular parameter/ruleset for optimizing particular ruleset/tenets via tuple of intent and intent parameters, see paragraphs [0039-0048]-which further disclose AI optimization and selection of action flows, based on training and learned tenets, statements or rules for achieving the desired interaction objectives/motives).

Hewavitharana lacks explicitly teaching that which Lin teaches: storing or accessing in a memory a structured, machine-readable representation of data that conforms to a machine-readable processable language in which the structured machine-readable representation of data includes reasoning passages, wherein the reasoning passages are represented in the processable language to represent semantics of reasoning steps (paragraphs [0081, 0003, 0021], Figs. 11, 12-as his stored accessed reasoning passages represented in a machine-readable processable language to represent semantics of "reasoning steps", to reason, as applied to analysis or use of data); (b) automatically processing the structured machine-readable representation of data, [including processing at least some of the reasoning passages represented in the processable language to represent semantics of reasoning steps, to reason] (paragraphs [0081, 0003, 0021], Figs. 11, 12-as his automatically processed reasoning passages represented in a machine-readable processable language to represent semantics of "reasoning steps", to reason, as applied to analysis or use of data).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Hewavitharana and Lin, to combine the prior art element of translating natural language into a machine-readable representation form as taught by Hewavitharana with processing of reasoning passages to fulfill a user request as taught by Lin, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be an automatic translation from a natural language to the machine-readable representation, wherein reasoning and explanation is realized and presented to a user (ibid-Lin, Figs. 10-12, paragraph [0081]).

The above combination lacks explicitly teaching the "non-changeable" tenets, [non-changeable] statements or other [non-changeable] rules.
However, Prospero teaches the above lacking non-changeable tenets (pages 77-78, sections 2.2, 2.2.1, Fig. 1, pages 79-81, sections 3.4-5, Figs. 3 and 4-his principal self, as the non-changeable tenet of the Chatbot App core).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Hewavitharana, Lin and Prospero, to combine the prior art element of translating natural language into a machine-readable representation form as taught by Hewavitharana with processing of reasoning passages to fulfill a user request as taught by Lin, with the principal self, as the non-changeable tenets, non-changeable statements or other non-changeable rules of the chatbot architecture for dialogue with a user, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be an automatic translation from a natural language to the machine-readable representation, wherein reasoning and explanation is realized and presented to a user, such that the dialogue agent must absolutely adhere to the tenets in actions or communications with the user, thus providing a standard (ibid-Lin, Figs. 10-12, paragraph [0081]; ibid-Prospero, section 2.2, page 77).
As per claim 2, Hewavitharana further makes obvious the method of Claim 1 in which the structured, machine-readable processable representation of data that conforms to a machine-readable language comprises semantic nodes and passages (paragraphs [0103, 0104, 0133]-his knowledge graph, and nodes, and structured query as passages, comprising semantic information); and in which a semantic node represents an entity and is itself represented by an identifier; and a passage is either (i) a semantic node or (ii) a combination of semantic nodes (paragraphs [0099-0105]-his entity based index, and corresponding n-gram based entities); and where machine-readable meaning comes from a choice of semantic nodes and a way they are combined and ordered as passages (ibid-wherein the combination of entities, via n-gram entity, provides meaning from the choice of semantic nodes and corresponding combination and ordering as passages).

As per claim 3, Hewavitharana further makes obvious the method of Claim 1 including the step of presenting products which are a best match to the user's product requests or the user's previous search, social media or shopping history (ibid-see claim 1 history discussion, paragraphs [0076, 0119, 0133-0135]).

As per claim 4, Hewavitharana further makes obvious the method of Claim 1 where the step of automatically processing the structured representation of data happens as part of a natural language conversation with the user about what product the user is looking to purchase (ibid, paragraphs [0037-0041, 0072-0076, 0093]-his natural language query, and dialog, for user product search for purchase).
As per claim 5, Hewavitharana further makes obvious the method of Claim 1 in which the structured representation of data further includes a representation of a spoken, written or graphical user interface (GUI) instruction provided by a human to a human/machine interface (ibid, paragraphs [0035-0041, 0072-0076, 0093, 0094]-his spoken natural language query, written and GUI instructions for product related interactions, Fig. 2).

As per claim 6, Hewavitharana further makes obvious the method of Claim 1 in which the representations of product data have been automatically translated into the machine-readable processable language (ibid, paragraphs [0099-0105, 0126, 0127, 0135]-his product data being automatically translated into machine-readable language and indexed into the database).

As per claim 7, Hewavitharana further makes obvious the method of Claim 2 in which a machine learning system is used to generate the semantic nodes or passages that are the representations of product data (ibid-see claims 1, 2, paragraphs [0126-0135]-his structured query, and corresponding AI (Artificial Intelligence) generated knowledge graphs including semantic nodes or passages, wherein the generated combination of nodes are the representations of product data).

As per claim 8, Hewavitharana further makes obvious the method of Claim 7 in which the machine learning system is a neural network system that is used to generate the machine-readable processable language (ibid-his AI, as a neural network deep learning framework, see also paragraphs [0105-0111]-his deep learning, AI discussion, see claim 7, AI discussion).
As per claim 9, Hewavitharana further makes obvious the method of Claim 7 in which the machine learning system has been trained on training data comprising natural language and a corresponding structured machine-readable representation (ibid-see claims 1, 4, 7 and 8-structured machine-readable representation and natural language query and AI discussion, paragraphs [0044, 0045, 0126-0135]-his training using the queries, and AI trained knowledge graphs, comprising the semantic nodes and passages).

As per claim 11, Hewavitharana further makes obvious the method of Claim 8 in which the neural network system utilises recurrent neural networks or LSTMs or attention mechanisms or transformers (ibid, see above AI discussion, paragraphs [0102, 0099]-RNN, CNN, long short-term models, and his other ML models discussed).

As per claim 23, Hewavitharana further makes obvious the method of Claim 1 which includes the step of (a) the machine-readable processable language representing a question in a memory as a structured, machine-readable representation of data (ibid-see claim 1, corresponding and similar limitation); and the method further includes the step of (b) automatically generating a response to the question (ibid-see paragraphs [0044, 0045, 0086-0105]-see dialog and response to user natural language query discussion), using one or more of the following steps: (i) matching the question with structured, machine-readable representations of data previously stored in a memory store (ibid-see product matching, in database/datastore, comprising structured machine-readable representations of data, query and structured query, and structured nodes and passages discussion); (ii) fetching and executing one or more computation units, where computation units represent computational capabilities relevant to answering the question (ibid, his relevance/ranking, AI intelligence, as query answering computations); (iii) fetching and execution of one or more reasoning passages, which are structured, machine-readable representations of data that represent the semantics of potentially applicable reasoning steps relevant to answering the question (ibid-see knowledge graphs, nodes, and corresponding combination of nodes, based on AI reasoning steps, which define the nodes' combination, AI generated structured query based on reasoning/intent determination, history, and other knowledge graph based data); and in which the representation of the question, the structured, machine-readable representations of data previously stored in the memory store (ibid, see his AI, and database(s), above knowledge graph discussion, and structured database and queries), the computation units and the reasoning passages are all represented in substantially the same machine-readable processable language (ibid-paragraphs [0126-0135]-his generated structured query, and corresponding knowledge nodes, matching the query language and paths, with respect to the generated question and corresponding database language).

As per claim 24, Hewavitharana further makes obvious the method of Claim 1 which includes the step of learning new information and representing the new information in a structured, machine-readable representation of data that conforms to the machine-readable processable language (ibid-see claims 7, 8, 9, see his AI generated knowledge graph and index discussion, his user learned history for product searching and purchasing, learned and represented in the database, see Figs. 1, 2, AI framework, and above corresponding representation of data discussion).
As per claim 26, Hewavitharana further makes obvious the method of Claim 1 which includes the step of providing a service operable to receive a description of an entity and return one or more identifiers for structured, machine-readable representations of data corresponding to the entity, so that a user is able to use a shared identifier for the entity (paragraphs [0043-0045, 0075-0078]-his NER component, user description of the entity, information parsed from the user input, transformed into machine readable/understandable language, Fig. 12, comprising identifiers for the entities, the entity resolution comprising using a shared identifier for the entity).

As per claim 30, claim 30 sets forth limitations similar to claim 1 and is thus rejected under similar reasons and rationale, wherein the system is deemed to embody the method, such that Hewavitharana with Lin makes obvious a computer-based system configured to analyse data (Hewavitharana, Figs. 1, 2 and 4, as his system), the system being configured to: (a) store or access in a memory a structured, machine-readable representation of data that conforms to a machine-readable processable language (ibid-see claim 1, corresponding and similar limitation), in which the structured, machine-readable representation of data includes reasoning passages (ibid), wherein the reasoning passages are represented in the processable language to represent semantics of reasoning steps (ibid); wherein the data includes product descriptions and includes one or more of: user product requests, a user's previous product-related search, product data included in a user's social media history or a user's shopping history (ibid); (b) automatically process the structured machine-readable representation of data, including processing at least some of the reasoning passages represented in the processable language to represent semantics of reasoning steps, to reason, to determine which products best match a user's product requests or a user's previous search, social media or shopping history (ibid); (c) automatically select, decide on or execute actions, and in which the structured representation of data includes one or more non-changeable tenets, non-changeable statements or other non-changeable rules defining objectives or motives, also represented using the structured representation of data (ibid); (d) analyze a potential action to determine whether executing an action would optimize or otherwise affect achievement or realization of those non-changeable tenets, non-changeable statements or other non-changeable rules (ibid); and (e) automatically select, decide on or execute actions only if they optimize or otherwise positively affect achievement or realization of those non-changeable tenets, non-changeable statements or other non-changeable rules (ibid).

As per claim 31, Hewavitharana further makes obvious the method of Claim 8 in which the neural network system is a deep learning system (ibid-see claim 7, machine learning discussion, his AI, as a neural network deep learning framework, see also paragraphs [0105-0111]-his deep learning, AI discussion).

As per claim 32, Hewavitharana further makes obvious the method of Claim 9 in which the corresponding structured machine-readable representation is a machine-readable language comprising semantic nodes and passages (ibid-see claims 1, 4, 7 and 8-structured machine-readable representation and natural language query and AI discussion, paragraphs [0044, 0045, 0126-0135]-his training using the queries, and AI trained knowledge graphs, comprising the semantic nodes and passages).

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hewavitharana in view of Lin in view of Prospero, as applied to claim 8 above, and further in view of Yang et al. (Yang, Towards Making the Most of BERT in Neural Machine Translation).
As per claim 12, Hewavitharana with Lin with Prospero further makes obvious the method of Claim 8, but lacks teaching that which Yang teaches: in which the neural network system is a switch transformer feed forward neural network system (pages 3, 4, Fig. 1-his transformer, feed forward network (FFN), and dynamic switch).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Hewavitharana and Yang, to combine the prior art element of translating natural language into a machine-readable representation form as taught by Hewavitharana with using a switch transformer FFN in the neural translation process, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be an automatic translation from a natural language to the machine-readable representation, wherein the translation model is trained using a neural network comprising a switching function to route knowledge (ibid-Yang, abstract).

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hewavitharana in view of Lin in view of Prospero, as applied to claim 8 above, and further in view of Chakraborty et al. (Introduction to Neural Network based Approaches for Question Answering over Knowledge Graphs).
As per claim 13, Hewavitharana further makes obvious the method of Claim 8, but lacks explicitly teaching that which Chakraborty teaches: in which the neural network system comprises an encoder and decoder and beam searching is used during decoding of the semantic representations from the decoder to remove invalid semantic representations (pages 14 and 15, his neural network based model, using beam searching which prunes invalid semantic representations).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Hewavitharana and Chakraborty, to combine the prior art element of translating natural language into a machine-readable representation form as taught by Hewavitharana with using an encoder/decoder and beam searching as taught by Chakraborty, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be an automatic translation from a natural language to the machine-readable representation, wherein the translation model is trained using a neural network employing a beam search constraining a beam size for identifying and retaining the top candidates and most likely sequences in the translation (ibid-Chakraborty).

Conclusion

Applicant's amendment necessitated the ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAMONT M SPOONER whose telephone number is (571)272-7613. The examiner can normally be reached 8:00 AM-5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached on (571)272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAMONT M SPOONER/
Primary Examiner, Art Unit 2657

lms
11/29/2025

Prosecution Timeline

Dec 25, 2022
Application Filed
Jan 26, 2024
Non-Final Rejection — §103
Jul 31, 2024
Response Filed
Oct 12, 2024
Final Rejection — §103
Mar 17, 2025
Request for Continued Examination
Mar 18, 2025
Response after Non-Final Action
May 03, 2025
Non-Final Rejection — §103
Oct 08, 2025
Response Filed
Nov 29, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602542: Text Analysis System, and Characteristic Evaluation System for Message Exchange Using the Same (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596881: COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591737: Systems and Methods for Word Offensiveness Detection and Processing Using Weighted Dictionaries and Normalization (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572744: Generative Systems and Methods of Feature Extraction for Enhancing Entity Resolution for Watchlist Screening (granted Mar 10, 2026; 2y 5m to grant)
Patent 12518107: COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA (granted Jan 06, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 74%
With Interview: 86% (+11.8%)
Median Time to Grant: 3y 4m
PTA Risk: High

Based on 603 resolved cases by this examiner. Grant probability derived from career allow rate.
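The interview-adjusted figure above appears to be a simple additive lift on the base grant probability. A small sketch under that assumption (the `with_interview` helper is hypothetical, not the page's actual model):

```python
def with_interview(base: float, lift: float) -> float:
    """Apply an additive interview lift to a grant probability, capped at 100%."""
    return min(base + lift, 1.0)

base_probability = 0.74   # grant probability derived from career allow rate
interview_lift = 0.118    # examiner's observed interview lift

print(f"{with_interview(base_probability, interview_lift):.0%}")  # 86%
```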
