Prosecution Insights
Last updated: April 19, 2026
Application No. 19/342,547

COMPUTER IMPLEMENTED METHODS FOR THE AUTOMATED ANALYSIS OR USE OF DATA, AND RELATED SYSTEMS

Final Rejection — §101, §102, §103
Filed: Sep 27, 2025
Examiner: BHATNAGAR, ANAND P
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: UNLIKELY ARTIFICIAL INTELLIGENCE LIMITED
OA Round: 2 (Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 91% — above average (648 granted / 710 resolved; +29.3% vs TC avg)
Interview Lift: +2.3% — minimal (based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 728 across all art units (18 currently pending)
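
These figures are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming (as the projections footnote further down states) that the headline grant probability is just the career allow rate, with the reported interview lift added on top; the variable names below are ours, not the tool's:

    # Career allow rate from the counts shown above.
    granted, resolved = 648, 710
    allow_rate = granted / resolved                # 0.9127 -> displayed as 91%

    # Interview lift: +2.3 points, per the dashboard's own footnote.
    interview_lift = 0.023
    with_interview = allow_rate + interview_lift   # 0.9357 -> displayed, rounded, as 94%

    print(f"allow rate:     {allow_rate:.1%}")     # 91.3%
    print(f"with interview: {with_interview:.1%}") # 93.6%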

Statute-Specific Performance

§101: 20.9% (-19.1% vs TC avg)
§103: 26.0% (-14.0% vs TC avg)
§102: 34.2% (-5.8% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
Deltas are relative to an estimated Tech Center average • Based on career data from 710 resolved cases
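
Each "vs TC avg" delta is just the examiner's per-statute rate minus the Tech Center average, so the average can be back-solved from the figures above. A quick check, with names of our own choosing, gives an implied TC average of 40.0% for every statute, which suggests the dashboard applies one flat estimate rather than per-statute averages:

    # Per-statute rates and their deltas vs the Tech Center average.
    stats = {"101": (0.209, -0.191), "103": (0.260, -0.140),
             "102": (0.342, -0.058), "112": (0.077, -0.323)}

    for statute, (rate, delta) in stats.items():
        implied_tc_avg = rate - delta  # since delta = rate - tc_avg
        print(f"§{statute}: examiner {rate:.1%}, implied TC avg {implied_tc_avg:.1%}")
    # Every statute back-solves to 40.0%: one flat TC-average estimate.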

Office Action

Rejection bases: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Applicant's amendment/response filed on 02/20/2026 has been entered and made of record.

3. Applicant has amended claims 1, 21, 23-24, and 28-30. Applicant has not added any new claims. Claims 26 and 27 have been canceled. Currently, claims 1-25 and 28-30 are pending.

4. Applicant has amended claims 21, 23, and 24 to overcome the 35 U.S.C. 112(b)/second paragraph rejection; therefore, this rejection is now withdrawn. Examiner refers to the action below.

Response to Arguments

5. Applicant's arguments filed 2/20/2026 have been fully considered but they are not persuasive. Applicant's representative in essence argues, regarding the 35 U.S.C. 101 rejection, that the amended claims are not an abstract idea. Applicant argues, regarding claims 1 and 30, that the newly added features of "the structured, machine-readable representation of data includes semantic nodes and passages, wherein the passages comprise a plurality of semantic nodes" and "storing the outputted detected and interpreted events in the structured, machine-readable representation of data that conforms to the machine-readable language" are not operations that a human can perform in their mind. Both limitations, the "representation of data includes semantic nodes and passages, wherein the passages comprise a plurality of semantic nodes" limitation (i.e., data gathering) and the storing limitation (i.e., a step that can be performed on paper), are abstract ideas. Linking abstract ideas to a general computer does not overcome the 35 U.S.C. 101 rejection.

Further, applicant's representative argues that these newly added features are not mere data gathering and indexing of the data altogether, because the structured, machine-readable representation of data includes semantic nodes and passages. Examiner disagrees. Applicant is describing what the data structures/components are at certain points (i.e., nodes) and nothing more, which makes it an abstract idea.

Lastly, applicant's representative argues that the newly added features of amended claim 1, quoted above, are not routine and well-known steps. There is an improvement in technology because, in the computer-implemented method of amended claim 1, a deep learning model is used to detect and to interpret real time events from an input data stream; the detected and interpreted events are output in the structured, machine-readable representation of data that conforms to the machine-readable language, in which the structured, machine-readable representation of data includes semantic nodes and passages, wherein the passages comprise a plurality of semantic nodes; and the outputted detected and interpreted events are stored in the structured, machine-readable representation of data that conforms to the machine-readable language. This technical process was not previously possible. As stated in para. [033] of the application as filed, an advantage is that a computer system which processes the structured, machine-readable representation of data can be immediately aware of what is happening in the real world, and a further advantage is that the detected and interpreted events reflected in the structured, machine-readable representation of data can be readily processed by a computer system.

Applicant's representative states that this is an improvement in technology because a deep learning model is being applied to detect and interpret in real time, without giving details as to how detection and interpretation are performed and on what type of data (i.e., detection and interpretation can be somebody listening to an audio recording or watching a video and interpreting and writing down the actions/events/objects/sounds that are determined/observed). A deep learning model is just a software program which would be run on a computer to carry out the process. As stated above, linking the abstract idea to a computer does not overcome the 35 U.S.C. 101 rejection. Therefore, examiner believes that the 35 U.S.C. 101 rejection still applies to the amended claims and maintains this rejection.

Claim Rejections - 35 USC § 101

6. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

7. Claims 1-25 and 28-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a mental process. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The following framework is used to evaluate subject matter eligibility: (1) are the claims directed to a process, machine, manufacture or composition of matter; (2A) Prong One: are the claims directed to a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea; Prong Two: if the claims are directed to a judicial exception under Prong One, is the judicial exception integrated into a practical application; (2B) if the claims are directed to a judicial exception and do not integrate the judicial exception, do the claims provide an inventive concept.

With regard to (1), the analysis is a "yes": claim 1 recites a process/method and claim 30 recites a computer system/machine.

With regard to (2A) Prong One, the analysis is a "yes". Claim 1 recites "a deep learning model detects and interprets real time events from an input data stream, in which the detected and interpreted events are output in a structured, machine-readable representation of data that conforms to a machine-readable language, the method including the steps of: (i) using the deep learning model to detect and to interpret the real time events from the input data stream; (ii) outputting the detected and interpreted events in the structured, machine-readable representation of data that conforms to the machine-readable language, in which the structured, machine-readable representation of data includes semantic nodes and passages, wherein the passages comprise a plurality of semantic nodes; and (iii) storing the outputted detected and interpreted events in the structured, machine-readable representation of data that conforms to the machine-readable language." When viewed under the broadest reasonable interpretation, the claim recites an abstract idea of mental processes. The step of "detects and interprets" is generically recited because there is no description of how this is accomplished. It can be interpreted as merely looking at the data and evaluating the data in the mind. The concepts, as claimed, are observations and/or evaluations ("detects and interprets"). There is nothing in the claim that requires more than an operation that a human, armed with the appropriate apparatus, pen/paper, can perform. One can perform the process using pen and paper, and the recitation of a computer system in the system/device claim is a mere use of generic computer components. See MPEP 2106.04 and the 2019 PEG.

With regard to (2A) Prong Two, the analysis is a "no". Claim 1 recites the additional elements of "interpreted events are output in a structured, machine-readable representation of data that conforms to a machine-readable language"; these additional elements represent mere data gathering and indexing of the data altogether that is necessary for use of the recited abstract idea. Therefore, the limitation is insignificant extra-solution activity, which is a generic operation. See MPEP 2106.05(1). The claim as a whole, looking at the additional elements individually and in combination, does not integrate the abstract idea into a practical application.

With regard to (2B): the pending claims do not show more than what is routine in the art, i.e., the additional elements are nothing more than routine and well-known steps. The additional elements do not reflect an improvement to a technology or technical field, including the use of a particular machine or a particular transformation. It has not been shown that the mental process allows the "technology" to do something that it previously was not able to do.

Claim 30 is similarly rejected for the same reasons as claim 1. Dependent claims 2-25 and 28-29 do not include additional elements that are sufficient to amount to significantly more than the judicial exception; they are rejected for the same reasons, not repeated here.

Claim Rejections - 35 USC § 102

8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-16, 20-25, and 28-30 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Tunstall-Pedoe et al. (U.S. patent pub. 2023/126972, hereinafter "Tunstall"). The applied reference has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2).

This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.

Regarding claim 1: Tunstall discloses a computer implemented method in which a deep learning model detects and interprets real time events from an input data stream, in which the detected and interpreted events are output in a structured, machine-readable representation of data that conforms to a machine-readable language (abstract; paragraphs 0020-0024, 0029, 0056, 0210-0212, 0078-0088, and 0881), the method including the steps of: (i) using the deep learning model to detect and to interpret the real time events from the input data stream (abstract; fig. 8; paragraphs 0020-0024, 0029, 0056, 0210-0212, 0078-0088, and 0881); (ii) outputting the detected and interpreted events in the structured, machine-readable representation of data that conforms to the machine-readable language, in which the structured, machine-readable representation of data includes semantic nodes and passages, wherein the passages comprise a plurality of semantic nodes (abstract; fig. 8; paragraphs 0020-0024, 0029, 0056, 0210-0212, 0078-0088, 0758-0810, and 0881); and (iii) storing the outputted detected and interpreted events in the structured, machine-readable representation of data that conforms to the machine-readable language (abstract; fig. 8; paragraphs 0020-0024, 0029, 0056, 0210-0212, 0078-0088, 0758-0810, and 0881).

Regarding claim 2: The method of claim 1, in which the deep learning model outputs the structured, machine-readable representation of data (abstract; fig. 8; paragraphs 0020-0024, 0029, 0056, 0210-0212, 0078-0088, 0758-0810, and 0881).

Regarding claim 3: The method of claim 1, in which a first output of the deep learning model is translated into the output which is the structured, machine-readable representation of data (paragraphs 0758-0810 and 0881).

Regarding claim 4: The method of claim 1, in which the method is executed in real time (paragraph 0729; it is an automated system, i.e., read as real time).

Regarding claim 5: The method of claim 1, in which the input data stream is received from a vision system, the input data stream including an image, and in which the output is a caption for the image in the structured, machine-readable representation of data (paragraphs 0326, 0372, and 0381).

Regarding claim 6: The method of claim 5, in which the deep learning model or the vision system has been trained to output a caption for an image in the structured, machine-readable representation of data (paragraphs 0303-0304, 0326, and 0336).

Regarding claim 7: The method of claim 5, in which the deep learning model or the vision system has been trained to output a caption for an image in a natural language, and in which the caption for the image in the natural language is translated into the structured, machine-readable representation of data (paragraphs 0303-0304, 0326, and 0336).

Regarding claim 8: The method of claim 5, in which the deep learning model or the vision system receives a stream of images from a camera (fig. 8; paragraphs 0372 and 0381).

Regarding claim 9: The method of claim 5, in which the deep learning model or the vision system continuously reports what it sees (paragraph 0326; i.e., automatically performed is read as real time and continuous analysis).

Regarding claim 10: The method of claim 5, in which the vision system is interrogated by a system which uses the structured, machine-readable representation of data to interrogate the vision system (paragraphs 0029 and 0346-0347; the system can start a conversation, i.e., questioning/interrogation).

Regarding claim 11: The method of claim 5, in which the vision system is interrogated by a system which uses the structured, machine-readable representation of data to interrogate the vision system to report what it sees (paragraphs 0029 and 0346-0347; the system can start a conversation, i.e., questioning/interrogation).

Regarding claim 12: The method of claim 5, in which the deep learning model or the vision system is used to identify a dangerous situation and to take appropriate action driven by tenets (paragraph 0352).

Regarding claim 13: The method of claim 5, in which the vision system and a system which uses the structured, machine-readable representation of data in communication with the vision system are used to identify a dangerous situation and to take appropriate action driven by tenets of the system which uses the structured, machine-readable representation of data (paragraph 0352).

Regarding claim 14: The method of claim 8, including using a vision classifier which identifies images from the stream of images from the camera and estimates ages of people present or classifies the people as being a minor or adult (paragraph 1121).

Regarding claim 15: The method of claim 5, in which the deep learning model or the vision system is used to identify the humans in a room and to derive their adult or minor status from knowledge known about them directly, such as their age or date of birth (paragraph 1121).

Regarding claim 16: The method of claim 5, in which the deep learning model or the vision system is used to identify a dangerous situation involving a child and to take appropriate action driven by tenets (paragraphs 0352, 0389, and 0435).

Regarding claim 20: The method of claim 1, in which the input data stream is received from a microphone, wherein the microphone is used to detect and transcribe voice or to transcribe the voice directly to the structured, machine-readable representation of data (paragraphs 0372 and 0406).

Regarding claim 21: The method of claim 1, wherein voice analysis is used to detect emotions including happiness, or sadness, or irritability, or anger, and to detect attributes including fatigue or drunkenness (paragraphs 0257, 0347, and 0354).

Regarding claim 22: The method of claim 1, wherein voice analysis is used to identify the user, or to identify attributes of the user (paragraph 0635; gender = attribute).

Regarding claim 23: The method of claim 1, wherein voice analysis is used to identify attributes of the user including probable gender or age (paragraph 0635, gender).

Regarding claim 24: The method of claim 22, wherein the identified identity of the user, or the identified attributes of the user, are combined with other sources including visual information or information about the person identified by other means (paragraph 01232).

Regarding claim 25: The method of claim 1, wherein the machine-readable language has a syntax that is a single shared syntax that applies to passages that represent factual statements, query statements and reasoning statements (paragraphs 0694 and 0695).

Regarding claim 28: The method of claim 1, in which the semantic nodes and passages are nestable in the structured, machine-readable representations of data (paragraphs 0095 and 0097).

Regarding claim 29: The method of claim 1, in which the deep learning model includes a large language model (LLM) (paragraph 0919).

Regarding claim 30: See claim 1.

Claim Rejections - 35 USC § 103

9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Tunstall in view of Bose et al. (U.S. patent pub. 2024/0312489A1).

Regarding claim 17: Tunstall does not teach the feature of "in which the deep learning model or the vision system is used to identify a dangerous situation involving a child and to take appropriate action driven by tenets, the appropriate action including sending a message to the child's parents or finding a nearby adult." Bose teaches this feature (Bose et al.; paragraphs 0019, 0034, 0041, 0049, and 0057; calling emergency services is read as finding a nearby adult). It would have been obvious to one of ordinary skill in the art to combine the teaching of Bose et al. with the disclosure of Tunstall because they are analogous art in the field of event detection and analysis. One of ordinary skill in the art would have been motivated to combine the teaching of Bose et al. with the disclosure of Tunstall in order to have an intelligent analysis of event data from a variety of sensors and/or non-sensor data (Bose et al.; paragraph 0019).

Regarding claim 18: Tunstall does not teach the feature of "in which the deep learning model or the vision system is used to identify a dangerous situation involving a child and to take appropriate action driven by tenets, the appropriate action including communicating urgently with the child if the child is old enough." Bose et al. teaches this feature (paragraphs 0019, 0034, 0041, 0049, and 0057). It would have been obvious to one of ordinary skill in the art to combine the teaching of Bose et al. with the disclosure of Tunstall because they are analogous art in the field of event detection and analysis. One of ordinary skill in the art would have been motivated to combine the teaching of Bose et al. with the disclosure of Tunstall in order to have an intelligent analysis of event data from a variety of sensors and/or non-sensor data (Bose et al.; paragraph 0019).

Regarding claim 19: Tunstall does not teach the feature of "in which the input data stream is received from a temperature sensor, or a humidity sensor, or an air pollution sensor, or a sound detection system (e.g. glass breaking, footsteps, doors opening, babies crying, dogs barking etc.), or a light detection system, and in which the output is a reported event in the structured, machine-readable representation of data." Bose teaches this feature (Bose et al.; abstract and paragraphs 0022-0026, i.e., temperature, humidity, sound, and light). It would have been obvious to one of ordinary skill in the art to combine the teaching of Bose et al. with the disclosure of Tunstall because they are analogous art in the field of event detection and analysis. One of ordinary skill in the art would have been motivated to combine the teaching of Bose et al. with the disclosure of Tunstall in order to have an intelligent analysis of event data from a variety of sensors and/or non-sensor data (Bose et al.; paragraph 0019).

Conclusion

10. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANAND BHATNAGAR, whose telephone number is (571) 272-7416. The examiner can normally be reached M-F 7:30am-4:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at 571-272-4650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANAND P BHATNAGAR/
Primary Examiner, Art Unit 2668
March 14, 2026
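
The §101 dispute above turns on the claimed "structured, machine-readable representation of data" built from semantic nodes and passages, where a passage comprises a plurality of semantic nodes and, per claim 28, nodes and passages are nestable. The application defines its own machine-readable language, so the following is purely an illustrative sketch of the claimed shape, not the applicant's actual format; every class and field name here is hypothetical:

    from dataclasses import dataclass, field
    from typing import Union

    # Illustrative only: these classes mirror the structure recited in the
    # claims, not the representation actually defined in the application.

    @dataclass
    class SemanticNode:
        label: str                      # e.g. an entity, action, or attribute
        children: list["Element"] = field(default_factory=list)  # nestable (claim 28)

    @dataclass
    class Passage:
        nodes: list[SemanticNode]       # a passage comprises a plurality of semantic nodes

    Element = Union[SemanticNode, Passage]

    # A detected and interpreted event stored in the structured representation
    # (claim 1, steps (ii)-(iii)); the content is invented for illustration.
    event = Passage(nodes=[SemanticNode("person"),
                           SemanticNode("enters"),
                           SemanticNode("kitchen")])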
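Separately, the reply-period rules in the Conclusion reduce to simple date arithmetic: a three-month shortened statutory period, a two-month window that ties extension fees to the advisory action's mailing date, and an absolute six-month statutory cap. A minimal sketch for this action's March 14, 2026 mailing date, using the third-party python-dateutil package (the advisory-action contingency itself is not modeled):

    from datetime import date
    from dateutil.relativedelta import relativedelta  # pip install python-dateutil

    mailed = date(2026, 3, 14)                     # mailing date of this final action

    ssp_expires = mailed + relativedelta(months=3) # shortened statutory period
    safe_harbor = mailed + relativedelta(months=2) # reply by here so any extension
                                                   # fee runs from the advisory action
    hard_limit = mailed + relativedelta(months=6)  # statutory maximum (35 U.S.C. 133)

    print("SSP expires:          ", ssp_expires)   # 2026-06-14
    print("Two-month safe harbor:", safe_harbor)   # 2026-05-14
    print("Six-month hard limit: ", hard_limit)    # 2026-09-14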

Prosecution Timeline

Sep 27, 2025
Application Filed
Nov 14, 2025
Non-Final Rejection — §101, §102, §103
Feb 20, 2026
Response Filed
Mar 14, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597282
IMAGE PROCESSING APPARATUS, CONTROL METHOD OF IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM
2y 5m to grant • Granted Apr 07, 2026
Patent 12597172
DECODING ATTRIBUTE VALUES IN GEOMETRY-BASED POINT CLOUD COMPRESSION
2y 5m to grant • Granted Apr 07, 2026
Patent 12592003
Methods for the compression and decompression of a digital terrain model file; associated compressed and decompressed files and associated computer program product
2y 5m to grant • Granted Mar 31, 2026
Patent 12592053
METHOD FOR ADJUSTING A REGION OF INTEREST IN A DYNAMIC IMAGE FOR ADVANCED DRIVER-ASSISTANCE SYSTEM, AND IN-VEHICLE ELECTRONIC DEVICE FOR IMPLEMENTING THE METHOD
2y 5m to grant • Granted Mar 31, 2026
Patent 12579716
MRI RECONSTRUCTION BASED ON CONTRASTIVE LEARNING
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 91%
With Interview: 94% (+2.3%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 710 resolved cases by this examiner. Grant probability derived from career allow rate.
