Prosecution Insights
Last updated: April 19, 2026
Application No. 18/196,700

Systems and Methods for Analysis of Home Telematics Using Generative AI

Non-Final OA — §101, §103
Filed: May 12, 2023
Examiner: KASSIM, IMAD MUTEE
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: State Farm Mutual Automobile Insurance Company
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% — above average (116 granted / 160 resolved; +17.5% vs TC avg)
Interview Lift: +33.8% — strong, for resolved cases with an interview
Typical Timeline: 3y 8m average prosecution; 23 applications currently pending
Career History: 183 total applications across all art units
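The headline figures above follow directly from the raw counts. A minimal sketch of the arithmetic (the 55% Tech Center baseline is an assumption inferred from the displayed "+17.5% vs TC avg" delta, not a figure stated in this report):

```python
# Career allow rate from the raw counts shown above.
granted, resolved = 116, 160
allow_rate = granted / resolved        # 0.725, displayed rounded as 72%

# Delta vs. the Tech Center average; the 0.55 baseline is an
# assumption back-derived from the displayed "+17.5% vs TC avg".
tc_avg = 0.55
delta = allow_rate - tc_avg            # +0.175, displayed as "+17.5%"

print(f"Allow rate: {allow_rate:.1%}, vs TC avg: {delta:+.1%}")
```

The same pattern (rate minus an estimated Tech Center baseline) explains the statute-specific deltas shown below.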

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 160 resolved cases

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The subject matter eligibility analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”).

With respect to claim 1. Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—claim 1 recites a method, which is a process.

Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes—each limitation identified below, under its broadest reasonable interpretation, falls within the mental processes grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion), see MPEP 2106.04(a)(2), subsection III, and the 2019 PEG, but for the recitation of generic computer components: “analyzing” (mental processes: the concepts of observation and evaluation, wherein analyzing information can be performed in the human mind, and generating a dialogue output is a form of information presentation).

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—the judicial exception is not integrated into a practical application.
“receiving, by one or more processors, home telematics data at a generative artificial intelligence (AI) model, wherein the home telematics data includes sensor data regarding a property associated with a user” involves the mere gathering of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g).

“a generative artificial intelligence (AI) model” and “by the one or more processors and using the generative AI model” amount to merely reciting the words “apply it” (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f).

“a dialogue output to present to the user” adds insignificant extra-solution activity (post-solution data presentation) to the judicial exception, as discussed in MPEP § 2106.05(g).

The generic computer components in these steps are recited at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No—there are no additional limitations beyond the mental processes identified above. The limitations treated above are directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory. See MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
The claim also includes limitations that merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). These additional elements are similar to examples of activities that the courts have found to be insignificant extra-solution activity, in accordance with MPEP 2106.05(g), Insignificant Extra-Solution Activity. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Thus, considering the additional elements individually and in combination, and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 2. Step 1: A method, as above.

Step 2A Prong 1: The claim recites “determining, by the one or more processors and based upon at least the home telematics analysis, a likelihood of an insurance event occurring; wherein the dialogue output includes at least one of (i) information related to the insurance event or (ii) one or more actions to address the insurance event.” This limitation merely specifies mental processes: the concepts of observation and evaluation of the analyzed information.

Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 3. Step 1: A method, as above.
Step 2A Prong 1: The claim recites “wherein the one or more actions include at least one of: (i) one or more preventative actions to prevent the insurance event, (ii) one or more mitigating actions to mitigate damage associated with the insurance event, or (iii) one or more prescriptive actions to fix damage associated with the insurance event.” This limitation merely specifies mental processes: the concepts of observation and evaluation of the analyzed information.

Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 4. Step 1: A method, as above.

Step 2A Prong 1: The claim recites “determining, by the one or more processors, a likelihood of user remembrance for one or more locations for sensor placement; and wherein generating the dialogue output includes: generating, by the one or more processors, one or more recommendations for sensor placement based upon at least the determined likelihood of user remembrance.” This limitation merely specifies mental processes: the concepts of observation and evaluation of the analyzed information.

Step 2A Prong 2, Step 2B: The recitation “obtaining a set of the potential queries” involves the mere gathering of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g). This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 5. Step 1: A method, as above.
Step 2A Prong 1: The claim recites “determining, by the one or more processors, a likelihood of user maintenance for one or more locations for sensor placement; and wherein generating the dialogue output includes: generating, by the one or more processors, one or more recommendations for sensor placement based upon at least the determined likelihood of user maintenance.” This limitation merely specifies mental processes: the concepts of observation and evaluation of the analyzed information.

Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 6. Step 1: A method, as above. Step 2A Prong 1: the same mental-process abstract idea as claim 1.

Step 2A Prong 2, Step 2B: The recitation “wherein the sensor data includes security system data associated with the property and wherein the dialogue output includes an alert regarding potential intruders” involves the mere gathering of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g). This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 7. Step 1: A method, as above.

Step 2A Prong 1: The claim recites “comparing, by the one or more processors, the home telematics data to historical home telematics data to detect a difference in behavior; and determining, by the one or more processors and based upon at least the detected difference in behavior, that unusual activity is occurring on the property; wherein generating the dialogue output is responsive to determining that unusual activity is occurring.” This limitation merely specifies mental processes: the concepts of observation and evaluation of the analyzed information.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 8. Step 1: A method, as above. Step 2A Prong 1: the same mental-process abstract idea as claim 1.

Step 2A Prong 2, Step 2B: The claim recites “wherein the generative AI model includes at least one of: (i) an AI or machine learning (ML) chatbot or (ii) an AI or ML voice bot.” For Step 2A Prong 2, this merely recites the words “apply it” (or an equivalent) with the judicial exception, includes instructions to implement an abstract idea on a computer, or uses a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). For Step 2B, adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer (e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer), cannot provide an inventive concept, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)).

Claims 9-16. Step 1: The claims recite a system; therefore, they fall into the statutory category of machines. Step 2A Prong 1: The claims recite the same mental processes as claims 1-8, respectively. Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claims 9-16 recite generic computer components, namely “one or more processors; a communication unit”. As before, the mere recitation that the method is to be performed on a generic computer amounts to a mere instruction to apply the exception on the computer. See MPEP § 2106.05(f). With that exception, the analysis mirrors that of claims 1-8, respectively. Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The analysis, with the one exception noted above, mirrors that of claims 1-8, respectively.

Claims 17-20. Step 1: The claims recite a tangible, non-transitory computer-readable medium; therefore, they fall into the statutory category of articles of manufacture. Step 2A Prong 1: Claims 17-20 recite the same mental processes as claims 1-4, respectively. Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claims 17-20 recite generic computer components, namely a “tangible, non-transitory computer-readable medium storing instructions for analyzing home telematics data that, when executed by one or more processors of a computing device”. As before, the mere recitation that the method is to be performed on a generic computer amounts to a mere instruction to apply the exception on the computer. See MPEP § 2106.05(f). With that exception, the analysis mirrors that of claims 1-4, respectively. Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The analysis, with the one exception noted above, mirrors that of claims 1-4, respectively.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bohl et al. (US 20240371376 A1) in view of Zhang et al.
(“DIALOGPT: Large-Scale Generative Pre-training for Conversational Response Generation”, a collaboration between Microsoft Research and Microsoft Dynamics 365 AI Research, pages 270-278, July 5-July 10, 2020).

Regarding claim 1. Bohl teaches a computer-implemented method for analyzing home telematics data and generating a dialogue output, the computer-implemented method comprising:

receiving, by one or more processors, home telematics data (see ¶ 19, “the virtual personal assistant module receives sensor data from one or more sensors and determines a state based on the sensors data. The virtual personal assistant module automatically generates an output, such as natural language speech output, based on the state”, also see ¶ 47, “virtual personal assistant module 234 may process speech data acquired via one or more audio input devices 114 to determine a request (e.g., a question) and/or a concept (e.g., meaning, intent) of the request. Virtual personal assistant module 234 may then determine a response to the request based on sensor data from one or more sensors 116, generate an output (e.g., a speech output) based on the response, and cause the output to be output to a user (e.g., via an audio output device)”, also see ¶ 48, “virtual personal assistant module 234 may process sensor data from one or more sensors 116 to detect a state of the user(s), the vehicle, and/or the environment.”, also ¶ 51 and 85, “a two-way virtual personal assistant system enables a user to interact with a virtual personal assistant to obtain information about an area, such as a vehicle and the surrounding environment or a home and the surrounding environment, monitored by one or more sensors.”);

analyzing, by the one or more processors (see ¶ 64-66, “analyzes the request text 424 to generate an extracted meaning 426 that represents a meaning or intent of the user who uttered the words represented by the request text 424.”, ¶ 81, “virtual personal assistant system 101 analyzes the text segment to generate an extracted meaning that represents a meaning or intent of the user who uttered the words represented by the text segment.”, and ¶ 88, “obtaining first sensor data from a first sensor included in a plurality of sensors; analyzing the first sensor data to generate a first result; obtaining second sensor data from a second sensor included in the plurality of sensors; analyzing the second sensor data and the first result to generate a second result; and outputting a natural language audio output to the user based on the second result.”); and

generating, by the one or more processors (see ¶ 20, “The virtual personal assistant determines a request based on the speech event and determines a response to the request based on sensor data obtained from one or more sensors. The virtual personal assistant further generates an output, such as natural language speech output, based on the response to the request.”, also see ¶ 46, “virtual personal assistant module 234 engages with the user(s) of a vehicle via two-way natural language dialog. This two-way natural language dialog includes user(s)-to-vehicle and vehicle-to-user(s) communications. Virtual personal assistant module 234 combines data from internal sensors and data from external sensors to generate inferences about situations occurring within the vehicle and about situations occurring external to the vehicle. Further, virtual personal assistant module 234 combines data from internal sensors and data from external sensors to generate predictions or other results regarding the user(s), vehicle, and/or environment.”),

wherein the dialogue output is further generated based upon at least a predicted user understanding of the dialogue output (see ¶ 78, “Via these various modules 700, the user(s) interact with virtual personal assistant system 101 in a natural manner based on a wide array of sensor data from sensors 710. As a result, the user(s) receive relevant and timely information in an understandable manner regarding life threatening events 730, better driving experience events 750, and user-to-vehicle information retrieval events 770.”, also see ¶ 46-47, also ¶ 48, “virtual personal assistant module 234 may process sensor data from one or more sensors 116 to detect a state of the user(s), the vehicle, and/or the environment. In some embodiments, virtual personal assistant module 234 may determine whether the state exceeds a threshold. In response to detecting the state (and, in some embodiments, in response to determining that the state exceeds the threshold), virtual personal assistant module 234 generates an output (e.g., a natural language speech output) and causes the output to be output to a user (e.g., via an audio output device).”).

Bohl does not teach using a generative artificial intelligence (AI) model for producing dialogue. Zhang teaches using a generative artificial intelligence (AI) model (see abstract, “tunable neural conversational response generation model, DIALOGPT (dialogue generative pre-trained transformer).”, also see page 271, “DIALOGPT is formulated as an autoregressive (AR) language model, and uses the multi-layer transformer as model architecture. Unlike GPT-2, however, DIALOGPT is trained on large-scale dialogue pairs/sessions extracted from Reddit discussion chains.”, also see page 271, “OpenAI GPT-2 to model a multiturn dialogue session as a long text and frame the generation task as language modeling.”).

Both Bohl and Zhang pertain to the problem of neural network dialogue systems and are therefore analogous art. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine Bohl and Zhang to teach the above limitations. The motivation for doing so would be that “DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open domain dialogue systems.” (see Zhang abstract).

Regarding claim 2. Bohl and Zhang teach the computer-implemented method of claim 1; Bohl further teaches the method further comprising: determining, by the one or more processors and based upon at least the home telematics analysis, a likelihood of an insurance event occurring; wherein the dialogue output includes at least one of (i) information related to the insurance event or (ii) one or more actions to address the insurance event (see ¶ 27, “During vehicle-to-user(s) communications, virtual personal assistant system 101 monitors the internal and external vehicle environment by utilizing multiple sensors and notifying the users when a significant deviation from the current condition is detected.
Virtual personal assistant system 101 generates audio speech output that describes the event, generates one or more alerts, and/or provides possible suggestions or solutions.”, also see ¶ 69, “virtual personal assistant system 101 may assume or exercise autonomous control of the vehicle at various control levels depending on certain conditions. In cases where the user is drunk, inattentive, or drowsy, or where a life threatening condition is present, virtual personal assistant system 101 may increase the level of autonomy to take control of driving away from the user. If virtual personal assistant system 101 subsequently determines that the driver is sober, attentive, and awake, virtual personal assistant system 101 may decrease the level of autonomy to transfer control of driving back to the user.”).

Regarding claim 3. Bohl and Zhang teach the computer-implemented method of claim 2; Bohl further teaches wherein the one or more actions include at least one of: (i) one or more preventative actions to prevent the insurance event, (ii) one or more mitigating actions to mitigate damage associated with the insurance event, or (iii) one or more prescriptive actions to fix damage associated with the insurance event (see ¶ 72, “machine learning outputs 630 generated by the various machine learning modules and logic 610 are transmitted to an analysis model 650. Analysis model 650 generates one or more predicted actions of the user(s) and transmits the predicted actions to visual personal assistance system 101 via communications network 104. Visual personal assistance system 101 then performs one or more responsive actions based on the predicted actions received from analysis model 650. For example, visual personal assistance system 101 may generate natural language responses based on the predicted actions thresholds and then output the natural language responses to the user(s).”, also see ¶ 74-78).

Regarding claim 4.
Bohl and Zhang teach the computer-implemented method of claim 1; Bohl further teaches wherein analyzing the home telematics data includes: determining, by the one or more processors, a likelihood of user remembrance for one or more locations for sensor placement; and wherein generating the dialogue output includes: generating, by the one or more processors, one or more recommendations for sensor placement based upon at least the determined likelihood of user remembrance (see ¶ 53-56, “virtual personal assistant module 234 also provides information to users according to an assistant-to-user approach. In the assistant-to-user approach, virtual personal assistant module 234 monitors sensor data from sensors 116 (e.g., continuously monitor the sensor data) and determines one or more states based on the sensor data in any technically feasible manner. For example, virtual personal assistant module 234 may process or “fuse” the data from sensors 116, including applying similar machine learning and deep learning models as described above to the sensor data, to detect one or more states in the area (e.g., the speeds of the surrounding vehicles, the activity status of vehicle occupants).”; i.e., monitoring sensor data and fusing it to detect states in the area corresponds to a person who knows where the sensors are and recommends where sensors should be placed).

Regarding claim 5.
Bohl and Zhang teach the computer-implemented method of claim 1; Bohl further teaches wherein analyzing the home telematics data includes: determining, by the one or more processors, a likelihood of user maintenance for one or more locations for sensor placement; and wherein generating the dialogue output includes: generating, by the one or more processors, one or more recommendations for sensor placement based upon at least the determined likelihood of user maintenance (see ¶ 53-56, “virtual personal assistant module 234 also provides information to users according to an assistant-to-user approach. In the assistant-to-user approach, virtual personal assistant module 234 monitors sensor data from sensors 116 (e.g., continuously monitor the sensor data) and determines one or more states based on the sensor data in any technically feasible manner. For example, virtual personal assistant module 234 may process or “fuse” the data from sensors 116, including applying similar machine learning and deep learning models as described above to the sensor data, to detect one or more states in the area (e.g., the speeds of the surrounding vehicles, the activity status of vehicle occupants).”; i.e., monitoring sensor data and fusing it to detect states in the area corresponds to a person who knows where the sensors are and recommends where sensors should be placed).

Regarding claim 6. Bohl and Zhang teach the computer-implemented method of claim 1; Bohl further teaches wherein the sensor data includes security system data associated with the property and wherein the dialogue output includes an alert regarding potential intruders (see ¶ 54-55, “in response to the detected states, virtual personal assistant module 234 may first compare a detected state to an associated threshold. The threshold may be predefined and/or user-configurable. If the state does not exceed the threshold, then virtual personal assistant module 234 takes no action with respect to the state.
If the state exceeds the threshold, then virtual personal assistant module 234 may generate a notice or alert output associated with the state. In some embodiments, virtual personal assistant module 234 may generate a notice or alert output for at least one or more states without regard for whether the state exceeded the threshold.”, also see ¶ 68, “virtual personal assistant system 101 initiates a conversation to notify or alert the user with information about the vehicle, the environment external to the vehicle, and/or the environment inside the vehicle cabin. Various models monitor visual data and other sensor data on a continuing basis to detect anomalies.”).

Regarding claim 7. Bohl and Zhang teach the computer-implemented method of claim 6; Bohl further teaches wherein analyzing the home telematics data includes: comparing, by the one or more processors, the home telematics data to historical home telematics data to detect a difference in behavior; and determining, by the one or more processors and based upon at least the detected difference in behavior, that unusual activity is occurring on the property; wherein generating the dialogue output is responsive to determining that unusual activity is occurring (see ¶ 54-55, “in response to the detected states, virtual personal assistant module 234 may first compare a detected state to an associated threshold. The threshold may be predefined and/or user-configurable. If the state does not exceed the threshold, then virtual personal assistant module 234 takes no action with respect to the state. If the state exceeds the threshold, then virtual personal assistant module 234 may generate a notice or alert output associated with the state.
In some embodiments, virtual personal assistant module 234 may generate a notice or alert output for at least one or more states without regard for whether the state exceeded the threshold.”, also see ¶ 68, “virtual personal assistant system 101 initiates a conversation to notify or alert the user with information about the vehicle, the environment external to the vehicle, and/or the environment inside the vehicle cabin. Various models monitor visual data and other sensor data on a continuing basis to detect anomalies.”).

Regarding claim 8. Bohl and Zhang teach the computer-implemented method of claim 1; Zhang further teaches wherein the generative AI model includes at least one of: (i) an AI or machine learning (ML) chatbot or (ii) an AI or ML voice bot (see abstract, “tunable neural conversational response generation model, DIALOGPT (dialogue generative pre-trained transformer).”, also see page 271, “DIALOGPT is formulated as an autoregressive (AR) language model, and uses the multi-layer transformer as model architecture. Unlike GPT-2, however, DIALOGPT is trained on large-scale dialogue pairs/sessions extracted from Reddit discussion chains.”, also see page 271, “OpenAI GPT-2 to model a multiturn dialogue session as a long text and frame the generation task as language modeling.”, also see page 274, “We provide sample generated dialogues in Table 4 (interactive chat) and Table 5 (a self-playing bot with user prompt).”). The motivation utilized in the combination of claim 1, supra, applies equally to claim 8.

Claims 9-16 recite a system comprising one or more processors and a communication unit to perform the method recited in claims 1-8. Therefore, the rejection of claims 1-8 above applies equally here.
Bohl also teaches the additional elements of claim 9 not recited in claim 1, comprising one or more processors and a communication unit (see ¶ 43, “Processor 202 communicates to other computing devices and systems via network interface 208, where network interface 208 is configured to transmit and receive data via a communications network.”).

Claims 17-20 recite a tangible, non-transitory computer-readable medium storing instructions to perform the method recited in claims 1-4. Therefore, the rejection of claims 1-4 above applies equally here. Bohl also teaches the additional elements of claim 17 not recited in claim 1, comprising one or more processors (see ¶ 43, “Processor 202 communicates to other computing devices and systems via network interface 208, where network interface 208 is configured to transmit and receive data via a communications network.”).

Related art: Abramson et al. (US 20180293483 A1) teaches that social data may be used to create or modify a special index in the theme of a specific person's personality. The special index may be used to train a chat bot to converse in the personality of the specific person. During such conversations, one or more conversational data stores and/or APIs may be used to reply to user dialogue and/or questions for which the social data does not provide data. In some aspects, a 2D or 3D model of a specific person may be generated using images, depth information, and/or video data associated with the specific person.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IMAD M KASSIM whose telephone number is (571) 272-2958. The examiner can normally be reached 10:30AM-5:30PM, M-F (E.S.T.). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael J. Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IMAD KASSIM/ Primary Examiner, Art Unit 2129

Prosecution Timeline

May 12, 2023 — Application Filed
Jan 24, 2026 — Non-Final Rejection (§101, §103)
Apr 08, 2026 — Interview Requested
Apr 16, 2026 — Examiner Interview Summary
Apr 16, 2026 — Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596923 — MACHINE LEARNING OF KEYWORDS — Granted Apr 07, 2026 (2y 5m to grant)
Patent 12572843 — AGENT SYSTEM FOR CONTENT RECOMMENDATIONS — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572854 — ROOT CAUSE DISCOVERY ENGINE — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566980 — SYSTEM AND METHOD HAVING THE ARTIFICIAL INTELLIGENCE (AI) ALGORITHM OF K-NEAREST NEIGHBORS (K-NN) — Granted Mar 03, 2026 (2y 5m to grant)
Patent 12566861 — IDENTIFYING AND CORRECTING VULNERABILITIES IN MACHINE LEARNING MODELS — Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72% (99% with interview, +33.8% lift)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 160 resolved cases by this examiner. Grant probability derived from career allow rate.
