Prosecution Insights
Last updated: April 19, 2026
Application No. 18/457,211

SYSTEMS AND METHODS FOR SYSTEM GENERATED DAMAGE ANALYSIS

Non-Final OA · §103 · §DP
Filed: Aug 28, 2023
Examiner: CASS, JEAN PAUL
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Allstate Insurance Company
OA Round: 5 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 5-6
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (719 granted / 984 resolved; +21.1% vs TC avg)
Interview Lift: +25.9% on resolved cases with interview (strong)
Typical Timeline: 3y 1m avg prosecution; 83 applications currently pending
Career History: 1,067 total applications across all art units
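A minimal sketch of how the headline figures above reduce to arithmetic on the raw counts, assuming the dashboard computes the allow rate as granted/resolved; all variable names are illustrative, not an actual analytics API:

    granted, resolved = 719, 984

    career_allow_rate = granted / resolved                # 0.731 -> the "73%" card
    delta_vs_tc = 0.211                                   # reported "+21.1% vs TC avg"
    implied_tc_average = career_allow_rate - delta_vs_tc  # implied Tech Center average

    print(f"Career allow rate: {career_allow_rate:.1%}")    # 73.1%
    print(f"Implied TC average: {implied_tc_average:.1%}")  # 52.0%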

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 12.6% (-27.4% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)
Based on career data from 984 resolved cases; deltas shown vs the Tech Center average estimate.

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Applicant's Arguments

The previous rejection is withdrawn, Applicant's amendments are entered, and Applicant's remarks are entered into the record. A new search, necessitated by Applicant's amendments, located a new reference, and a new rejection is made herein; Applicant's arguments are now moot in view of the new rejection of the claims. Claim 1 and the other independent claims are amended to recite "...iteratively, and until a confidence metric associated with the event interpretation [[meet]] data meets a predetermined threshold: generating notification data requesting additional data related to the vehicle damage, wherein the additional data comprises data from a source different from the one or more sensors; and after receiving the additional data, regenerating the event interpretation data [[based on the additional data]] by integrating the additional data with the damage model and the scene data to produce an updated likelihood assessment." This limitation is newly mapped to Michel in the rejection below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 9, and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent No. 9,805,423 to Konrardy et al. (filed in 2014, issued in 2017; hereinafter "Konrardy"), in view of U.S. Patent Application Pub. No. US 2018/0260793 A1 to Li ("Li"), in view of U.S. Patent No. US 11,676,215 B1 to Feiteira et al. (filed in 2018 and assigned to Liberty Mutual; "Feiteira"), in view of U.S. Patent Application Pub. No. US 2021/0248374 A1 to Chikkaveerappa et al. (filed Aug. 8, 2018; "Chikkaveerappa"), and in view of U.S. Patent Application Pub. No. US 2019/0303982 A1 to Michel et al. (filed Mar. 30, 2018; "Michel").

In regard to claims 1, 9, and 15, Konrardy discloses "...a system, comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the system to perform operations comprising:" (see FIG. 1, where the vehicle can access a cloud server with a memory 162, a processor, RAM, a database 146, and a second handheld mobile device 110).

Claim 1 is amended to recite, and the primary reference is silent but Chikkaveerappa teaches, "...receiving from one or more sensors associated with a vehicle sensor data that indicates one or more sensed impacts to the vehicle" (see paragraph 40, where a camera can be used with a database of damage; see claims 1-8, where the impacts on the vehicle and their severity levels can be identified with notes in augmented reality, and paragraphs 30-34, where the front passenger side and the rear bumper have serious damage); "determining based on the sensed impacts a damage model that indicates respective severities of one or more damaged elements of the vehicle and one or more respective points of impact for the one or more damaged elements" (see paragraph 33, where, using a camera and an AI engine, the damage from the accident can be identified and text can be superimposed on the image indicating the degree of damage as minor, moderate, or severe); "...wherein the event interpretation data is based on the damage model" (see paragraph 30); and "...wherein the event interpretation data indicates a likelihood that the damage indicated by the damage model is associated with damage" (see FIGS. 6a-6b and paragraphs 33-35 and 68, where the AI engine identifies the accident damage and superimposes text indicating its degree, and where paragraph 68 shows less severe damage on the front door and more severe damage in another area, with the AI engine providing an indicator of the damages).

It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of Konrardy and the teachings of Chikkaveerappa, since Chikkaveerappa teaches that a computer with a camera and a model of vehicle information using a neural network can scan a vehicle and formulate an image. The vehicle can then be determined, for example, to have damage to a portion of the vehicle, an impact can be determined, and a severity level of the damage can be superimposed using augmented reality to show more severe and less severe damage on a scale. See paragraphs 30-61.
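The severity-labeling step Chikkaveerappa is cited for can be pictured as scoring each detected damage region and bucketing the score into the minor/moderate/severe labels superimposed on the image. A minimal sketch in Python, assuming a scored-region input; the class, thresholds, and function names are illustrative, not Chikkaveerappa's actual implementation:

    from dataclasses import dataclass

    @dataclass
    class DamageRegion:
        part: str     # e.g. "front passenger door"
        score: float  # model-assigned severity in [0, 1]

    def severity_label(score: float) -> str:
        # Bucket a continuous severity score into the three displayed labels.
        if score < 0.33:
            return "minor"
        if score < 0.66:
            return "moderate"
        return "severe"

    def overlay_annotations(regions: list[DamageRegion]) -> list[str]:
        # Text that an augmented-reality layer could superimpose per region.
        return [f"{r.part}: {severity_label(r.score)}" for r in regions]

    print(overlay_annotations([DamageRegion("front door", 0.21),
                               DamageRegion("rear bumper", 0.88)]))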
In regard to claims 1, 9, and 15, Konrardy discloses "...receiving, from one or more sensors associated with an object, sensor data; determining, based on the sensor data,..." (see col. 50, lines 1-55, where the sensor 120 and camera data can indicate the damage to the vehicle). Konrardy is silent, but Li teaches, "...a damage model indicating a severity of one or more damage elements" (see the abstract, where the device can scan a damaged part and a second damaged part, access a database, and determine the repair cost for the first and second damaged parts; see paragraph 60, where the vehicle telematics data, including the speed, route, and acceleration of the vehicle, is provided to the processor, and FIG. 4, where an inference is reached as to how the damage occurred in block 406; see paragraphs 153-156, where a neural network of past accidents is accessed, an indicator of damage can be shown, and other parts of the vehicle that are likely damaged but hidden from the view of the camera can be linked based on prior accidents).

Konrardy discloses "...determining, based on the sensor data, scene data identifying a geographic location; and generating, by at least one machine classifier, event interpretation data for each of the one or more damage elements, wherein the event interpretation data is based on the damage model and the scene data" (see col. 52, lines 1-17, where a collapsing bridge, an animal, or a defect in the vehicle itself caused the damage; see FIG. 13, where the third party is indicated to be at fault in block 1310 and there was no chance for the instant vehicle to avoid the accident in block 1312, and where, if there was a chance for the subject vehicle to avoid the accident, the subject vehicle is assessed as a higher-risk individual at block 1322; see col. 52, lines 1-31).

It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of Konrardy and the teachings of Li, since Li teaches that a computer with a camera can scan a vehicle and formulate an image. The vehicle can then be determined, for example, to have damage to a panel. The computer also includes a neural network with information from thousands of past accidents and thousands of past repairs, so the computer can determine that, in addition to the visibly damaged panel, other components behind the panel that are hidden from view likely also need repair. This prediction comes from the neural network and the history of observed past accidents and repairs. As described, the method shown in FIG. 27 provides a technique to identify which external parts of a vehicle are damaged, and which portions of those parts are damaged. As described in FIG. 4 at step 406, from this information the server may also infer internal damage to the vehicle from detected external damage: once the externally damaged parts are identified, the server can look up in a database which internal parts are also likely to be repaired or replaced based on the set of damaged external parts. This inference can be based on historical models of which internal parts needed to be replaced given certain external damage in prior repairs. This provides an advantageous computerized tool, as a repair and its cost can be formulated instantly from an exterior photo without having to take the vehicle apart, saving time and expense. See paragraphs 60-70, 153-170, and 354, claims 1-12, and the abstract.
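Li's historical-repair inference can be pictured as a co-occurrence lookup: given the externally damaged parts, retrieve the hidden internal parts that past repairs typically replaced alongside them. A minimal sketch; the table, threshold, and names are illustrative stand-ins for Li's learned model:

    # external part -> {internal part: fraction of past repairs requiring it}
    HISTORICAL_COOCCURRENCE = {
        "front bumper": {"radiator support": 0.62, "AC condenser": 0.41},
        "rear quarter panel": {"wheel housing": 0.55},
    }

    def infer_internal_damage(external_parts: set[str], threshold: float = 0.5):
        # Collect internal parts whose historical repair frequency, given the
        # observed external damage, clears the threshold.
        likely = {}
        for part in external_parts:
            for internal, freq in HISTORICAL_COOCCURRENCE.get(part, {}).items():
                if freq >= threshold:
                    likely[internal] = max(likely.get(internal, 0.0), freq)
        return likely  # parts probably damaged but hidden from the camera

    print(infer_internal_damage({"front bumper"}))  # {'radiator support': 0.62}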
The independent claims are amended to recite, and Konrardy is silent but Feiteira teaches, "...receiving from a sensor...sensor data that indicates damage to a vehicle...wherein the event interpretation data indicates a likelihood that damage indicated by...is associated with damage indicated by a user in a first notice of loss" (see claims 1-15, where the notice of loss can indicate a claim of a user, a camera tool can overlay an augmented reality view interface, and, using the AI and classifier, a mapping can be made where the images do not conform to the damage, so that fraud can be found or, if there is no fraud, a payment can be made; see also the sensors that include an audio sensor to detect a speech pattern of the user that indicates fraud when describing the damage). In some embodiments, the loss scoping module 230 may also utilize AI image recognition tools applied to any imagery collected to fully identify specific materials (e.g., drywall, insulation, carpet, and/or the like), contents items, equipment, and other items, as illustrated in FIG. 7. At step/operation 702, the AI engine in the loss scoping module 230 is configured to access one or more captured images provided by a user. At step/operation 704, the AI engine may compare the captured images to a bank of available images which have been learned by or pre-programmed into the AI engine with structured descriptions of the image subject; for example, when a subject in the captured image matches the subject in the learned or pre-programmed image, a description of the item may be produced. The AI engine may also utilize additional data points such as OCR to add further detail to the item description at step/operation 706, and may compare user-provided annotations, audio transcripts, and structured data from converted documents to the generated AI identification to enhance accuracy. Characteristics of damage, such as deviations in shading, shape, and other physical characteristics which indicate damage, may also be learned by or pre-programmed into the AI engine based on proprietary or third-party imagery and compared with the captured images at step/operation 708. After the AI engine compares the previously identified materials (drywall, paint, insulation, and/or the like) and any contents items to the damage characteristics, the loss scoping module 230 generates an estimate of the scope of damage at step/operation 710, which may be part of the loss estimate generated at step/operation 604. Timing of the image capture and estimate scoping may depend on environmental conditions or completion of mitigation and other site stabilization activities to ensure the damage can be clearly captured in total. In some embodiments, the AI-based image recognition tool may analyze each captured photo to identify the item and check for fraud. If the tool has low confidence in the recognition of the item or fails to identify it, the camera tool may prompt the user to type, speak (speech-to-text), or select from generated suggestions, and the user-generated classification may be fed back into the classifier to improve future image recognition. The tool may check for potential fraud in the captured images by analyzing metadata on the images and checking against public or private third-party image sources (including search engines, social media, and others). In some embodiments, the tool may compare GPS metadata associated with the captured image with the address of the insured property associated with the claim, and may also check time stamp data to determine whether the image was captured before or after a date of loss asserted in the claim.

It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the disclosure of Konrardy with the teachings of Feiteira, with a reasonable expectation of success, since Feiteira teaches that the AI module can use photos, be triggered from the first notice of loss, indicate that the damage predates the claim based on the time stamp and on speech patterns, and then deny the claim as indicating potential fraud. See claims 1-15.
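The two metadata checks Feiteira is cited for (GPS tag versus insured address, capture time versus the asserted date of loss) can be sketched as a single plausibility test. Everything here, including the crude planar-distance test, is an illustrative assumption rather than Feiteira's code:

    from datetime import datetime
    from math import hypot

    def plausible_capture(photo_gps: tuple[float, float],
                          insured_gps: tuple[float, float],
                          captured_at: datetime,
                          date_of_loss: datetime,
                          max_deg: float = 0.01) -> bool:
        # Crude distance in degrees (~1 km); a real check would geocode properly.
        near_property = hypot(photo_gps[0] - insured_gps[0],
                              photo_gps[1] - insured_gps[1]) <= max_deg
        # The photo should postdate the asserted loss.
        after_loss = captured_at >= date_of_loss
        return near_property and after_loss  # False flags the image for review

    print(plausible_capture((41.88, -87.63), (41.881, -87.629),
                            datetime(2023, 6, 2), datetime(2023, 6, 1)))  # True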
Claim 1 is amended to recite, and the primary reference is silent but Michel teaches, "...iteratively, and until a confidence metric associated with the event interpretation ... a predetermined threshold: generating notification data requesting additional data related to the vehicle damage...and after receiving the additional data, regenerating the event interpretation data based on the additional data" (see paragraphs 27-33 and 43-45 and FIG. 4, where the drone can take an image of the damage on all of the vehicles and analyze the image, and then, based on the identification of the damage, the drone is commanded with a second command in paragraph 50 to (1) change the flight plan, (2) change the altitude, (3) change the orientation, (4) change the heading, and (5) change the angle between the objects and the air vehicle, to provide a different view and more data to further calculate the estimated damage from the second images 418 and blocks 426).

It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of Konrardy and the teachings of Michel et al. (of Travelers), since Michel teaches that a drone with a camera can scan a vehicle and formulate an image of damage, and when that image is not sufficient, the drone can move: it can change the flight plan, altitude, orientation, heading, and angle to the objects to provide a different, richer view and more data to further calculate the estimated damage from the second images 418 and blocks 426. This can provide a better estimate of the damage and repair.

Claim 1 is further amended to recite, and Michel teaches, "...iteratively, and until a confidence metric associated with the event interpretation [[meet]] data meets a predetermined threshold: generating notification data requesting additional data related to the vehicle damage, wherein the additional data comprises data from a source different from the one or more sensors; and ... after receiving the additional data, regenerating the event interpretation data [[based on the additional data]] by integrating the additional data with the damage model and the scene data to produce an updated likelihood assessment" (see paragraph 47, where the drone can have insufficient information and must shift location or consult a server; see paragraph 59, where the server can inform the drone that (1) dozens or a hundred vehicles in the lot all experienced damage from the hail storm, (2) this vehicle was in that hail storm, and (3) this vehicle did or did not have insurance coverage during the hail storm, so that the claim can be immediately denied for fraud detection if it did not; and see paragraphs 22 and 47-48, where the drone can examine the accident, obtain a different vantage point of the scene from a sensor and/or a remote server, and provide this to the AI module and the second AI module, with the storm/damage analysis and claim handling concepts used, for example, to execute one or more insurance-related actions, such as approving and/or paying a claim (partially or in full) or denying a claim (partially or in full) for the location, a particular subset of the discrete objects, and/or a particular discrete object at the location).
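Read as an algorithm, the amended limitation mapped to Michel is a loop: classify, and while the confidence metric stays below the threshold, request data from a different source (a repositioned drone view, server-side storm records) and re-run the classifier over the damage model, scene data, and new data. A hedged sketch under that reading; every function and field name is a hypothetical placeholder, not the application's or Michel's code:

    def interpret_event(damage_model, scene_data, request_additional_data,
                        classify, threshold: float = 0.9, max_rounds: int = 5):
        # classify(...) is assumed to return a dict with a "confidence" entry.
        extra = []
        interpretation = classify(damage_model, scene_data, extra)
        rounds = 0
        while interpretation["confidence"] < threshold and rounds < max_rounds:
            # Generate notification data asking a *different* source for more
            # evidence (new vantage point, remote server records, ...).
            extra.append(request_additional_data(interpretation))
            # Regenerate by integrating the new data with model + scene data.
            interpretation = classify(damage_model, scene_data, extra)
            rounds += 1
        return interpretation  # the updated likelihood assessment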
Claims 3, 11, and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Konrardy in view of Li, in view of U.S. Patent No. US 10,497,108 B1 to Knuffman et al. (filed Dec. 2016; hereinafter "Knuffman"), in view of Feiteira, and in view of Chikkaveerappa and Michel.

In regard to claims 3, 11, and 17, Knuffman teaches "...the system of claim 1, wherein the operations further comprise obtaining additional information regarding the geographic location, the additional information comprising at least one of a road type, an intersection type, a speed limit, a road condition, or weather information" (see col. 14, line 51 to col. 16, line 45, and col. 1, lines 40-45, where the vehicle is determined to be in a parking lot but needs to be brought to a repair shop, and col. 4, line 49, where weather caused the accident).

It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of Konrardy and the teachings of Knuffman, since Knuffman teaches that a drone can be dispatched to a vehicle upon a trigger that an insurance claim is made. The drone can be an unmanned aerial vehicle or a ground-based drone that can look under or over the car on a road. The drone can be trained to detect damage using images that correctly show an indication of damage; by capturing images and comparing them to the training images, the UAV can determine the amount of damage. If there is damage, the UAV can immediately order a repair to put the vehicle back on the road quickly. If there is no damage, the insurance carrier can instead adjust the risk, as the driver is reporting a claim with no damage or presenting a false claim with damage inconsistent with the claim; for example, the drone can image the bumper and other areas to see whether the damage is consistent with the type of accident. See col. 2, line 65 to col. 3, line 35, col. 9, lines 1-61, col. 15, lines 1-20, claims 1-10, and the abstract.

Claims 4 and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Konrardy in view of Li, in view of U.S. Patent No. US 10,373,387 B1 to Fields et al. ("Fields"), in view of Feiteira, and in view of Chikkaveerappa.

In regard to claims 4 and 18, Fields teaches "...the system of claim 1, wherein: the sensor data comprises image data of the one or more damage elements; and the damage model is based on the image data" (see col. 14, line 40 to col. 15, line 60).

It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of Konrardy and the teachings of Fields, since Fields teaches that a user can report a claim using a smartphone (see FIG. 5a). A model of an accident scene can be determined via 360-degree photos, satellite images, Google Maps™ Street View™, and text. The text can include the claim information, details about the loss, and a description of the facts. Using a smartphone, the user can provide audio and text data indicating the details of the accident, and through the user interface the claim can be reported using immersive, annotatable multimedia images. This allows the claims handler to view the details and verify the veracity of the claim using the images as evidence, providing increased automation of the claim to determine damage and assess fault without having to go to the scene of the accident. For example, a rear-end collision can be claimed, but the images can reveal a side collision, which can apportion the claim fault differently. See Fields at col. 14, line 40 to col. 15, line 60, and the abstract.

Claims 5, 12, and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over Konrardy in view of Li, in view of Knuffman, in view of Feiteira, and in view of Chikkaveerappa and Michel.

In regard to claims 5, 12, and 19, Knuffman teaches "...the system of claim 1, wherein the event interpretation data comprises: a confidence metric comprising a likelihood of liability for the one or more damage elements, an indication of how damage to the one or more damage elements occurred, and an indication of a party that is at fault" (see col. 21, line 65 to col. 31, line 35, col. 9, lines 1-61, claims 1-2, and col. 15, lines 1-20, where, based on the training, there is no damage and the claim is fraud). It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of Konrardy and the teachings of Knuffman for the reasons given above with respect to claims 3, 11, and 17.
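The event interpretation data recited in claim 5 maps naturally onto a small record type. A sketch paraphrasing the claim language; the field names and example values are illustrative, not drawn from any cited reference:

    from dataclasses import dataclass

    @dataclass
    class EventInterpretation:
        confidence: float    # confidence metric: likelihood of liability
        damage_cause: str    # how damage to the element(s) occurred
        at_fault_party: str  # which party is indicated to be at fault

    hail_claim = EventInterpretation(confidence=0.93,
                                     damage_cause="hail impact to hood",
                                     at_fault_party="no third party (weather)")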
Claims 6-7, 13-14, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Konrardy in view of Li, in view of Fields, in view of Feiteira, and in view of Chikkaveerappa and Michel.

In regard to claims 6 and 13, Fields teaches "...the system of claim 1, wherein the operations further comprise obtaining satellite image data for the geographic location" (see col. 14, lines 40-67, and col. 15, lines 1-60). It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of Konrardy and the teachings of Fields for the reasons given above with respect to claims 4 and 18.

In regard to claims 7, 14, and 20, Fields teaches "...the system of claim 1, wherein the operations further comprise: generating a scene rendering for the geographic location based on the event interpretation data and the satellite image data; generating a user interface comprising the scene rendering and the event interpretation data; and providing the user interface" (see FIGS. 5a-5b, where the loss description is provided with the damage and scene of the impact; see FIGS. 6-7, including block 708, where the annotated image can be received and a damage amount determined by visualization of the photos and a model for an amount of damage; see also col. 14, lines 40-67, and col. 15, lines 1-60). It would have been obvious to combine the disclosure of Konrardy and the teachings of Fields for the same reasons given above.

Claim 8 is rejected under 35 U.S.C. § 103 as being unpatentable over Konrardy in view of Li, in view of U.S. Patent No. US 10,475,127 B1 to Potter et al. ("Potter"), in view of Feiteira, and in view of Chikkaveerappa and Michel.

Potter teaches "...the system of claim 1, wherein the sensor data comprises information regarding operation of a vehicle including at least one of a speed, an acceleration, the geographic location, or impact data from an impact sensor" (see col. 2, lines 40-60, where the telematics data comprises the speed of the vehicle on a route; col. 14, lines 30-56, where an insured profile includes whether the user is moving under or over a speed limit on the route; and col. 14, lines 45-65, where the user is deemed to be driving on an urban or suburban roadway, a highway-type road, or a rural road).

It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of Konrardy and the teachings of Potter, since Potter teaches that a server and a vehicle telematics unit can be connected via an interface to receive signals from each other. The server can track parameters of a vehicle to determine whether the vehicle is operating dangerously and speeding on rural roads in excess of a posted speed limit, for example moving in excess of 100 MPH in a 35 MPH zone, and can determine whether the driver speeds frequently. The server can provide this data to an insurance provider to correctly assess the driver as high risk and raise the insurance premiums or cancel the policy. Additionally, the server can indicate vehicles that always move within the speed limit, for example at 35 MPH in a 35 MPH zone, as belonging to safe, low-risk drivers, maintaining the same premiums for those drivers. The insurance carrier can then correctly adjust the risk as the vehicles are monitored in real time. See col. 12, line 1 to col. 14, line 65, and the abstract.
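The telematics risk adjustment Potter is cited for amounts to comparing observed speeds against posted limits along a route and flagging habitual gross excess. A minimal sketch with illustrative thresholds and record layout, not Potter's actual logic:

    def risk_tier(samples: list[tuple[float, float]],
                  excess_mph: float = 15.0,
                  habitual_fraction: float = 0.5) -> str:
        # samples: (observed speed mph, posted limit mph) pairs along the route.
        speeding = sum(1 for speed, limit in samples if speed - limit > excess_mph)
        # "high" when gross excess is the rule rather than the exception.
        return "high" if speeding / len(samples) > habitual_fraction else "standard"

    print(risk_tier([(100, 35), (98, 35), (34, 35)]))  # "high": frequent gross excess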
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1 and 9 and 15 are rejected under 35 U.S.C. sec.103 as being unpatentable as obvious in view of U.S. Patent No.: 9,805,423 to Konrady that was filed in 2014 and published in 2017 (hereinafter “KONRADY ") and in view of U.S. Patent Application Pub. No.; US20180260793Al to Li. and in view of United States Patent No.: US11676215B1 to Feiteira et al. that was filed in 2018 and is assigned to LIBERTY MUTUAL™ and CHIKKAVEERAPPA and Michel. PNG media_image4.png 746 556 media_image4.png Greyscale Claim 1 is amended to recite and the primary reference is silent but MICHEL teaches “..iteratively, and until a confidence metric associated with the event interpretation meet a predetermined threshold: generating notification data requesting additional data related to the vehicle damage; and after receiving the additional data, regenerating the event interpretation data based on the additional data”. (see paragraph 27-33 and 43-45 and Fig. 4 where the drone can take an image of the damage on all of the vehicles and analyze the image and then based on the identification of the damage the drone will be commanded with a second command in paragraph 50 to 1. Change the flight plan,2 change the altitude 3 change the orientation and 4 heading and 6 angle of the objects and the air vehicle to provide a different view and more data to further calculated the estimated damage form the second images 418 and blocks 426) It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of the primary reference and the teachings of MIHCEL et al of Travelers since Michel teaches that a drone with a camera can scan a vehicle and formulate an image of a damage and then this can be not sufficient. The drone can then move . The drone can then 1 change the flight plan,2 change the altitude 3 change the orientation and 4 heading and 6 angle of the objects and the air vehicle to provide a different more rich view and more data to further calculated the estimated damage from the second images 418 and blocks 426. This can provide a better estimate of the damage and repair. PNG media_image1.png 856 768 media_image1.png Greyscale PNG media_image2.png 693 950 media_image2.png Greyscale In regard to claim 1, 9 and 15, Konrardy ‘432 discloses “...1. A system, comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the system to perform operations comprising: (see FIG. 1 where the vehicle cam access a cloud server with a memory 162, and processor 162 and a ram and database 146 and a second handheld mobile device 110)....(see col. 
50, lines 1-55 where the sensor 120 and camera data can indicate the damage to the vehicle) Claim 1 is amended to recite and the primary reference is silent but CHIKKAVEERAPPA teaches “....receiving from one or more sensors (see paragraph 40 where a camera can be used with a database of damage) associated with a vehicle sensor data that indicates one or more sensed impacts to the vehicle; (see claim 1-8 where the impacts can be identified on the vehicle and the severity level and notes in an augmented reality and paragraph 30-34 where the front passenger side and the rear bumper has serious damage) determining based on the sensed impacts a damage model that indicates respective severities of one or more damaged elements of the vehicle (see paragraph 33 where using a camera and an AI engine the damage in the accident can be identified and a superimposed text can be provided to the image that indicates the degree of damage as minor, moderate and severe) and one or more respective points of impact for the one or more damaged elements...wherein the event interpretation data is based on the damage model.(see paragraph 30)..wherein the event interpretation data indicates a likelihood that the damaged indicated by the damage model associated with damage”. (see FIG. 6a to 6b and paragraph 33-35 and 68 where using a camera and an AI engine the damage in the accident can be identified and a superimposed text can be provided to the image that indicates the degree of damage as minor, moderate and severe and in paragraph 68 where the accident damage is shown as the less severe is on the front door but the more severe is another area and the AI engine can provide an indicator of the damages); PNG media_image3.png 934 708 media_image3.png Greyscale It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of KONRARDY and the teachings of CHIKKAVEERAPPA since CHIKKAVEERAPPA teaches that a computer with a camera and a model of vehicle information using a neural network can scan a vehicle and formulate an image. The vehicle can then be determined, for example, to have damage to a portion of the vehicle and an impact can be determined and a severity level of the damage can be provided and superimposed using augmented reality to show severe damage and less severe damage on a scale. See paragraph 30-61. Konrardy is silent but Li teaches “...a damage model indicating a severity of one or more damage elements;” (see abstract where the device can scan a damage part and a second damaged part and then access a database and then the repair cost for the first and the second damaged part can be determined) {see paragraph 60 where the vehicle telematics data including a speed of the vehicle is provided to the processor and the route and acceleration and see FIG. 
4 where an inference is reached as to how the damage occurred in block 406; see paragraph 153-i 56 where a neural network of past accidents is accessed and then an indicator of a damage can be shown and then other second parts in the vehicle based on prior accidents that may have been damaged as wen from a different accident can be accessed to link other parts that are likely damaged but hidden from the view of the camera) Konrardy discloses “...determining, based on the sensor data, scene data identifying a geographic location; and generating, by at least one machine classifier, event interpretation data for each of the one or more damage elements, wherein the event interpretation data is based on the .... the damage model, and the scene data. (See col. 52, lines 1-17where the bridge collapsing or an animal caused the damage or it is a defect in the vehicle itself and see FIG.13, block 1310 where the third party is indicated to be at fault in block 1310 and there was no chance to avoid the accident by the instant vehicle in block 1312 and if there was a chance to avoid the accident by the subject vehicle then in block 1312 the subject vehicle is assessed to be a high er risk individual at block 1322; see col. 52, lines 1-31) It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of KONRARDY and the teachings of LI since LI teaches that a computer with a camera can scan a vehicle and formulate an image. The vehicle can then be determined, for example, to have damage to a panel of the vehicle. The computer also includes a neural network. The neural network has information from thousands of past accidents and also thousands of past repairs. The computer can then determine that in addition to the obvious panel that needs to be repaired there is likely other second different components behind the panel that also necessitate being repaired as well that are hidden from view. This is from the neural network and prediction from the history of observed past accidents and thousands of repairs. As described, the method shown in FIG. 27 provides a technique to identify which external parts of a vehicle are damaged, and also which portions of those parts are damaged. As described in FIG. 4 at step 406, from this information, the server may also infer internal damage to the vehicle from detected external damage. Once the externally damaged parts are identified, the server can look up in a database which internal parts are also likely to be repaired or replaced based on the set of damaged external parts. This inference can be based on historical models for which internal parts needed to be replaced given certain external damage in prior repairs. This can provide an advantageous computerized tool as a repair and repair cost can be formulated instantly without having to take apart the vehicle and merely from an exterior photo which saves time and expenses. See paragraph 60-70 and 153-170 and 354 and claims 1-12 and the abstract. T Konrardy 432 is silent but Feiteira teaches “...receiving from a wherein the event interpretation data indicates a likelihood that damage indicated by sensor data is associated with damage indicated by a user in a first notice of loss”. 
(see claims 1-15 where the notice of loss can indicate a claim of a user and then a camera tool can overlay an augmented reality view interface and then using the ai and classifier then a mapping can be made where the images do not conform to the damage and then the fraud can be made or if there is no fraud then a payment can be made; and see sensors that include an audio sensor of the user to detect a speech pattern that indicates fraud when describing the damage; n some embodiments, the loss scoping module 230 may also utilize AI image recognition tools to be applied to any imagery collected to fully identify specific materials (e.g., drywall, insulation, carpet, and/or the like), contents items, equipment, and other items as illustrated in FIG. 7; At step/operation 702, the AI engine in the loss scoping module 230 is configured to access one or more captured images provided by a user. At step/operation 704, the AI engine in the loss scoping module 230 may be configured to compare the captured images to a bank of available images which have been learned by or pre-programmed into the AI engine with structured descriptions of the image subject. For example, when a subject in the captured image matches the subject in the learned or pre-programmed image, a description of the item may be produced. The AI engine in the loss scoping module 230 may also utilize additional data points such as OCR to add further details to the item description for heightened detail at step/operation 706. In addition, the AI engine in the loss scoping module 230 may also compare user provided annotations, audio transcripts, and structured data from converted documents to the generated AI identification to enhance accuracy. Characteristics of damage, such as deviations in shading, shape, and other physical characteristics which indicate damage, may also be learned by or pre-programmed into the AI engine in the loss scoping module 230 based on proprietary or third party imagery and compared with the captured images at step/operation 708. After the AI engine in the loss scoping module 230 compares the previously identified materials (drywall, paint, insulation, and/or the like) and any contents items to the damage characteristics, the loss scoping module 230 is configured to generate an estimate on the scope of damage to be generated at step/operation 710. The estimate on the scope of damage may be part of the loss estimate generated at step/operation 604. Timing of the image capture and estimate scoping may be dependent on environmental conditions or completion of mitigation and other site stabilization activities to ensure the damage can be clearly captured in total.... In some embodiments, the AI based image recognition tool may analyze each photo captured to identify the item and check for fraud. If the AI based image recognition tool determines that there is a low confidence in the recognition of the item or failed to identify the item, the camera tool may prompt the user to type, speak (speech-to-text), or use per generated suggestions the user can select from. The user generated classification may be fed back into the classifier to improve future image recognition. The AI based image recognition tool may check for potential fraud in the captured images by analyzing metadata on the images, checking against public or private third party images sources (including search engine, social media and others). 
In some embodiments, the AI based image recognition tool may compare GPS metadata associated with the captured image with address of the insured property associated with the claim. In some embodiments, the AI based image recognition tool may also check time stamp data to determine whether the image is captured before or after a date of loss asserted in the claim.) It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the disclosure of Konrady 432 with the teachings of FEITERIA with a reasonable expectation of success since FEITERIA teaches that the AI module can use photos and be triggered form the first notice of loss to indicate that the damage was previously made based on the time stamp and also based on the speech patterns and then deny the claim as this indicates a potential fraud. See claims 1-15. PNG media_image5.png 624 869 media_image5.png Greyscale Claim 1 is amended to recite and Michel teaches“...iteratively, and until a confidence metric associated with the event interpretation [[meet]] data meets a predetermined threshold: generating notification data requesting additional data related to the vehicle damage, wherein the additional data comprises data from a source different from the one or more sensors; and (see paragraph 47 where the drone can have not enough information and have to move to shift a location or consult a server in paragraph 59 in that the server can inform the drone that 1. Dozens or 100 vehicle in the lot all experience damage from the hail storm and that is 2. Vehicle was in that hail storm and 3. If this vehicle had insurance coverage during the hail storm or not to immediately deny the claim if they do not for fraud detection) after receiving the additional data, regenerating the event interpretation data [[based on the additional data]] by integrating the additional data with the damage model and the scene data to produce an updated likelihood assessment”. (see paragraph 22, 47-48 where the drone can examine the accident and then obtain a different vantage point of the scene from a sensor and/or a remote server and this is provided to the Ai module and the second AI module and can determine 1. In paragraph 59 that there were dozens of the vehicle that were damaged by a hail storm and the hood damage also was from the hail storm and this vehicle did not have insurance during that period of the hail storm and the claim is denied due to fraud or , he storm/damage analysis and claim handling concepts and descriptions of which are hereby incorporated by reference herein, for example, to execute one or more insurance-related actions, such as approving and/or paying a claim (partially or in full) or denying a claim (partially or in full) for the location, a particular subset of the discrete objects, and/or for a particular discrete object at the location.) Claims 3 and 11 and 17 are rejected under 35 U.S.C. sec.103 as being unpatentable as obvious in view of U.S. Patent No.: 9,805,423 to Konrady that was filed in 2014 and published in 2017 and in view of U.S. Patent Application Pub. No.; US20180260793Al to Li and in view of United States Patent No.: US 10497108 Bl to Knuffman et al. that was filed in 12-2016 (hereinafter 'Knuffman") and in view of United States Patent No.: US11676215B1 to Feiteira et al. that was filed in 2018 and is assigned to LIBERTY MUTUAL™ and in view of CHIKKAVEERAPPA and MIchel. In regard to claim 3, 11, and 17, Knuffman teaches “...3. 
The system of claim 1, wherein the operations further comprise obtaining additional information regarding the geographic location, the additional information comprising at least one of a road type, an intersection type, a speed limit, a road condition, or weather information”. (see col. 14, lines 51 to col. 16, line 45 and col 1, line 40-45 where the vehicle is determined to be in a parking lot but needs to be brought to a repair shop and col. 4, line 49 where weather caused the accident) It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of KONRARDY and the teachings of KNUFFMAN since KNUFFMAN teaches that a drone can be dispatched to a vehicle upon a trigger that an insurance claim is made. The drone can be an unmanned aerial vehicle or a ground based drone that can look under the car on a road or over a road. The drone can be trained to detect damage using images that are correctly showing an indication of damage. This provides a training. Using this image capture and comparing these new images to the trained images the UAV can determine the amount of damage. If there is damage the UAV can also immediately order a repair of the damage to put the vehicle back on the road fast. However, if there is no damage, then the insurance carrier can then alternatively adjust the risk as the driver is reporting a claim that has no damage or is presenting a false claim that has inconsistent damage with the claim. For example, the drone can look and image the bumper and other areas to see if it is consistent with the type of accident. See col. 2, lines 65 to col. 3, line 35 and col. 9 lines1 -61 and col. 15. 1-20 and claims 1-10 and the abstract. Claims 4 and 18 are rejected under 35 U.S.C. sec.103 as being unpatentable as obvious in view of U.S. Patent No.: 9,805,423 to Konrady that was filed in 2014 and published in 2017 and in view of U.S. Patent Application Pub. No.; US20180260793Al to Li and in view of U.S. Patent No.: US10373387B1 to Fields et al. and in view of United States Patent No.: US11676215B1 to Feiteira et al. that was filed in 2018 and is assigned to LIBERTY MUTUAL™ and in view of CHIKKAVEERAPPA and Michel. In regard to claim 4, and 18, Fields teaches “...4. The system of claim 1, wherein: the sensor data comprises image data of the one or more damage elements; and the damage model is based on the image data”. (see col. 15, lines 40 to 67 and col. 15, lines 1-60). It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of KONRARDY and the teachings of FIELDS since FIELDS teaches that a user using a smartphone can report a claim. See FIG. Sa. A model of an accident scene can be determined via 360 degree photos, sat. images, GoogleMaps TM streetview™ and text. The text can include the claim information, and details about the loss and a description of the facts. Using a smartphone then the user can provide audio and text data and indicating the details of the accident. Then using the user interface the claim can be reported using immersive multimedia images that can be annotated. This allows the claims handler to view the details to verify the veracity of the claim using the images as evidence. This can provide increased automation of the claim to determine a damage and assess the fault without having to go to the scene of the accident. 
For example a rear end collision can be claimed but the images can reveal a side collision which can apportion the claim fault differently. See Fields at col. 14 line 40 to col. 15, line 60 and the abstract. Claims 5 and 12, and 19 are rejected under 35 U.S.C. sec.103 as being unpatentable as obvious in view of U.S. Patent No.: 9,805,423 to Konrady that was filed in 2014 and published in 2017 and in view of U.S. Patent Application Pub. No.; US20180260793Al to Li and in view of U.S. Patent No.: 10,497,108 to Knuffman et al. and in view of United States Patent No.: US11676215B1 to Feiteira et al. that was filed in 2018 and is assigned to LIBERTY MUTUAL™ and in view of CHIKKAVEERAPPA and Michel. In regard to claim 5, 12 and 19, Knuffman teaches “...5. The system of claim 1, wherein the event interpretation data comprises: a confidence metric comprising a likelihood of liability for the one or more damage elements, an indication of how damage to the one or more damage elements occurred, and an indication of a party that is at fault”. (See col 21 line 65 to col 31 line 35 and col 9, lines 1 -61 and claims 1 -2 and col 15, lines 1 -20 were based on the training there is no damage and the claim is fraud). It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of Konrardy and the teachings of KNUFFMAN since KNUFFMAN teaches that a drone can be dispatched to a vehicle upon a trigger that an insurance claim is made. The drone can be an unmanned aerial vehicle or a ground based drone that can look under the car. The drone can be trained to detect damage using images that are correctly showing an indication of damage. This provides a training. Using this image capture and comparing these new images to the trained images the UAV can determine the amount of damage. If there is damage the UAV can also immediately order a repair of the damage to put the vehicle back on the road fast. However, if there is no damage, then the insurance carrier can then alternatively adjust the risk as the driver is reporting a claim that has no damage or is presenting a false claim that has inconsistent damage with the claim. For example, the drone can look and image the bumper and other areas to see if it is consistent with the type of accident. See col. 2, lines 65 to col. 3, line 35 and col. 9 lines1 -61 and col. 15. 1-20 and claims 1-10 and the abstract. Claims 6-7 and 20 are rejected under 35 U.S.C. sec.103 as being unpatentable as obvious in view of U.S. Patent No.: 9,805,423 to Konrady that was filed in 2014 and published in 2017 and in view of U.S. Patent Application Pub. No.; US20180260793Al to Li and in view of U.S. Patent No.: US10373387B1 to Fields et al. and in view of United States Patent No.: US11676215B1 to Feiteira et al. that was filed in 2018 and is assigned to LIBERTY MUTUAL™ and in view of CHIKKAVEERAPPA and Michel. Fields teaches “...6. The system of claim 1, wherein the operations further comprise obtaining satellite image data for the geographic location”. (See col 14, lines 40-67 and col. 15, lines 1-60). It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of KONRARDY and the teachings of FIELDS since FIELDS teaches that a user using a smartphone can report a claim. See FIG. Sa. A model of an accident scene can be determined via 360 degree photos, sat. images, GoogleMaps TM streetview™ and text. 
Claims 6-7 and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent No. 9,805,423 to Konrardy (filed in 2014, published in 2017) in view of U.S. Patent Application Pub. No. US20180260793A1 to Li, U.S. Patent No. US10373387B1 to Fields et al., U.S. Patent No. US11676215B1 to Feiteira et al. (filed in 2018 and assigned to LIBERTY MUTUAL™), CHIKKAVEERAPPA, and Michel.

In regard to claim 6, Fields teaches “...6. The system of claim 1, wherein the operations further comprise obtaining satellite image data for the geographic location”. (See col. 14, lines 40-67 and col. 15, lines 1-60.)

It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of KONRARDY with the teachings of FIELDS for the same reasons stated above with respect to claims 4 and 18: a smartphone-reported claim, together with a scene model built from 360-degree photos, satellite images, Google Maps™ Street View™, and text, lets the claims handler verify the veracity of the claim and assess fault without visiting the scene. See Fields at col. 14, line 40 to col. 15, line 60 and the abstract.

In regard to claims 7 and 20, Fields teaches “...7. The system of claim 1, wherein the operations further comprise: generating a scene rendering for the geographic location based on the event interpretation data and the satellite image data; (see Fig. 5a to 5b, where the loss description is provided along with the damage and the scene of the impact) (see Fig. 6-7) generating a user interface comprising the scene rendering and the event interpretation data; and providing the user interface”. (See col. 14, lines 40-67 and col. 15, lines 1-60.) (See Fig. 6-7, block 708, where the annotated image can be received and a damage amount determined by visualization of the photos and the model.)

It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of KONRARDY with the teachings of FIELDS for the same reasons stated above with respect to claims 4 and 18.
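Claim 7 recites three ordered steps: render the scene from the event interpretation data plus satellite imagery, build a user interface around the rendering, and provide that interface. Below is a hypothetical outline in code, with every function and field name assumed for illustration; neither the application nor Fields is known to implement it this way.

```python
# Hypothetical outline of claim 7's steps: scene rendering -> UI -> delivery.
# All names are illustrative assumptions, not drawn from the references.
from dataclasses import dataclass, field


@dataclass
class SceneRendering:
    location: tuple[float, float]  # (latitude, longitude) of the event
    base_layer: bytes              # satellite image tile for the location
    annotations: list[str] = field(default_factory=list)


def generate_scene_rendering(lat: float, lon: float, satellite_tile: bytes,
                             interpretation: dict) -> SceneRendering:
    """Overlay event-interpretation findings on the satellite imagery."""
    return SceneRendering(
        location=(lat, lon),
        base_layer=satellite_tile,
        annotations=[f"{key}: {value}" for key, value in interpretation.items()],
    )


def generate_user_interface(rendering: SceneRendering,
                            interpretation: dict) -> dict:
    """Bundle the scene rendering and interpretation into one UI payload."""
    return {"scene": rendering, "interpretation": interpretation}


def provide_user_interface(ui: dict) -> None:
    """Stand-in for delivering the UI to a claims handler's client."""
    print(f"UI ready with {len(ui['scene'].annotations)} annotations")
```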
Claim 8 is rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent No. 9,805,423 to Konrardy (filed in 2014, published in 2017) in view of U.S. Patent Application Pub. No. US20180260793A1 to Li, U.S. Patent No. US10475127B1 to Potter et al., U.S. Patent No. US11676215B1 to Feiteira et al. (filed in 2018 and assigned to LIBERTY MUTUAL™), CHIKKAVEERAPPA, and Michel.

Potter teaches “...8. The system of claim 1, wherein the sensor data comprises information regarding operation of a vehicle including at least one of a speed, an acceleration, the geographic location, or impact data from an impact sensor”. (see col. 2, lines 40-60, where the telematics data comprises the speed of the vehicle on a route) (see col. 14, lines 30-56, where an insured profile includes whether the user is moving under or over a speed limit on the route) (see col. 14, lines 45-65, where the user is deemed to be driving on an urban or suburban roadway, a highway-type road, or a rural road)

It would have been obvious for one of ordinary skill in the art before the effective filing date to combine the disclosure of KONRARDY with the teachings of POTTER, since POTTER teaches that a server and a vehicle telematics unit can be connected to each other via an interface to receive signals from each other. The server can track parameters of a vehicle to determine whether the vehicle is operating dangerously, such as speeding on rural roads in excess of a posted speed limit, for example, moving in excess of 100 MPH in a 35 MPH zone. The server can determine whether the driver speeds frequently, and can provide this data to an insurance provider to correctly assess the driver as high risk and raise the insurance premiums or cancel the policy. Conversely, vehicles that always move within the speed limit, for example at 35 MPH in a 35 MPH zone, can be indicated as safe, low-risk drivers, and the same insurance premiums can be maintained for those drivers. The insurance carrier can then correctly adjust the risk as the vehicles are monitored in real time. See col. 12, line 1 to col. 14, line 65 and the abstract.
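The POTTER rationale is, at bottom, a thresholding exercise over telematics samples. Here is a minimal sketch of that kind of risk classification, with the sample format and the speeding-rate threshold assumed for illustration:

```python
# Hypothetical telematics risk classification in the spirit of the
# examiner's POTTER rationale; names and thresholds are assumptions.

def classify_driver(samples: list[tuple[float, float]],
                    speeding_rate_threshold: float = 0.5) -> str:
    """samples: (observed_speed_mph, posted_limit_mph) pairs reported by
    the vehicle telematics unit as the server monitors it in real time."""
    if not samples:
        return "unknown"
    speeding = sum(1 for speed, limit in samples if speed > limit)
    if speeding / len(samples) >= speeding_rate_threshold:
        return "high risk: raise premium or cancel policy"
    return "low risk: maintain premium"


# The rationale's own examples: 100 MPH in a 35 MPH zone vs. 35 in a 35.
print(classify_driver([(100.0, 35.0), (95.0, 35.0)]))  # high risk
print(classify_driver([(35.0, 35.0), (33.0, 35.0)]))   # low risk
```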
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1, 3-9, 11-15, and 17-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-15 of U.S. Patent No. 11,741,763. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims recite using a machine classifier based on image data to determine a confidence metric and then show how a fault occurred and who is at fault.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEAN PAUL CASS, whose telephone number is (571) 270-1934. The examiner can normally be reached Monday to Friday, 7 am to 7 pm, and Saturday, 10 am to 12 noon. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Scott A. Browne, can be reached at 571-270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEAN PAUL CASS/
Primary Examiner, Art Unit 3668

Prosecution Timeline

Aug 28, 2023
Application Filed
Apr 18, 2024
Non-Final Rejection — §103, §DP
Jul 22, 2024
Response Filed
Oct 01, 2024
Final Rejection — §103, §DP
Feb 03, 2025
Request for Continued Examination
Feb 05, 2025
Response after Non-Final Action
Feb 13, 2025
Non-Final Rejection — §103, §DP
Jun 18, 2025
Response Filed
Aug 29, 2025
Final Rejection — §103, §DP
Jan 02, 2026
Request for Continued Examination
Feb 12, 2026
Response after Non-Final Action
Mar 06, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593752
SYSTEM AND METHOD FOR CONTROLLING HARVESTING IMPLEMENT OPERATION OF AN AGRICULTURAL HARVESTER BASED ON TILT ACTUATOR FORCE
2y 5m to grant · Granted Apr 07, 2026
Patent 12596986
GLOBAL ADDRESS SYSTEM AND METHOD
2y 5m to grant · Granted Apr 07, 2026
Patent 12590801
REAL TIME DETERMINATION OF PEDESTRIAN DIRECTION OF TRAVEL
2y 5m to grant · Granted Mar 31, 2026
Patent 12583572
MARINE VESSEL AND MARINE VESSEL PROPULSION CONTROL SYSTEM
2y 5m to grant · Granted Mar 24, 2026
Patent 12571183
EXCAVATOR
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+25.9%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 984 resolved cases by this examiner. Grant probability derived from career allow rate.
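One plausible reading of how these figures fit together, assuming the interview lift is applied additively to the base grant probability (the tool's actual methodology is not documented on this page):

```python
# Assumed additive model relating the displayed figures; illustrative only.
base_grant_probability = 0.73   # career allow rate, displayed as 73%
interview_lift = 0.259          # displayed as +25.9%

with_interview = min(base_grant_probability + interview_lift, 1.0)
print(f"with interview: {with_interview:.0%}")  # with interview: 99%
```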
