Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-10 and 12-21 are all the claims pending in the application.
Claims 11 and 22 are cancelled.
Claims 1 and 12 are amended.
Claims 1-10 and 12-21 are rejected.
The following is a Final Office Action in response to amendments and remarks filed August 18, 2025.
Response to Arguments
Regarding the 112(b) rejections, the rejections are withdrawn in light of the cancellation of the claims.
Regarding the 101 rejections, Applicant first asserts the rejections should be withdrawn because the claims as amended do not recite an abstract idea. Examiner respectfully does not find this assertion persuasive because the claims still recite reviewing recordings to assess risk and damage, and reporting the assessment, which is a part of providing insurance or mitigating risk.
Second, Applicant asserts the claims reflect a significant improvement in a computer vision system and specific, customized, unconventional machine learning layers. Examiner respectfully does not find this assertion persuasive because a bare assertion of an improvement, without the detail necessary for the improvement to be apparent, is not sufficient to show an improvement, see MPEP 2106.04(d)(1) (discussing MPEP 2106.05(a)). That is, it is not clear how the claimed layers reflect more than a general link to computer vision techniques. Accordingly, the 101 rejections are maintained; please see below for the complete rejections of the claims as amended.
Regarding the 102 and 103 rejections, the rejections are withdrawn at least because the cited references do not teach nodes representing a presence or absence of a hazard. Please see below for the new 103 rejections of the claims as amended.
In response to arguments concerning any dependent claims that have not been individually addressed, all rejections of those dependent claims are maintained because Applicant's reply does not distinctly and specifically point out the supposed errors in Examiner's prior Office action (37 CFR 1.111). Examiner notes that Applicant argues only that the dependent claims should be allowable because the independent claims are unobvious and patentable over the prior art.
Claim Objections
Claims 1 and 12 are objected to because of the following informalities: claims 1 and 12 repeat the word “a” in the limitation (emphasized) “…a hazard associated with the feature using a a computer vision model including a first plurality of layers…” Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 and 12-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Under Step 1 of the patent eligibility analysis, it must first be determined whether the claims are directed to one of the four statutory categories of invention. Applying Step 1 to the claims, it is determined that: claims 1-10 are directed to a machine; and claims 12-21 are directed to a process. Therefore, we proceed to Step 2.
Independent Claims
Under Step 2A Prong 1 of the patent eligibility analysis, it must be determined whether the claims recite an abstract idea that falls within one or more designated categories or “buckets” of patent ineligible subject matter that amount to a judicial exception to patentability.
The independent claims recite an abstract idea. Specifically, independent claim 1 recites an abstract idea in the limitations (emphasized):
…obtain the media content;
segmenting the media content to detect and classify a feature in the media content corresponding to the asset;
process the media content to detect a hazard associated with the feature;
using a a computer vision model including a first plurality of layers and a second plurality of layers, the first plurality of layers comprising a plurality of trained convolution layers which extract the feature from the media content, the second plurality of layers comprising a plurality of fully- connected layers which process outputs of the plurality of trained convolution layers to classify the hazard, and each of the plurality of fully-connected layers including a plurality of nodes, each of the nodes representing a presence or absence of a hazard;
process the media content to detect damage associated with the feature; and
generate an output indicating the feature, the hazard associated with the feature, and the damage associated with the feature.
These limitations recite an abstract idea because they recite fundamental economic principles or practices (i.e., insurance or mitigating risk)¹. These limitations essentially encompass reviewing recordings to assess risk and damage, and reporting the assessment, which is a part of providing insurance or mitigating risk (i.e., assessing risk to determine claims and rates, or reviewing risks and damage to determine mitigation strategies). Claims 1 and 12 recite an abstract idea.
Under Step 2A Prong 2 of the patent eligibility analysis, it must be determined whether the identified, recited abstract idea includes additional elements that integrate the abstract idea into a practical application.
The additional elements of the independent claims do not integrate the abstract idea into a practical application. First, claim 1 recites the additional elements: “…a memory storing media content indicative of an asset; and a processor in communication with the memory, the processor programmed to: obtain the media content…” and claim 12 recites the additional elements: “…retrieving by a processor media content corresponding to an asset and stored in a memory in communication with the processor…” These additional elements, when considered individually or in combination, do not integrate the abstract idea into a practical application because the additional elements encompass a generic computer performing a generic function of receiving data (i.e., receiving media data), see MPEP 2106.05(f)(2) (noting the use of computers in their ordinary capacity to receive, store, or transmit data does not integrate a judicial exception into a practical application).
Second, claims 1 and 12 recite the additional elements of using a computer vision model with the various layers and nodes, as claimed. These additional elements, when considered individually or in combination, do not integrate the abstract idea into a practical application because the additional elements are only a general link to a field of use or technological environment, see MPEP 2106.05(h) (discussing Affinity Labs). That is, although these additional elements do limit the use of the abstract idea, these types of limitations merely confine the use of the abstract idea to a particular technological environment (computer vision or image analysis techniques) and do not integrate the abstract idea into a practical application or add an inventive concept to the claims. Claims 1 and 12 are directed to an abstract idea.
Under Step 2B of the patent eligibility analysis, the additional elements are evaluated to determine whether they amount to something “significantly more” than the recited abstract idea (i.e., an inventive concept).
The independent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception and a general link to a field of use. Mere instructions to apply an exception and a general link to a field of use cannot provide an inventive concept. Claims 1 and 12 are not patent eligible.
Dependent Claims
The dependent claims are rejected under 35 U.S.C. 101 as directed to an abstract idea for the following reasons.
Claims 2-4, 7, 13-15 and 18 recite the additional elements of using a segmentation model to detect a structure feature. These additional elements, when considered individually or in combination, do not integrate the abstract idea into a practical application because the additional elements are only a general link to a field of use or technological environment, see MPEP 2106.05(h) (discussing Affinity Labs). That is, although these additional elements do limit the use of the abstract idea, this type of limitation merely confines the use of the abstract idea to a particular technological environment (i.e., neural network based computer vision techniques) and does not integrate the abstract idea into a practical application or add an inventive concept to the claims.
Claims 5, 6, 16 and 17 recite the same abstract idea as the independent claims because detecting materials is a part of insurance and risk mitigation (e.g., identifying dangerous or flammable materials).
Claims 8 and 19 recite the same abstract idea as the independent claims because classifying materials is a part of insurance and risk mitigation (e.g., identifying dangerous or flammable materials).
Claims 9, 10, 20, and 21 recite the same abstract idea as the independent claims because calculating hazard and damage severities is a part of insurance and risk mitigation.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-6, 8-10, 12-17, and 19-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schenkel et al., US Pub. No. 2024/0020969, herein referred to as “Schenkel”, in view of Sticlaru, Anca. "Material classification using neural networks." arXiv preprint arXiv:1710.06854 (2017), herein referred to as “Sticlaru”, further in view of Lin et al., US Pat. No. 10,803,334, herein referred to as “Lin”.
Regarding claim 1, Schenkel teaches:
a memory (e.g., ¶¶[0025], [0123])
storing media content indicative of an asset (receives digital image of property, e.g., ¶¶[0024], [0072]);
and a processor in communication with the memory, the processor programmed to (processor, e.g., ¶[0025]):
obtain the media content (receives digital image of property, e.g., ¶¶[0024], [0072], [0085]; see also Fig. 1 and ¶[0070] generally discussing the receiving process);
segmenting the media content (identifies different sections of roofs, ¶¶[0068], [0169]),
to detect and classify a feature in the media content corresponding to the asset (recognizes and classifies damage, ¶¶[0059], [0164]-[0165] and Figs. 23, 24);
process the media content to detect a hazard associated with the feature (identifies various hazards like flood footprint or a hurricane track footprint, ¶¶[0097], [0107]);
using a a computer vision model including a first plurality of layers and a second plurality of layers (uses computer vision and deep learning, e.g., ¶¶[0002], [0071], [0094]),
process the media content to detect damage associated with the feature (assess damage based on optical data, e.g., ¶¶[0124], [0163]-[0165]);
and generate an output indicating the feature, the hazard associated with the feature, and the damage associated with the feature (user receives information on event, ¶[0099]; and outputs rooftop footprint with damage, severity classes, ¶¶[0173], [0182]).
However, Schenkel does not teach, but Sticlaru does teach:
the first plurality of layers comprising a plurality of trained convolution layers which extract the feature from the media content (trained convolutional layers extract features from image, pgs. 10-12),
the second plurality of layers comprising a plurality of fully-connected layers (fully connected layers, pgs. 10-12)
which process outputs of the plurality of trained convolution layers to classify (performs image classification, pgs. 14-15)
and each of the plurality of fully-connected layers including a plurality of nodes (fully connected layers, pgs. 10-12)
Further, it would have been obvious before the effective filing date of the claimed invention, to combine the property damage recognition of Schenkel with the material classification of Sticlaru because known work in one field of endeavor may prompt variations of it for use in the same field based on design incentives, see MPEP 2143.I.F. That is, Schenkel teaches identifying roofs, e.g., ¶¶[0068]. One of ordinary skill would have recognized identifying rooftops would likely be improved by identifying the materials in the image (i.e., to distinguish between roofs and other objects).
However, the combination of Schenkel and Sticlaru does not teach, but Lin does teach:
which process outputs of the plurality of trained convolution layers to classify the hazard (uses neural network to classify unsafe conditions based on captured images, Col. 7, ll. 46-57; see also e.g., Col. 8, ll. 49-58, Col. 10, ll. 34-44 discussing convolutional neural networks),
and each of the plurality of fully-connected layers including a plurality of nodes, each of the nodes representing a presence or absence of a hazard (output nodes correspond to states reflecting dangers, Col. 8, ll. 4-11).
Further, it would have been obvious before the effective filing date of the claimed invention, to combine the property damage recognition with the material classification of Schenkel and Sticlaru with the output nodes reflecting danger states of Lin because known work in one field of endeavor may prompt variations of it for use in the same field based on design incentives, see MPEP 2143.I.F. That is, one of ordinary skill would have recognized the neural network of Schenkel and Sticlaru would likely involve output nodes reflecting dangers, e.g., as taught by Lin, because Schenkel teaches determining dangers.
Regarding claim 2, the combination of Schenkel, Sticlaru and Lin teaches all the limitations of claim 1 and Schenkel further teaches:
wherein the processor segments the media content using a segmentation model (identifies different sections of roofs using model, ¶¶[0068], [0169]).
Regarding claim 3, the combination of Schenkel, Sticlaru and Lin teaches all the limitations of claim 2 and Schenkel further teaches:
wherein the feature comprises a structural feature and the media content is segmented using a segmentation model that detects the structural feature (identifies different sections of roofs using model, ¶¶[0068], [0169]; see also ¶¶[0134]-[0148] discussing identifying different types of properties).
Regarding claim 4, the combination of Schenkel, Sticlaru and Lin teaches all the limitations of claim 2 and Sticlaru further teaches:
wherein the segmentation model comprises one or more feature extraction neural network layers and one or more classifier neural network layers (includes layers for pooling, convolution and output (i.e., material classification), Sect. 2.1.1., pgs. 10-14).
Further, it would have been obvious before the effective filing date of the claimed invention, to combine the property damage recognition of Schenkel with the material classification of Sticlaru because known work in one field of endeavor may prompt variations of it for use in the same field based on design incentives, see MPEP 2143.I.F. That is, Schenkel teaches identifying roofs, e.g., ¶¶[0068]. One of ordinary skill would have recognized identifying rooftops would likely be improved by identifying the materials in the image (i.e., to distinguish between roofs and other objects).
Regarding claim 5, the combination of Schenkel, Sticlaru and Lin teaches all the limitations of claim 1 and Schenkel further teaches:
wherein the processor processes the media content to detect a material associated with the feature (detects roofs shingles, ¶¶[0186]-[0190]).
Regarding claim 6, the combination of Schenkel, Sticlaru and Lin teaches all the limitations of claim 5 and Sticlaru further teaches:
wherein the processor detects the material associated with the feature using a material classification model (uses CNN models to classify materials, e.g., Conclusion, pgs. 37-38).
Further, it would have been obvious before the effective filing date of the claimed invention, to combine the property damage recognition of Schenkel with the material classification of Sticlaru because known work in one field of endeavor may prompt variations of it for use in the same field based on design incentives, see MPEP 2143.I.F. That is, Schenkel teaches identifying roofs, e.g., ¶¶[0068]. One of ordinary skill would have recognized identifying rooftops would likely be improved by identifying the materials in the image (i.e., to distinguish between roofs and other objects).
Regarding claim 8, the combination of Schenkel, Sticlaru and Lin teaches all the limitations of claim 1 and Schenkel further teaches:
wherein the feature comprises a structural feature of the asset (identifies different sections of roofs using model, ¶¶[0068], [0169]; see also ¶¶[0134]-[0148] discussing identifying different types of properties).
However, Schenkel does not teach, but Sticlaru does teach:
and the processor classifies material corresponding to the structural item (uses CNN models to classify materials, e.g., Conclusion, pgs. 37-38).
Further, it would have been obvious before the effective filing date of the claimed invention, to combine the property damage recognition of Schenkel with the material classification of Sticlaru because known work in one field of endeavor may prompt variations of it for use in the same field based on design incentives, see MPEP 2143.I.F. That is, Schenkel teaches identifying roofs, e.g., ¶¶[0068]. One of ordinary skill would have recognized identifying rooftops would likely be improved by identifying the materials in the image (i.e., to distinguish between roofs and other objects).
Regarding claim 9, the combination of Schenkel, Sticlaru and Lin teaches all the limitations of claim 1 and Schenkel further teaches:
wherein the processor calculates a hazard severity corresponding to the hazard associated with the asset (determines hazard intensity/damage degree ratio, ¶[0107] and Fig. 9).
Regarding claim 10, the combination of Schenkel, Sticlaru and Lin teaches all the limitations of claim 1 and Schenkel further teaches:
wherein the processor calculates a damage severity corresponding to the damage associated with the asset (determines damage degree, ¶¶[0109]-[0110], [0193]).
Regarding claims 12-17 and 19-21, claims 12-17 and 19-21 recite similar limitations as claims 1-6 and 8-10 and accordingly are rejected for similar reasons as claims 1-6 and 8-10².
Claim(s) 7 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schenkel, Sticlaru and Lin, further in view of Day, US Pat. No. 11,651,456, herein referred to as “Day”.
Regarding claim 7, the combination of Schenkel, Sticlaru and Lin teaches all the limitations of claim 6 and Schenkel further teaches:
wherein the material classification model is a region-of-interest (ROI) (region of interest, e.g., ¶[0112]).
However, the combination of Schenkel, Sticlaru and Lin does not teach, but Day does teach:
a region-of-interest (ROI) masked-based attention model (performs masking to detect object, Col. 24, ll. 28-59; see also Col. 21, ll. 54-67 discussing region of interest).
Further, it would have been obvious before the effective filing date of the claimed invention, to combine the property damage recognition with material classification of Schenkel, Sticlaru and Lin with the masking of Day because known work in one field of endeavor may prompt variations of it for use in the same field based on design incentives, see MPEP 2143.I.F. That is, one of ordinary skill would have recognized the computer vision of Schenkel, Sticlaru and Lin would likely be improved by masking, e.g., as taught by Day, and accordingly would have modified Schenkel, Sticlaru and Lin to perform masking.
Regarding claim 18, claim 18 recites similar limitations as claim 7 and accordingly is rejected for similar reasons as claim 7.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRENDAN S O'SHEA whose telephone number is (571)270-1064. The examiner can normally be reached Monday to Friday 10-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber can be reached at (571) 270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRENDAN S O'SHEA/Examiner, Art Unit 3626
¹ Examiner notes the exact language of claims 1 and 12 differs but does not find these differences significantly alter the eligibility analysis and accordingly analyzes the claims concurrently here for the sake of brevity.
² Examiner notes claim 15 depends from claim 14 whereas claim 4 depends from claim 2 but the language of the claims is still sufficiently similar such that Examiner analyzes them concurrently here for the sake of brevity.