DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Application
This communication is a Final Office Action in response to the Amendments, Remarks, and Arguments filed on September 16, 2025. Claims 21-40 are pending. No claims are allowed.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/03/2025 was filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Under MPEP 2106, when considering subject matter eligibility under 35 U.S.C. § 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (step 1). If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea) (step 2A prong 1), and if so, it must additionally be determined whether the claim is integrated into a practical application (step 2A prong 2). If an abstract idea is present in the claim without integration into a practical application, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself (step 2B).
In the instant case, claims 21-40 are directed to a method, a device, and a tangible, non-transitory computer-readable medium. Thus, each of the claims falls within one of the four statutory categories. However, the claims also fall within the judicial exception of an abstract idea. Although claims 21, 30, and 36 are directed to different statutory categories, the claim language is substantially similar, and the claims will be addressed together below.
Under Step 2A Prong 1, the test is to identify whether the claims are “directed to” a judicial exception. Examiner notes that the claimed invention is directed to an abstract idea in that the instant application is directed to certain methods of organizing human activity, specifically commercial interactions and behaviors and managing personal behavior and/or interactions between people (see MPEP 2106.04(a)(2)(II)), and mental processes (see MPEP 2106.04(a)(2)(III)).
Claims 21, 30, and 36 recite a computer-implemented method for generating and/or displaying an overall home score for a property, the computer-implemented method comprising: determining, by one or more processors, two or more home score factors, wherein the two or more home score factors are selected from among (i) a fire hazard score, (ii) a safety score, (iii) a weather hazard score, and (iv) a property feature hazard score; generating, by the one or more processors, the overall home score of the property based upon the two or more home score factors; and displaying, via the one or more processors, the overall home score. The claims are similar to the abstract idea found in Electric Power Group.
Examiner notes that claims 21-40 recite a system for receiving a plurality of attributes related to a property and calculating an overall rating and score related to the property, which is directed to concepts that are performed mentally and are a product of human mental work. The limitations suggest a process similar to standard-practice risk management when buying or insuring a property, where historical data and historical attributes related to the house are considered prior to purchase. Because the limitations above closely follow the steps of receiving information, processing the information, and displaying the results of the processing, and the steps involve human judgments, observations, and evaluations that can be practically or reasonably performed in the human mind, the claim recites an abstract idea consistent with the “mental process” grouping set forth in MPEP 2106.04(a)(2)(III). If a claim, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of a generic processor executing computer code stored on a computer medium, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
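For illustration only, the receive-process-display pattern of the claim limitations can be expressed at the same high level of generality in a few lines of generic code. The factor values and the equal-weight averaging below are hypothetical assumptions for the sketch; the claims recite no particular weighting or values.

```python
# Hypothetical sketch of the claimed receive-process-display pattern.
# Factor names track the claim language; the values and the equal-weight
# average are illustrative assumptions, not limitations recited in the claims.

def generate_overall_home_score(factors):
    """Combine two or more home score factors into an overall home score."""
    if len(factors) < 2:
        raise ValueError("the claims recite two or more home score factors")
    # Assumed equal weighting of whichever factors were determined.
    return sum(factors.values()) / len(factors)

# "Determining" two or more home score factors (hypothetical values).
factors = {
    "fire_hazard_score": 72.0,
    "safety_score": 88.0,
}
# "Generating" the overall home score of the property.
overall = generate_overall_home_score(factors)
# "Displaying" the overall home score.
print(f"Overall home score: {overall:.1f}")
```

The sketch uses only generic arithmetic and output, consistent with the characterization above that the steps require no more than generic computation.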
Examiner notes that the claimed invention amounts to a mental process in that the system is collecting information, analyzing or processing the information to determine property characteristics, and subsequently displaying the results of the analysis, which is similar to the abstract ideas identified in Electric Power Group and Classen.
Furthermore, the claims recite the familiar concept of property valuation. As the Supreme Court explained in Alice, claims involving “a fundamental economic practice long prevalent in our system of commerce,” such as the concepts of hedging and intermediated settlement, are patent-ineligible abstract ideas. Alice, 134 S. Ct. at 2356 (quoting Bilski v. Kappos, 561 U.S. 593, 611 (2010)). It follows that the claims at issue here are directed to an abstract idea. Applicants’ claims recite one or more computers configured to receive a user’s property valuations and display that information. Like the risk hedging in Bilski and the concept of intermediated settlement in Alice, the concept of property valuation, that is, determining a property’s market value, is “a fundamental economic practice long prevalent in our system of commerce.” Id. (quoting Bilski, 561 U.S. at 611). Prospective sellers and buyers have long valued property, and doing so is necessary to the functioning of the residential real estate market. As such, claims 21, 30, and 36 are directed to the abstract idea of property valuation and are similar to the abstract idea identified in MPEP 2106.04(a)(2)(II) in grouping “II” in that the claims recite certain methods of organizing human activity such as fundamental economic practices. These are merely further embellishments of the abstract idea and do not further limit the claimed invention to render the claims patentable subject matter. The limitations, substantially comprising the body of the claim, recite processes found in standard practice in property valuations. This is common practice when purchasing or insuring a piece of property.
Because the limitations above closely follow the steps standard in fundamental economic practices such as property valuation, and the steps of the claims involve organizing human activity, the claim recites an abstract idea consistent with the “organizing human activity” grouping set forth in MPEP 2106.04(a)(2)(II).
Each claim is similar to the abstract idea identified in MPEP 2106.04(a)(2)(II) in grouping “II” in that the claims recite certain methods of organizing human activity such as fundamental business practices of property valuation. The claimed invention involves standard practice within the real estate industry in that information related to properties being sold or insured is monitored and results are presented regarding the property. This merely amounts to further embellishments of the abstract idea and does not further limit the claims to render the subject matter patentable.
Because the limitations above closely follow the steps standard in business transactions related to valuing and insuring real property, and interactions between people such as the behaviors of buyers, sellers, and insurers of a property, and the steps of the claims involve organizing human activity, the claim recites an abstract idea consistent with the “organizing human activity” grouping set forth in MPEP 2106.04(a)(2)(II).
The phrase "methods of organizing human activity" is used to describe concepts relating to: fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations); and managing personal behavior or relationships or interactions between people, (including social activities, teaching, and following rules or instructions).
"Commercial interactions" or "legal interactions" include agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations.
An example of a claim reciting a commercial or legal interaction, where the interaction is an agreement in the form of contracts, is found in buySAFE, Inc. v. Google, Inc., 765 F.3d. 1350, 112 USPQ2d 1093 (Fed. Cir. 2014). The agreement at issue in buySAFE was a transaction performance guaranty, which is a contractual relationship. 765 F.3d at 1355, 112 USPQ2d at 1096. The patentee claimed a method in which a computer operated by the provider of a safe transaction service receives a request for a performance guarantee for an online commercial transaction, the computer processes the request by underwriting the requesting party in order to provide the transaction guarantee service, and the computer offers, via a computer network, a transaction guaranty that binds to the transaction upon the closing of the transaction. 765 F.3d at 1351-52, 112 USPQ2d at 1094. The Federal Circuit described the claims as directed to an abstract idea because they were "squarely about creating a contractual relationship--a ‘transaction performance guaranty’." 765 F.3d at 1355, 112 USPQ2d at 1096.
Other examples of subject matter where the commercial or legal interaction is an agreement in the form of contracts include processing insurance claims for a covered loss or policy event under an insurance policy (i.e., an agreement in the form of a contract), Accenture Global Services v. Guidewire Software, Inc., 728 F.3d 1336, 1338-39, 108 USPQ2d 1173, 1175-76 (Fed. Cir. 2013).
An example of a claim reciting advertising is found in Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 714-15, 112 USPQ2d 1750, 1753-54 (Fed. Cir. 2014). The patentee in Ultramercial claimed an eleven-step method for displaying an advertisement (ad) in exchange for access to copyrighted media, comprising steps of receiving copyrighted media, selecting an ad, offering the media in exchange for watching the selected ad, displaying the ad, allowing the consumer access to the media, and receiving payment from the sponsor of the ad. 772 F.3d. at 715, 112 USPQ2d at 1754. The Federal Circuit determined that the "combination of steps recites an abstraction—an idea, having no particular concrete or tangible form" and thus was directed to an abstract idea, which the court described as "using advertising as an exchange or currency."
An example of a claim reciting a commercial or legal interaction in the form of a legal obligation is found in Fort Properties, Inc. v. American Master Lease, LLC, 671 F.3d 1317, 101 USPQ2d 1785 (Fed. Cir. 2012). The patentee claimed a method of "aggregating real property into a real estate portfolio, dividing the interests in the portfolio into a number of deedshares, and subjecting those shares to a master agreement." 671 F.3d at 1322, 101 USPQ2d at 1788. The legal obligation at issue was the tax-free exchanges of real estate. The Federal Circuit concluded that the real estate investment tool designed to enable tax-free exchanges was an abstract concept. 671 F.3d at 1323, 101 USPQ2d at 1789. Examiner notes that the claimed invention is similar to the abstract idea found within Fort Properties in that the claims likewise involve processing information related to interests in real property.
An example of a claim reciting business relations is found in Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 123 USPQ2d 1100 (Fed. Cir. 2017). The business relation at issue in Credit Acceptance is the relationship between a customer and dealer when processing a credit application to purchase a vehicle. The patentee claimed a "system for maintaining a database of information about the items in a dealer’s inventory, obtaining financial information about a customer from a user, combining these two sources of information to create a financing package for each of the inventoried items, and presenting the financing packages to the user." 859 F.3d at 1054, 123 USPQ2d at 1108. The Federal Circuit described the claims as directed to the abstract idea of "processing an application for financing a loan" and found "no meaningful distinction between this type of financial industry practice" and the concept of intermediated settlement in Alice or the hedging concept in Bilski. 859 F.3d at 1054, 123 USPQ2d at 1108. Examiner notes that the claimed invention is similar to the abstract idea in Credit Acceptance Corp., in that the system is processing information related to properties and their associated hazard risks.
Examiner notes that the claimed invention is more like buySAFE, Accenture, and Ultramercial, in that the invention revolves around analyzing real property characteristics. Examiner respectfully submits that the claimed invention falls squarely within grouping “II” and is directed to an abstract idea.
For the above reasons the examiner concludes that the claimed invention has a concept similar to those that the courts have found to be abstract and that the claims are directed to a judicial exception in the form of an abstract idea.
The conclusion that the claim recites an abstract idea within the groupings of MPEP 2106.04(a)(2) remains grounded in the broadest reasonable interpretation consistent with the description of the invention in the specification. For example, the specification describes the invention as a “method and system for evaluating and generating a home score for a property” (App. Spec. ¶ 2). Accordingly, the Examiner submits that claims 21, 30, and 36 recite an abstract idea based on the language identified in those claims, and that the abstract ideas previously identified based on that language remain consistent with the groupings of Step 2A Prong 1 of MPEP 2106.04(a).
If the claims are directed toward the judicial exception of an abstract idea, it must then be determined under Step 2A Prong 2 whether the judicial exception is integrated into a practical application. Examiner notes that the considerations under Step 2A Prong 2 comprise most of the considerations previously evaluated in the context of Step 2B. The Examiner submits that the considerations which previously led to the determination that the claim does not recite “significantly more” at Step 2B would be evaluated the same under Step 2A Prong 2 and would result in the determination that the claim does not integrate the abstract idea into a practical application.
Claims can recite a mental process even if they are claimed as being performed on a computer. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures "can be carried out in existing computers long in use, no new machinery being necessary." 409 U.S. at 67, 175 USPQ at 675. See also Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699 (concluding that concept of "anonymous loan shopping" recited in a computer system claim is an abstract idea because it could be "performed by humans without a computer").
In evaluating whether a claim that requires a computer recites a mental process, examiners should carefully consider the broadest reasonable interpretation of the claim in light of the specification. For instance, examiners should review the specification to determine if the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept. In these situations, the claim is considered to recite a mental process.
An example of a case identifying a mental process performed on a generic computer as an abstract idea is Voter Verified, Inc. v. Election Systems & Software, LLC, 887 F.3d 1376, 1385, 126 USPQ2d 1498, 1504 (Fed. Cir. 2018). In this case, the Federal Circuit relied upon the specification in explaining that the claimed steps of voting, verifying the vote, and submitting the vote for tabulation are "human cognitive actions" that humans have performed for hundreds of years. The claims therefore recited an abstract idea, despite the fact that the claimed voting steps were performed on a computer. 887 F.3d at 1385, 126 USPQ2d at 1504. Another example is FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 USPQ2d 1293 (Fed. Cir. 2016). The patentee in FairWarning claimed a system and method of detecting fraud and/or misuse in a computer environment, in which information regarding accesses of a patient’s personal health information was analyzed according to one of several rules (i.e., related to accesses in excess of a specific volume, accesses during a pre-determined time interval, or accesses by a specific user) to determine if the activity indicates improper access. 839 F.3d. at 1092, 120 USPQ2d at 1294. The court determined that these claims were directed to a mental process of detecting misuse, and that the claimed rules here were "the same questions (though perhaps phrased with different words) that humans in analogous situations detecting fraud have asked for decades, if not centuries." 839 F.3d. at 1094-95, 120 USPQ2d at 1296.
An example of a case in which a computer was used as a tool to perform a mental process is Mortgage Grader, 811 F.3d. at 1324, 117 USPQ2d at 1699. The patentee in Mortgage Grader claimed a computer-implemented system for enabling borrowers to anonymously shop for loan packages offered by a plurality of lenders, comprising a database that stores loan package data from the lenders, and a computer system providing an interface and a grading module. The interface prompts a borrower to enter personal information, which the grading module uses to calculate the borrower’s credit grading, and allows the borrower to identify and compare loan packages in the database using the credit grading. 811 F.3d. at 1318, 117 USPQ2d at 1695. The Federal Circuit determined that these claims were directed to the concept of "anonymous loan shopping", which was a concept that could be "performed by humans without a computer." 811 F.3d. at 1324, 117 USPQ2d at 1699. Another example is Berkheimer v. HP, Inc., 881 F.3d 1360, 125 USPQ2d 1649 (Fed. Cir. 2018), in which the patentee claimed methods for parsing and evaluating data using a computer processing system. The Federal Circuit determined that these claims were directed to mental processes of parsing and comparing data, because the steps were recited at a high level of generality and merely used computers as a tool to perform the processes. 881 F.3d at 1366, 125 USPQ2d at 1652-53.
Both product claims (e.g., computer system, computer-readable medium, etc.) and process claims may recite mental processes. For example, in Mortgage Grader, the patentee claimed a computer-implemented system and a method for enabling borrowers to anonymously shop for loan packages offered by a plurality of lenders, comprising a database that stores loan package data from the lenders, and a computer system providing an interface and a grading module. The Federal Circuit determined that both the computer-implemented system and method claims were directed to "anonymous loan shopping", which was an abstract idea because it could be "performed by humans without a computer." 811 F.3d. at 1318, 1324-25, 117 USPQ2d at 1695, 1699-1700. See also FairWarning IP, 839 F.3d at 1092, 120 USPQ2d at 1294 (identifying both system and process claims for detecting improper access of a patient's protected health information in a health-care system computer environment as directed to abstract idea of detecting fraud); Content Extraction & Transmission LLC v. Wells Fargo Bank, N.A., 776 F.3d 1343, 1345, 113 USPQ2d 1354, 1356 (Fed. Cir. 2014) (system and method claims of inputting information from a hard copy document into a computer program). Accordingly, the phrase "mental processes" should be understood as referring to the type of abstract idea, and not to the statutory category of the claim.
Examples of product claims reciting mental processes include: an application program interface for extracting and processing information from a diversity of types of hard copy documents, Content Extraction, 776 F.3d at 1345, 113 USPQ2d at 1356; and a computer readable medium containing program instructions for detecting fraud, CyberSource, 654 F.3d at 1368 n.1, 99 USPQ2d at 1692 n.1.
Examiner notes that the claimed invention is similar to the Voter Verified, Inc., FairWarning, Mortgage Grader, Berkheimer, Content Extraction, and CyberSource decisions, wherein the court identified the computer system, or the use of “machine learning,” as merely serving as the generic computer, computing environment, or tool to perform the mental process or abstract idea.
The second part of the Alice/Mayo test is often referred to as a search for an inventive concept. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 217, 110 USPQ2d 1976, 1981 (2014) (citing Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71-72, 101 USPQ2d 1961, 1966 (2012)).
Evaluating additional elements to determine whether they amount to an inventive concept requires considering them both individually and in combination to ensure that they amount to significantly more than the judicial exception itself. Because this approach considers all claim elements, the Supreme Court has noted that "it is consistent with the general rule that patent claims ‘must be considered as a whole.’" Alice Corp., 573 U.S. at 218 n.3, 110 USPQ2d at 1981 (quoting Diamond v. Diehr, 450 U.S. 175, 188, 209 USPQ 1, 8-9 (1981)). Consideration of the elements in combination is particularly important, because even if an additional element does not amount to significantly more on its own, it can still amount to significantly more when considered in combination with the other elements of the claim. See, e.g., Rapid Litig. Mgmt. v. CellzDirect, 827 F.3d 1042, 1051, 119 USPQ2d 1370, 1375 (Fed. Cir. 2016) (process reciting combination of individually well-known freezing and thawing steps was "far from routine and conventional" and thus eligible); BASCOM Global Internet Servs. v. AT&T Mobility LLC, 827 F.3d 1341, 1350, 119 USPQ2d 1236, 1242 (Fed. Cir. 2016) (inventive concept may be found in the non-conventional and non-generic arrangement of components that are individually well-known and conventional).
Limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception include: simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d)); and generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook, 437 U.S. 584, 588-90, 198 USPQ 193, 197-98 (1978) (see MPEP § 2106.05(h)).
It is important to note that in order for a method claim to improve computer functionality, the broadest reasonable interpretation of the claim must be limited to computer implementation. That is, a claim whose entire scope can be performed mentally, cannot be said to improve computer technology. Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 120 USPQ2d 1473 (Fed. Cir. 2016) (a method of translating a logic circuit into a hardware component description of a logic circuit was found to be ineligible because the method did not employ a computer and a skilled artisan could perform all the steps mentally). Similarly, a claimed process covering embodiments that can be performed on a computer, as well as embodiments that can be practiced verbally or with a telephone, cannot improve computer technology. See RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1328, 122 USPQ2d 1377, 1381 (Fed. Cir. 2017) (process for encoding/decoding facial data using image codes assigned to particular facial features held ineligible because the process did not require a computer).
Examples that the courts have indicated may not be sufficient to show an improvement in computer functionality include: accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016); mere automation of manual processes, such as using a generic computer to process an application for financing a purchase, Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055, 123 USPQ2d 1100, 1108-09 (Fed. Cir. 2017), or speeding up a loan-application process by enabling borrowers to avoid physically going to or calling each lender and filling out a loan application, LendingTree, LLC v. Zillow, Inc., 656 Fed. App'x 991, 996-97 (Fed. Cir. 2016) (non-precedential); and providing historical usage information to users while they are inputting data, in order to improve the quality and organization of information added to a database, because "an improvement to the information stored by a database is not equivalent to an improvement in the database’s functionality," BSG Tech LLC v. Buyseasons, Inc., 899 F.3d 1281, 1287-88, 127 USPQ2d 1688, 1693-94 (Fed. Cir. 2018).
To show that the involvement of a computer assists in improving the technology, the claims must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. See MPEP § 2106.05(f) for more information about mere instructions to apply an exception.
Examples that the courts have indicated may not be sufficient to show an improvement to technology include a commonplace business method being applied on a general purpose computer, Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1976; Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
The instant application fails to integrate the judicial exception into a practical application because the claims merely recite the words “apply it” (or an equivalent) with the judicial exception or merely include instructions to implement the abstract idea. The instant application is directed to a method instructing the reader to implement the identified method of organizing human activity of fundamental business and economic practices such as property valuation and risk mitigation. For instance, the additional elements or combination of elements other than the abstract idea itself include elements such as a “processor”, “memory”, “sensor”, and “using a trained machine learning data evaluation model”, recited at a high level of generality. The claimed computer structure, read in light of the specification, can be a “processor”, “memory”, and “a trained machine learning data evaluation model”, and includes a wide range of possible devices comprising a number of components that are “well-known”, including an indiscriminate “computer” (e.g., processor, memory). Thus, the claimed structure amounts to appending generic computer elements to the abstract idea comprising the body of the claim. Nothing amounts to an improvement to the machine learning techniques or processes. The system merely appends computer processes, performing their intended purposes, to the abstract idea. The computing elements are involved only at a general, high level and have no particular role within any of the functions other than as generically claimed “devices” and “units”.
Examiner notes that the claimed invention is more like the implementations of computer elements found in FairWarning IP, LLC, Credit Acceptance Corp. v. Westlake Services, LendingTree, LLC v. Zillow, Inc., BSG Tech LLC v. Buyseasons, Inc., Alice Corp., and Versata Dev. Group, Inc., and fails to implement a technical improvement, a practical application, or significantly more than the abstract idea.
Similarly, reciting the abstract idea as software functions used to program a generic computer is not significant or meaningful: generic computers are programmed with software to perform various functions every day. A programmed generic computer is not a particular machine and by itself does not amount to an inventive concept because, as discussed in MPEP 2106.05(a), adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, as discussed in Alice, 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)), is not enough to integrate the exception into a practical application. Further, it is not relevant that a human may perform a task differently from a computer. It is necessarily true that a human might apply an abstract idea in a different manner from a computer. What matters is the application: “stating an abstract idea while adding the words ‘apply it with a computer’” will not render an abstract idea non-abstract. Tranxition v. Lenovo, Nos. 2015-1907, -1941, -1958 (Fed. Cir. Nov. 16, 2016), slip op. at 7-8.
Examples of product claims reciting mental processes include:
1. An application program interface for extracting and processing information from a diversity of types of hard copy documents – Content Extraction, 776 F.3d at 1345, 113 USPQ2d at 1356; and
2. A computer readable medium containing program instructions for detecting fraud – CyberSource, 654 F.3d at 1368 n.1, 99 USPQ2d at 1692 n.1.
Examiner notes that the claimed invention is similar to the claims at issue in Voter Verified, Inc., FairWarning, Mortgage Grader, Berkheimer, Content Extraction, and CyberSource, wherein the courts identified that the computer system and "machine learning" merely serve as the generic computer, computing environment, or tool used to perform the mental process.
Here, the instructions entirely comprise the abstract idea, leaving little if any aspects of the claim for further consideration under Step 2A Prong 2. In short, the role of the generic computing elements recited in claims 21, 30, and 36 is the same as the role of the computer in the claims considered by the Supreme Court in Alice, and each claim as a whole amounts merely to an instruction to apply the abstract idea on a generic computing system. Therefore, the claims fail to integrate the abstract idea into a practical application (MPEP 2106.04(d)). Under MPEP 2106.05, this supports the conclusion that the claims are directed to an abstract idea, and the analysis proceeds to Step 2B.
Many considerations evaluated in Step 2A need not be reevaluated in Step 2B because the outcome will be the same. Here, on the basis of the additional elements other than the abstract idea, considered individually and in combination as discussed above, the Examiner respectfully submits that claims 21, 30, and 36 do not contain any additional elements that, individually or as an ordered combination, amount to an inventive concept, and the claims are therefore ineligible.
With respect to the dependent claims, they have been considered and are not found to recite anything that amounts to significantly more than the abstract idea.
Claims 22-29, 31-35, and 37-40 are directed to further embellishments of the central theme of the abstract idea, which is processing information in order to provide valuations based on the property information. This is not enough, as addressed above, to provide significantly more to the claims.
Therefore, since there are no limitations in the claim that transform the abstract idea into a patent eligible application such that the claim amounts to significantly more than the abstract idea itself, the claims are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter. See MPEP 2106.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 21, 30, and 36 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20220405856 to Hedges et al. (hereinafter Hedges) in view of U.S. Patent Application Publication No. 20220335366 to Sanchez.
Referring to Claims 21, 30, and 36 (substantially similar in scope and language), Hedges discloses a computer-implemented method for generating and/or displaying an updated overall home score for a first property having at least a first smart device of a plurality of smart devices associated with the first property (see at least Hedges: Title and Abstract), a computer system, and a tangible, non-transitory computer-readable medium, wherein the non-transitory computer-readable medium further includes instructions that, when executed by the one or more processors, cause the computing device to perform the recited steps, the computer-implemented method comprising:
Hedges discloses displaying, via the one or more processors, the overall home score (see at least Hedges: ¶ 81 “the hazard score is a risk score (e.g., an overall risk score). The risk score can be associated with or represent the overall likelihood of a claim loss being filed, predicted claim loss frequency, expected loss severity, and/or any other key metric. This risk score is preferably dependent on the likelihood of hazard exposure (e.g., in contrast to the vulnerability score), but can alternatively be independent of and/or conditional on the hazard exposure. The risk score can be predicted based on: property measurements (e.g., directly), property attribute values extracted from property measurements, historical weather and/or hazard data, another hazard score (e.g., regional exposure score), and/or any other suitable information.”; see also Hedges: ¶ 114 “the risk score can represent an overall risk of a claim filing, incorporating both regional risk and vulnerability”; see also Hedges: ¶ 62 “Attribute values can be determined using an attribute value model that can include: CV/ML attribute extraction, any neural network and/or cascade of neural networks, one or more neural networks per attribute, key point extraction, SIFT, calculation, heuristics (e.g., inferring the number of stories of a property based on the height of a property), classification models (e.g., binary classifiers, multiclass classifiers, semantic segmentation models, instance-based segmentation models, etc.), regression models, object detectors, any computer vision and/or machine learning method, and/or any other technique. Different attribute values can be determined using different methods, but can alternatively be determined in the same manner.”).
retrieving, via one or more processors, additional home telematics data, wherein the additional home telematics data is associated with at least an additional property having similar home characteristics as the first property
Hedges discloses retrieving, via one or more processors, additional house or home data, wherein additional home data is associated with at least an additional property having similar home characteristics as the first property (see at least Hedges: ¶ 26 “method can analyze the effect of mitigation measures for a property, including determining the effect of one or more mitigation measures on the property vulnerability to a given hazard. For example, the method can use a mitigated vulnerability score to determine whether a given mitigation measure or measures will be effective and/or worth spending resources on, to determine which mitigations to recommend, to identify a set of properties (e.g., for insurance, maintenance, valuation etc.), to determine whether community mitigation measures should be implemented, and/or for any other use.”; see also Hedges: ¶ 25 “evaluating hazard exposure risk based on property location (e.g., based on historical weather data), the method can include training a model to ingest property-specific attribute values to estimate the probability that a claim associated with the property (e.g., insurance claim, aid claim, etc.) will be submitted and accepted and/or estimate other claim parameters (e.g., loss amount, etc.)”; see also Hedges: ¶ 34 “Determining a property S100 can function to identify a property for hazard analysis, such as attribute value determination, for hazard score calculation, and/or for hazard model training. S100 can be performed before S200, after S300 (e.g., where attribute values have been previously determined for each of a set of properties), during S500, and/or at any other time.”; see also Hedges: ¶ 37 “S100 can include determining a single property, determining a set of properties, and/or any other suitable number of properties.”; see also Hedges: ¶ 51 “determine property-specific values of one or more components of the property of interest. S300 can be performed after S200, in response to a request (e.g., for a property), in batches for groups of properties, iteratively for each of a set of properties”; see also Hedges: ¶ 77; see also Hedges: ¶ 62 “attribute values can be determined by: extracting features from property measurements (e.g., wherein the attribute values are determined based on the extracted feature values), extracting attribute values directly from property measurements, retrieving values from a database or a third party source (e.g., third-party database, MLS database, city permitting database, historical weather and/or hazard database, tax assessor database, etc.), using a predetermined value (e.g., assuming a given mitigation action has been performed as described in S400), calculating and/or adjusting a value (e.g., from an extracted value and a scaling factor; adjusting a previously determined attribute value as described in S400; etc.), and/or otherwise determined”; see also Hedges: ¶ 80-82, 84, and 90-91).
Hedges does not explicitly state that the data is retrieved from a home telematics data system of a plurality of homes.
However, Sanchez, which describes a method and system for processing information for insurance purposes, teaches that it is known to incorporate machine learning techniques when processing asset information, such as home properties, using intelligent home telematics information to train the model to determine asset/property characteristics (see at least Sanchez: ¶ 81-85 “the machine learning programs may be trained by inputting sample data sets or certain data into the programs, such as image, mobile device, vehicle telematics, autonomous vehicle, and/or intelligent home telematics data.”).
Sanchez further teaches wherein the trained machine learning model is trained with home telematics data to determine home scores, wherein the telematics data was gathered by: at a plurality of time intervals over a time period, collecting sensor data from one or more sensors, and transmitting the sensor data collected over the time period to the one or more processors (see at least Sanchez: ¶ 49 “The smart home computing device may automatically acquire image data using sensors on the one or more additional devices and transmit the acquired image data to PM computing device in image data message 126.”; see also Sanchez: ¶ 81 and 85-86).
Hedges further discloses the hazard score and “Any score can be associated with a timeframe (e.g., the probability of hazard exposure within the timeframe, the probability of damage occurring within the timeframe, the probability of filing a claim within the timeframe, etc.) and/or unassociated with a timeframe.” (see at least Hedges ¶ 72; see also Hedges: ¶ 74 “dates (e.g., a timeframe under consideration, dates of a hypothetical or real claim filing, dates of previous hazard events, etc.)”; see also Hedges: ¶ 91; see also Hedges: ¶ 102: “the training data is segmented into positive and negative sets, wherein the positive or negative classification for each property is the binary training target. In a first example of the first embodiment, for a hazard model (e.g., vulnerability model, risk model, etc.) with binary claim occurrence as the training target, properties in the set of training properties with claims submitted for fire damage (e.g., within the historical timeframe) are in the positive dataset”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of wherein the trained machine learning data evaluation model is trained with home telematics data to determine home characteristic data (as disclosed by Sanchez) into the method and system for home scoring based on property characteristics, which determines and applies a weighting factor when scoring a home based on property characteristics using trained machine learning algorithms (as disclosed by Hedges). One of ordinary skill in the art would have been motivated to incorporate this feature because it would aid the insurance provider in determining policy rates and additionally aid the policyholder in determining the amount of coverage they will need (see Sanchez ¶ 6).
Furthermore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of wherein the trained machine learning data evaluation model is trained with home telematics data to determine home characteristic data (as disclosed by Sanchez) into the method and system for home scoring based on property characteristics using trained machine learning algorithms (as disclosed by Hedges), because the claimed invention is merely a simple arrangement of old elements, with each performing the same function it had been known to perform, yielding no more than one would expect from such an arrangement. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions; the combination would have yielded nothing more than predictable results to one of ordinary skill in the art before the effective filing date of the claimed invention (i.e., predictable results are obtained by adding the well-known feature of training the machine learning data evaluation model with home telematics data to determine home characteristic data into Hedges' method and system for home scoring based on property characteristics using trained machine learning algorithms). See also MPEP § 2143(I)(A).
The combination of Hedges and Sanchez further teaches:
determining, by one or more processors, two or more home score factors, wherein the two or more home score factors are representative of the additional property having one or more home characteristics similar to the first property and are selected from among (i) a fire hazard score, (ii) a safety score, (iii) a weather hazard score, and (iv) a property feature hazard score (see at least Hedges: ¶ 66-84 “the risk score can be determined using a risk model that ingests: property attribute values and historical weather and/or hazard data for the property location”; see at least Hedges: ¶ 19 “risk model and/or vulnerability model can be trained on historical insurance claim data, such that the respective scores are associated with a probability of or expected: claim occurrence, claim loss, damage, claim rejection, and/or any other metric”; see also Hedges: ¶ 25 “evaluating hazard exposure risk based on property location (e.g., based on historical weather data), the method can include training a model to ingest property-specific attribute values to estimate the probability that a claim associated with the property (e.g., insurance claim, aid claim, etc.) will be submitted and accepted and/or estimate other claim parameters (e.g., loss amount, etc.)”; see also Hedges: ¶ 62 “attribute values can be determined by: extracting features from property measurements (e.g., wherein the attribute values are determined based on the extracted feature values), extracting attribute values directly from property measurements, retrieving values from a database or a third party source (e.g., third-party database, MLS database, city permitting database, historical weather and/or hazard database, tax assessor database, etc.), using a predetermined value (e.g., assuming a given mitigation action has been performed as described in S400), calculating and/or adjusting a value (e.g., from an extracted value and a scaling factor; adjusting a previously determined attribute value as described in S400; etc.), and/or otherwise determined”; see also Hedges: ¶ 80-82, 84, and 90-91; see at least Hedges: ¶ 26 “method can analyze the effect of mitigation measures for a property, including determining the effect of one or more mitigation measures on the property vulnerability to a given hazard. For example, the method can use a mitigated vulnerability score to determine whether a given mitigation measure or measures will be effective and/or worth spending resources on, to determine which mitigations to recommend, to identify a set of properties (e.g., for insurance, maintenance, valuation etc.), to determine whether community mitigation measures should be implemented, and/or for any other use.”; see also Hedges: ¶ 34 “Determining a property S100 can function to identify a property for hazard analysis, such as attribute value determination, for hazard score calculation, and/or for hazard model training. S100 can be performed before S200, after S300 (e.g., where attribute values have been previously determined for each of a set of properties), during S500, and/or at any other time.”; see also Hedges: ¶ 37 “S100 can include determining a single property, determining a set of properties, and/or any other suitable number of properties.”; see also Hedges: ¶ 51 “determine property-specific values of one or more components of the property of interest. S300 can be performed after S200, in response to a request (e.g., for a property), in batches for groups of properties, iteratively for each of a set of properties”; see also Hedges: ¶ 77; see also Shoup: ¶ 86: “[t]he system 10 may intake background information relating to the residents of the neighboring properties adjacent to or in close proximity to the property of interest. For example, the system 10 may intake information such as criminal records, sex offences and other types of information. This information may then be provided to the user Un via the reports and/or maps as described in other sections”; see also Shoup: ¶ 94-96: “Once the safety data has been processed, the system 10 (via the data reporting application 121) may provide safety and rating report(s), interactive map(s) (FIG. 3) and/or other formats of safety data as described herein.”);
generating, by the one or more processors, and the retrieved additional home telematics data, the first overall home score of the first property based upon the two or more home score factors associated with at least the additional property having similar home characteristics as the first property (see at least Hedges: ¶ 81 “the hazard score is a risk score (e.g., an overall risk score). The risk score can be associated with or represent the overall likelihood of a claim loss being filed, predicted claim loss frequency, expected loss severity, and/or any other key metric. This risk score is preferably dependent on the likelihood of hazard exposure (e.g., in contrast to the vulnerability score), but can alternatively be independent of and/or conditional on the hazard exposure. The risk score can be predicted based on: property measurements (e.g., directly), property attribute values extracted from property measurements, historical weather and/or hazard data, another hazard score (e.g., regional exposure score), and/or any other suitable information.”; see also Hedges: ¶ 114 “the risk score can represent an overall risk of a claim filing, incorporating both regional risk and vulnerability”; see also Hedges: ¶ 62 “Attribute values can be determined using an attribute value model that can include: CV/ML attribute extraction, any neural network and/or cascade of neural networks, one or more neural networks per attribute, key point extraction, SIFT, calculation, heuristics (e.g., inferring the number of stories of a property based on the height of a property), classification models (e.g., binary classifiers, multiclass classifiers, semantic segmentation models, instance-based segmentation models, etc.), regression models, object detectors, any computer vision and/or machine learning method, and/or any other technique. Different attribute values can be determined using different methods, but can alternatively be determined in the same manner.”; see at least Sanchez: ¶ 49 “The smart home computing device may automatically acquire image data using sensors on the one or more additional devices and transmit the acquired image data to PM computing device in image data message 126.”; see also Sanchez: ¶ 81 and 85-86; see at least Hedges: ¶ 26 “method can analyze the effect of mitigation measures for a property, including determining the effect of one or more mitigation measures on the property vulnerability to a given hazard. For example, the method can use a mitigated vulnerability score to determine whether a given mitigation measure or measures will be effective and/or worth spending resources on, to determine which mitigations to recommend, to identify a set of properties (e.g., for insurance, maintenance, valuation etc.), to determine whether community mitigation measures should be implemented, and/or for any other use.”; see also Hedges: ¶ 25 “evaluating hazard exposure risk based on property location (e.g., based on historical weather data), the method can include training a model to ingest property-specific attribute values to estimate the probability that a claim associated with the property (e.g., insurance claim, aid claim, etc.) will be submitted and accepted and/or estimate other claim parameters (e.g., loss amount, etc.)”; see also Hedges: ¶ 34 “Determining a property S100 can function to identify a property for hazard analysis, such as attribute value determination, for hazard score calculation, and/or for hazard model training. S100 can be performed before S200, after S300 (e.g., where attribute values have been previously determined for each of a set of properties), during S500, and/or at any other time.”; see also Hedges: ¶ 37 “S100 can include determining a single property, determining a set of properties, and/or any other suitable number of properties.”; see also Hedges: ¶ 51 “determine property-specific values of one or more components of the property of interest. S300 can be performed after S200, in response to a request (e.g., for a property), in batches for groups of properties, iteratively for each of a set of properties”; see also Hedges: ¶ 77);
retrieving, via one or more processors, first property home telematics data from at least the first smart device of a plurality of smart devices located within the first property, wherein the first property home telematics data is indicative of at least a mitigating factor not present within the additional home telematics data associated with at least the additional property having similar home characteristics as the first property (see at least Hedges: ¶ 121 “the outputs can be used to determine a set of mitigation measures for the property (e.g., high-impact mitigation measures that change the hazard score above a threshold amount). In an illustrative example, an unmitigated hazard score can be compared to each of a set of mitigated hazard scores, wherein each mitigated hazard score corresponds to a different mitigation measure, to determine one or more high-impact mitigation measures (e.g., with the largest difference between the unmitigated and mitigated hazard scores)”; see also Hedges: ¶ 115 recalculated hazard scores based on mitigating factors; see also Hedges: ¶ 83-85 and 87 “Mitigation measures (e.g., mitigation actions) can be represented as an adjustment to one or more attribute values (e.g., mitigable attributes, where an adjustment is associated with each mitigable attribute)”; see at least Sanchez: ¶ 49 “The smart home computing device may automatically acquire image data using sensors on the one or more additional devices and transmit the acquired image data to PM computing device in image data message 126.”; see also Sanchez: ¶ 81 and 85-86);
generating, by one or more processors, an updated overall home score, wherein the updated overall home score weights the two or more home score factors differently than the first overall home score based at least in part on the mitigating factor not present within the additional home telematics data (see at least Hedges: ¶ 67 “through hazard score validation, during model training (e.g., attributes with weights above a threshold value are selected), using a deep learning model, based on the mitigation and/or zone classification, and/or via any other selection method or combination of methods”; see also Hedges: ¶ 56 “Condition-related attributes can be a rating for a single structure, a minimum rating across multiple structures, a weighted rating across multiple structures, and/or any other individual or aggregate value.”; see also Hedges: ¶ 56 “Condition-related attributes can include: roof condition (e.g., tarp presence, material degradation, rust, missing or peeling material, sealing, natural and/or unnatural discoloration, defects, loose organic matter, ponding, patching, streaking, etc.), exterior condition, accessory structure condition, yard debris and/or lot debris (e.g., presence, coverage, ratio of coverage, etc.), lawn condition, pool condition, driveway condition, tree parameters (e.g., overhang information, height, etc.), vegetation parameters (e.g., coverage, density, setback, location within one or more zones relative to the property), presence of vent coverings (e.g., ember-proof vent coverings), structure condition, occlusion (e.g., pool occlusion, roof occlusion, etc.), pavement condition (e.g., percent of paved area that is deteriorated), resource usage (e.g., energy usage, gas usage, etc.), and/or other parameters that are variable and/or controllable by a resident.”); and
displaying, via the one or more processors, the updated overall home score (see at least Hedges: ¶ 81 “the hazard score is a risk score (e.g., an overall risk score). The risk score can be associated with or represent the overall likelihood of a claim loss being filed, predicted claim loss frequency, expected loss severity, and/or any other key metric. This risk score is preferably dependent on the likelihood of hazard exposure (e.g., in contrast to the vulnerability score), but can alternatively be independent of and/or conditional on the hazard exposure. The risk score can be predicted based on: property measurements (e.g., directly), property attribute values extracted from property measurements, historical weather and/or hazard data, another hazard score (e.g., regional exposure score), and/or any other suitable information.”; see also Hedges: ¶ 114 “the risk score can represent an overall risk of a claim filing, incorporating both regional risk and vulnerability”; see also Hedges: ¶ 62 “Attribute values can be determined using an attribute value model that can include: CV/ML attribute extraction, any neural network and/or cascade of neural networks, one or more neural networks per attribute, key point extraction, SIFT, calculation, heuristics (e.g., inferring the number of stories of a property based on the height of a property), classification models (e.g., binary classifiers, multiclass classifiers, semantic segmentation models, instance-based segmentation models, etc.), regression models, object detectors, any computer vision and/or machine learning method, and/or any other technique. Different attribute values can be determined using different methods, but can alternatively be determined in the same manner.”).
Claim(s) 22-29, 31-35, and 37-40 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20220405856 to Hedges et al. (hereinafter Hedges) in view of U.S. Patent Application Publication No. 20220335366 to Sanchez, and further in view of U.S. Patent Application Publication No. 20210150651 to Shoup (hereinafter Shoup).
Referring to Claims 22, 31, and 37 (substantially similar in scope and language), the combination of Hedges and Sanchez teaches the computer-implemented method of claim 21, computer system of claim 30, and non-transitory computer-readable medium of claim 36, but fails to state:
wherein the method further comprises displaying, via the one or more processors, a map showing indications of home scores of other properties, and wherein the indications of home scores of other properties comprise color coded indications of overall home scores of other properties.
Hedges discloses displaying, via the one or more processors, the overall home score (see at least Hedges: ¶ 81 “the hazard score is a risk score (e.g., an overall risk score). The risk score can be associated with or represent the overall likelihood of a claim loss being filed, predicted claim loss frequency, expected loss severity, and/or any other key metric. This risk score is preferably dependent on the likelihood of hazard exposure (e.g., in contrast to the vulnerability score), but can alternatively be independent of and/or conditional on the hazard exposure. The risk score can be predicted based on: property measurements (e.g., directly), property attribute values extracted from property measurements, historical weather and/or hazard data, another hazard score (e.g., regional exposure score), and/or any other suitable information.”; see also Hedges: ¶ 114 “the risk score can represent an overall risk of a claim filing, incorporating both regional risk and vulnerability”; see also Hedges: ¶ 62 “Attribute values can be determined using an attribute value model that can include: CV/ML attribute extraction, any neural network and/or cascade of neural networks, one or more neural networks per attribute, key point extraction, SIFT, calculation, heuristics (e.g., inferring the number of stories of a property based on the height of a property), classification models (e.g., binary classifiers, multiclass classifiers, semantic segmentation models, instance-based segmentation models, etc.), regression models, object detectors, any computer vision and/or machine learning method, and/or any other technique. Different attribute values can be determined using different methods, but can alternatively be determined in the same manner.”).
However, Shoup, which is directed to a property and neighborhood assessment system and method, teaches that it is known to provide a user with a map of properties that are color coded based on the assessed data, thereby teaching displaying, via the one or more processors, a map including indications of other properties (see at least Shoup: ¶ 81-83 “the geographic map database 132 may include maps of each geographical area that the system 10 may support, with the maps including street address and other types of identifying information for each property included within the geographic regions”);
in response to receiving the selection, displaying, via the one or more processors, information of the second property including: (i) a fire hazard score of the second property, (ii) a weather hazard score of the second property, and/or (iii) a property feature hazard score of the second property (see at least Shoup: ¶ 81-83 “the geographic map database 132 may include maps of each geographical area that the system 10 may support, with the maps including street address and other types of identifying information for each property included within the geographic regions”; see also Shoup: ¶ 85 “the system 10 may create and provide various reports to the user(s) Un via the data reporting application 121. The reports may include the time period over which the safety information has been aggregated, and textual data describing the safety data for each applicable offense On, the safety ratings(s) (scores) of the property and its surrounding neighborhood, charts that graphically display the safety data and the ratings (e.g., bar charts, pie charts, etc.), maps, any other types of data representations and any combinations thereof. In some embodiments, safety data and/or ratings contained in the report(s) may be color-coded as described above (with respect to FIG. 3) to provide a visual representation of the data”; see also Shoup: ¶ 86: “[t]he system 10 may intake background information relating to the residents of the neighboring properties adjacent to or in close proximity to the property of interest. For example, the system 10 may intake information such as criminal records, sex offences and other types of information. This information may then be provided to the user Un via the reports and/or maps as described in other sections”; see also Shoup: ¶ 94-96: “Once the safety data has been processed, the system 10 (via the data reporting application 121) may provide safety and rating report(s), interactive map(s) (FIG. 3) and/or other formats of safety data as described herein.”).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of displaying properties adjacent to other properties that have similar information, hazard features, and safety ratings within a displayed map (as disclosed by Shoup) to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes (as disclosed by Hedges) to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources. One of ordinary skill in the art would have been motivated to apply the known technique of displaying properties adjacent to other properties that have similar information, hazard features, and safety ratings within a displayed map because it would assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources and to quantify the overall safety of the property and of the surrounding neighborhood (see Shoup: ¶ 2 and 46).
Furthermore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of displaying properties adjacent to other properties that have similar information, hazard features, and safety ratings within a displayed map (as disclosed by Shoup) to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes (as disclosed by Hedges) to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources, because the claimed invention is merely applying a known technique to a known method ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007).
In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by applying the known technique of displaying properties adjacent to other properties that have similar information, hazard features, and safety ratings within a displayed map to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources). See also MPEP § 2143(I)(D).
Referring to Claims 23, 32, and 38 (substantially similar in scope and language), the combination of Hedges and Sanchez teaches the computer-implemented method of claim 21, computer system of claim 30, and non-transitory computer-readable medium of claim 36. The combination fails to state wherein the property is a first property, and wherein the method further comprises: displaying, via the one or more processors, a map showing indications of home scores of other properties.
However, Shoup teaches wherein the property is a first property, and wherein the method further comprises: displaying, via the one or more processors, a map showing indications of home scores of other properties (see at least Shoup: ¶ 81-83 “the geographic map database 132 may include maps of each geographical area that the system 10 may support, with the maps including street address and other types of identifying information for each property included within the geographic regions”; see also Hedges: ¶ 81 “the hazard score is a risk score (e.g., an overall risk score). The risk score can be associated with or represent the overall likelihood of a claim loss being filed, predicted claim loss frequency, expected loss severity, and/or any other key metric. This risk score is preferably dependent on the likelihood of hazard exposure (e.g., in contrast to the vulnerability score), but can alternatively be independent of and/or conditional on the hazard exposure. The risk score can be predicted based on: property measurements (e.g., directly), property attribute values extracted from property measurements, historical weather and/or hazard data, another hazard score (e.g., regional exposure score), and/or any other suitable information.”; see also Hedges: ¶ 114 “the risk score can represent an overall risk of a claim filing, incorporating both regional risk and vulnerability”; see also Hedges: ¶ 62 “Attribute values can be determined using an attribute value model that can include: CV/ML attribute extraction, any neural network and/or cascade of neural networks, one or more neural networks per attribute, key point extraction, SIFT, calculation, heuristics (e.g., inferring the number of stories of a property based on the height of a property), classification models (e.g., binary classifiers, multiclass classifiers, semantic segmentation models, instance-based segmentation models, etc.), regression models, object detectors, any computer vision and/or machine learning method, and/or any other technique. Different attribute values can be determined using different methods, but can alternatively be determined in the same manner.”);
receiving, via the one or more processors, a user selection of a second property on the map (see at least Shoup: ¶ 81-83 “the geographic map database 132 may include maps of each geographical area that the system 10 may support, with the maps including street address and other types of identifying information for each property included within the geographic regions”; see also Shoup: ¶ 85 “the system 10 may create and provide various reports to the user(s) Un via the data reporting application 121. The reports may include the time period over which the safety information has been aggregated, and textual data describing the safety data for each applicable offense On, the safety ratings(s) (scores) of the property and its surrounding neighborhood, charts that graphically display the safety data and the ratings (e.g., bar charts, pie charts, etc.), maps, any other types of data representations and any combinations thereof. In some embodiments, safety data and/or ratings contained in the report(s) may be color-coded as described above (with respect to FIG. 3) to provide a visual representation of the data”; see also Shoup: ¶ 86: “[t]he system 10 may intake background information relating to the residents of the neighboring properties adjacent to or in close proximity to the property of interest. For example, the system 10 may intake information such as criminal records, sex offences and other types of information. This information may then be provided to the user Un via the reports and/or maps as described in other sections”; see also Shoup: ¶ 94-96: “Once the safety data has been processed, the system 10 (via the data reporting application 121) may provide safety and rating report(s), interactive map(s) (FIG. 3) and/or other formats of safety data as described herein.”); and
in response to receiving the selection, displaying, via the one or more processors, information of the second property including: (i) an overall home score of the second property, (ii) a fire hazard score of the second property, (iii) a safety score of the second property, (iv) a weather hazard score of the second property, and/or (v) a property feature hazard score of the second property (see at least Shoup: ¶ 81-83 “the geographic map database 132 may include maps of each geographical area that the system 10 may support, with the maps including street address and other types of identifying information for each property included within the geographic regions”; see also Shoup: ¶ 84-85 “the system 10 may create and provide various reports to the user(s) Un via the data reporting application 121. The reports may include the time period over which the safety information has been aggregated, and textual data describing the safety data for each applicable offense On, the safety ratings(s) (scores) of the property and its surrounding neighborhood, charts that graphically display the safety data and the ratings (e.g., bar charts, pie charts, etc.), maps, any other types of data representations and any combinations thereof. In some embodiments, safety data and/or ratings contained in the report(s) may be color-coded as described above (with respect to FIG. 3) to provide a visual representation of the data”; see also Shoup: ¶ 86: “[t]he system 10 may intake background information relating to the residents of the neighboring properties adjacent to or in close proximity to the property of interest. For example, the system 10 may intake information such as criminal records, sex offences and other types of information. This information may then be provided to the user Un via the reports and/or maps as described in other sections”; see also Shoup: ¶ 94-96: “Once the safety data has been processed, the system 10 (via the data reporting application 121) may provide safety and rating report(s), interactive map(s) (FIG. 3) and/or other formats of safety data as described herein.”).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of displaying properties adjacent to other properties that have similar information, hazard features, and safety ratings within a displayed map (as disclosed by Shoup) to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes (as disclosed by Hedges) to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources. One of ordinary skill in the art would have been motivated to apply the known technique of displaying properties adjacent to other properties that have similar information, hazard features, and safety ratings within a displayed map because it would assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources and to quantify the overall safety of the property and of the surrounding neighborhood (see Shoup: ¶ 2 and 46).
Referring to Claims 24, 34 and 40 (substantially similar in scope and language), the combination of Hedges and Sanchez teaches the computer-implemented method of claim 21, computer system of claim 30, and non-transitory computer-readable medium of claim 36. The combination of Hedges, Sanchez, and Shoup teaches further comprising retrieving, by the one or more processors, two or more attributes of the property, the two or more attributes comprising: (i) one or more fire hazard attributes, (ii) one or more safety attributes, (iii) one or more weather hazard attributes, and/or (iv) one or more property feature hazard attributes; wherein the determining the two or more home score factors comprises determining: the fire hazard score based upon the one or more fire hazard attributes; the safety score based upon the one or more safety attributes; the weather hazard score based upon the one or more weather hazard attributes; and/or the property feature hazard score based upon the one or more property feature attributes (see at least Hedges: ¶ 81 “the hazard score is a risk score (e.g., an overall risk score). The risk score can be associated with or represent the overall likelihood of a claim loss being filed, predicted claim loss frequency, expected loss severity, and/or any other key metric. This risk score is preferably dependent on the likelihood of hazard exposure (e.g., in contrast to the vulnerability score), but can alternatively be independent of and/or conditional on the hazard exposure.
The risk score can be predicted based on: property measurements (e.g., directly), property attribute values extracted from property measurements, historical weather and/or hazard data, another hazard score (e.g., regional exposure score), and/or any other suitable information.”; see also Hedges: ¶ 114 “the risk score can represent an overall risk of a claim filing, incorporating both regional risk and vulnerability”; see also Hedges: ¶ 62 “Attribute values can be determined using an attribute value model that can include: CV/ML attribute extraction, any neural network and/or cascade of neural networks, one or more neural networks per attribute, key point extraction, SIFT, calculation, heuristics (e.g., inferring the number of stories of a property based on the height of a property), classification models (e.g., binary classifiers, multiclass classifiers, semantic segmentation models, instance-based segmentation models, etc.), regression models, object detectors, any computer vision and/or machine learning method, and/or any other technique. Different attribute values can be determined using different methods, but can alternatively be determined in the same manner.”; see also Shoup: ¶ 81-86: discussing mapping properties and neighborhoods; see at least Shoup: ¶ 81-83 “the geographic map database 132 may include maps of each geographical area that the system 10 may support, with the maps including street address and other types of identifying information for each property included within the geographic regions”; see also Shoup: ¶ 85 “the system 10 may create and provide various reports to the user(s) Un via the data reporting application 121. 
The reports may include the time period over which the safety information has been aggregated, and textual data describing the safety data for each applicable offense On, the safety ratings(s) (scores) of the property and its surrounding neighborhood, charts that graphically display the safety data and the ratings (e.g., bar charts, pie charts, etc.), maps, any other types of data representations and any combinations thereof. In some embodiments, safety data and/or ratings contained in the report(s) may be color-coded as described above (with respect to FIG. 3) to provide a visual representation of the data”; see also Shoup: ¶ 86: “[t]he system 10 may intake background information relating to the residents of the neighboring properties adjacent to or in close proximity to the property of interest. For example, the system 10 may intake information such as criminal records, sex offences and other types of information. This information may then be provided to the user Un via the reports and/or maps as described in other sections”; see also Shoup: ¶ 94-96: “Once the safety data has been processed, the system 10 (via the data reporting application 121) may provide safety and rating report(s), interactive map(s) (FIG. 3) and/or other formats of safety data as described herein.”).
Referring to Claim 25, the combination of Hedges, Sanchez, and Shoup teaches the computer-implemented method of claim 24, including wherein the determining the two or more home score factors comprises determining the fire hazard score based upon the one or more fire hazard attributes, and wherein the one or more fire hazard attributes comprise a grade based upon a distance from the property to water and/or a distance from the property to a fire station (see at least Hedges: ¶ 74 and 100 “property measurements, other hazard scores (e.g., calculated using a hazard model, retrieved from a third-party hazard database, etc.), property location, data from a third-party database (e.g., property data, hazard exposure risk data, claim/loss data, policy data, weather and/or hazard data, fire station locations, insurer database, etc.” and “training inputs for each training property can include and/or be based on: property measurements (e.g., acquired before a hazard event, after a hazard event, and/or unrelated to a hazard event), property attribute values, a property location, a hazard score, data from a third-party database (e.g., property data, hazard risk data, claim/loss data, policy data, weather data, hazard data, fire station locations, tax assessor database, insurer database, etc.), dates, and/or any other input (e.g., as described in S400)”).
Referring to Claim 26, the combination of Hedges, Sanchez, and Shoup teaches the computer-implemented method of claim 24. The combination of Hedges and Sanchez fails to state wherein the determining the two or more home score factors comprises determining the safety score based upon the one or more safety attributes, and wherein the one or more safety attributes comprise: (i) a burglary grade based upon a burglary likelihood, and/or (ii) a motor vehicle theft grade based upon a motor vehicle theft likelihood.
However, Shoup teaches determining the safety score based upon the one or more safety attributes, and wherein the one or more safety attributes comprise: (i) a burglary grade based upon a burglary likelihood, and/or (ii) a motor vehicle theft grade based upon a motor vehicle theft likelihood (see at least Shoup: ¶ 78 “In some embodiments, the system 10 may use the data rating application 118 to rate the categorized safety data using a variety of criteria. For example, the categorized data may be rated by the type and severity of the crime committed, with violent crimes such as murder, assault and battery, armed robbery, etc. rated higher than non-violent crimes such as shop lifting, theft and drug related offenses. In this way, the severity of the crimes may be rated and classified as a crime severity rating. In another example, the data may be rated according to the perpetrators, with repeat offenders being rated higher than first-time offenders. In another example, the data may be rated by amount of time that has passed since the crime(s) were committed, the amount of time between each crime committed, and/or the frequency or patterns of the crimes committed. Other types of rating criteria also may be used. In this way, each crime may be given one or more scores that may represent the severity of the crime committed. Once the scores have been calculated, the scores and the rated data may be stored in the rated data database 128”; see also Shoup: ¶ 44-45 “As used herein, the term “safety information” may include information related to (without limitation): violent crimes committed, non-violent crimes committed, registered sex offenders, and other types of safety information. 
The safety information may include the type and/or category of the offense(s) committed, the location of the offense (exact and/or approximate), the distance of the offense from the location of the property (exact and/or approximate), the date and time of the offense(s), whether or not the perpetrators were apprehended (if available), whether or not the perpetrators are still in custody (if available), other information regarding the perpetrators (such as his/her prior record, release date if previously incarcerated and/or institutionalized, terms served, all if available, etc.), and other types of information. The safety information may be aggregated over specified time periods (e.g., over the past week, month, year, etc.).”; see also Shoup: ¶ 71-75, 77-80, and 88).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of displaying properties adjacent to other properties that have similar information, hazard features, and safety ratings within a displayed map (as disclosed by Shoup) to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes (as disclosed by Hedges) to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources. One of ordinary skill in the art would have been motivated to apply the known technique of displaying properties adjacent to other properties that have similar information, hazard features, and safety ratings within a displayed map because it would assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources and to quantify the overall safety of the property and of the surrounding neighborhood (see Shoup: ¶ 2 and 46).
Referring to Claim 27, the combination of Hedges, Sanchez, and Shoup teaches the computer-implemented method of claim 24, wherein the determining the two or more home score factors comprises determining the weather hazard score based upon the one or more weather hazard attributes, and wherein the one or more weather hazard attributes comprise: an earthquake grade, a wind grade, a hail grade, a tornado grade, a lightning grade, a flood grade, a wildfire grade, a drought grade, a tsunami grade, a hurricane grade, a volcano grade, a wind-borne debris grade, a coastal storm surge grade, and/or a convection storm grade (see at least Hedges: ¶ 25, 56 “subject to weather-related conditions; for example: average annual rainfall, presence of high-speed and/or dry seasonal winds (e.g., the Santa Ana winds), vegetation dryness and/or greenness index, regional hazard risks, and/or any other variable parameter.”; see also Hedges: ¶ 62 “extracting attribute values directly from property measurements, retrieving values from a database or a third party source (e.g., third-party database, MLS database, city permitting database, historical weather and/or hazard database”; see also Hedges: ¶ 74, 78, 80-82, and 90-91).
Referring to Claim 28, the combination of Hedges, Sanchez, and Shoup teaches the computer-implemented method of claim 24, wherein the determining the two or more home score factors comprises determining the property feature hazard score based upon the one or more property feature hazard attributes, and wherein the one or more property feature hazard attributes comprise: a roof condition rating, a tree overhang rating, a radon grade, a mold index grade, a slope risk grade, an aspect risk grade, an ice damage grade, and/or a frozen pipe grade (see at least Hedges: ¶ 56 “roof condition (e.g., tarp presence, material degradation, rust, missing or peeling material, sealing, natural and/or unnatural discoloration, defects, loose organic matter, ponding, patching, streaking, etc.)… accessory structure condition, yard debris and/or lot debris (e.g., presence, coverage, ratio of coverage, etc.), lawn condition, pool condition, driveway condition, tree parameters (e.g., overhang information, height, etc.), vegetation parameters (e.g., coverage, density, setback, location within one or more zones relative to the property),”).
Referring to Claim 29, the combination of Hedges, Sanchez, and Shoup teaches the computer-implemented method of claim 24. The combination of Hedges and Shoup teaches applying weights to selected attributes (see at least Hedges: ¶ 67 “The set of attributes (e.g., for a given hazard model) can be selected: manually, automatically, randomly, recursively, using an attribute selection model, using lift analysis (e.g., based on an attribute's lift), using any explainability and/or interpretability method (e.g., as described in S600), based on an attribute's correlation with a given metric (e.g., claim frequency, loss severity, etc.), using predictor variable analysis, through hazard score validation, during model training (e.g., attributes with weights above a threshold value are selected), using a deep learning model, based on the mitigation and/or zone classification, and/or via any other selection method or combination of methods.”), including wherein the determining the two or more home score factors comprises determining, by the one or more processors: the fire hazard score by applying a fire hazard score weight to the one or more fire hazard attributes; the safety score by applying a safety score weight to the one or more safety attributes; the weather hazard score by applying a weather hazard score weight to the one or more weather hazard attributes; and/or the property feature hazard score by applying a property feature hazard score weight to the one or more property feature hazard attributes.
Shoup, which is directed to a property and neighborhood assessment system and method, teaches that it is known to provide a user with a map of properties that are color coded based on the assessed data, thereby teaching wherein the determining the two or more home score factors comprises determining, by the one or more processors: the fire hazard score by applying a fire hazard score weight to the one or more fire hazard attributes; the safety score by applying a safety score weight to the one or more safety attributes; the weather hazard score by applying a weather hazard score weight to the one or more weather hazard attributes; and/or the property feature hazard score by applying a property feature hazard score weight to the one or more property feature hazard attributes (see at least Shoup: ¶ 79 “the data rating application 118 may calculate one or more overall safety ratings for each property of interest by applying weight factors and/or algorithms to each type of safety data associated with the property. For example, a property located near the locations of identified violent crimes may receive a lower overall safety score compared to a property associated with a lesser number or less severe crimes within the same or similar geographic radius and/or time frame. In this way, the property's overall safety rating may generally represent a quality-of-life aspect associated with the property”).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of generating a property and neighborhood score and applying various weights (as disclosed by Shoup) to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes (as disclosed by Hedges) to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources. One of ordinary skill in the art would have been motivated to apply the known technique of generating a property and neighborhood score and applying various weights because it would assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources and to quantify the overall safety of the property and of the surrounding neighborhood (see Shoup: ¶ 2 and 46).
Furthermore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of generating a property and neighborhood score and applying various weights (as disclosed by Shoup) to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes (as disclosed by Hedges) to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources, because the claimed invention is merely applying a known technique to a known method ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). 
In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by applying the known technique of generating a property and neighborhood score and applying various weights to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources). See also MPEP § 2143(I)(D).
Referring to Claims 33 and 39 (substantially similar in scope and language), the combination of Hedges and Sanchez teaches the computer system of Claim 30 and the tangible, non-transitory computer-readable medium of Claim 36, including wherein the non-transitory computer-readable medium further includes instructions that, when executed by the one or more processors, cause the computing device to generate a neighborhood score by determining overall home scores of nearby properties (see at least Hedges: ¶ 81 “the hazard score is a risk score (e.g., an overall risk score). The risk score can be associated with or represent the overall likelihood of a claim loss being filed, predicted claim loss frequency, expected loss severity, and/or any other key metric. This risk score is preferably dependent on the likelihood of hazard exposure (e.g., in contrast to the vulnerability score), but can alternatively be independent of and/or conditional on the hazard exposure. 
The risk score can be predicted based on: property measurements (e.g., directly), property attribute values extracted from property measurements, historical weather and/or hazard data, another hazard score (e.g., regional exposure score), and/or any other suitable information.”; see also Hedges: ¶ 114 “the risk score can represent an overall risk of a claim filing, incorporating both regional risk and vulnerability”; see also Hedges: ¶ 62 “Attribute values can be determined using an attribute value model that can include: CV/ML attribute extraction, any neural network and/or cascade of neural networks, one or more neural networks per attribute, key point extraction, SIFT, calculation, heuristics (e.g., inferring the number of stories of a property based on the height of a property), classification models (e.g., binary classifiers, multiclass classifiers, semantic segmentation models, instance-based segmentation models, etc.), regression models, object detectors, any computer vision and/or machine learning method, and/or any other technique. Different attribute values can be determined using different methods, but can alternatively be determined in the same manner.”).
The combination of Hedges and Sanchez fails to explicitly state:
generating, via the one or more processors, a neighborhood score by determining an average of overall home scores of nearby properties
Shoup, which discloses a property and neighborhood assessment system and method, teaches that it is known to provide a map of properties to a user wherein the properties are color coded based on the assessed data, which teaches generating, via the one or more processors, a neighborhood score by determining an average of overall home scores of nearby properties (see at least Shoup: ¶ 81-86: discussing mapping properties and neighborhoods; see also Shoup: ¶ 41, 44-46, 85, and 88-89: also discussing scoring properties and neighborhoods).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of generating a property and neighborhood score (as disclosed by Shoup) to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes (as disclosed by the combination of Hedges and Sanchez) to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources. One of ordinary skill in the art would have been motivated to apply the known technique of generating a property and neighborhood score because it would assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources and to quantify the overall safety of the property and of the surrounding neighborhood (see Shoup: ¶ 2 and 46).
Furthermore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of generating a property and neighborhood score (as disclosed by Shoup) to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes (as disclosed by the combination of Hedges and Sanchez) to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources, because the claimed invention is merely applying a known technique to a known method ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). 
In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by applying the known technique of generating a property and neighborhood score to the known determining, based on first sensor readings for at least one sensor associated with the first one of the plurality of homes and second sensor readings from corresponding sensors associated with at least a second one of the plurality of homes, a live home score for the first one of the plurality of homes; and sending the live home score to the computing device associated with the first one of the plurality of homes to assess the safety of a property and its associated neighborhood, including a system that aggregates, standardizes, and transforms safety information from third party sources). See also MPEP § 2143(I)(D).
Referring to Claim 35, the combination of Hedges, Sanchez, and Shoup teaches the computer system of claim 34, including wherein the one or more processors are further configured to retrieve the two or more attributes of a property by retrieving the two or more attributes of the property from an external database and/or a mobile device associated with the property (see at least Hedges: ¶ 16-17 “extracting attribute values for each of a set of property attributes from the images. The property attributes are preferably structural attributes, such as the presence or absence of a property component (e.g., roof, vegetation, etc.), property component geometric descriptions (e.g., roof shape, slope, complexity, building height, living area, structure footprint, etc.), property component appearance descriptions (e.g., condition, roof covering material, etc.), and/or neighboring property components or geometric descriptions (e.g., presence of neighboring structures within a predetermined distance, etc.), but can additionally or alternatively include other attributes, such as built year, number of beds and baths, or other descriptors. One or more hazard scores (e.g., vulnerability score, risk score, regional exposure score, etc.) can then be calculated for the property.”; see also Hedges: ¶ 33 “configured to extract values for one or more attributes”; see also Hedges: ¶ 52-62: discussing attributes).
Response to Arguments
Examiner notes that Applicant has submitted that the amendments provide an “improvement in the functioning of a computer.” Examiner respectfully disagrees. The rejection has been updated to reflect the submitted amendments.
Applicant’s arguments with respect to claims 21-40 have been considered but are moot in view of the updated rejection necessitated by the submitted amendments to the independent claims. The claims stand rejected.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C YOUNG whose telephone number is (571)272-1882. The examiner can normally be reached M-F: 7:00 p.m.- 3:00 p.m. EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nate Uber can be reached at (571)270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Michael Young/ Examiner, Art Unit 3626