DETAILED ACTION
Status of the Claims
The following is a Non-final Office Action in response to amendments and remarks filed 10 October 2025.
Claims 1, 8, 15, and 20 have been amended.
Claims 1-3, 5-10, 12-17, and 19-20 are pending and have been examined.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10 October 2025 has been entered.
Response to Arguments
Applicant’s remarks regarding the claim objections have been fully considered, and in light of the amendments, are persuasive. As such, the outstanding objections have been withdrawn.
Applicant argues that the 35 U.S.C. 112 rejection is overcome in light of the amendments; however, the Examiner respectfully disagrees. Applicant has not addressed or amended the new matter issue and has included additional new matter, as noted in the rejection below. The Examiner again notes that there is no discussion of, or support for, the generation of the models, only that a model is used. The claimed invention seeks to patent some form of model generation as part of the predictive service request processing; however, the specification lacks any support for such generation. To put it another way, the specification supports only the use of some natural language model, which could simply be an off-the-shelf or outsourced model previously generated and applied in the invention, not the generation of a customized model as the instant limitations attempt to claim. As such, this argument is not persuasive, and the rejection is not overcome.
Applicants argue that the 35 U.S.C. 101 rejection under Alice Corp. v. CLS Bank Int’l should be withdrawn; however, the Examiner respectfully disagrees. As an initial note, the arguments are not compliant with 37 CFR 1.111(b), as they amount to a mere allegation of patent eligibility. The Examiner notes that, in order to be patent eligible under 35 U.S.C. 101, the claims must be directed to a patent eligible concept, to which the instant claims are not directed. Contrary to Applicants’ assertion that the claims are not a mental process or a certain method of organizing human activity, the Examiner notes that processing a service request to identify and select content related to the service request is a function that service providers, maintenance personnel, apartment management, etc., have traditionally performed or provided for users and residents. Next, the claims are not directed to a practical application of the concept. The claims do not result in improvements to the functioning of a computer or to any other technology or technical field. They do not effect a particular treatment for a disease. They are not applied with or by a particular machine. They do not effect a transformation or reduction of a particular article to a different state or thing. And they are not applied in some other meaningful way beyond generally linking the use of the judicial exception (i.e., processing of a service request to identify and select content related to the service request) to a particular technological environment (i.e., electronic content, with the use of natural language processing). Here, again as noted in the previous rejection, mere instructions to apply an exception using a generic computer component cannot provide an inventive concept - MPEP 2106.05(f).
The claims’ recitation of the “computing device,” “one or more natural language processing techniques,” and “electronic content” only generally links the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.04(d)(I) discussing MPEP 2106.05(h). The claim(s) is/are not patent eligible.
Applicant next argues that the claims are eligible because they cannot be “practically performed in the human mind” or characterized as organizing human activity (due to the recitation of NLP); however, the Examiner respectfully disagrees. As an initial note, the general theory behind natural language processing is to model a computer around the human brain, such that the computer learns as our brains do. Secondly, while the specification may discuss sophisticated techniques and advanced functions such as natural language processing, it is the claims that are deemed eligible or ineligible under §101. Here, the claims are more of a generalized guideline for how to arrange a software model to implement the overarching abstract idea. Thirdly, this argument appears to assert that the use of computers or computing components for increased speed and efficiency makes the claims eligible; however, the Examiner respectfully disagrees. Nor, in addressing the second step of Alice, does claiming the improved speed or efficiency inherent in applying the abstract idea on a computer provide a sufficient inventive concept. See Bancorp Servs., LLC v. Sun Life Assurance Co. of Can., 687 F.3d 1266, 1278 (Fed. Cir. 2012) (“[T]he fact that the required calculations could be performed more efficiently via a computer does not materially alter the patent eligibility of the claimed subject matter.”); CLS Bank, Int’l v. Alice Corp., 717 F.3d 1269, 1286 (Fed. Cir. 2013) (en banc), aff’d, 134 S. Ct. 2347 (2014) (“[S]imply appending generic computer functionality to lend speed or efficiency to the performance of an otherwise abstract concept does not meaningfully limit claim scope for purposes of patent eligibility.” (citations omitted)). The claim(s) is/are not patent eligible and the rejection is not withdrawn.
Applicant next argues that the claims are analogous to Example 35, claim 2; however, the Examiner respectfully disagrees. Example 35, claim 2, recited code generation, image capture, analysis, and decryption: a plurality of steps providing a combination of elements that is non-conventional and non-generic. Contrary to Applicant’s assertions, the instant claims recite no such combination of elements. The Examiner also notes that the examples provided by the USPTO are purely hypothetical, for exemplary purposes, and do not serve as the benchmark for patent eligibility. The claim(s) is/are not patent eligible and the rejection is not withdrawn.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections. The Examiner refers Applicant to the updated rejection below, addressing the newly amended claims.
Applicant argues that the cited references do not expressly disclose “obtaining, by the server system, data indicating a service request submitted by a computing device of a user, wherein the service request is (i) received via a property management application executed on the computing device, (ii) includes user-generated input specifying an issue associated with the property, and (iii) obtained during a time when maintenance personnel is not available to respond to the service request;” however, the Examiner respectfully disagrees for a plurality of reasons. Firstly, as previously cited, Lerick discloses: “The computer system 104 can additionally store information regarding service requests, such as the date on which service was performed, the cause of the issue identified by a user as determined by service providers, components that were verified to be problematic, actions that were performed (e.g., test/diagnose issue, review product/training manual on issue, repair components) by the service provider to resolve the issue, costs for resolving the issue, the time it took to resolve the issue (e.g., time for the service provider to arrive at the building, time to resolve the issue once at the building), reviews of the service provider, and/or information identifying new components/parts that were installed in the building. 
Such information can be stored in one or more data repositories, such as in the building data system 106 (Lerick ¶34),” which clearly describes obtaining information regarding the service request, including issues identified by a user, and thus clearly reads upon the “obtaining, by the server system, data indicating a service request submitted by a computing device of a user, wherein the service request...includes user-generated input specifying an issue associated with the property.” Next, Lerick, as previously cited, discloses the ability to predict or troubleshoot when there is no maintenance personnel available, which clearly reads upon “wherein the service request is obtained during a time when maintenance personnel is not available to respond to the service request.” To put it another way, why would the system of Lerick triage problems to identify potential solutions, risks, candidate parties, and an appropriate timeframe to resolve the issue if the maintenance personnel were currently available to resolve the issue immediately? One of ordinary skill in the art would clearly interpret the ability to triage, i.e., find a candidate party and a timeframe to resolve the issue, as indicating that the maintenance personnel were unavailable at the time. As such, this argument is not persuasive, and the rejection is not overcome.
In response to arguments regarding any dependent claims that have not been individually addressed, all rejections made toward these dependent claims are maintained due to a lack of reply by Applicants distinctly and specifically pointing out the supposed errors in the Examiner's prior Office action (37 CFR 1.111). The Examiner asserts that Applicants argue only that the dependent claims should be allowable because the independent claims are unobvious and patentable over the prior art.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-3, 5-10, 12-17, and 19-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claims 1, 8, and 15 contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. The claims recite the newly amended limitations “generating… a model configured to apply one or more natural language processing techniques to predict, from among service topics specified by maintenance history data, likely service topics for future service requests to be associated with a property, wherein the service topics within maintenance history data associated with a property management system” and “using, by the server system, the model to perform a set of operations relating to processing information associated with the service request, the set of operations comprising: identifying a set of terms included within the service request by applying natural language processing techniques; determining that the set of terms includes one or more terms corresponding to a particular service topic, determining that the set of terms includes one or terms that identify a particular request type, and generating an output associated with the particular request type by applying a ranking model trained to prioritize frequently occurring service issues within the property”; however, there is no discussion, throughout the entirety of the specification and drawings, of any generation of models or use thereof (the terms “generate,” “generating,” or any mention of “model” or “modelling” are not present in the specification or the originally filed parent application whatsoever). 
While there is discussion of the use of natural language processing techniques in [0064], [0085], and [0094], there is no mention or discussion as to how a model is used, let alone how said model is generated and applied. As such, the Examiner asserts this as evidence that the newly amended claims are new matter.
Dependent claims 2-3, 5-7, 9-10, 12-14, 16-17, and 19-20 are also rejected for their dependencies and failing to remedy the deficiencies of claims 1, 8, and 15.
Claims 1-3, 5-10, 12-17, and 19-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claims 1, 8, and 15 contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. The claims recite the newly amended limitation “each electronic content is dynamically ranked using a weighted scoring system based on prior user engagement data and issue resolution success rates”; however, there is no discussion, throughout the entirety of the specification and drawings, of any ranking, weighting, or scoring whatsoever (the terms “rank,” “ranking,” “score,” “scoring,” “weight,” “weigh,” “weighted,” “weighing,” and “weighting,” and any mention of how to ascertain some sort of score or rating, are not present in the specification or the originally filed parent application). As such, the Examiner asserts this as evidence that the newly amended claims are new matter.
Dependent claims 2-3, 5-7, 9-10, 12-14, 16-17, and 19-20 are also rejected for their dependencies and failing to remedy the deficiencies of claims 1, 8, and 15.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5-10, 12-17, and 19-20 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to a process (an act, or series of acts or steps), a machine (a concrete thing, consisting of parts, or of certain devices and combination of devices), and a manufacture (an article produced from raw or prepared materials by giving these materials new forms, qualities, properties, or combinations, whether by hand labor or by machinery). Thus, each of the claims falls within one of the four statutory categories (Step 1). However, the claim(s) recite(s) processing of a service request to identify and select content related to the service request which is an abstract idea of organizing human activities as well as a mental process performed on said gathered or organized activities.
The limitations of “generating a model configured to apply one or more natural language processing techniques to predict, from among service topics specified by maintenance history data, likely service topics for future service requests to be associated with a property, wherein the service topics within the maintenance history data are associated with a property management system; applying, by the server system, the trained model to perform a set of operations relating to processing information associated with the service request, the set of operations comprising: identifying a set of terms included within the service request; determining that the set of terms includes one or more terms corresponding to a particular service topic, determining that the set of terms includes one or terms that identify a particular request type, and generating an output associated with the particular request type by applying a ranking model trained to prioritize frequently occurring service issues within the property; predicting, by the server system, a service topic likely to be associated with the service request based on the output provided by the model; selecting, from among a collection of electronic content, a subset of electronic content corresponding to the service topic likely to be associated with the service request, wherein: the collection of electronic content includes multiple formats of electronic content associated with prior maintenance operations performed at the property, and each electronic content is dynamically ranked using a weighted scoring system based on prior user engagement data and issue resolution success rates,” as drafted, is a process that, under its broadest reasonable interpretation, covers organizing human activities--fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales 
activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) but for the recitation of generic computer components (Step 2A Prong 1). That is, other than reciting “a computer-implemented method,” “by a server system,” or “a computing device,” nothing in the claim elements precludes the steps from falling within the methods of organizing human interactions grouping. For example, but for the “computing device” language, “generating,” “using,” “identifying,” “determining,” “determining,” “generating,” “predicting,” and “selecting,” in the context of this claim, encompass the user manually receiving a service request and communicating information back based upon the received service request, which is a maintenance service and business relation as well as a potential commercial or legal interaction. Similarly, this is also a mental process of thinking or learning that a received service request of a similar type could have a related issue or common solution. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as one of the certain methods of organizing human activities and/or mental processes, but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activities” and/or “Mental Processes” grouping of abstract ideas. Accordingly, the claim(s) recite(s) an abstract idea.
This judicial exception is not integrated into a practical application (Step 2A Prong Two). The “obtaining,” “providing,” and “updating” steps are simply extrasolution data gathering activities. Next, the claims recite only one additional element – the use of a computer-implemented method, server system, one or more computing devices, or one or more processors to perform the steps. The computer-implemented method, computing devices, and processors in the steps are recited at a high level of generality (i.e., as a generic processor performing the generic computer functions of electronic data storage, query, and retrieval) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Specifically, the claims amount to nothing more than an instruction to apply the abstract idea using a generic computer, invoking computers as tools by adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.04(d)(I) discussing MPEP 2106.05(f). The claims’ recitation of the “computing device,” “one or more natural language processing techniques,” and “electronic content” only generally links the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.04(d)(I) discussing MPEP 2106.05(h). Accordingly, the combination of these additional elements does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea, even when considered as a whole.
The claim does not include a combination of additional elements that is sufficient to amount to significantly more than the judicial exception (Step 2B). As discussed above with respect to integration of the abstract idea into a practical application (Step 2A Prong 2), the combination of additional elements of using a computer-implemented method, server system, one or more computing devices, or one or more processors in the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Reevaluating here in Step 2B, the “obtaining,” “providing,” and “updating” step(s), which are insignificant extrasolution activities, are also determined to be well-understood, routine, and conventional activity in the field. The Symantec, TLI, and OIP Techs court decisions discussed in MPEP 2106.05(d)(II) indicate that the mere receipt or transmission of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as is the case here). Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claim. As such, the claim(s) is/are not patent eligible, even when considered as a whole.
Claims 2-3, 9-10, 12, 16-17, and 19 depend from claims 1, 8, and 15 and include all the limitations of claims 1, 8, and 15. Therefore, claims 2-3, 9-10, 12, 16-17, and 19 recite the same abstract idea of “processing of a service request to identify and select content related to the service request.” These claims recite additional limitations that further limit the abstract idea previously identified and do not provide an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1, 8, and 15, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claims 6-7, 13-14, and 20 depend from claims 1, 8, and 15 and include all the limitations of claims 1, 8, and 15. Therefore, claims 6-7, 13-14, and 20 recite the same abstract idea of “processing of a service request to identify and select content related to the service request.” These claims recite additional limitations that further limit how the abstract idea is performed (software, text messages), which only limits the particular environment, remains directed toward the abstract idea previously identified, and does not provide an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1, 8, and 15, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claims 1-3, 5-10, 12-17, and 19-20 are therefore not directed to eligible subject matter, even when considered as a whole.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-3, 5-10, 12-17, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lerick et al. (US PG Pub. 2016/0117785) and further in view of Adjaoute (US PG Pub. 2015/0339586).
As per claims 1, 8, and 15, Lerick discloses a computer-implemented method for improving predictive service request processing; a system comprising one or more computing devices and at least one non-transitory computer-readable storage media storing instructions that, when executed by the one or more computing devices, cause the one or more computing devices to perform operations; and at least one non-transitory computer-readable storage device storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations, the method/operations comprising (method, Lerick ¶91; computing devices, computer system, ¶87-¶88; processors, memories, ¶89-¶90):
generating, by a server system, a model configured to apply one or more [machine learning] processing techniques to predict, from among service topics specified by maintenance history data, likely service topics for future service requests to be associated with a property, wherein the service topics within the maintenance history data are associated with a property management system (PMS) (The computer system 104 can receive the information describing issues from owners and can, based on the information, identify one or more of the components as candidates for being a source of the issue. For example, an owner may indicate that they have noticed a water leak from the ceiling in a particular room in a building. The computer system 104 can reference the components that are installed in the building from the building data system 106 to identify possible components and/or systems that may be causing the water leak in the particular room. For instance, the computer system 104 can identify components of water systems within the building, such as domestic water systems (e.g., domestic water lines and components, waste water lines and components) and fire suppression systems (e.g., sprinkler systems and components) as candidate components that may be causing the example building issue (water leak), Lerick ¶32; Component-based building assessments can use various computer-implemented techniques (e.g., machine learning techniques) to leverage the performance history of components across a collection of buildings to identify and highlight data records providing a better indication of quality, which can be weighted more than less significant data records when determining a high-level building quality assessment, ¶14; Such analytics can be provided using any of a variety of appropriate techniques, such as machine learning techniques (e.g., neural networks, clustering, regressions, decision trees) that can identify correlations and associations across large data sets, such as 
correlations that can indicate that particular components are faulty. The data that is generated and stored by the computer system 104, and the analytics based on the data can also be used to provide component-based building assessments, as described throughout this document, ¶34; For example, a particular component may be problematic in certain instances while operating without a problem in other instances. Machine learning algorithms can be used to generate data models that can identify data points that identify circumstances (e.g., geographic region of building (climate), other installed components, history of service requests for component, history of particular service requests) indicating that a particular component is likely to be problematic and other data points that indicate other circumstances indicating that the particular component is not likely to be problematic. For instance, a machine learning algorithm may be able to tease out a pattern of service requests that indicate that a component is likely to fail in the near future (e.g., particular installed component has low quality), whereas the absence of such a pattern can indicate that the component will maintain good operating condition for near future, ¶44) (Examiner notes the ability to identify issues, tease out or troubleshoot as the ability to predict the likely service topics for future service requests, based upon obtained historical data);
obtaining, by the server system, data indicating a service request submitted by a computing device of a user, wherein the service request is (i) received via a property management application executed on the computing device, (ii) includes user-generated input specifying an issue associated with the property, and (iii) obtained during a time when maintenance personnel is not available to respond to the service request (service requests, Lerick ¶34; computer system can act as an intermediary, ¶33; The computer system 302 can use one or more data sources 324a-d, such as a building data source 324a that can store information about buildings (e.g., component information), a contractor data source 324b that can store information about contractors/builders/vendors/service providers, an owner data source 324c that can store information about building owners, and a service request data source 324d that can store and log information about both completed and pending service requests, ¶67; mobile application on a computing device, ¶82);
using, by the server system, the model to perform a set of operations relating to processing information associated with the service request, the set of operations comprising (The computer system 104 can receive the information describing issues from owners and can, based on the information, identify one or more of the components as candidates for being a source of the issue. For example, an owner may indicate that they have noticed a water leak from the ceiling in a particular room in a building. The computer system 104 can reference the components that are installed in the building from the building data system 106 to identify possible components and/or systems that may be causing the water leak in the particular room. For instance, the computer system 104 can identify components of water systems within the building, such as domestic water systems (e.g., domestic water lines and components, waste water lines and components) and fire suppression systems (e.g., sprinkler systems and components) as candidate components that may be causing the example building issue (water leak), Lerick ¶32; Such analytics can be provided using any of a variety of appropriate techniques, such as machine learning techniques (e.g., neural networks, clustering, regressions, decision trees) that can identify correlations and associations across large data sets, such as correlations that can indicate that particular components are faulty. The data that is generated and stored by the computer system 104, and the analytics based on the data can also be used to provide component-based building assessments, as described throughout this document, ¶34):
identifying a set of terms included within the service request by applying natural language processing techniques (The computer system 104 can additionally store information regarding service requests, such as the date on which service was performed, the cause of the issue identified by a user as determined by service providers, components that were verified to be problematic, actions that were performed (e.g., test/diagnose issue, review product/training manual on issue, repair components) by the service provider to resolve the issue, costs for resolving the issue, the time it took to resolve the issue (e.g., time for the service provider to arrive at the building, time to resolve the issue once at the building), reviews of the service provider, and/or information identifying new components/parts that were installed in the building. Such information can be stored in one or more data repositories, such as in the building data system 106. This information can be combined with similar information from other service calls for the same or other buildings, and can be used to provide analytics to any of a variety of entities, such as to the builders, contractors, vendors, service providers, manufacturers, insurers and/or warranty providers, building owners/users, and/or to other parties/systems (e.g., real estate computer systems). For example, analytics can be provided to builders, contractors, and vendors indicating which components and systems building owners are having problems with, when the issues that are reported are a result of component malfunction/failure vs. user error, and/or the timeframe within which these issues are arising. Such information can be helpful to the builders, contractors, and vendors so as to better inform them of which components and systems to select for future projects, gaps in education of systems to building owners, and appropriately prices and lengths of time for warranties. 
Such analytics can be provided using any of a variety of appropriate techniques, such as machine learning techniques (e.g., neural networks, clustering, regressions, decision trees) that can identify correlations and associations across large data sets, such as correlations that can indicate that particular components are faulty. The data that is generated and stored by the computer system 104, and the analytics based on the data can also be used to provide component-based building assessments, as described throughout this document, Lerick ¶34);
determining that the set of terms includes one or more terms corresponding to a particular service topic (The computer system 104 can additionally store information regarding service requests, such as the date on which service was performed, the cause of the issue identified by a user as determined by service providers, components that were verified to be problematic, actions that were performed (e.g., test/diagnose issue, review product/training manual on issue, repair components) by the service provider to resolve the issue, costs for resolving the issue, the time it took to resolve the issue (e.g., time for the service provider to arrive at the building, time to resolve the issue once at the building), reviews of the service provider, and/or information identifying new components/parts that were installed in the building. Such information can be stored in one or more data repositories, such as in the building data system 106. This information can be combined with similar information from other service calls for the same or other buildings, and can be used to provide analytics to any of a variety of entities, such as to the builders, contractors, vendors, service providers, manufacturers, insurers and/or warranty providers, building owners/users, and/or to other parties/systems (e.g., real estate computer systems). For example, analytics can be provided to builders, contractors, and vendors indicating which components and systems building owners are having problems with, when the issues that are reported are a result of component malfunction/failure vs. user error, and/or the timeframe within which these issues are arising. Such information can be helpful to the builders, contractors, and vendors so as to better inform them of which components and systems to select for future projects, gaps in education of systems to building owners, and appropriately prices and lengths of time for warranties. 
Such analytics can be provided using any of a variety of appropriate techniques, such as machine learning techniques (e.g., neural networks, clustering, regressions, decision trees) that can identify correlations and associations across large data sets, such as correlations that can indicate that particular components are faulty. The data that is generated and stored by the computer system 104, and the analytics based on the data can also be used to provide component-based building assessments, as described throughout this document, Lerick ¶34),
determining that the set of terms includes one or more terms that identify a particular request type (The computer system 104 can additionally store information regarding service requests, such as the date on which service was performed, the cause of the issue identified by a user as determined by service providers, components that were verified to be problematic, actions that were performed (e.g., test/diagnose issue, review product/training manual on issue, repair components) by the service provider to resolve the issue, costs for resolving the issue, the time it took to resolve the issue (e.g., time for the service provider to arrive at the building, time to resolve the issue once at the building), reviews of the service provider, and/or information identifying new components/parts that were installed in the building. Such information can be stored in one or more data repositories, such as in the building data system 106. This information can be combined with similar information from other service calls for the same or other buildings, and can be used to provide analytics to any of a variety of entities, such as to the builders, contractors, vendors, service providers, manufacturers, insurers and/or warranty providers, building owners/users, and/or to other parties/systems (e.g., real estate computer systems). For example, analytics can be provided to builders, contractors, and vendors indicating which components and systems building owners are having problems with, when the issues that are reported are a result of component malfunction/failure vs. user error, and/or the timeframe within which these issues are arising. Such information can be helpful to the builders, contractors, and vendors so as to better inform them of which components and systems to select for future projects, gaps in education of systems to building owners, and appropriately prices and lengths of time for warranties. 
Such analytics can be provided using any of a variety of appropriate techniques, such as machine learning techniques (e.g., neural networks, clustering, regressions, decision trees) that can identify correlations and associations across large data sets, such as correlations that can indicate that particular components are faulty. The data that is generated and stored by the computer system 104, and the analytics based on the data can also be used to provide component-based building assessments, as described throughout this document, Lerick ¶34), and
generating an output associated with the particular request type by applying a ranking model trained to prioritize frequently occurring service issues within the property (The computer system 202 can also provide building issue and servicing features, as indicated by step B (220). The building issues and servicing can include building issue triaging, such as identifying possible problems with the component that may be causing a problem and a range of potential solutions, which may include repairing or replacing the component as well as related components. The triaging can also include determining an appropriate timeframe to resolve the issue, which can be determined based on a level of urgency/severity of the issue as well as other risks associated with the issue. The triaging can further include identifying candidate parties to resolve the issue within the appropriate timeframe. Such candidate parties may be parties (e.g., companies, individual workers, contractors) who are preapproved to service the issue under one or more warranties that are covering components that are likely the source of the issue, such as parties who manufactured and/or installed the components. The triaging can further include identifying one or more actions that can be taken to mitigate further damage and/or risk from the issue before a service technician or other appropriate party is able to respond to and resolve the issue, Lerick ¶52);
predicting, by the server system, a service topic likely to be associated with the service request based on the output provided by the model (The computer system 202 can also provide building issue and servicing features, as indicated by step B (220). The building issues and servicing can include building issue triaging, such as identifying possible problems with the component that may be causing a problem and a range of potential solutions, which may include repairing or replacing the component as well as related components. The triaging can also include determining an appropriate timeframe to resolve the issue, which can be determined based on a level of urgency/severity of the issue as well as other risks associated with the issue. The triaging can further include identifying candidate parties to resolve the issue within the appropriate timeframe. Such candidate parties may be parties (e.g., companies, individual workers, contractors) who are preapproved to service the issue under one or more warranties that are covering components that are likely the source of the issue, such as parties who manufactured and/or installed the components. The triaging can further include identifying one or more actions that can be taken to mitigate further damage and/or risk from the issue before a service technician or other appropriate party is able to respond to and resolve the issue, Lerick ¶52);
selecting, from among a collection of electronic content, a subset of electronic content corresponding to the service topic likely to be associated with the service request, wherein (component and warranty information, building information, Lerick ¶27-¶28; the computer system 104 can assist owners in triaging and resolving issues that may arise with buildings and/or the components in buildings, ¶29; issue, additional information, ¶33-¶34; identifying possible problems and potential solutions, ¶52):
the collection of electronic content is (i) stored in a structured content repository indexed based on service topics and request types; and (ii) includes multiple formats of electronic content associated with prior maintenance operations performed at the property (component and warranty information, building information, databases, Lerick ¶27-¶28; For example, the computer system 104 can store the information in one or more data repositories (e.g., databases, file systems), such as the example building data system 106. For instance, the building data system 106 can include building information and can include one or more data sources, such as data repositories (e.g., databases, file systems) for building data, entity data, owner data, and/or other appropriate data repositories. A building data repository, for example, can store information regarding a building, such as information about the building (e.g., address, type of building, date of construction, unique identifier for the building), information about components of the building (e.g., component information, warranty information), and/or other appropriate information. An entity data repository, for example, can store information about entities (e.g., builders, contractors, and vendors) associated with the building, such as contact information for the entity (e.g., name, telephone number, fax number, email address, unique identifier for the entity), information identifying specialties and other ways in which the entities are associated with buildings and their components (e.g., builder, installer, manufacturer, warranty provider, insurer), account information for the entities (e.g., username, password), and other appropriate information. 
An owner data repository, for example, can include a variety of information about an owner of the building, such as the owner's name and contact information, buildings that the owner is associated with, account information (e.g., username, password), and other appropriate information for the owner. The information that is stored in the building data system 106 can be populated by one or more other systems, such as the building issue system 104, the real estate system 108, and/or by others (e.g., real estate companies, property management companies, property owners, tenants, service providers). Once stored, the computer system 104 can maintain and manage the component information and the warranty information, such as part of a subscription service that is offered over a period of time or for perpetuity on behalf of the builders, contractors, vendors, and/or other users who built buildings and/or installed/repaired components in the buildings. Such management can include providing owners with access to information about their buildings, such as component and warranty information. Additionally, the management can include providing owners with reminders about warranty periods, upcoming warranty expirations, obtaining quotes from one or more warranty providers for extended warranties, and/or facilitating the purchasing and management of extended warranties. Additionally, the computer system 104 can assist owners in triaging and resolving issues that may arise with buildings and/or the components in buildings, Lerick ¶28-¶29; issue, additional information, ¶33-¶34; identifying possible problems and potential solutions, ¶52; maintenance, service, groups, roles organized, ¶76-¶77), and
each electronic content is dynamically ranked using a weighted scoring system based on prior user engagement data and issue resolution success rates (For example, instead of providing a user with a complete data file on the components within a building and their service history, computer-implemented techniques described in this document can automatically analyze the component information to provide overall quality ratings information for buildings based on their components, quality ratings for the components themselves, quality ratings for maintenance provided on the components, and/or other quality-related metrics that can provide a user with a higher-level assessment of the quality of a building (or a portion thereof), Lerick ¶6; Ratings can additionally and/or alternatively be generated by the computer system 102 using one or more computer-based techniques to generate inferences as to building component quality based on disparate bits of information. For example, the computer system 102 can use one or more machine learning algorithms (e.g., neural networks, clustering, decision trees) to determine correlations between data that can indicate the quality of components presently and/or in the future, and to identify data correlations that are not indicators of component quality. Such machine learning algorithms can be run across data for each individual building, for groups of buildings (e.g., buildings located in the same geographical region, buildings governed by one or more similar building codes), and/or for all buildings to generate multiple different data models correlating component data (as described above) to differing levels of quality. Component-based ratings for a building can be generated using one or more models (e.g., individual building models, group-based building mod