DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status
This action is in response to the amendment filed on 11/25/2025. Claims 1, 5-7, 11-16 are pending. Claim 1 is amended. No claims have been added. No claims are currently cancelled.
Response to Arguments
Applicant's arguments filed 11/25/2025 have been fully considered but are not persuasive. The applicant has argued against the 101 rejection, specifically: “The particular inputs to the calculation of how to split the large processing task into several smaller processing subtasks is a novel approach useful when the size of the problem exceeds the capability of any one node and where the processing task needs to be completed relatively quickly in order to be useful. Paragraphs [0009] through [0013] of the original specification note how timely processing of real-estate data may prevent some of the distressed sales that occur near a real-estate valuation bubble peak or burst due to lack of current information.” The examiner respectfully disagrees. Merely choosing a storage location or rearranging data to manage space is a routine computing function that does not provide a technical solution to a technical problem. Moving data based on size thresholds is a known data-processing activity that does not constitute an inventive concept. To be eligible, the claim must improve the functioning of the computer itself. A simple diversion strategy based on storage size does not make the computer function better; it merely uses the computer as a tool to manage storage capacity. The claim relies on conventional hardware (memory, processor, interface) to perform standard tasks. The claims attempt to monopolize the general concept of diverting data to save space. As such, the claim merely uses a computer as a tool to perform a business practice, failing to provide an inventive concept.
The applicant argues the claims in view of BASCOM, specifically: “Under Step 2A, Prong 2, the presently pending claims recite a specific, practical application for predicting real estate values. The specifically recited feature provides more than the abstract idea of "Certain Methods of Organizing Human Activities," and thus it integrates the abstract idea into a practical application consistent with MPEP §2106.05(e) (other meaningful limitations) and provides a service similar to the filtering discussed by the Federal Circuit in BASCOM: While the claims of the '606 patent are directed to the abstract idea of filtering content, BASCOM has adequately alleged that the claims pass step two of Alice's two-part framework. BASCOM has alleged that an inventive concept can be found in the ordered combination of claim limitations that transform the abstract idea of filtering content into a particular, practical application of that abstract idea. We find nothing on this record that refutes those allegations as a matter of law or justifies dismissal under Rule 12(b)(6). We therefore vacate the district court's order granting AT&T's motion to dismiss under FRCP 12(b)(6) and remand so that the case may proceed.” The examiner respectfully disagrees. The key fact in BASCOM was the presence of a structural change in “installation of a filtering tool at a specific location, remote from the end users, with customizable filtering features specific to each end user. This design gives the filtering tool both the benefits of a filter on a local computer and the benefits of a filter on the ISP server.” BASCOM, 827 F.3d at 1350. The instant claims have no analogous structural benefit.
The applicant has argued, “Like the invention claimed in McRO, the presently pending claims provide "a particular way to achieve a desired outcome," here, of dividing the workload into subtasks manageable by the nodes. Accordingly, for at least the foregoing reasons, Applicant respectfully submits that under Step 2A, Prong 2, all presently pending claims are directed to a practical application, and Applicant requests that the § 101 rejections be withdrawn.” The examiner respectfully disagrees. In McRO, the court held that, although the processes were previously performed by humans, “the traditional process and newly claimed method . . . produced . . . results in fundamentally different ways.” FairWarning v. Iatric Systems, 839 F.3d 1089, 1094 (Fed. Cir. 2016) (differentiating the claims at issue from those in McRO). In McRO, “it was the incorporation of the claimed rules not the use of the computer, that improved the existing technology process,” because the prior process performed by humans “was driven by subjective determinations rather than specific, limited mathematical rules.” 837 F.3d at 1314 (internal quotation marks, citation, and alterations omitted). In contrast, the claims of the instant application merely implement dividing storage for data. The applicant has not shown that the claimed process distributes data in a manner technologically different from the manner in which humans, albeit with less efficiency, distributed data before the claimed invention.
The claims in McRO were not directed to an abstract idea, but instead were directed to “a specific asserted improvement in computer animation, i.e., the automatic use of rules of a particular type.” The “claimed improvement [was] allowing computers to produce ‘accurate and realistic lip synchronization and facial expressions in animated characters’ that previously could only be produced by human animators.” The claimed rules in McRO transformed a traditionally subjective process performed by human artists into a mathematically automated process executed on computers. Applicant’s arguments are not found persuasive.
Claim 1 is broad enough to cover only two transactions, two users, and two merchants, and as such does not recite millions of transactions or “a massive amount” of data. The applicant is not improving upon big data, claiming at best that a large dataset is being stored and processed. The claimed invention simply utilizes a generic computer for the benefits that computers provide, i.e., computers are faster and more efficient, while failing to improve upon the technology. Not only does the applicant fail to claim big data, the broadest reasonable interpretation of the claims does not require “big data” to perform the steps of the invention. Further, the applicant has failed to claim or define “intelligent bigdata chunking.” The claims also do not recite additional elements sufficient to integrate the judicial exceptions into a practical application. Aside from the abstract idea of generating a prediction, the additional elements in the apparatus of claim 1 are a memory device, a network interface, and a processor configured to (a) communicate with remote sources of data pertaining to historical real estate values, (b) transmit requests, (c) calculate respective portions of the data for a plurality of network nodes according to the nodes’ workloads, (d) distribute the historical variable data, (e) receive the data from the nodes, (f) identify a plurality of peaks in the historical real estate values, and (g) generate a prediction of future peaks based on the data and transmit an alert that includes the prediction. The applicant is claiming something that may be barely over one terabyte but has support for “large data” being several terabytes. Clearly, the applicant is not claiming “a massive amount of data.” The claim is not limited to big data. The machines recited in the claims perform as commonly understood and routinely expected in the art. Applicant’s inventive concept still appears to lie completely in the abstract idea (judicial exception) itself.
It appears that all the features of the computer system perform as normal, generic features of a system. The Supreme Court’s concern that drives this “exclusionary principle” is pre-emption. Alice Corp., 573 U.S. at 216, 110 USPQ2d at 1980. The Court has held that a claim may not preempt abstract ideas, laws of nature, or natural phenomena, even if the judicial exception is narrow (e.g., a particular mathematical formula such as the Arrhenius equation). See, e.g., Mayo, 566 U.S. at 79-80, 86-87, 101 USPQ2d at 1968-69, 1971 (claims directed to “narrow laws that may have limited applications” held ineligible); Flook, 437 U.S. at 589-90, 198 USPQ at 197 (claims that did not “wholly preempt the mathematical formula” held ineligible). This is because such a patent would “in practical effect [] be a patent on the [abstract idea, law of nature or natural phenomenon] itself.” Benson, 409 U.S. at 71-72, 175 USPQ at 676. The concern over preemption was expressed as early as 1852. See Le Roy v. Tatham, 55 U.S. (14 How.) 156, 175 (1852) (“A principle, in the abstract, is a fundamental truth; an original cause; a motive; these cannot be patented, as no one can claim in either of them an exclusive right.”).
Applicant’s invention is directed to a “method for predicting real estate bubbles based on big data analytics.” The Specification explains that “[v]ast amounts of historical data may need to be digitally processed to produce a quality prediction of ebbs and flows in the real estate market.” Spec. ¶ 13. To accommodate the large amount of data used to predict peaks in real estate values, Applicant’s invention distributes the data among “a plurality of nodes on a network such that a size of a [data] portion assigned to a respective node is in accordance with a real-time workload of the respective node.” ¶ 14. Although the claims recite “computer systems and software to process the data to perform the claimed abstract idea steps, this implementing the abstract idea in the manner of ‘apply it’ and confining the abstract idea to a particular technological environment . . . does not provide ‘something more’ to make the claims patent eligible.”
Claim 1 recites a generic apparatus with a processor that generate[s] a prediction of a future peak in real estate values based at least partially on [a] plurality of previous peaks. Forming a prediction as recited in claim 1 involves making a judgment as to a future condition based on an evaluation of past conditions, which can be accomplished using the human mind. Accordingly, claim 1 recites an abstract idea, in the form of a mental process. Claim 1 involves using a processor. As explained in the 2019 Office Guidance, however, “[i]f a claim, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components, then it is still in the mental processes category unless the claim cannot practically be performed in the mind.” 84 Fed. Reg. at 52 n.14 (citing Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318 (Fed. Cir. 2016)). In the present case, claim 1 merely requires generating a prediction based on historical data. The dependent claims involve calculations and dividing the calculated values. The claims recite using a mathematical calculation, which is an abstract idea. See Office Guidance (84 Fed. Reg. at 52 (mathematical concepts include mathematical relationships and mathematical calculations)).
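For illustration, the recited peak identification and prediction can be sketched in a few lines (a minimal illustrative sketch only, not the claimed method; the local-maximum test and the mean-interval extrapolation are assumed simplifications introduced here):

```python
# Illustrative sketch (assumptions, not the claimed method): treat a year as a
# "peak" if its value exceeds both neighbors, then extrapolate the next peak
# year from the mean interval between previous peaks.

def find_peaks(values):
    # Indices whose value is a strict local maximum.
    return [i for i in range(1, len(values) - 1)
            if values[i] > values[i - 1] and values[i] > values[i + 1]]

def predict_next_peak(years, values):
    peaks = find_peaks(values)
    # Mean spacing between consecutive peaks, added to the last peak year.
    intervals = [years[b] - years[a] for a, b in zip(peaks, peaks[1:])]
    return years[peaks[-1]] + sum(intervals) // len(intervals)

years = [1990, 1995, 2000, 2005, 2010, 2015, 2020]
vals = [100, 180, 120, 200, 130, 210, 150]
# Peaks at 1995, 2005, 2015; mean interval 10 years -> predicted next peak 2025.
```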
Claims can recite a mental process even if they are claimed as being performed on a computer. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures “can be carried out in existing computers long in use, no new machinery being necessary.” 409 U.S. at 67, 175 USPQ at 675. See also Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699 (concluding that concept of “anonymous loan shopping” recited in a computer system claim is an abstract idea because it could be “performed by humans without a computer”). The applicant is merely using a computer as a tool to perform a mental process. An example of a case in which a computer was used as a tool to perform a mental process is Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699. The patentee in Mortgage Grader claimed a computer-implemented system for enabling borrowers to anonymously shop for loan packages offered by a plurality of lenders, comprising a database that stores loan package data from the lenders, and a computer system providing an interface and a grading module. The interface prompts a borrower to enter personal information, which the grading module uses to calculate the borrower’s credit grading, and allows the borrower to identify and compare loan packages in the database using the credit grading. 811 F.3d at 1318, 117 USPQ2d at 1695. The Federal Circuit determined that these claims were directed to the concept of “anonymous loan shopping,” which was a concept that could be “performed by humans without a computer.” 811 F.3d at 1324, 117 USPQ2d at 1699. Another example is Berkheimer v. HP, Inc., 881 F.3d 1360, 125 USPQ2d 1649 (Fed. Cir. 2018), in which the patentee claimed methods for parsing and evaluating data using a computer processing system.
The Federal Circuit determined that these claims were directed to mental processes of parsing and comparing data, because the steps were recited at a high level of generality and merely used computers as a tool to perform the processes. 881 F.3d at 1366, 125 USPQ2d at 1652-53.
Because claim 1 merely recites using the computer elements, including the processor, as tools to perform the abstract idea (the prediction), the computer elements in claim 1 do not integrate the abstract idea into a practical application. See Office Guidance (84 Fed. Reg. at 55 (example in which a judicial exception is not integrated into a practical application includes situation in which claim “merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea”)). The data gathering and organizing steps, and the step of transmitting an alert that includes the prediction, do not integrate the prediction into a practical application. See Office Guidance (84 Fed. Reg. at 55 n.31 (additional element that merely adds insignificant extra-solution activity to a judicial exception includes “mere data gathering such as a step of obtaining information about credit card transactions so that the information can be analyzed in order to detect whether the transactions were fraudulent”)). The previous 101 rejection is maintained.
In view of the previous 103 rejection, the applicant has argued, “Applicant respectfully submits that independent claims 1 and 7 are not taught by the cited art, either separately or in any combination. The art cited against the element:
such that a size of a portion assigned to a respective node is selected in accordance with a real-time workload of the respective node, wherein the calculating the respective portions of the historical variable data to each node comprises, in response to the respective real-time workloads, assigning time periods based on a size of the historical variable data covering a respective time period; is Zhang, page 345.
But Zhang does not teach balancing based on input data size. Nordstrom teaches dividing work among nodes, but the actual size of the data to be processed is not given in Nordstrom as a consideration. Even though Nordstrom teaches that "Ownership of the partitions 60 may also change in order to more evenly balance workload between the nodes 27" (column 6, lines 16 through 18), there is no teaching that the balancing can be due to the size of available memory compared to the size of data to be processed. Instead, the balancing could be based on comparative processing load. Therefore, the presently pending claims are patentable over the cited art and should be allowed.” The examiner respectfully disagrees. The claim itself does not include a limitation of balancing based on input data size. However, the prior art discloses workload balancing, focusing on using ordinal optimization and evolutionary algorithms to manage dynamic workloads, and aims to achieve efficient scheduling in elastic cloud environments. Zhang utilizes iterative ordinal optimization (IOO) and evolutionary algorithms to produce "good-enough" schedules quickly, which helps in balancing workloads across virtual clusters. Zhang further discloses optimizing task distribution among heterogeneous nodes, proposing an approach that involves distributing, or balancing, tasks across heterogeneous computing nodes based on resource demands and data constraints to improve throughput. Applicant’s arguments are not persuasive. The previous 103 rejection is maintained.
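For illustration, the disputed limitation — sizing each node's portion in accordance with its reported real-time workload — can be sketched as follows (a minimal sketch only; the node fields and the proportional-allocation rule are assumptions introduced here, not limitations of the claims or teachings of the cited art):

```python
# Illustrative sketch (assumption): assign data portions in proportion to each
# node's reported free capacity, so heavily loaded nodes receive smaller shares.

def partition_by_workload(total_size, nodes):
    """nodes: dicts with real-time 'free_mem' and 'cpus' reported by each node."""
    # Simple capacity score; neither the claims nor the cited art specify one.
    scores = [n["free_mem"] * n["cpus"] for n in nodes]
    total = sum(scores)
    # Each node's portion size is proportional to its share of total capacity.
    return [total_size * s // total for s in scores]

sizes = partition_by_workload(
    1_000_000,
    [{"free_mem": 8, "cpus": 4}, {"free_mem": 2, "cpus": 2}],
)
# The node with more free memory and available processors receives the larger portion.
```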
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 5-7, 11-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter: the claim(s) as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than an abstract idea. The claim(s) is/are directed to the abstract idea of receiving and manipulating historical data to generate a prediction, without significantly more.
Step 2A Prong 1
Claims 1 and 7 recite communicating historic data, transmitting requests, calculating portions of historical data, distributing the historical data, receiving historical data, identifying peaks in the historical data, generating a prediction of a future peak, and transmitting an alert, constituting an abstract idea based on “a mathematical formula,” a mental process, and/or certain methods of organizing human activity (managing personal behavior or interactions between individuals, including social activities). Specifically, the independent claims recite:
(a) mental process: as drafted, the claim recites the limitations of communicate, transmit, calculate, distribute, receive, identify, generate, and transmit data, which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “a network interface,” nothing in the claim precludes the steps from practically being performed in the mind; a user could use the data to create a prediction mentally. Forming a prediction as recited in the claim involves making a judgment as to a future condition based on an evaluation of past conditions, which can be accomplished using the human mind. The mere nominal recitation of a generic apparatus with generic components does not take the claim limitation out of the mental processes grouping. This limitation is a mental process.
(b) mathematical formula: The claim recites a mathematical concept (which can include mathematical relationships, mathematical formulas or equations, and mathematical calculations), in this case the calculation of historical variable data and a prediction of a future peak in real estate values. Thus, the claim recites a mathematical calculation. A claim that recites a mathematical calculation will be considered as falling within the “mathematical concepts” grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation; that is, a claim does not have to recite the word “calculating” in order to be considered a mathematical calculation. For example, a step of “determining” a variable or number using mathematical methods or “performing” a mathematical operation may also be considered a mathematical calculation under the broadest reasonable interpretation.
Dependent claims 5-6, 11-16 further narrow the abstract idea identified in the independent claims and do not introduce further additional elements for consideration.
Step 2A Prong 2
Independent claims 1 and 7 do not integrate the judicial exception into a practical application. Claim 1 is an apparatus comprising “a memory device; a network interface; at least one processor.” Claim 1 further recites the additional elements of “communicate via the network interface with remote data sources”, “transmit, via the network interface”, “distribute, by transmission via the network interface”, “receive, via the network interface”, and “nodes on a network.” Claim 7 is a method performed by an electronic device. Claim 7 further recites the additional elements of “communicating via the network interface with remote data sources”, “transmitting, via the network interface”, “distributing, by transmission via the network interface”, “receiving, via the network interface”, and “nodes on a network.” These additional elements are mere instructions to implement an abstract idea using a computer in its ordinary capacity, or merely use the computer as a tool to perform the identified abstract idea. Use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity), does not integrate a judicial exception into a practical application. See MPEP 2106.05(f).
Therefore, the additional elements of the independent claims, when considered both individually and in combination, are not sufficient to integrate the judicial exception into a practical application.
Dependent claims 5-6, 11-16 further narrow the abstract idea identified in the independent claims and do not introduce further additional elements for consideration, which does not integrate the judicial exception into a practical application.
Step 2B
Independent claims 1 and 7 do not recite anything significantly more than the judicial exception. As discussed above with respect to Step 2A, Prong 2, claim 1 is an apparatus comprising “a memory device; a network interface; at least one processor,” and further recites the additional elements of “communicate via the network interface with remote data sources”, “transmit, via the network interface”, “distribute, by transmission via the network interface”, “receive, via the network interface”, and “nodes on a network.” Claim 7 is a method performed by an electronic device and further recites the additional elements of “communicating via the network interface with remote data sources”, “transmitting, via the network interface”, “distributing, by transmission via the network interface”, “receiving, via the network interface”, and “nodes on a network.” These additional elements are mere instructions to implement an abstract idea using a computer in its ordinary capacity, or merely use the computer as a tool to perform the identified abstract idea. Use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity), is not anything significantly more than the judicial exception. See MPEP 2106.05(f).
The additional elements of the independent claims, when considered both individually and in combination, do not comprise anything significantly more than the judicial exception.
Dependent claims 5-6, 11-16 further narrow the abstract idea identified in the independent claims and do not introduce further additional elements for consideration, which is not anything significantly more than the judicial exception.
Therefore, based on the above analysis conducted under MPEP 2106 of the United States Patent and Trademark Office, the claims are directed to a court-recognized abstract idea, i.e., a judicial exception; the claims do not integrate that exception into a practical application, do not provide significantly more, and do not provide an inventive concept. The claims are therefore ineligible.
The additional elements of the dependent claims, when considered both individually and in the context of the independent claims, are not anything significantly more than the judicial exception.
Accordingly, claims 1, 5-7, 11-16 are rejected under 35 USC 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 5-7, 11-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fleming (US 20130282596 A1) in view of Nordstrom et al. (US 8601112 B1) in view of Florance et al. (US 20050203768 A1) in view of Roehner, “Real estate price peaks: a comparative overview” (2006) (hereinafter Roehner), in view of F. Zhang et al., "Evolutionary Scheduling of Dynamic Multitasking Workloads for Big-Data Analytics in Elastic Cloud" (2014) (hereinafter Zhang).
Regarding claim 1, Fleming teaches a memory device; a network interface; at least one processor (Fig. 2, ¶ 24, 33, 61);
communicate via the network interface with remote data sources containing historical variable data associated with real estate assets, the historical variable data being stored in a plurality of diverse data sets (¶ 24, discloses the use of a network interface and remote data sources. 28-31, discloses the use of historical property data and assets and a plurality of different data sets);
identify a plurality of previous peaks in the historical real estate values based at least partially on the historical real estate values received from the plurality of nodes (¶ 28-31, discloses the use and manipulation of historical data. ¶ 44, discloses various relevant historical real estate values. ¶ 20, discloses a peak value of distribution. ¶ 29, Fig. 1);
and transmit an alert comprising a prediction (Fig. 2, discloses an appraisal analytics system. ¶ 56, discloses the sending/ communicating of the accessed (predicted) values. ¶ 50, discloses the reporting of predicted values. ¶ 6.)
Fleming does not specifically teach, but Nordstrom teaches, wherein a total size of the historical variable data is larger than an available size in the memory device (col. 5, lines 30-60, discloses the use of a larger number of partitions than expected. col. 6, lines 10-43, discloses a load that is overloaded. Col. 9, lines 9-60, In another exemplary embodiment, connections between the nodes 27 and the computers 12 may change in order to more evenly balance workload between the nodes 27. For example, a node 27 that is heavily loaded may stop accepting new connections and/or drop existing connections with one or more of the computers 12. When rejecting or dropping a connection, the node 27 may first confirm that one or more other nodes 27 exist which have capacity to take on additional load, and then provide the computer 15 with a list of alternative nodes 27.);
transmit, via the network interface, requests respectively to each of a plurality of nodes on a network to identify respective real-time workloads of each of the plurality of nodes, and receive at least the respective real-time workloads from each of the plurality of nodes, including current available memory and a number of available processors (col. 5, lines 32-63, discloses the storing and sending of the current state of the system. Col. 9, lines 9-60, discloses the current available memory and processors receiving real-time workloads. Col. 17, line 55 – col. 18, line 20, Col. 10, line 37- col. 11, line 10, col. 12, lines 8-65, col. 5, line 63 – col. 6, line 42, col. 17, lines 37-50);
calculate, for each of the plurality of nodes, respective portions of the historical variable data via the network interface to the plurality of nodes on the network, such that a size of a portion assigned to a respective node is selected in accordance with a real-time workload of the respective node (col. 23, lines 1-30, discloses calculating a summation of the historical data and performing and analysis. For example, a calculation descriptor may be inserted in the calculation table 48 which creates a data pipe-time-series processor pair configured to obtain the per-minute summations from the data repository 20 and to compute an overall total. Of course, this computation could instead be performed in real time by inserting the calculation descriptor in the calculation table 48 while the data messages are still being received. Col. 5, lines 32-62, discloses data collection and analysis including the distribution of data across nodes and assignments to the various nodes in real time. Col. 12, lines 7-26, col. 16, lines 1-20, col. 17, lines 39-50, col. 23, lines 1-45, Fig. 4);
distribute, by transmission via the network interface, the respective portions of the historical variable data to the plurality of nodes on the network for processing (col. 5, lines 32-62, discloses distributing a workload across a plurality of nodes for processing. Col. 23, lines 1-45, discloses sending data messages to a node processing data. Sorting incoming data messages and the historical analysis of data. Col. 11, line 55- col. 12, line 7, discloses data pipe-time-series processing and construction of data including historical data. Col. 19, lines 10-26, col. 21, line 65- col. 22, line 36).
receive, via the network interface (col. 17, lines 35-50, As previously described, a datapipe-time-series processor pair 64 may be created that is configured to receive information useable to create calculation descriptors from the user computers 18. In an exemplary embodiment, a graphical user interface (GUI) application may be configured to provide a web-based interface to receive information useable to create calculation descriptors. The website may, for example, include instructions on how to generate calculation descriptors, such that the calculation descriptors may specify virtually any calculation that may be conceived of by a user and for which the requisite precursor inputs are available from the data source computers 12.)
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to select the size of a portion assigned to a respective node in accordance with a real-time workload of the respective node, as taught/suggested by Nordstrom. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities; namely, they are directed to the specifics of work processing. One of ordinary skill in the art would have recognized that applying the known technique of Nordstrom would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Nordstrom to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such portion-size selection features into similar systems. Further, selecting a portion size in accordance with a real-time workload of the respective node would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow for more focused processing of data and real-time selection.
The combination of Fleming and Nordstrom does not specifically teach wherein a total size of the historical variable data exceeds one terabyte. However, Florance discloses:
wherein a total size of the historical variable data exceeds one terabyte (¶ 683).
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to include/perform wherein a total size of the historical variable data exceeds one terabyte, as taught/suggested by Florance. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to collection, distribution and use of information in connection with real estate. One of ordinary skill in the art would have recognized that applying the known technique of Florance would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Florance to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such specific size based data features into similar systems. Further, applying wherein a total size of the historical variable data exceeds one terabyte would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow the user to manage their storage proactively.
Fleming does not specifically teach generating a prediction of future peaks. However, Roehner discloses:
receive historical real estate values from the plurality of nodes that are based at least partially on the distributed portions of the historical variable data (pg. 1, First, we emphasize that the real estate price peaks which are currently under way in many industrialized countries (one important exception is Japan) share many of the characteristics of previous historical price peaks. In particular, we show that: (i) In the present episode real price increases are, at least for now, of the same order of magnitude as in previous episodes, typically of the order of 80% to 100% . (ii) Historically, price peaks turned out to be symmetrical with respect to the peak; soft landing, i.e. an upgoing phase followed by a plateau, has rarely (if ever) been observed. (iii) The inflated demand is mainly boosted by investors and high-income buyers. (iv) In the present as well as in previous episodes, the main engines in the upgoing phase have been the “hot” markets which developed in major cities such as London, Los Angeles, New York, Paris, San Francisco or Sydney. In our conclusion, we propose a prediction for real estate prices in the West of the United States over the period 2005-2011. We also point out that investment funds, which already play a key role in stock markets, have in recent times began to heavily invest in real estate. In the future, one can expect them to become major players in property markets worldwide. The outcome of the present episode will tell us how quickly this transformation evolves. Thus, if the height of the present peak substantially surpasses the magnitude of previous ones, one may infer that investment funds have
been able to establish strong communication channels between real estate assets on the one hand and
financial assets (e.g. bonds, stocks, options) on the other hand. Pg. 2, The three previous peaks provide possible guiding lines as to the future of the fourth peak that is currently under way. Naturally, there is no absolute certitude that the present peak will unfold as the previous ones; there are mainly two new factors (i) The present peak has a bigger amplitude and duration than the previous ones; in itself this would probably not preclude a repetition of the previous scenarios (ii) Investment funds (in which we include pension funds, equity funds, hedge funds) have taken a much greater part in the present episode than in previous ones. This explains the exceptional size of the peak but, as these institutions can mobilize much larger amounts of capital than the real estate companies which operated in previous episodes. Pg. 4, Other historical examples of real estate price peaks are given in Fig. 2b, 3 and 4. Fig. 2b is of interest because it emphasizes the similarity in the shape of two peaks which occurred in different time periods and in distant countries. In the same way, Fig. 3 points out the close parallelism between the peaks of 1889 and 1929: they have the same amplitude (amplitude of a peak being defined as the ratio (peak price) / (initial price)) and almost the same duration. Fig. 4 shows the 1985-1995 price peak in Britain (more will be said about this case in section 4).
identify a plurality of previous peaks in the historical real estate values based at least partially on the historical real estate values received from the plurality of nodes (pg. 8, The major role played by big cities is illustrated by the fact that in the United States there have been price peaks of an amplitude greater than two almost only in the North-East (Boston, New York) and in California (Los Angeles, San Francisco) and marginally in the Chicago Metropolitan Area. Fig. 4 further explains how price increases spread from big cities to neighboring areas. The highest curve corresponds to London; as can be seen the time lag between the maximum in London and the maxima in northern counties is comprised between one and two years. This observation has interesting practical implications. It means that the market downturn can first be observed in the hottest places. For instance, during the third and fourth quarter of 2004, prices in London have been falling, whereas they were still increasing in the rest of Britain (albeit at a slower rate than earlier). This seems to suggest that by July 2005 (at time of writing) Britain already was in the downward phase of the price peak. The same conclusion holds for Australia, where Sydney (which is the hottest market) has seen declining real estate prices since 2004. The fact that price increases are positively correlated with prices at the beginning of the price peak is illustrated in Fig. 7 in the case of the Western region of the United States. Thus, the average price in Las Vegas, which was $ 120,000 in 1995 had been multiplied by 1.2 in 2002, whereas the average price in San Francisco, namely $ 280,000 in 1995, had been multiplied by 1.8 in 2002. Additional evidence about the price multiplier effect can be found in Roehner (2000), Roehner (2001, chapter 6), Maslov et al. (2003), and Roehner (2004). Fig. 6-9).
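Purely as an illustration of the claimed peak-identification step, the passage above can be modeled as locating local maxima in a historical value series and filtering by Roehner's amplitude measure, defined in the quoted text as (peak price) / (initial price). The function name and threshold below are hypothetical and do not appear in the cited reference.

```python
# Hypothetical sketch: identify previous peaks in a historical value series
# as local maxima, filtered by amplitude = (peak price) / (initial price)
# per the definition quoted from Roehner. Threshold value is illustrative.

def find_peaks(values, min_amplitude=1.0):
    peaks = []
    for i in range(1, len(values) - 1):
        if values[i] > values[i - 1] and values[i] > values[i + 1]:
            amplitude = values[i] / values[0]
            if amplitude >= min_amplitude:
                peaks.append((i, values[i], amplitude))
    return peaks

series = [100, 120, 180, 150, 130, 170, 240, 200]
print(find_peaks(series, min_amplitude=1.5))  # → [(2, 180, 1.8), (6, 240, 2.4)]
```

This corresponds to Roehner's observation that peaks of amplitude greater than two occurred almost only in a few "hot" metropolitan markets.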
generate a prediction of a future peak in real estate values based at least partially on the plurality of previous peaks (Fig. 6-9, pg. 8-11, In previous papers we described speculative price peaks by a function of the form:
[Equation reproduced as image media_image1.png in the original document.]
where p2, t2 denote the peak-price and peak-time respectively; and α or Ƭ are two adjustable parameters. In the present case it turns out that the exponents are almost equal to 1, namely 1970-1982: α = 0.99, 1982-1992: α = 1.06; the parameters Ƭ turn out to be almost the same as well, namely of the order of 13.5 quarters (i.e. 3.3 years). As a result, one is encouraged to model the downgoing path of the current episode by the same parameters. This leads to the dotted line projection in Fig. 97. Needless to say, this projection rests on the assumption that there is no fundamental change with respect to the two previous episodes. In particular, we assume that in spite of their expanding assets, investment funds will not be able to rule property markets in coming years to the same extent as they are able to direct stock markets. Apart from their own specific interest, speculative episodes in property markets are also of great value because they are similar to, but simpler than, speculative episodes in stock markets. The first point, the similarity of price peaks in property versus stock has been briefly summarized above (more details can be found in Roehner 2004a). The fact that property markets are “simpler” than stock markets can be attributed to the following circumstances. (i) Transactions take much longer in real estate than in stocks, typically one or two months compared to one or two minutes; as a result property prices are subject to only low frequency shocks whereas stock prices are subject to shocks whose frequency spectrum extends over several orders of magnitude (from 1/minute to 1/year). (ii) Most of the financial instruments available on stock markets (such as for instance options, futures, convertible bonds) do not (yet) exist in property markets. (iii) As shown in a former paper (Roehner 2004b), the strategy of big investment funds have a determinant impact on the price of stocks. Such funds also have much influence in commercial real estate; in contrast, their involvement in residential real estate has so far been smaller although this situation may change in the next decades.)
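The exact functional form in Roehner is reproduced in the record only as an image. Purely for illustration, the sketch below assumes a decay of the form p(t) = p2 · exp(−((t − t2)/τ)^α), which is consistent with the quoted parameter values (α ≈ 1, τ ≈ 13.5 quarters) but should not be taken as the exact equation in the reference.

```python
import math

# Hypothetical sketch only: the functional form here is an assumption made
# for illustration; the record shows the equation only as an image. We use
# p(t) = p2 * exp(-((t - t2)/tau)**alpha) with the quoted parameter values.

def projected_price(t, p2, t2, tau=13.5, alpha=1.0):
    """Project the post-peak price t quarters after the peak time t2."""
    if t <= t2:
        return p2
    return p2 * math.exp(-(((t - t2) / tau) ** alpha))

# Peak price 100 at quarter 0; with alpha = 1 the price falls to roughly
# 1/e of the peak about tau = 13.5 quarters (3.3 years) later.
print(round(projected_price(13.5, p2=100, t2=0), 1))  # → 36.8
```

With α ≈ 1 the assumed form reduces to a simple exponential decay, which matches the quoted remark that the two previous episodes shared nearly identical parameters.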
The combination of Fleming and Roehner discloses transmitting an alert comprising the prediction. Fleming discloses alerting with real estate information while Roehner discloses the prediction.
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to include/perform generating a prediction of a future peak in real estate values based at least partially on the plurality of previous peaks, as taught/suggested by Roehner. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to the evaluation and valuation of properties. One of ordinary skill in the art would have recognized that applying the known technique of Roehner would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Roehner to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such peak prediction features into similar systems. Further, applying the generation of a prediction of a future peak in real estate values based at least partially on the plurality of previous peaks would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow the user to predict when the next real-estate bubble might occur. This would be useful information for an investor.
Fleming does not specifically teach assigning time periods based on a size of the historical variable data covering a respective time period. However, Zhang discloses:
wherein the calculating the respective portions of the historical variable data to each node comprises, in response to the respective real-time workloads, assigning time periods based on a size of the historical variable data covering a respective time period (pg. 345, Scenario 2: Simulation while execution. In the lower part of Fig. 8, simulation and execution stages overlap. At time t0, a random schedule is used for the time being, while the workload information between [t1, t2] is used to generate a better quality schedule to be used at time t1. Similar scheduling is conducted repeatedly until the end of experiment. Most of the real-time multitasking scheduling applications are carried out based on Scenario 2. pg. 338, Scheduling of dynamic and multitasking workloads for big-data analytics is a challenging issue, as it requires a significant amount of parameter sweeping and iterations. Therefore, real-time scheduling becomes essential to increase the throughput of many-task computing. The difficulty lies in obtaining a series of optimal yet responsive schedules. In dynamic scenarios, such as virtual clusters in cloud, scheduling must be processed fast enough to keep pace with the unpredictable fluctuations in the workloads to optimize the overall system performance. In this paper, ordinal optimization using rough models and fast simulation is introduced to obtain suboptimal solutions in a much shorter timeframe. While the scheduling solution for each period may not be the best, ordinal optimization can be processed fast in an iterative and evolutionary way to capture the details of big-data workload dynamism. Experimental results show that our evolutionary approach compared with existing methods, such as Monte Carlo and Blind Pick, can achieve higher overall average scheduling performance, such as throughput, in real-world applications with dynamic workloads. 
Furthermore, performance improvement is seen by implementing an optimal computing budget allocating method that smartly allocates computing cycles to the most promising schedules. Pg. 340, In Fig. 2, we show stacked workloads that are dispatched to the C VCs in a timeline. From ti−1 (or also named ti−1,0), schedule θ(ti−1) is applied until ti (or ti,0), where a new schedule θ(ti) is used. The new schedule θ(ti) is generated during the previous simulation stage from ti−1 to ti. This stage is named Si−1 as shown in the figure. Between ti and ti+1, new workloads (at time points ti,1, ti,2, . . .) arrive that are also shown in the figure. Therefore, the dynamic workload scheduling model is built on such a sequentially overlapped simulation-execution phases. In each phase, one schedule is applied and in the meanwhile, the workloads of the subsequent stage are analyzed to simulate and generate the schedule of the following stages. Pg. 341, For example, the table tells that the execution time is a normal distribution (X~N(20, 5²)) when running [100, 110] tasks of task class two with 10 VMs, each being small Amazon EC2 instance, failure rate being 0 and utilization being in a range of [70%, 80%]. Then, before each experiment runs, if we have profiled the average VM utilization is 75%, and there are 105 tasks and 10 VMs, then we must sample the normal distribution above to estimate pc(ti), and apply this sample value to estimate the throughput of each schedule, and finally choose the best schedule for use. Pg. 342, 1) Monte Carlo Method: Suppose that the overhead of using the Monte Carlo method is denoted by OM, which is the length of the scheduling period. Thereafter, we search through all of the time points between ti and ti+1, when new workloads are generated, to calculate the CET for each VC as shown in Line 14 and 18. This step is followed by calculating the EET during the scheduling period, as shown in Line 20.
Following this, the Makespan and throughput are calculated. 2) Blind Pick Method: Instead of searching through the whole schedule space U as done in the Monte Carlo method, the Blind Pick (BP) method randomly selects a portion of the schedules only within U for evaluation. The ratio to be selected is defined by a value α (0 ≤ α ≤ 1). The algorithm of applying BP is exactly the same as Monte Carlo, except for the scheduling sample space U and scheduling overhead. The length of the scheduling period is α × OM. 3) Iterative Ordinal Optimization Method: The OO, a suboptimal low overhead scheduling method is thoroughly introduced in [15]. Different from the aforementioned methods, the OO uses a rough model (n repeated runs) to generate a rough order of all of the schedules in U, and uses the accurate model (N repeated runs, n ≪ N) to evaluate the top s schedules in the rough order list. Pg. 342-343, Section A. EVOLUTIONARY ORDINAL OPTIMIZATION discloses the scheduling periods based on similarity values. Pg. 344-345, Section A. EXPERIMENTAL SETTINGS discloses carrying out cloud experiments. Pg. 349.)
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to include/perform assigning time periods based on a size of the historical variable data covering a respective time period, as taught/suggested by Zhang. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to the specifics of work processing. One of ordinary skill in the art would have recognized that applying the known technique of Zhang would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Zhang to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such assigned time period features into similar systems. Further, applying assigning time periods based on a size of the historical variable data covering a respective time period would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow the user to schedule the processing of the data.
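The time-period-assignment limitation discussed above may be illustrated with a short sketch. The greedy assignment strategy, period granularity, and names below are hypothetical; they are not taken from Zhang or Fleming, and serve only to show how time periods could be matched to nodes by the data volume each period covers.

```python
# Hypothetical sketch: assign contiguous time periods to nodes so that the
# data volume covered by each node's periods roughly matches its share.
# Greedy, contiguous assignment; all names and units are illustrative.

def assign_periods(period_sizes, node_shares):
    """period_sizes: data volume per time period, in time order.
    node_shares: target volume per node. Returns one (start, end) period
    index range per node."""
    assignments, start = [], 0
    for share in node_shares:
        end, filled = start, 0
        while end < len(period_sizes) and filled + period_sizes[end] <= share:
            filled += period_sizes[end]
            end += 1
        # Ensure progress even if a single period exceeds the share.
        if end == start and end < len(period_sizes):
            end += 1
        assignments.append((start, end))
        start = end
    # The final node absorbs any remaining periods.
    if assignments and start < len(period_sizes):
        assignments[-1] = (assignments[-1][0], len(period_sizes))
    return assignments

sizes = [10, 20, 30, 25, 15]   # data volume per quarter (hypothetical units)
print(assign_periods(sizes, node_shares=[35, 35, 30]))  # → [(0, 2), (2, 3), (3, 5)]
```

Each node thus receives a span of time periods whose aggregate data size tracks that node's capacity, analogous to assigning time periods based on the size of the historical variable data covering each period.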
Regarding claims 5 and 11, the combination of Fleming and Nordstrom teaches the technical details of the financial modeling. Fleming does not teach the following limitation, which is taught by Nordstrom:
wherein the size of the portion assigned to the respective node is selected according to the calculation of the respective sizes (col. 7, lines 15-67, discloses data calculation as well as routing to specific data points. Col. 8, line 49 – col. 9, line 8, col. 9, line 9- col. 10, line 31, col. 12, line 65- col. 13, line 27).
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to include/perform calculating respective sizes of portions of the historical variable data for processing, as taught/suggested by Nordstrom. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to the specifics of work processing. One of ordinary skill in the art would have recognized that applying the known technique of Nordstrom would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Nordstrom to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such calculating features into similar systems. Further, applying the calculating of respective sizes of portions of the historical variable data for processing would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow for more focused processing of information related to the historical variable data.
Regarding claims 6 and 12, the combination of Fleming and Nordstrom teaches the technical details of the financial modeling. Fleming does not teach the following limitation, which is taught by Nordstrom:
wherein the processor is further to: divide the historical variable data into a plurality of assigned portions based on the calculated respective sizes of the portions (col. 9, lines 20-37, discloses dividing portions among the nodes based on how heavy the loads are. col. 11, line 55- col. 12, line 7, discloses pairing historical data based on calculated sizes. Col. 21, lines 25-40, col. 19, lines 10-25, col. 23, lines 1-45).
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to include/perform dividing the historical variable data into a plurality of assigned portions based on the calculated respective sizes of the portions, as taught/suggested by Nordstrom. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to the specifics of work processing. One of ordinary skill in the art would have recognized that applying the known technique of Nordstrom would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Nordstrom to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such dividing features into similar systems. Further, applying the dividing of the historical variable data into a plurality of assigned portions based on the calculated respective sizes of the portions would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow for more focused processing of information to assist in the rebalancing.
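For illustration of the dividing limitation only, once respective sizes have been calculated, the division itself can be sketched as slicing the data into portions of those lengths. The function name and byte-level representation below are hypothetical and are not drawn from the cited references.

```python
# Hypothetical sketch: divide a flat historical data buffer into portions
# whose lengths follow previously calculated sizes (names illustrative).

def divide_data(data, portion_sizes):
    portions, offset = [], 0
    for size in portion_sizes:
        portions.append(data[offset:offset + size])
        offset += size
    # Any trailing bytes go to the final portion.
    if offset < len(data):
        portions[-1] += data[offset:]
    return portions

buf = bytes(range(10))
print([len(p) for p in divide_data(buf, [5, 3, 2])])  # → [5, 3, 2]
```

Each resulting portion can then be distributed to its assigned node, consistent with the claim's division of the historical variable data based on the calculated respective sizes.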
Regarding claim 7, Fleming teaches
communicating, via a network interface with remote data sources containing historical variable data associated with real estate assets, the historical variable data being stored in a plurality of diverse data sets (¶ 24, discloses the use of a network interface and remote data sources. 28-31, discloses the use of historical property data and assets and a plurality of different data sets);
identifying, via the at least one processor, a plurality of previous peaks in the historical real estate values based at least partially on the historical real estate values received from the plurality of nodes (¶ 28-31, discloses the use and manipulation of historical data. ¶ 44, discloses various relevant historical real estate values. ¶ 20, discloses a peak value of distribution. ¶ 29, Fig. 1);
and transmitting an alert comprising a prediction (Fig. 2, discloses an appraisal analytics system. ¶ 56, discloses the sending/ communicating of the accessed (predicted) values. ¶ 50, discloses the reporting of predicted values. ¶ 6.).
Fleming does not specifically teach the following, which is taught by Nordstrom:
wherein a total size of the historical variable data is larger than an available size in the memory device (col. 5, lines 30-60, discloses the use of a larger number of partitions than expected. col. 6, lines 10-43, discloses a load that is overloaded. Col. 9, lines 9-60, In another exemplary embodiment, connections between the nodes 27 and the computers 12 may change in order to more evenly balance workload between the nodes 27. For example, a node 27 that is heavily loaded may stop accepting new connections and/or drop existing connections with one or more of the computers 12. When rejecting or dropping a connection, the node 27 may first confirm that one or more other nodes 27 exist which have capacity to take on additional load, and then provide the computer 15 with a list of alternative nodes 27.);
transmitting, via the network interface, requests respectively to each of a plurality of nodes on a network to identify respective real-time workloads of each of the plurality of nodes, and receive at least the respective real-time workloads from each of the plurality of nodes, including current available memory and a number of available processors (col. 5, lines 32-63, discloses the storing and sending of the current state of the system. Col. 9, lines 9-60, discloses the current available memory and processors receiving real-time workloads of available processors. Col. 17, line 55 – col. 18, line 20, Col. 10, line 37- col. 11, line 10, col. 12, lines 8-65, col. 5, line 63 – col. 6, line 42, col. 17, lines 37-50);
calculating, via at least one processor, for each of the plurality of nodes, respective portions of the historical variable data via the network interface to the plurality of nodes on the network, such that a size of a portion assigned to a respective node is selected in accordance with a real-time workload of the respective node (col. 23, lines 1-30, discloses calculating a summation of the historical data and performing an analysis. For example, a calculation descriptor may be inserted in the calculation table 48 which creates a data pipe-time-series processor pair configured to obtain the per-minute summations from the data repository 20 and to compute an overall total. Of course, this computation could instead be performed in real time by inserting the calculation descriptor in the calculation table 48 while the data messages are still being received. Col. 5, lines 32-62, discloses data collection and analysis including the distribution of data across nodes and assignments to the various nodes in real time. Col. 12, lines 7-26, col. 16, lines 1-20, col. 17, lines 39-50, col. 23, lines 1-45, Fig. 4);
distributing, by transmission via the network interface, the respective portions of the historical variable data to the plurality of nodes on the network for processing (col. 5, lines 32-62, discloses distributing a workload across a plurality of nodes for processing. Col. 23, lines 1-45, discloses sending data messages to a node processing data. Sorting incoming data messages and the historical analysis of data. Col. 11, line 55- col. 12, line 7, discloses data pipe-time-series processing and construction of data including historical data. Col. 19, lines 10-26, col. 21, line 65- col. 22, line 36).
receiving, via the network interface (col. 17, lines 35-50, As previously described, a datapipe-time-series processor pair 64 may be created that is configured to receive information useable to create calculation descriptors from the user computers 18. In an exemplary embodiment, a graphical user interface (GUI) application may be configured to provide a web-based interface to receive information useable to create calculation descriptors. The website may, for example, include instructions on how to generate calculation descriptors, such that the calculation descriptors may specify virtually any calculation that may be conceived of by a user and for which the requisite precursor inputs are available from the data source computers 12.)
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to include/perform selection of a respective node in accordance with a real-time workload of the respective node, as taught/suggested by Nordstrom. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to the specifics of work processing. One of ordinary skill in the art would have recognized that applying the known technique of Nordstrom would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Nordstrom to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such portion-size selection features into similar systems. Further, applying selection of a respective node in accordance with a real-time workload of the respective node would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow for more focused processing of data and selection of nodes in real time.
The combination of Fleming and Nordstrom does not specifically teach wherein a total size of the historical variable data exceeds one terabyte. However, Florance discloses:
wherein a total size of the historical variable data exceeds one terabyte (¶ 683).
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to include/perform wherein a total size of the historical variable data exceeds one terabyte, as taught/suggested by Florance. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to collection, distribution and use of information in connection with real estate. One of ordinary skill in the art would have recognized that applying the known technique of Florance would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Florance to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such specific size based data features into similar systems. Further, applying wherein a total size of the historical variable data exceeds one terabyte would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow the user to manage their storage proactively.
Fleming does not specifically teach generating a prediction of future peaks. However, Roehner discloses:
receiving, via the network interface, historical real estate values from the plurality of nodes that are based at least partially on the distributed portions of the historical variable data (pg. 1, First, we emphasize that the real estate price peaks which are currently under way in many industrialized countries (one important exception is Japan) share many of the characteristics of previous historical price peaks. In particular, we show that: (i) In the present episode real price increases are, at least for now, of the same order of magnitude as in previous episodes, typically of the order of 80% to 100% . (ii) Historically, price peaks turned out to be symmetrical with respect to the peak; soft landing, i.e. an upgoing phase followed by a plateau, has rarely (if ever) been observed. (iii) The inflated demand is mainly boosted by investors and high-income buyers. (iv) In the present as well as in previous episodes, the main engines in the upgoing phase have been the “hot” markets which developed in major cities such as London, Los Angeles, New York, Paris, San Francisco or Sydney. In our conclusion, we propose a prediction for real estate prices in the West of the United States over the period 2005-2011. We also point out that investment funds, which already play a key role in stock markets, have in recent times began to heavily invest in real estate. In the future, one can expect them to become major players in property markets worldwide. The outcome of the present episode will tell us how quickly this transformation evolves. Thus, if the height of the present peak substantially surpasses the magnitude of previous ones, one may infer that investment funds have been able to establish strong communication channels between real estate assets on the one hand and financial assets (e.g. bonds, stocks, options) on the other hand. Pg. 2, The three previous peaks provide possible guiding lines as to the future of the fourth peak that is currently under way. 
Naturally, there is no absolute certitude that the present peak will unfold as the previous ones; there are mainly two new factors (i) The present peak has a bigger amplitude and duration than the previous ones; in itself this would probably not preclude a repetition of the previous scenarios (ii) Investment funds (in which we include pension funds, equity funds, hedge funds) have taken a much greater part in the present episode than in previous ones. This explains the exceptional size of the peak but, as these institutions can mobilize much larger amounts of capital than the real estate companies which operated in previous episodes. Pg. 4, Other historical examples of real estate price peaks are given in Fig. 2b, 3 and 4. Fig. 2b is of interest because it emphasizes the similarity in the shape of two peaks which occurred in different time periods and in distant countries. In the same way, Fig. 3 points out the close parallelism between the peaks of 1889 and 1929: they have the same amplitude (amplitude of a peak being defined as the ratio (peak price) / (initial price)) and almost the same duration. Fig. 4 shows the 1985-1995 price peak in Britain (more will be said about this case in section 4).
identifying, via the at least one processor, a plurality of previous peaks in the historical real estate values based at least partially on the historical real estate values received from the plurality of nodes (pg. 8, The major role played by big cities is illustrated by the fact that in the United States there have been price peaks of an amplitude greater than two almost only in the North-East (Boston, New York) and in California (Los Angeles, San Francisco) and marginally in the Chicago Metropolitan Area. Fig. 4 further explains how price increases spread from big cities to neighboring areas. The highest curve corresponds to London; as can be seen the time lag between the maximum in London and the maxima in northern counties is comprised between one and two years. This observation has interesting practical implications. It means that the market downturn can first be observed in the hottest places. For instance, during the third and fourth quarter of 2004, prices in London have been falling, whereas they were still increasing in the rest of Britain (albeit at a slower rate than earlier). This seems to suggest that by July 2005 (at time of writing) Britain already was in the downward phase of the price peak. The same conclusion holds for Australia, where Sydney (which is the hottest market) has seen declining real estate prices since 2004. The fact that price increases are positively correlated with prices at the beginning of the price peak is illustrated in Fig. 7 in the case of the Western region of the United States. Thus, the average price in Las Vegas, which was $ 120,000 in 1995 had been multiplied by 1.2 in 2002, whereas the average price in San Francisco, namely $ 280,000 in 1995, had been multiplied by 1.8 in 2002. Additional evidence about the price multiplier effect can be found in Roehner (2000), Roehner (2001, chapter 6), Maslov et al. (2003), and Roehner (2004). Fig. 6-9).
generating, via the at least one processor, a prediction of a future peak in real estate values based at least partially on the plurality of previous peaks (Fig. 6-9, pg. 8-11, In previous papers we described speculative price peaks by a function of the form:
[Equation rendered as an image in the original record (media_image1.png): Roehner's peak-price function, whose parameters p2, t2, α, and τ are defined in the text that follows.]
where p2, t2 denote the peak-price and peak-time respectively; and α and τ are two adjustable parameters. In the present case it turns out that the exponents are almost equal to 1, namely 1970-1982: α = 0.99, 1982-1992: α = 1.06; the parameters τ turn out to be almost the same as well, namely of the order of 13.5 quarters (i.e. 3.3 years). As a result, one is encouraged to model the downgoing path of the current episode by the same parameters. This leads to the dotted line projection in Fig. 9. Needless to say, this projection rests on the assumption that there is no fundamental change with respect to the two previous episodes. In particular, we assume that in spite of their expanding assets, investment funds will not be able to rule property markets in coming years to the same extent as they are able to direct stock markets. Apart from their own specific interest, speculative episodes in property markets are also of great value because they are similar to, but simpler than, speculative episodes in stock markets. The first point, the similarity of price peaks in property versus stock has been briefly summarized above (more details can be found in Roehner 2004a). The fact that property markets are “simpler” than stock markets can be attributed to the following circumstances. (i) Transactions take much longer in real estate than in stocks, typically one or two months compared to one or two minutes; as a result property prices are subject to only low frequency shocks whereas stock prices are subject to shocks whose frequency spectrum extends over several orders of magnitude (from 1/minute to 1/year). (ii) Most of the financial instruments available on stock markets (such as for instance options, futures, convertible bonds) do not (yet) exist in property markets. (iii) As shown in a former paper (Roehner 2004b), the strategy of big investment funds has a determinant impact on the price of stocks.
Such funds also have much influence in commercial real estate; in contrast, their involvement in residential real estate has so far been smaller although this situation may change in the next decades.)
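Roehner's exact functional form appears in the record only as an image. As a non-authoritative sketch, the stretched-exponential peak shape below, p(t) = p2·exp(−(|t − t2|/τ)^α), is an assumed form chosen to be consistent with the quoted parameters (α ≈ 1, τ ≈ 13.5 quarters); it is not asserted to be the form actually disclosed.

```python
import math

def peak_price(t, p2, t2, alpha=1.0, tau=13.5):
    """Hypothetical stretched-exponential peak shape; the exact functional
    form in Roehner (2006) is embedded as an image and is assumed here.
    t, t2, and tau are in quarters; p2 is the peak price."""
    return p2 * math.exp(-(abs(t - t2) / tau) ** alpha)

# With alpha = 1 the downgoing path reduces to a simple exponential decay:
# one tau (~13.5 quarters, ~3.3 years) after the peak, the price has fallen
# to p2/e, i.e. roughly 37% of its peak value.
```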
The combination of Fleming and Roehner discloses transmitting an alert comprising the prediction: Fleming discloses alerting with real estate information, while Roehner discloses the prediction.
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to include/perform generate a prediction of a future peak in real estate values based at least partially on the plurality of previous peaks, as taught/suggested by Roehner. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to evaluating properties and evaluations. One of ordinary skill in the art would have recognized that applying the known technique of Roehner would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Roehner to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such peak prediction features into similar systems. Further, applying generate a prediction of a future peak in real estate values based at least partially on the plurality of previous peaks would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow the user to predict when the next real-estate bubble might occur. This would be useful information for an investor.
Fleming does not specifically teach assigning time periods based on a size of the historical variable data covering a respective time period. However, Zhang discloses:
wherein the calculating the respective portions of the historical variable data to each node comprises, in response to the respective real-time workloads, assigning time periods based on a size of the historical variable data covering a respective time period (pg. 345, Scenario 2: Simulation while execution. In the lower part of Fig. 8, simulation and execution stages overlap. At time t0, a random schedule is used for the time being, while the workload information between [t1, t2] is used to generate a better quality schedule to be used at time t1. Similar scheduling is conducted repeatedly until the end of experiment. Most of the real-time multitasking scheduling applications are carried out based on Scenario 2. pg. 338, Scheduling of dynamic and multitasking workloads for big-data analytics is a challenging issue, as it requires a significant amount of parameter sweeping and iterations. Therefore, real-time scheduling becomes essential to increase the throughput of many-task computing. The difficulty lies in obtaining a series of optimal yet responsive schedules. In dynamic scenarios, such as virtual clusters in cloud, scheduling must be processed fast enough to keep pace with the unpredictable fluctuations in the workloads to optimize the overall system performance. In this paper, ordinal optimization using rough models and fast simulation is introduced to obtain suboptimal solutions in a much shorter timeframe. While the scheduling solution for each period may not be the best, ordinal optimization can be processed fast in an iterative and evolutionary way to capture the details of big-data workload dynamism. Experimental results show that our evolutionary approach compared with existing methods, such as Monte Carlo and Blind Pick, can achieve higher overall average scheduling performance, such as throughput, in real-world applications with dynamic workloads. 
Furthermore, performance improvement is seen by implementing an optimal computing budget allocating method that smartly allocates computing cycles to the most promising schedules. Pg. 340, In Fig. 2, we show stacked workloads that are dispatched to the C VCs in a timeline. From ti−1 (or also named ti−1,0), schedule θ(ti−1) is applied until ti (or ti,0), where a new schedule θ(ti) is used. The new schedule θ(ti) is generated during the previous simulation stage from ti−1 to ti. This stage is named Si−1 as shown in the figure. Between ti and ti+1, new workloads (at time points ti,1, ti,2,. . .) arrive that are also shown in the figure. Therefore, the dynamic workload scheduling model is built on such sequentially overlapped simulation-execution phases. In each phase, one schedule is applied and in the meanwhile, the workloads of the subsequent stage are analyzed to simulate and generate the schedule of the following stages. Pg. 341, For example, the table tells that the execution time is a normal distribution (X~N(20, 5²)) when running [100, 110] tasks of task class two with 10 VMs, each being a small Amazon EC2 instance, failure rate being 0 and utilization being in a range of [70%, 80%]. Then, before each experiment runs, if we have profiled the average VM utilization as 75%, and there are 105 tasks and 10 VMs, then we must sample the normal distribution above to estimate pc(ti), apply this sample value to estimate the throughput of each schedule, and finally choose the best schedule for use. Pg. 342, 1) Monte Carlo Method: Suppose that the overhead of using the Monte Carlo method is denoted by OM, which is the length of the scheduling period. Thereafter, we search through all of the time points between ti and ti+1, when new workloads are generated, to calculate the CET for each VC as shown in Line 14 and 18. This step is followed by calculating the EET during the scheduling period, as shown in Line 20.
Following this, the Makespan and throughput are calculated. 2) Blind Pick Method: Instead of searching through the whole schedule space U as done in the Monte Carlo method, the Blind Pick (BP) method randomly selects a portion of the schedules only within U for evaluation. The ratio to be selected is defined by a value α (0 ≤ α ≤ 1). The algorithm of applying BP is exactly the same as Monte Carlo, except for the scheduling sample space U and scheduling overhead. The length of the scheduling period is α × OM. 3) Iterative Ordinal Optimization Method: The OO, a suboptimal low-overhead scheduling method, is thoroughly introduced in [15]. Different from the aforementioned methods, the OO uses a rough model (n repeated runs) to generate a rough order of all of the schedules in U, and uses the accurate model (N repeated runs, n ≪ N) to evaluate the top s schedules in the rough order list. Pg. 342-343, Section A. EVOLUTIONARY ORDINAL OPTIMIZATION discloses the scheduling periods based on similarity values. Pg. 344-345, Section A. EXPERIMENTAL SETTINGS discloses carrying out cloud experiments. Pg. 349.)
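As a non-authoritative illustration of the Blind Pick method quoted above: it evaluates only a random fraction α of the schedule space U, reducing scheduling overhead from OM to α × OM. The schedule representation and evaluation function below are hypothetical placeholders for illustration, not Zhang's actual implementation (which evaluates schedules by simulated throughput over the scheduling period).

```python
import random

def blind_pick(schedule_space, evaluate, alpha=0.1, seed=0):
    """Evaluate a random alpha-fraction of the schedule space U and return
    the best schedule found. Hypothetical sketch of the Blind Pick method
    quoted from Zhang; `evaluate` stands in for the simulated-throughput
    scoring of a schedule."""
    rng = random.Random(seed)
    k = max(1, int(alpha * len(schedule_space)))  # sample size = alpha * |U|
    sample = rng.sample(schedule_space, k)
    return max(sample, key=evaluate)

# Monte Carlo corresponds to alpha = 1 (searching the whole space U);
# Blind Pick trades schedule quality for an alpha-times-smaller overhead.
```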
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to include/perform assigning time periods based on a size of the historical variable data covering a respective time period, as taught/suggested by Zhang. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to the specifics of work processing. One of ordinary skill in the art would have recognized that applying the known technique of Zhang would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Zhang to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such assigned time period features into similar systems. Further, applying assigning time periods based on a size of the historical variable data covering a respective time period would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow the user to schedule the processing of the data.
Regarding claims 13-14, the combination of Fleming and Nordstrom teaches the technical details of the financial modeling. Fleming does not teach the following limitation, which is taught by Nordstrom:
wherein the calculated respective portions of the historical variable data represent a time period within the historical variable data, such that each calculated respective portion is selected so as to represent data within a historical start time and a historical end time (col. 12, lines 50-65, The time-series processors 68 are constructed in tandem with their partner datapipes 66. The time-series processor 68 is an object that knows how to process one or more datapoints in some useful way. The time-series processor 68 may perform a logical or physical aggregation of data. For example, a time-series processor 68 may be used to calculate the sum of a series of input values across an interval and emit the result at the end of its interval. The output of a time-series processor 68 is one or more datapoints. By employing a separate datapipe and time-series processor, the issue of what data to process (and where that data comes from) is decoupled from the issue of how to process the data. As will be seen below, this allows real-time analyses, historical analyses, and future projections based on simulations to be implemented in generally the same fashion. Col. 23, lines 1-30, discloses a historical start and end time based on a set month. Col. 21, line 25 – col. 22, line 11).
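The decoupling quoted above, a datapipe deciding what data to process and a time-series processor deciding how to process it, can be sketched roughly as follows. The class name and fixed-length interval handling are assumptions for illustration only, not Nordstrom's actual API.

```python
class SumOverInterval:
    """Hypothetical time-series processor in the style quoted from Nordstrom:
    accumulates input datapoints across a fixed-length interval and emits
    their sum when the interval ends."""

    def __init__(self, interval):
        self.interval = interval   # number of datapoints per interval
        self.buffer = []

    def process(self, datapoint):
        """Feed one datapoint; return the aggregated sum at the end of each
        interval, otherwise None."""
        self.buffer.append(datapoint)
        if len(self.buffer) == self.interval:
            total = sum(self.buffer)
            self.buffer = []
            return total
        return None

# Because the processor only ever sees datapoints, the same aggregation can
# be driven by a real-time feed, a historical replay, or a simulation, which
# mirrors the decoupling described in the quoted passage.
```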
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming to include/perform that the historical variable data represents a time period within the historical variable data, as taught/suggested by Nordstrom. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to the specifics of work processing. One of ordinary skill in the art would have recognized that applying the known technique of Nordstrom would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Nordstrom to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such historical variable data features into similar systems. Further, applying the historical variable data represents a time period within the historical variable data, would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow for additional time defined historic data for the analyzing and processing of the data.
Claim(s) 15-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fleming (US 20130282596 A1) in view of Nordstrom et al. (US 8601112 B1) in view of Florance et al. (US 20050203768 A1) in view of Roehner (2006) in view of Zhang (2014) in further view of C. Rampell “Housing Affordability at a Record High” (2009) (hereafter Rampell).
Regarding claims 15-16, the combination of Fleming, Nordstrom, and Roehner teaches the technical details of the financial modeling. Fleming does not specify the previous peaks as claimed.
However, Rampell teaches wherein the plurality of previous peaks identified is in 1981, 1984, 1994, 2000 and 2007 (pg. 1, see the diagram that contains the various data points including peaks).
It would have been obvious to one of ordinary skill in the art at the time of Applicant’s invention to modify Fleming and Roehner to include/perform wherein the plurality of previous peaks identified is in 1981, 1984, 1994, 2000 and 2007, as taught/suggested by Rampell. This known technique is applicable to the system of Fleming as they both share characteristics and capabilities, namely, they are directed to the specifics of assessing the real estate market. One of ordinary skill in the art would have recognized that applying the known technique of Rampell would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Rampell to the teachings of Fleming would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such variable data features into similar systems. Further, applying wherein the plurality of previous peaks identified is in 1981, 1984, 1994, 2000 and 2007 would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow for additional time defined historic data for the analyzing and processing of the data.
Other pertinent prior art includes Hamilton et al. (US 20040064293 A1) which discloses maintenance of historical data while substantially retaining measured peaks and valleys in the data. Hrischuk et al. (US 20150199388 A1) which discloses monitoring and analyzing quality of service (QOS) performance in a storage system. Kagarlis et al. (US 20080167941 A1) which discloses price indexing. Dozier (US 20150058234 A1) which discloses valuation modeling. Birtel et al. (US 20110218826 A1) which discloses residential real estate risk mitigation and, more particularly, a system and method of assigning home price volatility using a new financial instrument. Kagarlis et al. (US 20080167889 A1) which discloses price indexing.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMIE H AUSTIN whose telephone number is (571)272-7363. The examiner can normally be reached Monday, Tuesday, Thursday, Friday 7am-2pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein, can be reached at (571) 270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JAMIE H. AUSTIN
Examiner
Art Unit 3625
/JAMIE H AUSTIN/Primary Examiner, Art Unit 3625