DETAILED ACTION
Claims 1-15 and 20-24 are pending in the present application following the preliminary amendment filed on 1/9/2025 and are under examination on the merits. This communication is the first action on the merits (FAOM).
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
Applicant filed an Information Disclosure Statement (IDS) on 7/7/2025. This filing is in compliance with 37 C.F.R. 1.97.
As required by MPEP § 609, the applicant's submission of the Information Disclosure Statement is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. A copy of the PTOL-1449 form, initialed and dated by the examiner, is attached to the instant Office action.
Drawings
The drawings filed on 1/9/2025 are acceptable as filed.
Claim Rejections - 35 U.S.C. § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 and 20-24 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Here, under the broadest reasonable interpretation of the claimed invention, Examiner finds that Applicant invented a method and system for determining grades and scores of the physical facilities of a specific business. Examiner's abstract idea analysis, following the framework described in the MPEP, is as follows:
Step 1: The claims are directed to statutory categories, namely a "method" (claims 1-14), a "system" (claims 15 and 21-24), and a "manufacture" in the form of a non-transitory computer-readable storage medium (claim 20).
Step 2A - Prong 1: The claims are found to recite limitations that set forth the abstract idea(s), namely, regarding claim 1:
An information processing method, comprising: acquiring a plurality of business information of a physical area on target business, wherein the plurality of business information comprises at least information configured to describe states of different facilities in the physical area;
clustering the plurality of business information according to a plurality of preset dimensions to obtain a plurality of dimension information respectively corresponding to the plurality of preset dimensions;
acquiring a plurality of evaluation parameters respectively corresponding to the plurality of preset dimensions;
determining grading parameters of the physical area on the target business based on the plurality of evaluation parameters and the plurality of dimension information, wherein the grading parameters are configured to indicate a grade to which the physical area belongs.
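Viewed as a whole, the four steps recited above amount to a generic weighted, multi-dimensional scoring pipeline. A minimal sketch of such a pipeline follows; all dimension names, weights, and grade thresholds below are hypothetical illustrations, not drawn from Applicant's disclosure:

```python
from collections import defaultdict

# Hypothetical preset dimensions -- illustrative only, not
# Applicant's disclosed implementation.
PRESET_DIMENSIONS = ("cleanliness", "equipment", "safety")

def grade_physical_area(business_info, evaluation_params):
    """Cluster facility-state records by preset dimension, weight each
    dimension's information by its evaluation parameter, and map the
    combined score to a grade for the physical area."""
    clustered = defaultdict(list)
    for record in business_info:                 # acquire + cluster
        if record["dimension"] in PRESET_DIMENSIONS:
            clustered[record["dimension"]].append(record["state_value"])
    score = 0.0
    for dim, values in clustered.items():        # weight each dimension
        dimension_info = sum(values) / len(values)
        score += evaluation_params[dim] * dimension_info
    # Grading parameter: the grade to which the physical area belongs
    # (hypothetical thresholds).
    return "A" if score >= 0.8 else "B" if score >= 0.5 else "C"
```

The sketch underscores that each recited step (collect, group, weight, threshold) is a data observation and evaluation of the kind that can be performed mentally or with pen and paper.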
Claims 2-15 and 20-24 recite the same or similar abstract idea(s) as independent claim 1, with merely a further narrowing of the abstract idea(s) to particular data characterization and/or additional data analyses performed as part of the abstract idea.
The limitations in claims 1-15 and 20-24 identified above fall well within the groupings of subject matter identified by the courts as abstract concepts; specifically, the claims are found to correspond to the categories of:
"Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)" as the limitations identified above are directed to determining grades and scores of the physical facilities of a specific business and thus is a method of organizing human activity including at least commercial or business interactions or relations and/or a management of user personal behavior; and/or
"Mental processes - concepts performed in the human mind (including an observation, evaluation, judgment, opinion)" as the limitations identified above include mere data observations, evaluations, judgments, and/or opinions, e.g. including user observation and evaluation by determining grades and scores of the physical facilities of a specific business, which is capable of being performed mentally and/or using pen and paper.
Step 2A - Prong 2: Claims 1-15 and 20-24 are found to be directed to the abstract idea identified above because the claims, as a whole, fail to integrate the claimed judicial exception into a practical application. Specifically, the claims recite the additional elements of:
"An information processing system, comprising an Internet of Things platform and a plurality of information collection devices connected to the Internet of Things platform, wherein: the plurality of information collection devices is configured to collect business information of a physical area on target business; and the Internet of Things platform is configured to execute the information processing method" (claim 15); and "A non-transitory computer-readable storage medium, wherein a computer program stored therein causes a processor to perform the information processing method" (claim 20). However, the aforementioned elements merely amount to generic components of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and thus fail to integrate the recited abstract idea into a practical application. Furthermore, the high-level recitation of receiving data from a generic "system" is at most an attempt to limit the abstract idea to a particular field of use (MPEP 2106.05(h), e.g.: "For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).") and/or merely insignificant extra-solution activity (MPEP 2106.05(g)), and thus further fails to integrate the abstract idea into a practical application;
Step 2B: Claims 1-15 and 20-24 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A Prong 2, merely amount to a general purpose computer that attempts to apply the abstract idea in a technological environment (MPEP 2106.05(f)), including merely limiting the abstract idea to the particular field of use of analysis of "business information" as explained above, and/or perform insignificant extra-solution activity, e.g. data gathering or output (MPEP 2106.05(g)), as identified above, which is further found under Step 2B to be merely well-understood, routine, and conventional activity as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser's back and forward button functionality). Similarly, the combination and arrangement of the above identified additional elements, when analyzed under Step 2B, also fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea directed to determining grades and scores of the physical facilities of a specific business.
Claims 1-15 and 20-24 are accordingly rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Note: The analysis above applies to all statutory categories of invention. As such, the presentment of any claim otherwise styled as a machine or manufacture, for example, would be subject to the same analysis.
For further authority and guidance, see:
MPEP § 2106
https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility
Claim Rejections - 35 U.S.C. § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 9-15, and 20-23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2014/0279674 to Michels et al. (hereafter referred to as Michels).
As per claim 1, Michels teaches:
An information processing method, comprising: acquiring a plurality of business information of a physical area on target business, wherein the plurality of business information comprises at least information configured to describe states of different facilities in the physical area (Paragraph Number [0037] teaches a mechanism for analyzing information about entities of interest and for rating or scoring the entities of interest based on the analyzed information. The rating or the score of an entity of interest can sometimes be referred to as a placerank value of an entity of interest. In some embodiments, an "entity of interest" (EOI) can include any entity that has a physical location, such as a restaurant, a national park, a store, travel agency, or a coffee shop, or a government entity, such as a registry of motor vehicles, or any other business or non-business entity. In other embodiments, an EOI can include any other types of entities, for example, products, people, buildings, or computers. Paragraph Number [0038] teaches a placerank value of an EOI is indicative of an importance or relevance of the EOI in view of predetermined characteristics or criteria. For example, a placerank value of an EOI can be indicative of the "family friendliness" of the EOI. In this example, a high placerank value can indicate that the associated EOI is family friendly, whereas a low placerank value can indicate that the associated EOI is not family friendly. In some embodiments, an EOI may be associated with a plurality of placerank values, each associated with particular characteristics or criteria, such as predetermined audience or a predetermined scenario. For example, a restaurant can be associated with three placerank values: placerank.foodie, placerank.social, and placerank.good for singles. 
placerank.foodie can indicate a popularity of the EOI to gourmets; placerank.social can indicate a popularity of the EOI for social events; and placerank.good for singles can indicate a popularity of the EOI to singles).
clustering the plurality of business information according to a plurality of preset dimensions to obtain a plurality of dimension information respectively corresponding to the plurality of preset dimensions (Paragraph Number [0038] teaches a placerank value of an EOI is indicative of an importance or relevance of the EOI in view of predetermined characteristics or criteria. For example, a placerank value of an EOI can be indicative of the "family friendliness" of the EOI. In this example, a high placerank value can indicate that the associated EOI is family friendly, whereas a low placerank value can indicate that the associated EOI is not family friendly. In some embodiments, an EOI may be associated with a plurality of placerank values, each associated with particular characteristics or criteria, such as predetermined audience or a predetermined scenario. For example, a restaurant can be associated with three placerank values: placerank.foodie, placerank.social, and placerank.good for singles. placerank.foodie can indicate a popularity of the EOI to gourmets; placerank.social can indicate a popularity of the EOI for social events; and placerank.good for singles can indicate a popularity of the EOI to singles. Paragraph Number [0047] teaches the benefit of the disclosed placerank computation mechanism, which considers a variety of information types as further described below, is that the variety of information types can provide a robust signal across many dimensions of interest, some of which may not be explicit. For example, for a social networking service to rate a place as "good for kids", it must ask users to provide an explicit rating for that feature. In contrast, the disclosed placerank computation mechanism can estimate a value for this dimension (e.g., aspect) based on, for example, words in reviews, the websites the review links to, and/or whether the review is on a popular blog amongst mothers. 
Therefore, the disclosed placerank computation mechanism can allow a service provider to add new dimensions without explicitly asking reviewers to provide the information on such new dimensions).
acquiring a plurality of evaluation parameters respectively corresponding to the plurality of preset dimensions (Paragraph Number [0102] teaches the PG module 112 can use a specific, targeted function for computing a placerank value when a general function produces poor quality. In particular, the PG module 112 can be configured to use a different function for computing placerank value based on the type of the placerank value, characteristics of the EOI associated with the placerank value, a type of the EOI associated with the placerank value. For example, the PG module 112 can be configured to use different weights to combine features when the EOI is within a specific country, region, locality, or by industry category or sub-category. In other words, the PG module 112 may use a first function for producing placerank values for restaurants in New York City and may use a second function for producing placerank values for restaurants in Boston. As another example, the PG module 112 can be configured to use different weights to combine features when the placerank value to be computed is associated with a particular type, such as "family friendliness," "proximity to subway stations," or "price." In some embodiments, the PG module 112 can learn the specific, targeted function using a supervised learning technique. For example, the PG module 112 can learn the specific, targeted function by learning a regression mapping (e.g., a function or a table) that maps the characteristics of the EOI or the type of the placerank value to the desired specific, targeted function or parameters of the specific, targeted function).
determining grading parameters of the physical area on the target business based on the plurality of evaluation parameters and the plurality of dimension information (Paragraph Number [0113] teaches the contextual conditions can include time, a geographic location (e.g., a Global Positioning System data), an application that sent the information request, an identification or a profile of a user that sent the information request, and/or a client device that sent the information request. For example, if an application that sent the information request is a social check-in application, the PG module 112 or the QR module 114 can be configured to rate certain types of EOIs, such as restaurants, higher than other types of EOIs, such as warehouses, since users of the social check-in application generally visit restaurants more often than warehouses. The bias for the dynamic re-ordering can be learned using machine learning techniques. The bias can be represented as a function that combines multiple placerank values (e.g. child friendly and foodie) with different weights (e.g. multipliers for each placerank value and/or an addition constant into a composite score). In some cases, the function can also take into account other parameters, such as distance, for example, linearly, logarithmically, or exponentially).
wherein the grading parameters are configured to indicate a grade to which the physical area belongs (Paragraph Number [0114] teaches the context information can also include one or more features determined at query time. In some cases, the combination of features determined at query time can include information associated with or that is a part of the information request. The PG module 112 or the QR module 114 can combine the one or more features with one or more placerank values to determine a final score for a EOI. The PG module 112 or the QR module 114 can then use the final score to dynamically re-order the EOIs and send the reordered EOIs to a client device that sent the information request).
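For illustration of the weighted combination described in Michels at [0113] (multipliers for each placerank value plus an additive constant, optionally combined with a distance term), the following sketch uses hypothetical placerank names and weights:

```python
import math

def composite_score(placeranks, multipliers, constant, distance_km):
    """Combine multiple placerank values with per-value multipliers and an
    additive constant, then account for distance (here logarithmically),
    in the manner Michels [0113] describes. Names and weights are
    illustrative assumptions, not Michels' disclosed values."""
    base = constant + sum(multipliers[name] * value
                          for name, value in placeranks.items())
    return base - math.log1p(distance_km)  # farther EOIs score lower
```

A final score of this form can then be used to dynamically re-order the entities of interest, as Michels [0114] describes.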
As per claim 2, Michels teaches each of the limitations of claim 1.
In addition, Michels teaches:
acquiring a plurality of evaluation factors respectively corresponding to the plurality of preset dimensions, and constructing an association relationship between the plurality of evaluation factors (Paragraph Number [0071] teaches the server 102 can associate (or link) external data in the external database 306 to internal data in the internal database 304. In some cases, the server 102 can automatically determine such association by matching certain attributes of the external data to the internal data. For example, the server 102 can automatically determine the association between a Wikipedia page (i.e., external data) and the internal data by matching the title of the Wikipedia page to the "name" attribute of EOIs in the internal database 304. Once the server 102 determines the association, the PG module 112 can use information in the Wikipedia page to derive a feature for the associated EOI in the internal database 304).
wherein the association relationship comprises a direct association relationship and an indirect association relationship. (Paragraph Number [0068] teaches the server 102 is configured to receive an information request for an EOI from one or more clients 106, requesting the server 102 to provide information about the EOI stored in an internal database. The information request can be received via a public API endpoint. In some cases, an information request can be associated with one of a plurality of information request types. The information request types can include a search request type or a direct EOI identification request type. An example of an information request associated with the search request type can be a textual string, such as "Chinese restaurants in New York." An example of an information request associated with the direct EOI identification request type can be a textual string, such as "World Trade Center in New York.").
determining an initial weight of each of the plurality of evaluation factors based on the association relationship (Paragraph Number [0094] teaches the function for computing the placerank values can be periodically updated and the set of placerank values can be reproduced when the function is so updated. One way to update the function is to re-weight underlying features using machine learning techniques. For example, during a recession, lower priced restaurants can be biased to receive higher placerank value. As another example, if a source of collected information falls in quality, the weights attributed to features from that source can be reduced. Continuing that example, an individual's online blog could be a source of features used to produce placerank values. If the person's blog rates restaurants (e.g., as "excellent" or "terrible"), the PG module 112 can extract those ratings from the blog and use them as features for the placerank computation. As long as the person's blog is considered a reliable information source, the features generated from the blog can be considered important (or given high weights) in producing placerank values. However, if the PG module 112 determines that the person's blog is no longer reliable, the importance (or weights) of features generated from that blog can be decreased).
determining the plurality of evaluation parameters respectively corresponding to the plurality of preset dimensions based on the initial weights (Paragraph Number [0094], reproduced in full immediately above, which teaches periodically re-weighting the underlying features using machine learning techniques and decreasing the weights attributed to features from a source whose reliability has fallen).
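The re-weighting Michels [0094] describes (reducing the weights attributed to features from a source whose quality has fallen) can be sketched as follows; the source and feature names are hypothetical:

```python
def reweight_features(feature_weights, source_reliability):
    """Scale each (source, feature) weight by the current reliability of
    its source; a source whose quality falls sees its features' weights
    reduced, per the mechanism described in Michels [0094]."""
    return {(src, feat): weight * source_reliability.get(src, 1.0)
            for (src, feat), weight in feature_weights.items()}
```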
As per claim 3, Michels teaches each of the limitations of claims 1 and 2.
In addition, Michels teaches:
receiving an influence relationship inputted for the plurality of evaluation factors, wherein the influence relationship is configured to characterize a direct association between the plurality of evaluation factors (Paragraph Number [0094], quoted above with respect to claim 2, which teaches re-weighting underlying features using machine learning techniques based on, e.g., the assessed reliability of each feature's source).
constructing the association relationship based on the influence relationship (Paragraph Number [0049] teaches this allows the function to compute a placerank value that can be useful for searching medical providers (e.g., the placerank value is higher for EOIs relating to medical service providers.) In other embodiments, the adaptation of the function can be performed by training the function using appropriate label data. For example, the function can be trained using social labels so that the function can compute placerank values that are correlated with social importance. As another example, the function can be trained using medical labels so that the function can compute placerank values correlated with medical service providers).
As per claim 4, Michels teaches each of the limitations of claims 1 and 2.
In addition, Michels teaches:
determining an influence parameter of a first evaluation factor on a second evaluation factor that has an association relationship with the first evaluation factor (Paragraph Number [0094], quoted above with respect to claim 2, which teaches that the importance (or weights) of features generated from a given source can be increased or decreased based on that source's reliability).
wherein the influence parameter is configured to characterize an influence degree of the first evaluation factor on the second evaluation factor and the first evaluation factor is any evaluation factor among the plurality of evaluation factors (Paragraph Number [0094], quoted above with respect to claim 2).
adjusting an initial weight of the second evaluation factor based on the influence parameter (Paragraph Number [0094], quoted above with respect to claim 2, teaching that the weights attributed to features from a source can be reduced when that source falls in quality).
determining the plurality of evaluation parameters respectively corresponding to the plurality of preset dimensions based on adjusted initial weights (Paragraph Number [0094], quoted above with respect to claim 2, teaching re-weighting underlying features using machine learning techniques).
As per claim 9, Michels teaches each of the limitations of claim 1.
In addition, Michels teaches:
quantifying the plurality of dimension information to obtain a quantified value corresponding to each dimension information (Paragraph Number [0046] teaches unlike existing rating systems, which often only use numerical information to generate its ratings, the disclosed placerank computation mechanism can compute ratings or scores based on various information types. For example, the disclosed placerank computation mechanism can use advertisements about the EOI, textual descriptions of the EOI, which websites describe the EOI and the text on those websites about the EOI, attributes of the EOI, as well as user reviews about the EOI to determine the placerank value of the EOI. The ability to use various information types has significant benefits compared to existing rating systems because the amount of information for computing a placerank value can be significantly greater than the amount of numerical information for computing star-based numerical scores. When the disclosed placerank computation mechanism uses a user review, the disclosed placerank computation mechanism can use not just numerical ratings, but also the tone of the text in the review and the quality/reliability of the review).
determining quantified score values respectively corresponding to the plurality of evaluation factors based on the plurality of evaluation parameters and the plurality of quantified values (Paragraph Number [0046], reproduced in full immediately above, which teaches computing ratings or scores from numerical ratings together with non-numerical information types, such as the tone of the text in a review and the quality/reliability of the review).
determining the grading parameters based on the quantified score values (Paragraph Number [0115] teaches the PG module 112 or the QR module 114 can be configured to receive a location, typically the location of the device, from a client device that sent the information request. This location information can be represented as a location identifier, such as a Global Positioning System (GPS) coordinate or a latitude/longitude coordinate pair, and can be included in the information request. For example, the PG module 112 or the QR module 114 can determine a physical distance or a travel time between the location provided by the client device and a particular EOI, and combine the distance and one or more placerank values of the particular EOI to determine a score for the particular EOI. The PG module 112 or the QR module 114 can repeat this process for each of the EOIs to generate a plurality of scores. Then, the PG module 112 or the QR module 114 can use the plurality of scores to reorder the EOIs, thereby taking into account the importance of an EOI and how far the EOI is from the location provided by the client device).
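For illustration only, and not as part of the claim mapping, the distance-plus-placerank scoring scheme described in Michels' Paragraph [0115] can be sketched as follows. The haversine distance and the placerank/(1 + distance) combination are assumptions chosen for the sketch, not disclosures of the reference.

```python
import math

# Illustrative sketch of the scoring described in Michels [0115]:
# combine the distance between a client-supplied location and each EOI
# with the EOI's placerank value, then reorder the EOIs by the score.
# The haversine distance and 1/(1 + d) weighting are assumptions.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two coordinate pairs."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def score_and_reorder(client_loc, eois):
    """eois: list of (name, lat, lon, placerank); returns names ordered so
    that high-placerank, nearby EOIs come first."""
    scored = []
    for name, lat, lon, placerank in eois:
        d = haversine_km(client_loc[0], client_loc[1], lat, lon)
        scored.append((placerank / (1.0 + d), name))
    return [name for _, name in sorted(scored, reverse=True)]
```

Under this sketch, a modestly ranked EOI at the client's location can outscore a highly ranked EOI that is far away, consistent with the reference's stated goal of weighing importance against distance.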
As per claim 10, Michels teaches each of the limitations of claims 1 and 9.
In addition, Michels teaches:
determining differences between the evaluation parameters and the corresponding quantified score values (Paragraph Number [0098] teaches the cost function g can include a linear function, a logarithm function, an exponential function, a non-linear function, or any other functions that can penalize a difference between the labeled placerank value and the placerank value estimated by the placerank estimator η. In other cases, the PG module 112 can generate feature weights ω using linear regression, non-linear regression, kernel regression, Bayesian techniques, such as Naive Bayesian, and/or gradient descent techniques. Paragraph Number [0102] teaches the PG module 112 can use a specific, targeted function for computing a placerank value when a general function produces poor quality. In particular, the PG module 112 can be configured to use a different function for computing placerank value based on the type of the placerank value, characteristics of the EOI associated with the placerank value, a type of the EOI associated with the placerank value. For example, the PG module 112 can be configured to use different weights to combine features when the EOI is within a specific country, region, locality, or by industry category or sub-category. In other words, the PG module 112 may use a first function for producing placerank values for restaurants in New York City and may use a second function for producing placerank values for restaurants in Boston. As another example, the PG module 112 can be configured to use different weights to combine features when the placerank value to be computed is associated with a particular type, such as "family friendliness," "proximity to subway stations," or "price." In some embodiments, the PG module 112 can learn the specific, targeted function using a supervised learning technique.
For example, the PG module 112 can learn the specific, targeted function by learning a regression mapping (e.g., a function or a table) that maps the characteristics of the EOI or the type of the placerank value to the desired specific, targeted function or parameters of the specific, targeted function).
determining at least one target evaluation factor from the plurality of evaluation factors based on the differences (Paragraph Number [0102] teaches the PG module 112 can use a specific, targeted function for computing a placerank value when a general function produces poor quality. In particular, the PG module 112 can be configured to use a different function for computing placerank value based on the type of the placerank value, characteristics of the EOI associated with the placerank value, a type of the EOI associated with the placerank value. For example, the PG module 112 can be configured to use different weights to combine features when the EOI is within a specific country, region, locality, or by industry category or sub-category. In other words, the PG module 112 may use a first function for producing placerank values for restaurants in New York City and may use a second function for producing placerank values for restaurants in Boston. As another example, the PG module 112 can be configured to use different weights to combine features when the placerank value to be computed is associated with a particular type, such as "family friendliness," "proximity to subway stations," or "price." In some embodiments, the PG module 112 can learn the specific, targeted function using a supervised learning technique. For example, the PG module 112 can learn the specific, targeted function by learning a regression mapping (e.g., a function or a table) that maps the characteristics of the EOI or the type of the placerank value to the desired specific, targeted function or parameters of the specific, targeted function).
outputting recommendation information corresponding to each of the at least one target evaluation factor (Paragraph Number [0089] teaches the PG module 112 is configured to generate a placerank value based on the generated features (or the normalized features). In some cases, the PG module 112 can use a function to aggregate the values of the generated features (or the normalized features). The output of the function can be a raw placerank value 308. The function can be configured so that popular EOIs are assigned higher placerank values compared to unpopular EOIs. For example, restaurants that receive more physical customer visits can have a higher placerank than restaurants that receive fewer physical customer visits).
As per claim 11, Michels teaches each of the limitations of claims 1 and 9.
In addition, Michels teaches:
in a case that the grading parameter being lower than a preset grading parameter, determining that the difference between the quantified score value and the evaluation parameter is greater than a target preset dimension of a preset difference (Paragraph Number [0098] teaches the cost function g can include a linear function, a logarithm function, an exponential function, a non-linear function, or any other functions that can penalize a difference between the labeled placerank value and the placerank value estimated by the placerank estimator η. In other cases, the PG module 112 can generate feature weights ω using linear regression, non-linear regression, kernel regression, Bayesian techniques, such as Naive Bayesian, and/or gradient descent techniques).
controlling a target information collection device to output alarm information, wherein the target information collection device is configured to collect business information of the target preset dimension (Paragraph Number [0038] teaches a placerank value of an EOI is indicative of an importance or relevance of the EOI in view of predetermined characteristics or criteria. For example, a placerank value of an EOI can be indicative of the "family friendliness" of the EOI. In this example, a high placerank value can indicate that the associated EOI is family friendly, whereas a low placerank value can indicate that the associated EOI is not family friendly. In some embodiments, an EOI may be associated with a plurality of placerank values, each associated with particular characteristics or criteria, such as predetermined audience or a predetermined scenario. For example, a restaurant can be associated with three placerank values: placerank.foodie, placerank.social, and placerank.good for singles. placerank.foodie can indicate a popularity of the EOI to gourmets; placerank.social can indicate a popularity of the EOI for social events; and placerank.good for singles can indicate a popularity of the EOI to singles. Paragraph Number [0055] teaches the server 102 can include one or more interfaces 116. The one or more interfaces 116 provide an input and/or output mechanism to communicate internal to, and external to, the server 102. For example, the one or more interfaces 116 enable communication with clients 106 over the communication network 104. The one or more interfaces 116 can also provide an application programming interface (API) to other servers or computers coupled to the network 104 so that the server 102 can receive information based on which placerank values can be computed. 
The one or more interfaces 116 are implemented in hardware to send and receive signals in a variety of mediums, such as optical, copper, and wireless, and in a number of different protocols some of which may be non-transitory).
As per claim 12, Michels teaches each of the limitations of claim 1.
In addition, Michels teaches:
acquiring corresponding grading parameters of the physical area in a plurality of different time periods (Paragraph Number [0069] teaches the PG module 112 can use the information requests to derive an information request feature for the placerank value computation. The information request feature can include one or more of (1) a total number of information requests, (2) a total number of each information request type, and/or (3) a list of origins of the information requests, such as an IP address associated with clients sending the information requests. The PG module 112 can use a snapshot of the information request feature and its time-dependent characteristics to find time-dependent popularity of EOIs).
outputting grading statistical information of the physical area based on the corresponding grading parameters in the plurality of different time periods to indicate grading changes of the physical area in different time periods in the target business (Paragraph Number [0094] teaches the function for computing the placerank values can be periodically updated and the set of placerank values can be reproduced when the function is so updated. One way to update the function is to re-weight underlying features using machine learning techniques. For example, during a recession, lower priced restaurants can be biased to receive higher placerank value. As another example, if a source of collected information falls in quality, the weights attributed to features from that source can be reduced. Continuing that example, an individual's online blog could be a source of features used to produce placerank values. If the person's blog rates restaurants (e.g., as "excellent" or "terrible"), the PG module 112 can extract those ratings from the blog and use them as features for the placerank computation. As long as the person's blog is considered a reliable information source, the features generated from the blog can be considered important (or given high weights) in producing placerank values. However, if the PG module 112 determines that the person's blog is no longer reliable, the importance (or weights) of features generated from that blog can be decreased).
As per claim 13, Michels teaches each of the limitations of claim 1.
In addition, Michels teaches:
when a preset time period arrives, in response to a grading objective of the preset time period, acquiring business information of the physical area corresponding to the grading objective in the target business (Paragraph Number [0069] teaches the PG module 112 can use the information requests to derive an information request feature for the placerank value computation. The information request feature can include one or more of (1) a total number of information requests, (2) a total number of each information request type, and/or (3) a list of origins of the information requests, such as an IP address associated with clients sending the information requests. The PG module 112 can use a snapshot of the information request feature and its time-dependent characteristics to find time-dependent popularity of EOIs).
wherein the grading objectives corresponding to the different preset time periods are the same or different (Paragraph Number [0069] teaches the PG module 112 can use the information requests to derive an information request feature for the placerank value computation. The information request feature can include one or more of (1) a total number of information requests, (2) a total number of each information request type, and/or (3) a list of origins of the information requests, such as an IP address associated with clients sending the information requests. The PG module 112 can use a snapshot of the information request feature and its time-dependent characteristics to find time-dependent popularity of EOIs).
the plurality of preset dimensions correspond to the grading objectives (Paragraph Number [0114] teaches the context information can also include one or more features determined at query time. In some cases, the combination of features determined at query time can include information associated with or that is a part of the information request. The PG module 112 or the QR module 114 can combine the one or more features with one or more placerank values to determine a final score for a EOI. The PG module 112 or the QR module 114 can then use the final score to dynamically re-order the EOIs and send the reordered EOIs to a client device that sent the information request).
As per claim 14, Michels teaches each of the limitations of claim 1.
In addition, Michels teaches:
sending an information collection instruction to the plurality of information collection devices located in the physical area to indicate the information collection devices to collect facility states of facilities with which the information collection devices are configured (Paragraph Number [0059] teaches a method for computing a placerank value of an EOI in accordance with some embodiments. At a high level, the PG module 112 can be configured to compute a placerank value in three steps. In step 202, the PG module 112 is configured to collect information about the EOI, based on which a placerank value can be computed. In step 204, the PG module 112 is configured to generate features based on the collected information. In step 206, the PG module 112 is configured to combine the generated features to determine the placerank value for the EOI).
acquiring business information collected after the information collection performed by the plurality of information collection devices (Paragraph Number [0094] teaches the function for computing the placerank values can be periodically updated and the set of placerank values can be reproduced when the function is so updated. One way to update the function is to re-weight underlying features using machine learning techniques. For example, during a recession, lower priced restaurants can be biased to receive higher placerank value. As another example, if a source of collected information falls in quality, the weights attributed to features from that source can be reduced. Continuing that example, an individual's online blog could be a source of features used to produce placerank values. If the person's blog rates restaurants (e.g., as "excellent" or "terrible"), the PG module 112 can extract those ratings from the blog and use them as features for the placerank computation. As long as the person's blog is considered a reliable information source, the features generated from the blog can be considered important (or given high weights) in producing placerank values. However, if the PG module 112 determines that the person's blog is no longer reliable, the importance (or weights) of features generated from that blog can be decreased).
As per claim 15, Michels teaches each of the limitations of claim 1.
In addition, Michels teaches:
An information processing system, comprising an Internet of Things platform and a plurality of information collection devices connected to the Internet of Things platform, wherein: the plurality of information collection devices is configured to collect business information of a physical area on target business; and the Internet of Things platform is configured to execute the information processing method according to claim 1 (Paragraph Number [0123] teaches the subject matter described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structural means disclosed in this specification and structural equivalents thereof, or in combinations of them. The subject matter described herein can be implemented as one or more computer program products, such as one or more computer programs tangibly embodied in an information carrier (e.g., in a machine-readable storage device), or embodied in a propagated signal, for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Paragraph Number [0128] teaches the subject matter described herein can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), or a front-end component (e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the subject matter described herein), or any combination of such back-end, middleware, and front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), e.g., the Internet).
As per claim 20, Michels teaches each of the limitations of claim 1.
In addition, Michels teaches:
A non-transitory computer-readable storage medium, wherein a computer program stored therein causes a processor to perform the information processing method according to claim 1 (Paragraph Number [0127] teaches the techniques described herein can be implemented using one or more modules. As used herein, the term "module" refers to computing software, firmware, hardware, and/or various combinations thereof. At a minimum, however, modules are not to be interpreted as software that is not implemented on hardware, firmware, or recorded on a non-transitory processor readable recordable storage medium. Indeed "module" is to be interpreted to include at least some physical, non-transitory hardware such as a part of a processor or computer. Two different modules can share the same physical hardware (e.g., two different modules can use the same processor and network interface). The modules described herein can be combined, integrated, separated, and/or duplicated to support various applications. Also, a function described herein as being performed at a particular module can be performed at one or more other modules and/or by one or more other devices instead of or in addition to the function performed at the particular module. Further, the modules can be implemented across multiple devices and/or other components local or remote to one another. Additionally, the modules can be moved from one device and added to another device, and/or can be included in both devices).
The remaining claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1.
As per claim 21, Michels teaches each of the limitations of claims 1 and 15. Additionally, the claim language of claim 21 is substantially similar to that found in claim 2 and is rejected for the same reasons put forth in regard to claim 2.
As per claim 22, Michels teaches each of the limitations of claims 1 and 21. Additionally, the claim language of claim 22 is substantially similar to that found in claim 3 and is rejected for the same reasons put forth in regard to claim 3.
As per claim 23, Michels teaches each of the limitations of claims 1 and 21. Additionally, the claim language of claim 23 is substantially similar to that found in claim 4 and is rejected for the same reasons put forth in regard to claim 4.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 5-7 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2014/0279674 to Michels et al. (hereafter referred to as Michels) in view of U.S. Patent Application Publication Number 2008/0270363 to Hunt et al. (hereafter referred to as Hunt).
As per claim 5, Michels teaches each of the limitations of claims 1 and 2.
In addition, Michels teaches:
adjusting the initial weights of the plurality of evaluation factors in sequence, and when adjusting one evaluation factor each time, adjusting an initial weight of an evaluation factor that has an association relationship with the one evaluation factor (Paragraph Number [0094] teaches the function for computing the placerank values can be periodically updated and the set of placerank values can be reproduced when the function is so updated. One way to update the function is to re-weight underlying features using machine learning techniques. For example, during a recession, lower priced restaurants can be biased to receive higher placerank value. As another example, if a source of collected information falls in quality, the weights attributed to features from that source can be reduced. Continuing that example, an individual's online blog could be a source of features used to produce placerank values. If the person's blog rates restaurants (e.g., as "excellent" or "terrible"), the PG module 112 can extract those ratings from the blog and use them as features for the placerank computation. As long as the person's blog is considered a reliable information source, the features generated from the blog can be considered important (or given high weights) in producing placerank values. However, if the PG module 112 determines that the person's blog is no longer reliable, the importance (or weights) of features generated from that blog can be decreased).
repeating the step of sequentially adjusting the initial weights of the plurality of evaluation factors in a plurality of rounds to obtain the adjusted initial weights respectively corresponding to the plurality of evaluation factors in the plurality of rounds of adjustments (Paragraph Number [0094] teaches the function for computing the placerank values can be periodically updated and the set of placerank values can be reproduced when the function is so updated. One way to update the function is to re-weight underlying features using machine learning techniques. For example, during a recession, lower priced restaurants can be biased to receive higher placerank value. As another example, if a source of collected information falls in quality, the weights attributed to features from that source can be reduced. Continuing that example, an individual's online blog could be a source of features used to produce placerank values. If the person's blog rates restaurants (e.g., as "excellent" or "terrible"), the PG module 112 can extract those ratings from the blog and use them as features for the placerank computation. As long as the person's blog is considered a reliable information source, the features generated from the blog can be considered important (or given high weights) in producing placerank values. However, if the PG module 112 determines that the person's blog is no longer reliable, the importance (or weights) of features generated from that blog can be decreased).
acquiring the plurality of evaluation parameters respectively corresponding to the plurality of preset dimensions based on the adjusted initial weights respectively corresponding to the plurality of evaluation factors in the plurality of rounds of adjustments (Paragraph Number [0095] teaches the PG module 112 can use machine learning techniques to automatically determine (or learn) feature weights ω_i for the i-th feature f_i. The process of determining the feature weights can be referred to as weight training. Paragraph Number [0096] teaches the PG module 112 can use normalized feature values λ_i f_i in the training set to determine the feature weights ω_i 310. In particular, the PG module 112 can be configured to use the normalized feature values λ_i f_i and importance labels 312 to generate feature weights ω_i 310, such that the features that are reliable predictors of the given importance label 312 are assigned higher weights. Paragraph Number [0099] teaches the importance labels 312 can be indicative of which features are important in determining the placerank values. The importance labels 312 can be indicative of 1) a popularity of an EOI and/or a feature, 2) an importance of a feature from the perspective of consumers on the Internet, 3) an importance of a feature from the perspective of critics or reviewers, and/or 4) an importance of a feature from the perspective of the associated industry. The importance labels 312 can be used to optimize the placerank system for a particular application (e.g., a use case). Therefore, the importance labels 312 can be added based on a user demand. For example, if a user wants to find a dentist that causes the least amount of pain, then the user can add, to the importance labels 312, a "placerank_dentists_who_dont_hurt" label).
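For illustration only, the weighted aggregation of normalized features described in Michels' Paragraphs [0089] and [0095]-[0096], a placerank value formed as a weighted sum of normalized feature values, can be sketched as follows. The feature values, normalization factors, and weights in any example call are hypothetical.

```python
# Illustrative sketch of Michels' feature aggregation ([0089], [0095]-[0096]):
# placerank = sum_i w_i * (lambda_i * f_i), where f_i is a raw feature value,
# lambda_i a normalization factor, and w_i a learned feature weight.
# All numeric inputs used with this function are hypothetical.

def placerank_value(features, scales, weights):
    """Combine raw features f_i, normalizers lambda_i, and weights w_i."""
    assert len(features) == len(scales) == len(weights)
    return sum(w * (lam * f) for f, lam, w in zip(features, scales, weights))
```

For example, placerank_value([100, 4.2], [0.01, 0.2], [0.7, 0.3]) combines a hypothetical visit count and star rating into a single value of approximately 0.952.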
Michels teaches determining grades and scores of the physical facilities of a specific business but does not explicitly teach utilizing a random number as a starting point for probabilistic matching and associating relationships between scores as described by the following citations from Hunt:
based on a random number corresponding to the one evaluation factor (Paragraph Number [0238] teaches weighting factors of each attribute and of the total composite weight of the match may be important aspects to take into account in the matching process. Since the system may use probabilistic matching, the frequency analysis of each of the attribute values may be taken into account. The weight of the match of a specific attribute may be computed as the logarithm to the base two of the ratio of m and u, where the m probability is the probability that a field agrees given that the record pair being examined is a matched pair, which may be one minus the error rate of the field, and the u probability is the probability that a field agrees given that the record pair being examined is an unmatched pair, which may be the probability that the field agrees at random. The composite weight of the match, also referred to as match weight, may be the sum of each of the matched attribute weights. Each matched attribute may then be computed. If two attributes do not match, the disagreement weight may be calculated as: log₂[(1-m)/(1-u)]. Each time a match is accomplished, a match weight may also be calculated. Thresholds may be established to decide if this is a good match, a partial match, or not a match at all).
Both Michels and Hunt are directed to data analysis. Michels discloses determining grades and scores of the physical facilities of a specific business. Hunt improves upon Michels by disclosing utilizing a random number as a starting point for probabilistic matching and associating relationships between scores. One of ordinary skill in the art would be motivated to further include utilizing a random number as a starting point for probabilistic matching and associating relationships between scores, to efficiently provide a starting point for the analysis to determine if the collected results are above or below a set threshold so as to properly apply weighting. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of determining grades and scores of the physical facilities of a specific business in Michels to further utilize a random number as a starting point for probabilistic matching and associating relationships between scores as disclosed in Hunt, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 6, the combination of Michels and Hunt teaches each of the limitations of claims 1, 2, and 5.
Michels teaches determining grades and scores of the physical facilities of a specific business but does not explicitly teach utilizing a random number as a starting point for probabilistic matching and associating relationships between scores as described by the following citations from Hunt:
in a case that the initial weight of the one evaluation factor is greater than the random number, adjusting the initial weight of the evaluation factor that has the association relationship with the one evaluation factor (Paragraph Number [0238] teaches weighting factors of each attribute and of the total composite weight of the match may be important aspects to take into account in the matching process. Since the system may use probabilistic matching, the frequency analysis of each of the attribute values may be taken into account. The weight of the match of a specific attribute may be computed as the logarithm to the base two of the ratio of m and u, where the m probability is the probability that a field agrees given that the record pair being examined is a matched pair, which may be one minus the error rate of the field, and the u probability is the probability that a field agrees given that the record pair being examined is an unmatched pair, which may be the probability that the field agrees at random. The composite weight of the match, also referred to as match weight, may be the sum of each of the matched attribute weights. Each matched attribute may then be computed. If two attributes do not match, the disagreement weight may be calculated as: log₂[(1-m)/(1-u)]. Each time a match is accomplished, a match weight may also be calculated. Thresholds may be established to decide if this is a good match, a partial match, or not a match at all).
in a case that the initial weight of the one evaluation factor is not greater than the random number, retain the initial weight of the evaluation factor that has the association relationship with the one evaluation factor after a previous adjustment (Paragraph Number [0238] teaches weighting factors of each attribute and of the total composite weight of the match may be important aspects to take into account in the matching process. Since the system may use probabilistic matching, the frequency analysis of each of the attribute values may be taken into account. The weight of the match of a specific attribute may be computed as the logarithm to the base two of the ratio of m and u, where the m probability is the probability that a field agrees given that the record pair being examined is a matched pair, which may be one minus the error rate of the field, and the u probability is the probability that a field agrees given that the record pair being examined is an unmatched pair, which may be the probability that the field agrees at random. The composite weight of the match, also referred to as match weight, may be the sum of each of the matched attribute weights. Each matched attribute may then be computed. If two attributes do not match, the disagreement weight may be calculated as: log₂[(1-m)/(1-u)]. Each time a match is accomplished, a match weight may also be calculated. Thresholds may be established to decide if this is a good match, a partial match, or not a match at all).
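For illustration only, the probabilistic match-weight computation quoted from Hunt's Paragraph [0238] can be sketched as follows: an agreement weight of log₂(m/u), a disagreement weight of log₂((1-m)/(1-u)), and a composite weight equal to the sum of the attribute weights, compared against thresholds. The specific threshold values would be implementation choices and are hypothetical here.

```python
import math

# Illustrative sketch of the probabilistic match weights in Hunt [0238]:
# m = P(field agrees | record pair is a match),
# u = P(field agrees | record pair is not a match).
# Agreement contributes log2(m/u); disagreement log2((1-m)/(1-u)).

def attribute_weight(agrees, m, u):
    if agrees:
        return math.log2(m / u)
    return math.log2((1.0 - m) / (1.0 - u))

def composite_weight(comparisons):
    """comparisons: iterable of (agrees, m, u) tuples, one per attribute."""
    return sum(attribute_weight(a, m, u) for a, m, u in comparisons)

def classify(weight, upper, lower):
    """Thresholds decide a good match, a partial match, or not a match."""
    if weight >= upper:
        return "match"
    if weight <= lower:
        return "non-match"
    return "partial"
```

With m = 0.9 and u = 0.1, an agreeing attribute adds log₂(9) to the composite weight while a disagreeing one subtracts the same amount, so a record pair's total reflects the balance of agreements and disagreements.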
A person of ordinary skill would have been motivated to combine these references for the same reasons put forth in regard to claim 5.
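For clarity, the probabilistic matching scheme described in Michels' paragraph [0238] can be sketched as follows. This is an illustrative sketch only: the function names, the m and u probabilities, and the threshold values below are hypothetical assumptions, not values taken from the reference.

```python
import math

def agreement_weight(m, u):
    # Weight when a field agrees: log2 of the ratio of m and u,
    # where m is the probability the field agrees given a matched pair
    # and u is the probability it agrees given an unmatched pair.
    return math.log2(m / u)

def disagreement_weight(m, u):
    # Weight when a field disagrees: log2[(1 - m) / (1 - u)],
    # per the disagreement formula quoted in paragraph [0238].
    return math.log2((1 - m) / (1 - u))

def composite_weight(fields):
    # fields: list of (agrees, m, u) tuples, one per attribute.
    # The composite (match) weight is the sum of the per-attribute weights.
    total = 0.0
    for agrees, m, u in fields:
        total += agreement_weight(m, u) if agrees else disagreement_weight(m, u)
    return total

def classify(weight, match_threshold, nonmatch_threshold):
    # Thresholds decide whether this is a good match, a partial match,
    # or not a match at all (threshold values are hypothetical).
    if weight >= match_threshold:
        return "match"
    if weight <= nonmatch_threshold:
        return "non-match"
    return "partial"
```

For example, two agreeing attributes with m = 0.95, u = 0.05 and m = 0.9, u = 0.1 yield a composite weight of roughly 7.4, which would clear a match threshold of 4.0 under these assumed values.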
As per claim 7, the combination of Michels and Hunt teaches each of the limitations of claims 1, 2, and 5.
In addition, Michels teaches:
determining an influence parameter of the one evaluation factor on the evaluation factor that has an association relationship with the one evaluation factor, wherein the influence parameter is configured to characterize an influence degree of occurrence of an event characterized by an evaluation factor on occurrence of an event characterized by another evaluation factor (Paragraph Number [0046] teaches unlike existing rating systems, which often only use numerical information to generate their ratings, the disclosed placerank computation mechanism can compute ratings or scores based on various information types. For example, the disclosed placerank computation mechanism can use advertisements about the EOI, textual descriptions of the EOI, which websites describe the EOI and the text on those websites about the EOI, attributes of the EOI, as well as user reviews about the EOI to determine the placerank value of the EOI. The ability to use various information types has significant benefits compared to existing rating systems because the amount of information for computing a placerank value can be significantly greater than the amount of numerical information for computing star-based numerical scores. When the disclosed placerank computation mechanism uses a user review, the disclosed placerank computation mechanism can use not just numerical ratings, but also the tone of the text in the review and the quality/reliability of the review).

adjusting the initial weight of the evaluation factor that has the association relationship with the one evaluation factor based on the influence parameter (Paragraph Number [0094] teaches the function for computing the placerank values can be periodically updated and the set of placerank values can be reproduced when the function is so updated. One way to update the function is to re-weight underlying features using machine learning techniques. For example, during a recession, lower priced restaurants can be biased to receive higher placerank value. As another example, if a source of collected information falls in quality, the weights attributed to features from that source can be reduced. Continuing that example, an individual's online blog could be a source of features used to produce placerank values. If the person's blog rates restaurants (e.g., as "excellent" or "terrible"), the PG module 112 can extract those ratings from the blog and use them as features for the placerank computation. As long as the person's blog is considered a reliable information source, the features generated from the blog can be considered important (or given high weights) in producing placerank values. However, if the PG module 112 determines that the person's blog is no longer reliable, the importance (or weights) of features generated from that blog can be decreased).
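The feature re-weighting described in Michels' paragraph [0094], in which weights attributed to features from a source are reduced when that source's reliability falls, can be illustrated with a minimal sketch. The function name and data shapes below are hypothetical, not taken from the reference.

```python
def reweight_features(feature_weights, source_reliability):
    # feature_weights: {feature_name: (source_name, weight)}
    # source_reliability: {source_name: factor in [0, 1]}; sources not
    # listed are treated as fully reliable (factor 1.0).
    # Returns a new mapping with each weight scaled by its source's
    # reliability, mirroring the idea that features from a less
    # reliable source (e.g., a blog that has fallen in quality)
    # should be given less importance.
    return {
        feature: (source, weight * source_reliability.get(source, 1.0))
        for feature, (source, weight) in feature_weights.items()
    }
```

Under this sketch, halving a blog's reliability halves the weight of every feature derived from that blog while leaving features from other sources untouched.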
As per claim 24, Michels teaches each of the limitations of claims 1 and 21. Additionally, the claim language of claim 24 is substantially similar to that found in claim 5 and is rejected for the same reasons put forth in regard to claim 5.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2014/0279674 to Michels et al. (hereafter referred to as Michels) in view of U.S. Patent Application Publication Number 2008/0270363 to Hunt et al. (hereafter referred to as Hunt) and in further view of U.S. Patent Application Publication Number 2020/0279190 to Kitajima (hereafter referred to as Kitajima).
As per claim 8, the combination of Michels and Hunt teaches each of the limitations of claims 1 and 5.
The combination of Michels and Hunt teaches determining grades and scores of the physical facilities of a specific business but does not explicitly teach utilizing a probability matrix to perform iterative operations on data, as described by the following citations from Kitajima:
constructing a cross-influence probability matrix based on the influence parameters between the evaluation factors that have the association relationship (Paragraph Number [0098] teaches the graph contracting unit 122 generates, from the graph information stored in the graph storing unit 121, contracted graph information representing a contracted graph and stores the contracted graph information in the contracted graph storing unit 123. The contracted graph is such that score-known nodes having the same known score have been aggregated, multiple edges between nodes have been aggregated, and edges between score-known nodes have been deleted. The contracted graph information includes new node IDs used to identify nodes included in the contracted graph, inter-node weights obtained after the aggregation of the multiple edges, and known scores assigned to some nodes. Paragraph Number [0100] teaches the score estimating unit 124 estimates unknown scores using the contracted graph information stored in the contracted graph storing unit 123 instead of the graph information stored in the graph storing unit 121. Specifically, the score estimating unit 124 calculates the transition probability matrix P from the adjacency matrix W listing the inter-node weights indicated by the contracted graph information and iteratively multiplies the score vector Φ by the transition probability matrix P, to thereby estimate unknown scores of score-unknown nodes. The score estimating unit 124 outputs estimation results of the unknown scores to the estimation result displaying unit 125).
performing a plurality of iteration presetting operations on the cross-influence probability matrix and the adjusted initial weights respectively corresponding to the plurality of evaluation factors in the plurality of rounds of adjustments until a preset end condition is reached (Paragraph Number [0100] teaches the score estimating unit 124 estimates unknown scores using the contracted graph information stored in the contracted graph storing unit 123 instead of the graph information stored in the graph storing unit 121. Specifically, the score estimating unit 124 calculates the transition probability matrix P from the adjacency matrix W listing the inter-node weights indicated by the contracted graph information and iteratively multiplies the score vector Φ by the transition probability matrix P, to thereby estimate unknown scores of score-unknown nodes. The score estimating unit 124 outputs estimation results of the unknown scores to the estimation result displaying unit 125).
acquiring the evaluation parameters respectively corresponding to the plurality of preset dimensions based on the cross-influence probability matrix when the preset end condition is reached (Paragraph Number [0101] teaches the estimation result displaying unit 125 causes the display device 111 to display a screen such as a score estimation request screen and a score estimation result screen, to thereby provide the user with a visual interface. The estimation result displaying unit 125 receives, on the score estimation request screen, a designation of score-unknown nodes whose scores are to be estimated and then instructs the score estimating unit 124 to perform score estimation. The estimation result displaying unit 125 acquires estimated scores from the score estimating unit 124 and displays the score estimation result screen. Note that the analyzer 100 may store the estimated scores in a non-volatile storage device, output them to an output device other than the display device 111, or transmit them to a different information processor).
Both the combination of Michels and Hunt and Kitajima are directed to data analysis. The combination of Michels and Hunt discloses determining grades and scores of the physical facilities of a specific business. Kitajima improves upon the combination of Michels and Hunt by disclosing utilizing a probability matrix to perform iterative operations on data. One of ordinary skill in the art would have been motivated to further include utilizing a probability matrix to perform iterative operations on data, to efficiently analyze data utilizing a specific robust data analysis method that can produce the end results contemplated in the fewest steps possible. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of determining grades and scores of the physical facilities of a specific business in the combination of Michels and Hunt to further utilize a probability matrix to perform iterative operations on data as disclosed in Kitajima, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
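The score estimation described in Kitajima's paragraph [0100] — calculating a transition probability matrix P from an adjacency matrix W of inter-node weights and iteratively multiplying the score vector by P to estimate unknown scores — can be sketched as follows. This is an illustrative sketch under assumed conventions (row-normalized transition probabilities, known scores re-clamped each iteration); the function names and the example graph are hypothetical.

```python
def transition_matrix(W):
    # Row-normalize the adjacency matrix W of inter-node weights
    # into a transition probability matrix P.
    P = []
    for row in W:
        s = sum(row)
        P.append([w / s if s else 0.0 for w in row])
    return P

def estimate_scores(W, scores, known, iters=100):
    # scores: initial score vector (known scores set; unknowns seeded, e.g., 0.0)
    # known: indices of score-known nodes, whose values are held fixed
    # Iteratively multiplies the score vector by P to propagate known
    # scores to score-unknown nodes.
    P = transition_matrix(W)
    n = len(scores)
    phi = list(scores)
    for _ in range(iters):
        nxt = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
        for i in known:
            nxt[i] = scores[i]  # re-clamp known scores each iteration
        phi = nxt
    return phi
```

For instance, on a three-node chain where the end nodes have known scores 1.0 and 0.0, the middle node's estimated score converges to 0.5, the average of its neighbors.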
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW H. DIVELBISS whose telephone number is (571) 270-0166. The fax phone number is 571-483-7110. The examiner can normally be reached on M-Th, 7:00 - 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor can be reached on (571) 272-6787.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.H.D/Examiner, Art Unit 3624
/Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624