Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-20 were rejected in the Non-Final Office Action mailed on 11/28/2025. Applicant’s amendments, entered on 02/25/2026, amended Claims 1, 9, and 16. In this Final Office Action, Claims 1-20 stand rejected.
Response to Arguments
Applicant’s arguments filed 02/25/2026, with respect to the rejections of Claims 1-20 under 35 U.S.C. 112(a), have been fully considered but are not persuasive.
On page 10, Applicant argues that the amendments overcome the rejection under 35 U.S.C. 112(a). Examiner does not agree. The entered amendments do not address the grounds of the rejection (i.e., the specification provides only that the historical data is configured as the training data and does not describe creation of new training data). Additionally, the amendments introduce new limitations (i.e., the retraining of the model) that also lack specification support and are rejected under 35 U.S.C. 112(a) herein.
Applicant’s arguments filed 02/25/2026, with respect to the rejections of Claims 1-20 under 35 U.S.C. 101, have been fully considered but are not persuasive.
On page 10, Applicant argues that the amendments overcome the rejection under 35 U.S.C. 101. Examiner does not agree. As outlined in the rejection section below, the amended claims do not recite patent-eligible subject matter and, therefore, are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding Claim 1, Independent Claim 1 recites “creating a training dataset from receiving feedback data indicating performance of the trained ML model; retraining the trained ML model such that a retrained ML model is generated;” at lines 12-17.
Specification ¶¶96-98 states “[0096] As mentioned with respect to FIG. 1, the probability component may be configured to train one or more ML models, based on various data in the data store 124, to generate an IPO probability indicator representing a probability that a given candidate private company will have an IPO event within a threshold period of time (e.g., within the next 1-N years, with N being any integer greater than 1). The probability component may include one or more components, such as, for example, one or more ML model(s) and/or a training component. Additionally, or alternatively, the probability component may be configured to perform the operations described below with respect to the one or more components. [0097] A machine learning (ML) component 248 may be configured to train one or more ML model(s) using machine-learning mechanisms. For example, a machine-learning mechanism can analyze historical data 208, market data 244, and/or any other type of data stored or otherwise accessible by the data store 124, associated with one or more entities, technology spaces, and/or markets, configured as training data to train a data model that creates an output, which can be a recommendation, a score, a respective probability, a threshold probability, and/or another indication. Machine-learning mechanisms can include, but are not limited to supervised learning algorithms (e.g., artificial neural networks, Bayesian statistics, support vector machines, decision trees, classifiers, k-nearest neighbor, etc.), unsupervised learning algorithms (e.g., artificial neural networks, association rule learning, hierarchical clustering, cluster analysis, etc.), semi-supervised learning algorithms, deep learning algorithms, etc.), statistical models, etc. In at least one example, machine-trained data models can be stored in the data store(s) 124 associated with remote computing resources 104 for use at a time after the data models have been trained (e.g., at runtime). 
Additionally, or alternatively, in at least one example, the machine-learning mechanisms may include an extreme gradient boosting (XGBoost) ML algorithm, a multi-layered perception ML algorithm, a random forest ML algorithm, and/or the like. In some examples, an innovation metric may be generated using at least one ML model trained by the machine learning component 248 based on company data 202 associated with historical data 208, market data 244, and/or any other type of data stored or otherwise accessible by the data store 124, associated with one or more entities, technology spaces, and/or markets, configured as training data. [0098] Once the ML model(s) are trained by the machine learning component 248, the ML model(s) may output an innovation metric for a given entity, technology space, and/or market. In some examples, the innovation metric may be a percentage ranging from 0% to 100%, such as with the percentile innovation metric discussed herein. Additionally and/or alternatively, the innovation metric output by the ML model(s) may include a normalized innovation metric, as discussed herein, indicating an integer value difference from a mean innovation metric from a group of innovation metrics of similar entities, technology spaces, and/or markets.” (Emphasis added).
The original disclosure does not provide sufficient support for the claim limitation of “transforming the historical IP asset data into a training dataset, the training dataset representing a new dataset that differs from the historical IP asset data” because Specification ¶¶96-98 explicitly states that the historical data is training data, without mention of creating a new dataset. Additionally, Specification ¶¶96-98 makes no mention of receiving feedback data on the performance of the trained model to re-train the model. Therefore, Claim 1 introduces new matter and is rejected under 35 U.S.C. 112(a).
Regarding Claims 9 and 16, Independent Claims 9 and 16 recite limitations similar to the limitation of Claim 1 discussed above. Therefore, Claims 9 and 16 are rejected under 35 U.S.C. 112(a) for reasons similar to those given for Claim 1 above.
Regarding Claims 2-8, 10-15, and 17-20, dependent Claims 2-8, 10-15, and 17-20 depend from Claims 1, 9, and 16, respectively, and therefore are rejected under 35 U.S.C. 112(a) by virtue of their dependency.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
Claims 1-15 recite a method (i.e., a process) and Claims 16-20 recite a system (i.e., a machine or manufacture). Therefore, Claims 1-20 all fall within one of the four statutory categories of invention under 35 U.S.C. 101.
Step 2A, Prong One
Independent Claim 1 recites the abstract idea of:
“identifying a first entity associated with first intellectual-property (IP) assets;
determining a respective date associated with individual ones of the first IP assets filed between a first date and a second date, the respective date comprising at least one of a filing date, a publication date, or a priority date;
determining a first average date associated with the first IP assets filed between the first date and the second date;
determining a first amount of time between the first average date and the second date;
generating a . . . (ML) model configured to generate scores associated with at least one of entities or IP assets utilizing time-based metrics;
generating historical IP asset data indicative of innovation metrics;
creating a training dataset from
training the ML model utilizing the training dataset such that a trained ML model is generated;
receiving feedback data indicating performance of the trained ML model;
retraining the trained ML model such that a retrained ML model is generated;
generating, using the retrained ML model, a first score associated with at least one of the first entity or the first IP assets based at least in part on the first amount of time;
identifying a second entity associated with second IP assets;
determining a respective date associated with individual ones of the second IP assets filed between the first date and the second date;
determining a second average date associated with the second IP assets filed between the first date and the second date;
determining a second amount of time between the second average date and the second date;
generating, using the retrained ML model, a second score associated with at least one of the second entity and/or the second IP assets based at least in part on the second amount of time;
determining a mean value associated with the first amount of time and the second amount of time;
generating a first normalized score associated with at least one of the first entity or the first IP assets based at least in part on comparing the first amount of time to the mean value;
generating a second normalized score associated with at least one of the second entity or the second IP assets based at least in part on comparing the second amount of time to the mean value;
generating a first percentile ranking associated with at least one of the first entity or the first IP assets based at least in part on comparing the first score to the second score;
generating a second percentile ranking associated with at least one of the second entity or the second IP assets based at least in part on comparing the first score to the second score,
wherein generating the first percentile ranking and the second percentile ranking comprises
determining a minimum value associated with at least one of the first score or the second score and determining a maximum value associated with at least one of the first score or the second score;
sending, . . . , transmitted data, the transmitted data being at least one of the first date, the second date, the respective date, the filing date, the publication date, the priority date, the first average date, the first amount of time, the first score, the first normalized score, the first percentile ranking, the second average date, the second amount of time, the second score, the second normalized score, or the second percentile ranking; and
displaying, to a user, . . . at least the transmitted data or a portion thereof in a quadrant graph that categorizes innovation types,
wherein the [displaying] dynamically updates based at least in part on receiving new user input data to provide a real-time report configured to improve innovation analysis by the user.”
The limitations stated above are processes/functions that, under the broadest reasonable interpretation, cover (1) identifying and determining entities and dates associated with IP assets, (2) determining an average date and an amount of time, (3) generating a score for the entity or asset based on a learned model trained and retrained on training data based on historical data indicative of innovation metrics, (4) generating a normalized score and percentile ranking based on certain data, (5) sending data, and (6) displaying the scores and percentiles, in response to receiving the data, via a quadrant graph dynamically updated to provide real-time reports, all of which are: mathematical relationships (i.e., ranking percentiles and comparing scores) and mathematical calculations (i.e., generating scores, averages, and percentiles, and training, including creating training data, retraining, and utilizing a model), which are mathematical concepts, an abstract idea, under MPEP 2106.04(a)(2)(I); commercial or legal interactions (i.e., evaluating value, scoring and ranking assets, and sending and displaying asset/market information are “marketing or sales activities or behaviors”), which are certain methods of organizing human activity, an abstract idea, under MPEP 2106.04(a)(2)(II); and observations (i.e., collecting data related to the IP asset), evaluations (i.e., determining a score related to the IP asset), and judgments (i.e., ranking the IP asset), which are mental processes, an abstract idea, under MPEP 2106.04(a)(2)(III). The mere recitation of generic computer components (i.e., a trained machine-learned (ML) model, a network protocol, a network, a computing device, and a graphical user interface (GUI)) implementing the identified abstract idea does not take the claim out of the mathematical concepts, certain methods of organizing human activity, or mental processes groupings. MPEP 2106.04(d).
If a claim limitation, under its broadest reasonable interpretation, covers “mathematical relationships,” “commercial or legal interactions,” “observations,” “evaluations,” and “judgments,” but for the recitation of generic computer components, then it falls within the mathematical concepts, certain methods of organizing human activity, or mental processes groupings of abstract ideas. MPEP 2106.04. Therefore, Claim 1 recites an abstract idea.
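For illustration only, and using hypothetical values that appear nowhere in the application or the record, the following Python sketch shows that the recited averaging, mean-comparison, and minimum/maximum-based percentile-ranking steps reduce to ordinary arithmetic of the kind identified above as mathematical calculations:

```python
from datetime import date

# Hypothetical filing dates for two entities' IP assets (illustrative only).
first_dates = [date(2018, 3, 1), date(2020, 7, 1)]
second_dates = [date(2016, 1, 1), date(2019, 1, 1)]
window_end = date(2023, 1, 1)  # the claimed "second date" of the window

def amount_of_time(dates, end):
    """Average date (mean of day ordinals), then elapsed days to the window end."""
    avg = sum(d.toordinal() for d in dates) / len(dates)
    return end.toordinal() - avg

t1 = amount_of_time(first_dates, window_end)   # "first amount of time"
t2 = amount_of_time(second_dates, window_end)  # "second amount of time"

mean_t = (t1 + t2) / 2                    # the recited "mean value"
norm1, norm2 = t1 - mean_t, t2 - mean_t   # normalized scores vs. the mean

# Percentile ranking via the recited minimum/maximum determination.
lo, hi = min(t1, t2), max(t1, t2)
rank1 = (t1 - lo) / (hi - lo) * 100
rank2 = (t2 - lo) / (hi - lo) * 100
```

Each step is a single averaging, subtraction, or rescaling operation that could equally be performed mentally or with pencil and paper.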
Step 2A, Prong Two
The judicial exception is not integrated into a practical application. Claim 1 as a whole amounts to: (i) merely invoking generic components as a tool to perform the abstract idea or “apply it” (or an equivalent) and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use. The claim recites the additional elements of:
(i) a (trained and retrained) machine-learned (ML) model,
(ii) a network protocol,
(iii) a network,
(iv) a computing device, and
(v) a graphical user interface (GUI).
The additional elements of (i) a “machine-learned (ML) model” (Fig. 2 and ¶¶97-98 show “machine learning (ML) component 248.”), (ii) a “network protocol” (¶68 shows “For instance, each of the network interface(s) 110 and/or 118 may include a personal area network (PAN) component to enable messages over one or more short-range wireless message channels. For instance, the PAN component may enable messages compliant with at least one of the following standards IEEE 802.15.4 (ZigBee), IEEE 802.15.1 (Bluetooth), IEEE 802.11 (WiFi), or any other PAN message protocol. Furthermore, each of the network interface(s) 110 and/or 118 may include a wide area network (WAN) component to enable message over a wide area network.”), (iii) a “network” (Fig. 1 and ¶57 show “network 106”), (iv) a “computing device” (Fig. 1 and ¶58 show “the electronic devices 102 may include, for example, a computing device, a mobile phone, a tablet, a laptop, and/or one or more servers”), and (v) a “GUI” (Fig. 1 and ¶59 show “By way of example, the user interface(s) 114 may include one or more of the user interfaces described elsewhere herein, such as the user interfaces described with respect to FIGS. 3-5, corresponding to a comprehensive score user interface, market analysis user interface, an innovation metric interface, etc.” Fig. 1 and ¶59 also show “The user interface(s) 114 may be configured to display information associated with the IP analysis platform and to receive user input associated with the IP analysis platform.”), are recited at a high level of generality, such that, when viewed as a whole/ordered combination (Figs. 1-2, ¶¶57-59, ¶¶67-69, and ¶¶97-98 show the elements in combination), they amount to no more than mere instructions to apply the judicial exception using generic computer components or “apply it” (see MPEP 2106.05(f)).
The (i) machine-learned (ML) model, (ii) network protocol, (iii) network, (iv) computing device, and (v) graphical user interface (GUI), when viewed as a whole/ordered combination (Figs. 1-2, ¶¶57-59, ¶¶67-69, and ¶¶97-98 show the elements in combination), do no more than generally link the use of the judicial exception to a particular technological environment or field of use (i.e., a computer environment) (see MPEP 2106.05(h)).
Accordingly, these additional elements, when viewed as a whole/ordered combination (Figs. 1-2, ¶¶57-59, ¶¶67-69, and ¶¶97-98 show the elements in combination), do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claim is directed to an abstract idea.
Step 2B
As discussed above with respect to Step 2A Prong Two, the additional elements amount to no more than: (i) “apply it” (or an equivalent) and (ii) generally link the use of a judicial exception to a particular technological environment or field of use, and are not a practical application of the abstract idea. The same analysis applies here in Step 2B, i.e., (i) merely invoking the generic components as a tool to perform the abstract idea or “apply it” (See MPEP 2106.05(f)) and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)), does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B.
Therefore, the additional elements of (i) a machine-learned (ML) model, (ii) a network protocol, (iii) a network, (iv) a computing device, and (v) a graphical user interface (GUI) do not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Even when viewed as a whole/ordered combination (Figs. 1-2, ¶¶57-59, ¶¶67-69, and ¶¶97-98 show the elements in combination), nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, the claim is ineligible.
Dependent Claims 2-8 recite the abstract idea of:
. . . identifying at least one of a first market or a first technology associated with third IP assets; determining a respective date associated with individual ones of the third IP assets filed between the first date and the second date; determining a third average date associated with the third IP assets filed between the first date and the second date; determining a third amount of time between the third average date and the second date; and generating a third score associated with at least one of the first market, the first technology, and/or the third IP assets based at least in part on the third amount of time. (Claim 2).
. . . associating at least one of the first score, the second score, the first normalized score, the second normalized score, the first percentile ranking, or the second percentile ranking with an innovation score, wherein the innovation score is associated with at least one of architectural innovation, incremental innovation, radical innovation, or disruptive innovation (Claim 3).
. . . further comprising determining at least one of a portfolio quality score associated with at least one of the first entity or the second entity; a filing velocity score associated with at least one of the first entity or the second entity; a revenue associated with at least one of the first entity or the second entity; a research and development score associated with at least one of the first entity or the second entity; or a prosecution cost score associated with at least one of the first entity or the second entity, wherein the innovation score is based at least in part on the portfolio quality score, the filing velocity score, the research and development score, the revenue, the prosecution cost score, the first score, the second score, the first normalized score, the second normalized score, the first percentile ranking, or the second percentile ranking. (Claim 4).
. . . wherein [displaying] a quadrant graph comprising: a first section associated with architectural innovation; a second section associated with incremental innovation; a third section associated with radical innovation; and a fourth section associated with disruptive innovation. (Claim 5).
. . . further comprising determining a number of IP assets associated with the first entity that have been published between the first date and the second date, wherein at least one of the first score, the first percentile ranking, or the first normalized score is based at least in part on the number of IP assets associated with the first entity that have been published between the first date and the second date. (Claim 6).
. . . wherein at least one of the first score, the second score, the first normalized score, the second normalized score, the first percentile ranking, or the second percentile ranking is determined . . . . (Claim 7).
. . . wherein at least one of the first score, the second score, the first normalized score, the second normalized score, the first percentile ranking, or the second percentile ranking is determined using a quadratic regression. (Claim 8).
Dependent Claims 2-8 have been given the full two-prong analysis, including analysis of the further elements and limitations, both individually and in combination. When analyzed individually and in combination, these claims are also held to be patent ineligible under 35 U.S.C. 101. The further limitations of Claims 2-8 fail to establish claims that are not directed to an abstract idea because the further limitations (1) identify a market or technology and determine dates and times associated with assets, (2) generate scores and percentile rankings based on certain data, and (3) display a graph of certain scores. The further elements of Claims 2-8 fail to establish claims that are not directed to an abstract idea because the elements merely recite generic computer components. The organization of the further limitations of Claims 2-8 fails to integrate the abstract idea into a practical application, just as discussed above for Claim 1. Additionally, performing the abstract idea of Claim 1 as recited in each of the further limitations of Claims 2-8, individually or in combination, does not (1) impose any meaningful limits on practicing the abstract idea or (2) provide improvements to the functioning of computing systems or to another technology or technical field, just as discussed above regarding Claim 1. Therefore, Claims 2-8 amount to mere instructions to implement the abstract idea (1) using generic computer components (using the computer, in its ordinary capacity, as a tool to perform the abstract idea) and (2) generally linked to a particular technology or field of use. Because merely using a computer, in its ordinary capacity in a particular field of use, as a tool to perform the abstract idea cannot provide an inventive concept, the elements and limitations of Claims 2-8 fail to establish that the claims provide an inventive concept, just as in Claim 1.
Therefore, Claims 2-8 fail the Subject Matter Eligibility Test and are consequently rejected under 35 U.S.C. 101.
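By way of a further illustration with hypothetical values (none drawn from the application or the record), the quadratic regression recited in Claim 8 is likewise an ordinary mathematical calculation: a degree-two polynomial can be fit through sample points, and a score read off the fitted curve, with elementary algebra:

```python
# Hypothetical (x, y) pairs: an elapsed-time metric and an observed score.
pts = [(1.0, 2.0), (2.0, 5.0), (3.0, 10.0)]  # all points lie on y = x**2 + 1
(x1, y1), (x2, y2), (x3, y3) = pts

# Solve y = a*x**2 + b*x + c exactly through the three points
# (Newton divided differences give the leading coefficient).
a = ((y3 - y1) / (x3 - x1) - (y2 - y1) / (x2 - x1)) / (x3 - x2)
b = (y2 - y1) / (x2 - x1) - a * (x1 + x2)
c = y1 - a * x1**2 - b * x1

# "Determine" a score for a new metric value from the fitted quadratic.
score = a * 5.0**2 + b * 5.0 + c
```

The fit and the evaluation involve only division, subtraction, and multiplication, i.e., calculations performable with pencil and paper.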
Step 2A, Prong One
Independent Claim 9 recites the abstract idea of:
identifying a market associated with first IP assets;
determining a respective date associated with individual ones of the first IP assets filed between a first date and a second date, the respective date comprising at least one of a filing date, a publication date, or a priority date;
determining a first average date associated with the first IP assets filed between the first date and the second date;
determining a first amount of time between the first average date and the second date;
generating a . . . (ML) model configured to generate scores associated with markets utilizing time-based metrics;
generating historical IP asset data indicative of innovation metrics;
creating a training dataset from
training the ML model utilizing the training dataset such that a trained ML model is generated;
receiving feedback data indicating performance of the trained ML model;
retraining the trained ML model such that a retrained ML model is generated;
generating, utilizing the retrained ML model, a first score associated with the market based at least in part on the first amount of time;
identifying an entity associated with the market;
determining a second number of second IP assets filed by the entity between the first date and the second date, the second IP assets being associated with the market;
determining a respective date associated with individual ones of the second IP assets filed between the first date and a second date;
determining a second average date associated with the second IP assets filed between the first date and the second date;
determining a second amount of time between the second average date and the second date;
generating, utilizing the retrained ML model, a second score associated with the entity based at least in part on the second amount of time;
determining an innovation characteristic for the entity based on a normalized score associated with the entity, wherein the innovation characteristic comprises at least one of an architectural innovation, an incremental innovation, a radical innovation, or a disruptive innovation;
sending, . . . , transmitted data, the transmitted data being at least one of the first date, the second date, the respective date, the filing date, the publication date, the priority date, the first average date, the first amount of time, the first score, the second average date, the second amount of time, or the second score; and
displaying, to a user, . . . at least the transmitted data or a portion thereof in a quadrant graph that visually depicts innovation characteristics,
wherein the [displaying] dynamically updates based at least in part on receiving new user input data to enable interactive exploration of the innovation characteristic.”
The limitations stated above are processes/functions that, under the broadest reasonable interpretation, cover (1) identifying and determining markets and dates associated with IP assets, (2) determining an average date and an amount of time, (3) identifying an entity associated with the market, (4) generating a score for the entity or asset based on a learned model trained and retrained on training data based on historical data indicative of innovation metrics, (5) determining an innovation characteristic, (6) sending data, and (7) displaying the scores and percentiles, in response to receiving the data, via a quadrant graph dynamically updated to provide interactive exploration, all of which are: mathematical relationships (i.e., ranking percentiles and comparing scores) and mathematical calculations (i.e., generating scores, averages, and percentiles, and training, including creating training data, retraining, and utilizing a model), which are mathematical concepts, an abstract idea, under MPEP 2106.04(a)(2)(I); commercial or legal interactions (i.e., evaluating value, scoring and ranking assets, and sending and displaying asset/market information are “marketing or sales activities or behaviors”), which are certain methods of organizing human activity, an abstract idea, under MPEP 2106.04(a)(2)(II); and observations (i.e., collecting data related to the IP asset), evaluations (i.e., determining a score and characteristics related to the IP asset), and judgments (i.e., ranking the IP asset), which are mental processes, an abstract idea, under MPEP 2106.04(a)(2)(III). The mere recitation of generic computer components (i.e., a machine learning (ML) model, a network protocol, a network, a computing device, and a graphical user interface (GUI)) implementing the identified abstract idea does not take the claim out of the mathematical concepts, certain methods of organizing human activity, or mental processes groupings. MPEP 2106.04(d).
If a claim limitation, under its broadest reasonable interpretation, covers “mathematical relationships,” “commercial or legal interactions,” “observations,” “evaluations,” and “judgments,” but for the recitation of generic computer components, then it falls within the mathematical concepts, certain methods of organizing human activity, or mental processes groupings of abstract ideas. MPEP 2106.04. Therefore, Claim 9 recites an abstract idea.
Step 2A, Prong Two
The judicial exception is not integrated into a practical application. Claim 9 as a whole amounts to: (i) merely invoking generic components as a tool to perform the abstract idea or “apply it” (or an equivalent) and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use. The claim recites the additional elements of:
(i) a machine learning model,
(ii) a network protocol,
(iii) a network,
(iv) a computing device, and
(v) a graphical user interface (GUI).
The additional elements of (i) a “machine learning model” (Fig. 2 and ¶¶97-98 show “machine learning (ML) component 248.”), (ii) a “network protocol” (¶68 shows “For instance, each of the network interface(s) 110 and/or 118 may include a personal area network (PAN) component to enable messages over one or more short-range wireless message channels. For instance, the PAN component may enable messages compliant with at least one of the following standards IEEE 802.15.4 (ZigBee), IEEE 802.15.1 (Bluetooth), IEEE 802.11 (WiFi), or any other PAN message protocol. Furthermore, each of the network interface(s) 110 and/or 118 may include a wide area network (WAN) component to enable message over a wide area network.”), (iii) a “network” (Fig. 1 and ¶57 show “network 106”), (iv) a “computing device” (Fig. 1 and ¶58 show “the electronic devices 102 may include, for example, a computing device, a mobile phone, a tablet, a laptop, and/or one or more servers”), and (v) a “GUI” (Fig. 1 and ¶59 show “By way of example, the user interface(s) 114 may include one or more of the user interfaces described elsewhere herein, such as the user interfaces described with respect to FIGS. 3-5, corresponding to a comprehensive score user interface, market analysis user interface, an innovation metric interface, etc.” Fig. 1 and ¶59 also show “The user interface(s) 114 may be configured to display information associated with the IP analysis platform and to receive user input associated with the IP analysis platform.”), are recited at a high level of generality, such that, when viewed as a whole/ordered combination (Figs. 1-2, ¶¶57-59, ¶¶67-69, and ¶¶97-98 show the elements in combination), they amount to no more than mere instructions to apply the judicial exception using generic computer components or “apply it” (see MPEP 2106.05(f)).
The (i) machine learning model, (ii) network protocol, (iii) network, (iv) computing device, and (v) graphical user interface (GUI), when viewed as a whole/ordered combination (Figs. 1-2, ¶¶57-59, ¶¶67-69, and ¶¶97-98 show the elements in combination), do no more than generally link the use of the judicial exception to a particular technological environment or field of use (i.e., a computer environment) (see MPEP 2106.05(h)).
Accordingly, these additional elements, when viewed as a whole/ordered combination (Figs. 1-2, ¶¶57-59, ¶¶67-69, and ¶¶97-98 show the elements in combination), do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claim is directed to an abstract idea.
Step 2B
As discussed above with respect to Step 2A Prong Two, the additional elements amount to no more than: (i) “apply it” (or an equivalent) and (ii) generally link the use of a judicial exception to a particular technological environment or field of use, and are not a practical application of the abstract idea. The same analysis applies here in Step 2B, i.e., (i) merely invoking the generic components as a tool to perform the abstract idea or “apply it” (See MPEP 2106.05(f)) and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)), does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B.
Therefore, the additional elements of (i) a machine learning model, (ii) a network protocol, (iii) a network, (iv) a computing device, and (v) a graphical user interface (GUI) do not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination (Fig. 1-2, ¶¶57-59, ¶¶67-69, and ¶¶97-98 show elements in combination.), nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Therefore, the claim is ineligible.
Dependent Claims 10-15 recite the abstract idea of:
. . . , wherein at least one of the first score or the second score is associated with at least one of architectural innovation, incremental innovation, radical innovation, or disruptive innovation. (Claim 10).
. . . , further comprising determining a maturity score associating with the market, wherein the at least one of the first score or the second score is based at least in part on the maturity score. (Claim 11).
. . . , wherein [displaying] a quadrant comprising: a first section associated with architectural innovation; a second section associated with incremental innovation; a third section associated with radical innovation; and a fourth section associated with disruptive innovation. (Claim 12).
. . . , wherein at least one of the first score or the second score is based at least in part on a number of IP assets associated with the market that have been published between the first date and the second date. (Claim 13).
. . . , wherein the at least one of the first score or the second score is determined . . . . (Claim 14).
. . . , wherein the at least one of the first score or the second score is determined using a quadratic regression. (Claim 15).
Dependent Claims 10-15 have been given the full two-prong analysis, including analyzing the further elements and limitations, both individually and in combination. When analyzed individually and in combination, these claims are also held to be patent ineligible under 35 U.S.C. 101. The further limitations of Claims 10-15 fail to establish claims that are not directed to an abstract idea because the further limitations (1) limit the type of scores, (2) display a graph of certain scores, and (3) generate scores and percentile rankings based on certain data. The further elements of Claims 10-15 (i.e. “machine learning” of Claim 14) fail to establish claims that are not directed to an abstract idea because the elements merely recite additional generic computer hardware (Fig. 2 and ¶¶97-98 shows “machine learning (ML) component 248”). The organization of the further limitations of Claims 10-15 fails to integrate the abstract idea into a practical application, just as discussed above for Claim 9. Additionally, performing the abstract idea of Claim 9 as recited in each of the further limitations of Claims 10-15, individually or in combination, does not (1) impose any meaningful limits on practicing the abstract idea, or (2) provide improvements to the functioning of computing systems or to another technology or technical field, just as discussed above regarding Claim 9. Therefore, Claims 10-15 amount to mere instructions to implement the abstract idea (1) using generic computer components—using the computer, in its ordinary capacity, as a tool to perform the abstract idea, and (2) generally linked to a particular technology or field of use. Because merely using a computer, in its ordinary capacity and in a particular field of use, as a tool to perform the abstract idea cannot provide an inventive concept, the elements and limitations of Claims 10-15 fail to establish that the claims provide an inventive concept, just as in Claim 9.
Therefore, Claims 10-15 fail the Subject Matter Eligibility Test and are consequently rejected under 35 U.S.C. 101.
Step 2A, Prong One
Independent Claim 16 recites the abstract idea of:
“. . . identifying at least one of an entity or a market associated with IP assets;
determining a respective date associated with individual ones of the IP assets filed between a first date and a second date, the respective date comprising at least one of a filing date, a publication date, or a priority date;
determining an average date associated with the IP assets filed between the first date and the second date;
determining an amount of time between the average date and the second date;
generating a . . . (ML) model configured to generate scores associated with at least one of entities or markets utilizing time-based metrics;
generating historical IP asset data indicative of innovation metrics;
creating a training dataset from
training the ML model utilizing the training dataset such that a trained ML model is generated;
receiving feedback data indicating performance of the trained ML model;
retraining the trained ML model such that a retrained ML model is generated;
generating, utilizing the retrained ML model, a score associated with the at least one entity or market based at least in part on the amount of time;
determining a market maturity associated with the at least one entity or market;
determining an input to the retrained ML model based at least in part on the score and the market maturity;
determining an output from the retrained ML model based at least in part on the input;
determining an innovation characteristic for the at least one entity or market based on the output from the retrained ML model, the innovation characteristic comprising one or more of an architectural innovation, an incremental innovation, a radical innovation, or a disruptive innovation;
generating a normalized score and a percentile ranking for the at least one entity or market based on comparing the score to scores of other entities or markets;
sending, . . . , transmitted data, the transmitted data being at least one of the first date, the second date, the respective date, the filing date, the publication date, the priority date, the average date, the amount of time, the score, or the market maturity and
displaying, to a user, . . . at least the transmitted data or a portion thereof in a quadrant graph that categorizes displays innovation characteristics,
wherein the [displaying] dynamically updates based at least in part on receiving new user input data to provide real-time innovation analysis and enable interactive exploration of the innovation characteristic by the user.”
The limitations stated above are processes/functions that, under the broadest reasonable interpretation, cover (1) identifying and determining entities or markets and dates associated with IP assets, (2) determining an average date and an amount of time, (3) generating a score for the entity or asset based on a learned model trained and re-trained on training data based on historical data indicative of innovation metrics, (4) determining an innovation characteristic, (5) sending data, and (6) displaying the scores and percentiles in response to receiving the data via a quadrant graph dynamically updated to provide real-time interactive exploration, all of which are: mathematical relationships (i.e. ranking percentiles and comparing scores) and mathematical calculations (i.e. generating scores, averages, and percentiles and training, including creating training data, retraining, and utilizing a model), which are mathematical concepts, an abstract idea, under MPEP 2106.04(a)(2)I; commercial or legal interactions (i.e. evaluating value, scoring and ranking assets, and sending and displaying asset/market information are “marketing or sales activities or behaviors”), which are certain methods of organizing human activity, an abstract idea, under MPEP 2106.04(a)(2)II; and observations (i.e. collecting data related to the IP asset), evaluations (i.e. determining scores and characteristics related to the IP asset), and judgments (i.e. ranking the IP asset), which are mental processes, an abstract idea, under MPEP 2106.04(a)(2)III. The mere recitation of generic computer components (i.e., the system, processors, non-transitory computer-readable media, machine learning model, network protocol, network, computing device, and graphical user interface (GUI)) implementing the identified abstract idea does not take the claim out of the mathematical concepts, certain methods of organizing human activity, or mental processes groupings. MPEP 2106.04(d).
If a claim limitation, under its broadest reasonable interpretation, covers “mathematical relationships,” “commercial or legal interactions,” “observations,” “evaluations,” and “judgments,” but for the recitation of generic computer components, then it falls in the mathematical concepts, certain methods of organizing human activity, or mental processes groupings of abstract ideas. MPEP 2106.04. Therefore, Claim 16 recites an abstract idea.
Step 2A, Prong Two
The judicial exception is not integrated into a practical application. Claim 16 as a whole amounts to: (i) merely invoking generic components as a tool to perform the abstract idea or “apply it” (or an equivalent) and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use. The claim recites the additional elements of:
(i) a system;
(ii) processors;
(iii) non-transitory computer-readable media;
(iv) a machine learning model;
(v) a network protocol;
(vi) a network;
(vii) a computing device; and
(viii) a graphical user interface (GUI).
The additional elements of (i) the system (Fig. 1-2, ¶57, and ¶60 shows “remote computing resources 104”), (ii) processors (Fig. 1-2 and ¶60 shows “one or more processors 116”), (iii) non-transitory computer-readable media (Fig. 1-2 and ¶60 shows “computer-readable media 120”), (iv) a machine learning model (Fig. 2 and ¶¶97-98 shows “machine learning (ML) component 248”), (v) a network protocol (¶68 shows “For instance, each of the network interface(s) 110 and/or 118 may include a personal area network (PAN) component to enable messages over one or more short-range wireless message channels. For instance, the PAN component may enable messages compliant with at least one of the following standards IEEE 802.15.4 (ZigBee), IEEE 802.15.1 (Bluetooth), IEEE 802.11 (WiFi), or any other PAN message protocol. Furthermore, each of the network interface(s) 110 and/or 118 may include a wide area network (WAN) component to enable message over a wide area network.”), (vi) a network (Fig. 1 and ¶57 shows “network 106”), (vii) a computing device (Fig. 1 and ¶58 shows “the electronic devices 102 may include, for example, a computing device, a mobile phone, a tablet, a laptop, and/or one or more servers”), and (viii) a graphical user interface (GUI) (Fig. 1 and ¶59 shows “By way of example, the user interface(s) 114 may include one or more of the user interfaces described elsewhere herein, such as the user interfaces described with respect to FIGS. 3-5, corresponding to a comprehensive score user interface, market analysis user interface, an innovation metric interface, etc.” Fig. 1 and ¶59 shows “The user interface(s) 114 may be configured to display information associated with the IP analysis platform and to receive user input associated with the IP analysis platform.”), are recited at a high level of generality, such that, when viewed as a whole/ordered combination (Fig. 1-2, ¶¶57-59, and ¶¶67-69 show elements in combination.), they amount to no more than mere instructions to apply the judicial exception using generic computer components or “apply it” (See MPEP 2106.05(f)).
The (i) system, (ii) processors, (iii) non-transitory computer-readable media, (iv) machine learning model, (v) network protocol, (vi) network, (vii) computing device, and (viii) graphical user interface (GUI), when viewed as a whole/ordered combination (Fig. 1-2, ¶¶57-59, and ¶¶67-69 show elements in combination.), do no more than generally link the use of the judicial exception to a particular technological environment or field of use (i.e. a computer environment) (See MPEP 2106.05(h)).
Accordingly, these additional elements, when viewed as a whole/ordered combination (Fig. 1-2, ¶¶57-59, and ¶¶67-69 show elements in combination.), do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claim is directed to an abstract idea.
Step 2B
As discussed above with respect to Step 2A Prong Two, the additional elements amount to no more than: (i) “apply it” (or an equivalent) and (ii) generally link the use of a judicial exception to a particular technological environment or field of use, and are not a practical application of the abstract idea. The same analysis applies here in Step 2B, i.e., (i) merely invoking the generic components as a tool to perform the abstract idea or “apply it” (See MPEP 2106.05(f)) and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)), does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B.
Therefore, the additional elements of (i) the system, (ii) processors, (iii) non-transitory computer-readable media, (iv) machine learning model, (v) network protocol, (vi) network, (vii) computing device, and (viii) graphical user interface (GUI) do not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination (Fig. 1-2, ¶¶57-59, and ¶¶67-69 show elements in combination.), nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Therefore, the claim is ineligible.
Dependent Claims 17-20 recite the abstract idea of:
. . . associating the score with an innovation score, wherein the innovation score is associated with at least one of architectural innovation, incremental innovation, radical innovation, or disruptive innovation. (Claim 17).
. . . determining at least one of a portfolio quality score associated with the at least one entity or market, a filing velocity score associated with the at least one entity or market, a revenue associated with the at least one entity or market, a research and development score associated with the at least one entity or market, or a prosecution cost score associated with the at least one entity or market, wherein the innovation score is based at least in part on the portfolio quality score, the filing velocity score, the research and development score, the revenue, the prosecution cost score, the score, or the market maturity. (Claim 18).
. . . [displaying] at least one of the score and the market maturity; and causing . . . to be displayed . . . , wherein [displaying] a quadrant comprising: a first section associated with architectural innovation; a second section associated with incremental innovation; a third section associated with radical innovation; and a fourth section associated with disruptive innovation. (Claim 19).
. . . determining a number of IP assets associated with the at least one entity or market that have been published between the first date and the second date, wherein the score is based at least in part on the number of IP assets associated with the at least one entity or market that have been published between the first date and the second date. (Claim 20).
Dependent Claims 17-20 have been given the full two-prong analysis, including analyzing the further elements and limitations, both individually and in combination. When analyzed individually and in combination, these claims are also held to be patent ineligible under 35 U.S.C. 101. The further limitations of Claims 17-20 fail to establish claims that are not directed to an abstract idea because the further limitations (1) limit the type of scores, (2) display a graph of certain scores, and (3) generate scores based on certain data. The elements of Claims 17-20 (i.e. “computing device” and “GUI” of Claim 19) fail to establish claims that are not directed to an abstract idea because the elements merely recite additional generic computer hardware (Fig. 1 and ¶¶58-59 shows “user interface(s) 114” and “electronic devices 102”). The organization of the further limitations of Claims 17-20 fails to integrate the abstract idea into a practical application, just as discussed above for Claim 16. Additionally, performing the abstract idea of Claim 16 as recited in each of the further limitations of Claims 17-20, individually or in combination, does not (1) impose any meaningful limits on practicing the abstract idea, or (2) provide improvements to the functioning of computing systems or to another technology or technical field, just as discussed above regarding Claim 16. Therefore, Claims 17-20 amount to mere instructions to implement the abstract idea (1) using generic computer components—using the computer, in its ordinary capacity, as a tool to perform the abstract idea, and (2) generally linked to a particular technology or field of use. Because merely using a computer, in its ordinary capacity and in a particular field of use, as a tool to perform the abstract idea cannot provide an inventive concept, the elements and limitations of Claims 17-20 fail to establish that the claims provide an inventive concept, just as in Claim 16.
Therefore, Claims 17-20 fail the Subject Matter Eligibility Test and are consequently rejected under 35 U.S.C. 101.
Reasons for No Art Rejection
The amended Claims 1-20 are not rejected under 35 U.S.C. 102 or 35 U.S.C. 103. In the Non-Final Office Action mailed on 08/28/2024, Claims 1-20 were not rejected over the prior art of record. The amended Claims 1-20 herein are likewise not rejected over the prior art of record, for reasons similar to those given for Claims 1-20 in the Non-Final Office Action mailed on 08/28/2024.
The reasons for No Art Rejection from the Non-Final Office Action mailed on 08/28/2024 are copied below for reference.
Claims 1-20 are novel and non-obvious over the prior art of record.
The closest prior art of record is:
US-20110246379-A1 (“Maddox”);
US-20230377074-A1 (“Block”);
US-20190066219-A1 (“Ouderkirk”);
US-20120278244-A1 (“Lee ‘244”);
US-20140279584-A1 (“Lee ‘584”);
US-20160350886-A1 (“Jessen”);
CN-106446071-A (“Cui”);
US-20140379590-A1 (“Germeraad”);
“Essays on Stock Return Predictability: Novel Measures Based on Technology Spillover and Firm’s Public Announcement” (“Bai” July 2014, UMI Number: 3638978, University of Cincinnati);
“A STUDY OF INNOVATION AND PATENTING IN THE LIFE SCIENCES” (“Zahringer” May 2014, ProQuest Number: 10157746, University of Missouri – Columbia);
US-20220277046-A1 (“Hanganu”);
US-10453144-B1 (“McRae”);
KR-20220102745-A (“Kyung”);
US-20090234688-A1 (“Masuyama”); and
US-20030036945-A1 (“Del Vecchio”).
The following is an examiner’s statement of reasons that the claims overcome the prior art of record:
Maddox discloses analyzing a group of patents by determining a value of parameters in selected target patents and comparison patents in a patent population, calculating the average value and standard deviation of those parameters in the patent population, and then basing target patent scores on the average value and standard deviation of the parameter (Fig. 1 and ¶¶5-8. See also Fig. 4-5 showing example scores.). The parameters include “Age (in years)” and “priority date” among several other parameters (¶43).
Block discloses comparing the current age (i.e. distance between filing date and date of valuation) against a Patent Valuation Model which includes an average useful life of patents in a target market (¶¶50-52). Block further discloses comparing the average score of a group of patents to the overall average score for the entire industry (¶¶53-65).
Ouderkirk discloses a method of determining an inventor impact by calculating the average Architect scores, Innovator scores, and Specialist scores over time (¶¶97-124). Ouderkirk further discloses normalizing these scores against an organization’s value (Fig. 9a-9b and ¶¶161-62). The age of a patent from a priority date can be used as part of a weighted average for the calculated quality score (¶56). Additionally, the total number of patents within a certain time period can be used to calculate certain quality scores (Fig. 10 and ¶¶163-65. See also Example 5 in ¶¶143-51.).
Lee ‘244 discloses comparing patent parameters against averages of that parameter in a group of patents of a similar class (¶122). The scores are normalized (¶78).
Lee ‘584 discloses analyzing groups of patents against competitor groups of patents over a certain period of time, such as changes in number of granted or filed patents over a certain period of time (Fig. 9C, ¶33, and ¶48. See also Fig. 14-16 showing search methods.). Specifically, Lee ‘584 uses a growth rate analysis to compute a filing rate based on the number of filings period over period, and an acceleration rate based on the increase or decrease of filing period over period (Fig. 13 and ¶62).
Jessen discloses comparing patent parameters against averages of that parameter in a group of patents of a similar class, similar to Lee ‘244.
Cui discloses scoring a value of a single patent based on the distance from the date of the single patent from the average date of a group of patents in the fourth embodiment (Pages 10-11).
Germeraad discloses averaging dates of a group of patents to determine metrics of that group (Fig. 6 and ¶¶61-63).
Bai discloses a tech index of the ratio of the number of awarded patents in a certain technology field to a certain firm over a certain period of time against the number of awarded patents of that firm during that period of time for all technology fields (Page 9).
Zahringer discloses averaging the age of a group of patents as a performance metric (Page 18 and Table 1-1, Page 54 and Table 2-1, Page 59 and Table 2-3, and Pages 82-89.).
Hanganu discloses calculating the average difference between the filing date and the present date to represent the average age of a trademark registration for a group (Fig. 2-3 and 5, ¶6, ¶11, and ¶54). However, Hanganu teaches away from Applicant’s claims because a higher age indicates a higher value (¶38), in addition to the average age not being bounded by dates.
McRae discloses calculating the average difference between the filing date and publication of each patent for a group of patents (C22L11-35. See also Fig. 16-22 showing analysis of IP budget).
Kyung discloses using average filing date as the reference time for a patent portfolio (Pages 9-10).
Masuyama discloses calculating a cluster score for a cluster of patents, such as the average filing date, publication date, registration date, etc. (Fig. 13 and ¶¶299-314 shows the methods of cluster scores. See also Fig. 14-16D and ¶¶315-23 showing examples.). The evaluation value makes it possible to consider a newer document group as a stronger patent group (¶96).
Del Vecchio discloses creating a report of a patent portfolio based on the average patent term (Fig. 1, ¶¶17-18, and ¶¶54-60). The report includes determining the strength of the patent based on patent years and average patent term (Fig. 11, ¶25, and ¶¶114-21 showing more detail.).
Generally, the closest prior art teaches evaluating the value of an asset 1) during a certain time period, 2) based on averages, or 3) based on dates, but without the specific calculation outlined in the claims. With respect to independent Claim 1, the closest prior art, taken individually and in an ordered combination, does not explicitly or implicitly disclose the specific ordered combination of features and elements that includes:
“determining a respective date associated with individual ones of the first IP assets filed between a first date and a second date, . . .
determining a first average date associated with the first IP assets filed between the first date and the second date;
determining a first amount of time between the first average date and the second date;
generating a first score associated with at least one of the first entity or the first IP assets based at least in part on the first amount of time;”
(Emphasis added).
Specifically, the score being based on a “first amount of time,” defined as the time between the “second date” and the “first average date,” which is defined as the mean of each of the “respective dates.” The broadest reasonable interpretation of this limitation excludes a score merely being (1) based on each individual respective date (without averaging the dates and using the difference of the average date and the second date), (2) based on the times between the “second date” and each individual “respective date,” (3) based on the average date (without using the time duration between the average date and the second date), (4) based on the time between the current date and an average date (without the “first date”), and (5) based on a weighted average date (i.e. certain respective dates are given more weight than others). Based on this broadest reasonable interpretation of the claim language, Claim 1 is held to be novel and non-obvious over the prior art.
Independent Claims 9 and 16 recite a similar combination of features to Claim 1, and therefore are also held to be novel and non-obvious over the prior art.
Dependent Claims 2-8, 10-15, and 18 narrow the scope of Independent Claims 1, 9, and 16, and therefore are also held to be novel and non-obvious over the prior art via dependency.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW PARKER GOODMAN whose telephone number is (571) 272-5698. The examiner can normally be reached on Monday-Thursday from 9:30 AM ET to 6:00 PM ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey Zimmerman, can be reached at telephone number (571) 272-4602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
/MATTHEW PARKER GOODMAN/Examiner, Art Unit 3628
/JESSICA LEMIEUX/Supervisory Patent Examiner, Art Unit 3626