Prosecution Insights
Last updated: April 19, 2026
Application No. 17/480,168

PREDICTIVE ANALYTICS MODEL MANAGEMENT USING COLLABORATIVE FILTERING

Non-Final OA: §102, §103, §112
Filed: Sep 21, 2021
Examiner: PARK, GRACE A
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 3 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 76% (421 granted / 557 resolved; +20.6% vs TC avg, above average)
Interview Lift: +18.2% higher allowance rate among resolved cases with an interview vs. without
Avg Prosecution: 3y 4m typical timeline; 23 applications currently pending
Total Applications: 580 across all art units (career history)
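The headline figures above fit together with simple arithmetic. A minimal sketch (not the platform's actual code; treating the 94% "with interview" figure as the career allow rate plus the interview lift is an assumption inferred from the displayed numbers):

```python
# Illustrative only: how this page's headline examiner stats appear to relate.
granted, resolved = 421, 557

career_allow_rate = granted / resolved          # ~0.756, displayed as 76%
interview_lift = 0.182                          # +18.2 pt allowance lift

# Assumption: "With Interview" = base rate + interview lift, capped at 100%.
with_interview = min(career_allow_rate + interview_lift, 1.0)  # ~0.938 -> 94%

print(f"Career allow rate: {career_allow_rate:.1%}")  # 75.6%
print(f"With interview:    {with_interview:.1%}")     # 93.8%
```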

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Deltas are relative to Tech Center average estimates • Based on career data from 557 resolved cases
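Each delta is simply the examiner's rate minus a Tech Center baseline. A quick illustrative sketch that back-solves the implied baseline from the displayed figures (variable names are hypothetical):

```python
# Back-solve the implied TC baseline: tc_avg = examiner_rate - delta.
examiner_rate = {"101": 11.1, "103": 53.7, "102": 17.0, "112": 10.4}  # percent
delta_vs_tc   = {"101": -28.9, "103": 13.7, "102": -23.0, "112": -29.6}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]   # e.g. §103: 53.7 - 13.7 = 40.0
    print(f"§{statute}: {rate:.1f}% vs TC avg ~{tc_avg:.1f}% "
          f"({delta_vs_tc[statute]:+.1f} pts)")

# Note: every statute back-solves to ~40.0%, suggesting the page measures all
# four rates against a single TC-wide baseline estimate.
```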

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 2, 2026 has been entered.

Response to Amendment and Arguments

Claims 1-27 are pending and are being examined in this application. In light of Applicant's amendments to the claims, the §101 rejection is withdrawn. The new limitation "wherein each predictive model is trained based on a corresponding subset of training data streams assigned to the corresponding data stream group, wherein the respective subsets of training data streams are based on the same feature set as the data stream" provides additional details of a technical solution (i.e., training each predictive model using subsets based on the same feature set) that, together with the previously presented grouping and selecting steps, addresses the technical problem of training separate models being impractical or infeasible. Applicant's arguments with respect to the §103 rejections have been considered, but are moot in view of the new mappings and/or ground(s) of rejection provided below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claims 2, 10, and 21 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which they depend, or for failing to include all the limitations of the claim upon which they depend. The subject matter of claims 2, 10, and 21 is substantially similar to the limitation "wherein each predictive model in the set of predictive models is trained to predict a target variable for a corresponding data stream group in the set of data stream groups, wherein each predictive model is trained based on a corresponding subset of training data streams assigned to the corresponding data stream group, wherein the respective subsets of training data streams are based on the same feature set as the data stream" in claims 1, 9, and 20 from which they respectively depend.
Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 9, 10, 20, 21, and 23 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Tabuchi et al. (US Pub. 20190244129).

Referring to claim 1, Tabuchi discloses A computing device, comprising: interface circuitry; and processing circuitry [fig. 3, computer system 300, network interface 318, processor 302] to: receive, via the interface circuitry [fig. 3; network interface 318 connects to network 330], a data stream associated with a machine captured at least partially by one or more sensors [pars. 66, 70, and 71; a set of raw data is collected from an information source in a network communication environment; the network communication environment may include a system of interconnected sensors that transmit active data (i.e., are configured to continuously transmit data)], wherein the data stream comprises a set of feature values corresponding to an unlabeled instance [par. 85; note instance-based algorithms, cluster analysis (i.e., an unsupervised machine learning technique used with unlabeled data), and combinations thereof] of a feature set, wherein at least some of the feature values are captured by one or more sensors associated with the machine [pars. 75 and 76; the raw data is analyzed using a data interpretation dictionary to determine a set of attributes (i.e., a feature set), map the set of attributes to corresponding values (i.e., feature values) in the raw data (e.g., sensor measurements), and generate a set of interpreted data (i.e., a set of feature values)]; assign the data stream to a data stream group, wherein the data stream group is selected from a set of data stream groups based on the set of feature values in the data stream [fig. 1; pars. 45, 67, 73, and 77; the network communication environment includes a group of information sources (i.e., set of data stream groups), each having a set of information sources (i.e., data stream group); the raw data includes an information source identification attribute (i.e., feature value), which is used to identify the information source (and thus the set of information sources) from which the raw data was collected; note grouping according to attributes (e.g., information source)]; select, from a set of predictive models, a predictive model corresponding to the data stream group assigned to the data stream [pars. 78, 79, and 82; an AI logic unit (e.g., a natural language processing technique, image analysis technique, predictive analytics, statistical analysis, prescriptive analytics, market modeling, web analytics, security analytics, risk analytics, software analytics, and the like) is selected from available AI logic units based on different attributes (e.g., information sources); note that AI logic units are automatically determined to process specific sets of data ingested from a set of information sources, so that administrators need not manually select AI logic units for particular information sources], wherein each predictive model in the set of predictive models is trained to predict a target variable for a corresponding data stream group in the set of data stream groups, wherein each predictive model is trained based on a corresponding subset of training data streams assigned to the corresponding data stream group [pars. 73, 78, 79, and 84-86; subsets of interpreted data are used to respectively train AI logic units corresponding to different attributes (e.g., information sources)], wherein the respective subsets of training data streams are based on the same feature set as the data stream [pars. 84-86; note that the subsets are all subsets of the set of interpreted data]; and predict the target variable for the data stream using the predictive model, wherein the predictive model infers the target variable based on the set of feature values in the data stream [par. 80; the AI logic unit processes the set of interpreted data (e.g., to extract a conclusion or inference from the set of interpreted data)].

Referring to claim 2, see the rejection for claim 1.

Referring to claim 9, see at least the rejection for claim 1. Tabuchi further discloses At least one non-transitory computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed on processing circuitry, cause the processing circuitry to perform the claimed steps [fig. 3, computer system 300, memory 304, data orchestration platform management 350, processor 302].

Referring to claim 10, see the rejection for claim 9.

Referring to claim 20, see the rejection for claim 1, which incorporates the claimed method.

Referring to claim 21, see the rejection for claim 20.

Referring to claim 23, see at least the rejection for claim 1. Tabuchi further discloses A system, comprising: one or more sensors [par. 66; interconnected sensors]; interface circuitry; and processing circuitry to perform the claimed steps [fig. 3, computer system 300, network interface 318, processor 302].
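Stripped of citations, the independent claims mapped above recite a three-step dispatch pipeline: assign the incoming stream to a group based on its feature values, select the predictive model trained for that group, and infer the target variable. A minimal Python sketch of that pipeline; every name below is hypothetical, and nothing is taken from Tabuchi or the application's actual disclosure:

```python
from typing import Callable, Dict, Sequence

class GroupModel:
    """Stand-in for a model trained on one group's subset of training
    streams (all groups share the same feature set, per claim 1)."""
    def __init__(self, bias: float):
        self.bias = bias

    def predict(self, features: Sequence[float]) -> float:
        # Toy inference: mean of the feature values plus a per-group bias.
        return sum(features) / len(features) + self.bias

def predict_target(
    features: Sequence[float],
    assign_group: Callable[[Sequence[float]], int],
    models: Dict[int, GroupModel],
) -> float:
    group = assign_group(features)    # claim 1: assign stream to a group
    model = models[group]             # claim 1: select that group's model
    return model.predict(features)    # claim 1: predict the target variable

# Usage: two groups split on the first feature value.
models = {0: GroupModel(bias=0.0), 1: GroupModel(bias=10.0)}

def assign(features: Sequence[float]) -> int:
    return 0 if features[0] < 0.5 else 1

print(predict_target([0.9, 0.2, 0.4], assign, models))  # routed to group 1 -> 10.5
```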
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-7, 11-17, 22, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi in view of DeLand et al. (US Pub. 20170124178).

Referring to claim 3, Tabuchi discloses The computing device of Claim 1, wherein the processing circuitry to assign the data stream to the data stream group is further to: select the data stream group to assign to the data stream using a grouping model [par. 77; the grouping according to attributes is performed using a machine learning model]. Tabuchi does not appear to explicitly disclose wherein the grouping model selects the data stream group from the set of data stream groups based on a comparison of the data stream to a grouping dataset, wherein the grouping dataset comprises a set of representative data streams for each data stream group in the set of data stream groups. However, DeLand discloses these limitations [fig. 2, steps 205-215; pars. 21, 22, and 33-35; a clustering algorithm (i.e., grouping model) assigns new streaming data to a cluster (i.e., data stream group) of a model core group based on its feature data by comparing the feature data to data representing each cluster]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the grouping according to attributes taught by Tabuchi so that the grouping is performed by the clustering algorithm taught by DeLand, with a reasonable expectation of success. The motivation for doing so would have been to provide improved clustering for multi-dimensional dynamic data [DeLand, pars. 22 and 23].

Referring to claim 4, DeLand discloses The computing device of Claim 3, wherein the processing circuitry to select the data stream group to assign to the data stream using the grouping model is further to: compute, based on a distance calculation, a distance of the data stream to each data stream group in the set of data stream groups, wherein the distance to each data stream group is computed based on the set of representative data streams for each data stream group; and select, from the set of data stream groups, the data stream group having a closest distance to the data stream [par. 21; the clustering algorithm is a k-means algorithm that assigns objects to a cluster determined to be the nearest to the object based on comparing the Euclidean distances along one or more data dimensions between the data representing the object and the data representing the cluster].

Referring to claim 5, DeLand discloses The computing device of Claim 3, wherein the grouping model comprises a clustering model [par. 21; note the clustering algorithm].

Referring to claim 6, Tabuchi and DeLand disclose The computing device of Claim 3, wherein the processing circuitry is further to: select the set of representative data streams for each data stream group from a training dataset, wherein the training dataset comprises a set of training data streams; and generate the grouping dataset for the grouping model, wherein the grouping dataset comprises the set of representative data streams selected for each data stream group [Tabuchi: pars. 73, 78, 79, and 84-86; note the subsets of interpreted data used to respectively train the AI logic units corresponding to different attributes / DeLand: fig. 2, steps 205-215; pars. 21, 22, and 33-35; note the assigning of the new streaming data to a cluster (i.e., data stream group) of the model core group based on its feature data by comparing the feature data to data representing each cluster].

Referring to claim 7, Tabuchi does not appear to explicitly disclose The computing device of Claim 1, wherein the processing circuitry is further to: detect a change in the set of feature values in the data stream; determine, based on the change in the set of feature values, that a grouping of the data stream is to be updated, wherein the data stream is to be reassigned to a second data stream group in the set of data stream groups; and dynamically update the set of data stream groups to reassign the data stream to the second data stream group. However, DeLand discloses these limitations [fig. 2, step 220; pars. 20-22, and 33-35; a clustering algorithm (i.e., grouping model) assigns new streaming data to a cluster (i.e., data stream group) of a model core group based on its feature data by comparing the feature data to data representing each cluster; the model core group is dynamically updated in response to data changes in the new streaming data]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the grouping according to attributes taught by Tabuchi so that the grouping is performed by the clustering algorithm taught by DeLand, with a reasonable expectation of success. The motivation for doing so would have been to provide improved clustering for multi-dimensional dynamic data [DeLand, pars. 22 and 23].

Referring to claim 11, see the rejection for claim 3. Referring to claim 12, see the rejection for claim 4.

Referring to claim 13, DeLand discloses The storage medium of Claim 12, wherein the distance calculation comprises: a Euclidean distance calculation; a Jaccard calculation; or a dynamic time warping calculation [par. 21; note the Euclidean distances].

Referring to claim 14, see the rejection for claim 5.

Referring to claim 15, DeLand discloses The storage medium of Claim 14, wherein the clustering model comprises a k-means clustering model [par. 21; note the k-means algorithm].

Referring to claim 16, see the rejection for claim 6. Referring to claim 17, see the rejection for claim 7. Referring to claim 22, see the rejection for claim 7.

Referring to claim 26, Tabuchi does not appear to explicitly disclose The computing device of Claim 1, wherein the processing circuitry to assign the data stream to the data stream group is further to: select the data stream group from the set of data stream groups based on a similarity between the data stream and one or more representative data streams in each data stream group, wherein the representative data streams are based on the same feature set as the data stream. However, DeLand discloses these limitations [fig. 2, steps 205-215; pars. 21, 22, and 33-35; a clustering algorithm (i.e., grouping model) assigns new streaming data to a cluster (i.e., data stream group) of a model core group based on its feature data by comparing the feature data to data representing each cluster; the clustering algorithm is a k-means algorithm that assigns objects to a cluster determined to be the nearest to the object based on comparing the Euclidean distances along one or more data dimensions between the data representing the object and the data representing the cluster; see also Tabuchi, pars. 73, 78, 79, and 84-86 disclosing training of respective AI logic units using subsets from the same set of interpreted data]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the grouping according to attributes taught by Tabuchi so that the grouping is performed by the clustering algorithm taught by DeLand, with a reasonable expectation of success. The motivation for doing so would have been to provide improved clustering for multi-dimensional dynamic data [DeLand, pars. 22 and 23].
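The DeLand mappings above reduce to nearest-representative assignment: compare the stream's feature values against representative data for each group, pick the closest group (claims 3-5), and reassign when the features drift (claim 7); claim 13 allows Euclidean, Jaccard, or dynamic-time-warping distances. A stdlib-only sketch under those assumptions, with illustrative names and thresholds rather than DeLand's actual implementation:

```python
import math
from typing import Dict, Sequence

def euclidean(a: Sequence[float], b: Sequence[float]) -> float:
    # Claim 13 also permits Jaccard or dynamic-time-warping distances;
    # Euclidean is the variant tied to DeLand's k-means discussion.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign_to_group(
    features: Sequence[float],
    representatives: Dict[int, Sequence[float]],  # one representative per group
) -> int:
    """Return the group whose representative is nearest the stream (claims 3-5)."""
    return min(representatives, key=lambda g: euclidean(features, representatives[g]))

def maybe_reassign(
    features: Sequence[float],
    current_group: int,
    representatives: Dict[int, Sequence[float]],
    drift_threshold: float = 1.0,   # hypothetical hysteresis margin
) -> int:
    """Reassign the stream if its features drift closer to another group (claim 7)."""
    nearest = assign_to_group(features, representatives)
    if nearest != current_group:
        gap = (euclidean(features, representatives[current_group])
               - euclidean(features, representatives[nearest]))
        if gap > drift_threshold:
            return nearest
    return current_group

# Usage: two groups; the stream's features drift toward group 1.
reps = {0: [0.0, 0.0], 1: [5.0, 5.0]}
print(assign_to_group([0.4, 0.1], reps))    # -> 0
print(maybe_reassign([4.2, 4.7], 0, reps))  # -> 1 (reassigned)
```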
Claims 8, 18, 19, 24, 25, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi in view of Wynne et al. (US Pub. 20200103857).

Referring to claim 8, Tabuchi discloses The computing device of Claim 1, wherein: the computing device is: an edge server [par. 110; note edge servers]; a tool controller to control a tool; or a robot controller to control a robot. Tabuchi does not appear to explicitly disclose wherein the computing device is a tool controller to control a tool; or a robot controller to control a robot; and the target variable comprises a predicted quality level of a task performed by the tool or the robot. However, Wynne discloses these limitations [pars. 19, 20, 26, 30, and 87; a manufacturing production line includes equipment such as a CNC mill, industrial robots, conveyor systems, and printers, each of which includes sensors; a machine learning module uses AI to process and analyze device data to perform various prediction tasks such as detecting hardware failures, manufacturing quality issues, and production inefficiencies in the manufacturing production line]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the processing performed by the AI logic units taught by Tabuchi so that the processing includes the various prediction tasks taught by Wynne, with a reasonable expectation of success. The motivation for doing so would have been to monitor and analyze sensor feedback in a manufacturing production line to optimize real-time and future production and improve production and business decisions [Wynne, par. 17].

Referring to claim 18, see the rejection for claim 8.
Referring to claim 19, Wynne discloses The storage medium of Claim 18, wherein the task comprises a manufacturing task performed to manufacture a product [pars. 19, 20, 26, 30, and 87; note the various prediction tasks such as detecting hardware failures, manufacturing quality issues, and production inefficiencies in the manufacturing production line].

Referring to claim 24, Tabuchi does not appear to explicitly disclose The system of Claim 23, wherein: the machine is a tool or a robot; and the target variable comprises a predicted quality level of a task performed by the tool or the robot. However, Wynne discloses these limitations [pars. 19, 20, 26, 30, and 87; a manufacturing production line includes equipment such as a CNC mill, industrial robots, conveyor systems, and printers, each of which includes sensors; a machine learning module uses AI to process and analyze device data to perform various prediction tasks such as detecting hardware failures, manufacturing quality issues, and production inefficiencies in the manufacturing production line]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the processing performed by the AI logic units taught by Tabuchi so that the processing includes the various prediction tasks taught by Wynne, with a reasonable expectation of success. The motivation for doing so would have been to monitor and analyze sensor feedback in a manufacturing production line to optimize real-time and future production and improve production and business decisions [Wynne, par. 17].

Referring to claim 25, Wynne discloses The system of Claim 24, wherein the tool is: a welding gun; a glue gun; a riveting machine; a screwdriver; or a pump [pars. 68 and 95; the manufacturing production line includes pumps].

Referring to claim 27, Tabuchi does not appear to explicitly disclose The computing device of Claim 1, wherein the machine comprises (i) a robot, (ii) a tool, or (iii) a robot and a tool. However, Wynne discloses these limitations [pars. 19, 20, 26, 30, and 87; a manufacturing production line includes equipment such as a CNC mill, industrial robots, conveyor systems, and printers, each of which includes sensors; a machine learning module uses AI to process and analyze device data to perform various prediction tasks such as detecting hardware failures, manufacturing quality issues, and production inefficiencies in the manufacturing production line]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the processing performed by the AI logic units taught by Tabuchi so that the processing includes the various prediction tasks taught by Wynne, with a reasonable expectation of success. The motivation for doing so would have been to monitor and analyze sensor feedback in a manufacturing production line to optimize real-time and future production and improve production and business decisions [Wynne, par. 17].

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE PARK, whose telephone number is (571) 270-7727. The examiner can normally be reached M-F, 8AM-5PM.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TAMARA KYLE, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Grace Park/
Primary Examiner, Art Unit 2144

Prosecution Timeline

Sep 21, 2021: Application Filed
Jan 04, 2022: Response after Non-Final Action
Feb 18, 2025: Non-Final Rejection — §102, §103, §112
May 27, 2025: Response Filed
Sep 29, 2025: Final Rejection — §102, §103, §112
Oct 02, 2025: Response after Non-Final Action
Feb 02, 2026: Request for Continued Examination
Feb 09, 2026: Response after Non-Final Action
Mar 07, 2026: Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591807
SKETCHED AND CLUSTERED FEDERATED LEARNING WITH AUTOMATIC TUNING
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12585924
CAUSAL MULTI-TOUCH ATTRIBUTION
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12585728
METHOD AND APPARATUS FOR MACHINE LEARNING BASED INLET DEBRIS MONITORING
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12579150
Hybrid and Hierarchical Multi-Trial and OneShot Neural Architecture Search on Datacenter Machine Learning Accelerators
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12579431
METHOD AND SYSTEM FOR MACHINE LEARNING BASED UNDERSTANDING OF DATA ELEMENTS IN MAINFRAME PROGRAM CODE
Granted Mar 17, 2026 • 2y 5m to grant
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 94% (+18.2%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
