DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the communication filed 12/30/2025.
The instant application, No. 18/227,500, filed on July 28, 2023, presents claims 1-20 for examination. The instant application has no priority claim.
Status of the Claims
Claims 1, 13, and 20 are amended; claims 1-20 are currently pending in the application.
Response to Amendment
Regarding the 35 U.S.C. § 101 rejection: the amended claims do not overcome the § 101 rejections; the rejections are maintained as set forth below.
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With respect to claim 1 (Currently Amended): this claim falls within at least one of the four statutory categories of patent-eligible subject matter, as it is directed to a process, under Step 1.
Under Step 2A, Prong One:
However, the limitations of claim 1,
“normalizing, the subset of the OSS metadata, into normalized OSS metadata […] configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format;
performing, on the normalized OSS metadata, ML data typification processing to create normalized static data snapshots […], wherein the data typification processing assigns data types to metadata fields to generate an n-dimensional space vector map representing multi-dimensional metadata characteristics;
performing, on the normalized static data snapshots, surface analysis […] to generate time-based surface analysis data including a rolling time-series n-dimensional space vector surface representing temporal trends in the OSS metadata;
performing, on the normalized static data snapshots, cluster analysis […] to generate time-based cluster analysis data using a density-based spatial clustering of applications with noise (DBSCAN) machine learning technique to create clusters of interior, on-surface, and exterior data points corresponding to metadata stability;
configuring the distributed computing network to process the dynamic data across multiple nodes to enable scalability for handling metadata from a plurality of open-source repositories;
applying, […], a filter graph to the subset of the OSS metadata to extract features from the heterogeneous metadata formats, wherein the filter graph comprises cascading filters executed across the multiple nodes to transform the heterogeneous metadata formats into the standardized dataset format;
performing, using the filter graph, a lifecycle ownership cost analysis to calculate cost versus reliability versus time for minimization analysis;
generating, […], the rolling time-series n-dimensional space vector surface by computing numerical vectors for each OSS metadata point using a coordinate system defined for the surface, wherein the numerical vectors are analyzed using statistical methods to identify temporal trends;
generating, […] based on the dynamic data, an EOL deprecation prediction for the OSS using an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique to identify vectors trending toward the interior of a multi-dimensional tensor, wherein the EOL deprecation prediction is visually presented as the multi-dimensional tensor indicating a percentage likelihood of deprecation within a specified time period based on historical deprecation patterns; and
generating, based on the EOL deprecation prediction, a predictive analysis of where computer-managed software maintenance is indicated to extend a pre-EOL timeframe.”
as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. E.g., for the “normalizing…” limitation, other than reciting “a ML normalization module,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “a ML normalization module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format” language, “normalizing” in the context of this claim encompasses a user manually normalizing the subset of the OSS metadata into normalized OSS metadata as defined in the claim element. For the four “performing…” limitations, other than reciting “by an ML data typification module,” “by the ML surface analytics module,” “by the ML cluster analytics module,” and “using a density-based spatial clustering of applications with noise (DBSCAN) machine learning technique,” nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for that language, “performing” in the context of this claim encompasses a user manually performing the ML data typification; the surface analysis; the cluster analysis, generating time-based cluster analysis data and creating clusters of interior, on-surface, and exterior data points corresponding to metadata stability, as defined in the claim elements; or the lifecycle ownership cost analysis as defined in the claim.
Similarly, a user can manually configure the distributed computing network as defined in the claim, manually apply a filter graph to the subset of the OSS metadata to extract features from the heterogeneous metadata formats as defined in the claim, manually generate the rolling time-series n-dimensional space vector surface as defined in the claim, manually generate an EOL deprecation prediction for the OSS as defined in the claim element, and manually generate a predictive analysis as defined in the claim. Thus, these limitations recite, and fall within, the “Mental Processes” grouping of abstract ideas under Step 2A, Prong One.
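For context on the DBSCAN technique recited in the claim, the density-based labeling of interior (core), on-surface (border), and exterior (noise) points can be sketched minimally as follows. This sketch is illustrative only; it is not the applicant's disclosed implementation, it omits the cluster-assignment step of full DBSCAN, and the sample points and the `eps`/`min_pts` parameters are hypothetical.

```python
from math import dist

def classify_points(points, eps, min_pts):
    """DBSCAN-style density labels: 'interior' (core point),
    'on-surface' (border point), 'exterior' (noise point).
    Cluster assignment itself is omitted for brevity."""
    # Neighborhood of each point within radius eps (includes the point itself).
    neighbors = {
        i: [j for j, q in enumerate(points) if dist(points[i], q) <= eps]
        for i in range(len(points))
    }
    # Core points have at least min_pts neighbors.
    core = {i for i, nb in neighbors.items() if len(nb) >= min_pts}
    labels = {}
    for i in range(len(points)):
        if i in core:
            labels[i] = "interior"      # dense neighborhood
        elif any(j in core for j in neighbors[i]):
            labels[i] = "on-surface"    # sparse, but adjacent to a core point
        else:
            labels[i] = "exterior"      # isolated (noise)
    return labels
```

Under hypothetical parameters, tightly grouped points are labeled interior, a point reachable from the group but not itself dense is labeled on-surface, and an isolated point is labeled exterior, mirroring the claim's "metadata stability" classification.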
Under Step 2A, Prong Two:
The judicial exception is not integrated into a practical application. The claim recites the following additional elements:
“retrieving, from open-source repositories in cloud-service providers, OSS repositories via a distributed computing network, OSS indicia and OSS metadata by a machine learning (ML) retrieval module engine executing code instructions on the distributed computing network, wherein the OSS indicia includes software identifiers and the OSS metadata comprises release notes, source code commits, and defect tickets;
storing, in a master OSS datastore, the OSS indicia and corresponding OSS metadata;
extracting, from the master OSS datastore based on selected criteria, a subset of the OSS metadata by the ML retrieval module engine;”
“storing, in a static datastore, the normalized static data snapshots, and providing the static data snapshots to: an ML surface analytics module, an ML cluster analytics module, and a dynamic data store;”
“integrating, into dynamic data in the dynamic data store, the time-based surface analysis data and the time-based cluster analysis data to update the dynamic data store with temporal metadata trends for real-time analysis;” and
“a distributed computing network”, “the ML retrieval module engine”, “a ML normalization module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format”, “an ML data typification module”, “an ML surface analytics module”, “an ML cluster analytics module”, “integrating, into dynamic data in a dynamic data store, … to update the dynamic data store with temporal metadata trends for real-time analysis”, “noise (DBSCAN) machine learning technique”, “an end-of-life (EOL) analytics module”, “an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique”.
Wherein the “retrieving…”, “storing…”, “providing…”, “extracting…”, and “integrating, into dynamic data in the dynamic data store … to update the dynamic data store with temporal metadata trends for real-time analysis” limitations are insignificant extra-solution activities, such as retrieving, storing, and transmitting/providing information. The additional elements “a distributed computing network”, “the ML retrieval module engine”, “a ML normalization module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format”, “an ML data typification module”, “an ML surface analytics module”, “an ML cluster analytics module”, “integrating, into dynamic data in a dynamic data store, … to update the dynamic data store with temporal metadata trends for real-time analysis”, “noise (DBSCAN) machine learning technique”, “the end-of-life (EOL) analytics module”, and “an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique” are recited as generic computer/program components, or merely as tools to implement the identified abstract idea, and therefore do not integrate the judicial exception into a practical application. Refer to MPEP 2106.05(f).
Under Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “a distributed computing network”, “the ML retrieval module engine”, “a ML normalization module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format”, “an ML data typification module”, “an ML surface analytics module”, “an ML cluster analytics module”, “integrating, into dynamic data in a dynamic data store, … to update the dynamic data store with temporal metadata trends for real-time analysis”, “noise (DBSCAN) machine learning technique”, “the end-of-life (EOL) analytics module”, and “an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique” reflect the mere use of a generic computer to implement the abstract idea; they do not amount to significantly more than the judicial exception and thus are not an inventive concept. The “retrieving…”, “storing…”, “providing…”, “extracting…”, and “integrating, into dynamic data in the dynamic data store … to update the dynamic data store with temporal metadata trends for real-time analysis” limitations are insignificant extra-solution activities, such as retrieving, storing, and transmitting/providing information, which are recognized as well-understood, routine, and conventional functions; see MPEP 2106.05(d)(II): Versata Dev. Group, Inc. v. SAP Am., Inc. (retrieving and storing data); Intellectual Ventures I LLC v. Symantec Corp. (receiving and transmitting data). Accordingly, even when viewed as a whole, the claim does not appear to be patent eligible under 35 U.S.C. 101.
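For context on the OPTICS technique recited in the claim, the two distance measures that underlie the OPTICS cluster ordering, core-distance and reachability-distance, can be sketched as follows. This is illustrative only and is not the applicant's implementation; the full ordering loop of OPTICS is omitted, and the sample data and parameters are hypothetical.

```python
from math import dist

def core_distance(points, i, eps, min_pts):
    """OPTICS core-distance of point i: distance to its min_pts-th nearest
    neighbor (counting the point itself), or None if i is not a core point.
    Assumes min_pts <= len(points)."""
    d = sorted(dist(points[i], q) for q in points)  # d[0] == 0.0 (self)
    return d[min_pts - 1] if d[min_pts - 1] <= eps else None

def reachability_distance(points, i, j, eps, min_pts):
    """Reachability of point j from point i:
    max(core_distance(i), dist(i, j)), or None if i is not a core point."""
    cd = core_distance(points, i, eps, min_pts)
    return None if cd is None else max(cd, dist(points[i], points[j]))
```

OPTICS orders points by increasing reachability-distance; points whose reachability stays small sit deep inside a dense region, which is the sense in which vectors can "trend toward the interior" of a structure.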
With respect to claim 13 (Currently Amended): this claim falls within at least one of the four statutory categories of patent-eligible subject matter, as it is directed to a process, under Step 1.
Under Step 2A, Prong One:
However, the limitations of claim 13,
“normalizing, the subset of the OSS metadata, into normalized OSS metadata by a ML normalization module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format;
performing, on the normalized OSS metadata, ML data typification processing to create static data snapshots by an ML data typification module, wherein the data typification processing assigns data types to metadata fields to generate an n-dimensional space vector map representing multi-dimensional metadata characteristics;”
“performing, on the static data snapshots, surface analysis by the ML surface analytics module to generate a rolling time-series n-dimensional space vector map representing temporal trends in the OSS metadata;
performing, on the static data snapshots, cluster analysis by the ML cluster analytics module to generate time-based cluster analysis data using a density-based spatial clustering of applications with noise (DBSCAN) machine learning technique to create clusters of interior, on-surface, and exterior data points corresponding to metadata stability;”
“configuring the distributed computing network to process the dynamic data across multiple nodes to enable scalability for handling metadata from a plurality of open-source repositories;
applying, by the ML normalization module, a filter graph to the subset of the OSS metadata to extract features from the heterogeneous metadata formats, wherein the filter graph comprises cascading filters executed across the multiple nodes to transform the heterogeneous metadata formats into the standardized dataset format;
performing, using the filter graph, a lifecycle ownership cost analysis to calculate cost versus reliability versus time for minimization analysis;
generating, by the ML surface analytics module, the rolling time-series n-dimensional space vector map by computing numerical vectors for each OSS metadata point using a coordinate system defined for the surface, wherein the numerical vectors are analyzed using statistical methods to identify temporal trends;
generating, by the EOL analytics module based on the dynamic data, an EOL deprecation prediction for the OSS using an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique to identify vectors trending toward the interior of a multi-dimensional tensor, wherein the EOL deprecation prediction is visually presented as the multi-dimensional tensor indicating a percentage likelihood of deprecation within a specified time period based on historical deprecation patterns; and
generating, based on the EOL deprecation prediction, a predictive analysis of where computer-managed software maintenance is indicated to extend a pre-EOL timeframe.”
as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. E.g., for the “normalizing…” limitation, other than reciting “a ML normalization module,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “a ML normalization module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format” language, “normalizing” in the context of this claim encompasses a user manually normalizing the subset of the OSS metadata into normalized OSS metadata as defined in the claim element. For the four “performing…” limitations, other than reciting “by an ML data typification module,” “by the ML surface analytics module,” “by the ML cluster analytics module,” and “using a density-based spatial clustering of applications with noise (DBSCAN) machine learning technique,” nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for that language, “performing” in the context of this claim encompasses a user manually performing the ML data typification; the surface analysis; the cluster analysis, generating time-based cluster analysis data and creating clusters of interior, on-surface, and exterior data points corresponding to metadata stability, as defined in the claim elements; or the lifecycle ownership cost analysis as defined in the claim.
Similarly, a user can manually configure the distributed computing network as defined in the claim, manually apply a filter graph to the subset of the OSS metadata to extract features from the heterogeneous metadata formats as defined in the claim, manually generate the rolling time-series n-dimensional space vector map as defined in the claim, manually generate an EOL deprecation prediction for the OSS as defined in the claim element, and manually generate a predictive analysis as defined in the claim. Thus, these limitations recite, and fall within, the “Mental Processes” grouping of abstract ideas under Step 2A, Prong One.
Under Step 2A, Prong Two:
The judicial exception is not integrated into a practical application. The claim recites the following additional elements:
“retrieving, from all publicly available open-source repositories in cloud-service providers, OSS indicia and OSS metadata by a machine learning (ML) retrieval module engine executing code instructions on a distributed computing network, wherein the OSS indicia include software identifiers and the OSS metadata includes release notes, source code commits, and defect tickets;
storing, in a master OSS datastore, the OSS indicia and corresponding OSS metadata;
extracting, from the master OSS datastore based on selected criteria, a subset of the OSS metadata by the ML retrieval module engine;”
“storing, in a static datastore, the static data snapshots, and providing the static data snapshots to: an ML surface analytics module, an ML cluster analytics module, and a dynamic data store;”
“integrating, into dynamic data in the dynamic data store, the rolling time-series n-dimensional space vector map and the time-based cluster analysis data;
transmitting, by the surface analytics module to the cluster analytics module, the rolling time-series n-space vector map;
transmitting, by the cluster analytics module to an end-of-life (EOL) analytics module, the cluster analysis;” and
“a distributed computing network”, “a machine learning ML retrieval module engine”, “a ML normalization module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format”, “an ML data typification module”, “an ML surface analytics module”, “an ML cluster analytics module”, “noise (DBSCAN) machine learning technique”, “the end-of-life (EOL) analytics module”, “an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique”.
Wherein the “retrieving…”, “storing…”, “providing…”, “extracting…”, “transmitting…”, and “integrating, into dynamic data in the dynamic data store” limitations are insignificant extra-solution activities, such as retrieving, storing, and transmitting/providing data. The additional elements “a distributed computing network”, “a machine learning ML retrieval module engine”, “a ML normalization module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format”, “an ML data typification module”, “an ML surface analytics module”, “an ML cluster analytics module”, “noise (DBSCAN) machine learning technique”, “the end-of-life (EOL) analytics module”, and “an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique” are recited as generic computer/program components, or merely as tools to implement the identified abstract idea, and therefore do not integrate the judicial exception into a practical application. Refer to MPEP 2106.05(f).
Under Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “a distributed computing network”, “a machine learning ML retrieval module engine”, “a ML normalization module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format”, “an ML data typification module”, “an ML surface analytics module”, “an ML cluster analytics module”, “noise (DBSCAN) machine learning technique”, “the end-of-life (EOL) analytics module”, and “an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique” reflect the mere use of a generic computer/software to implement the abstract idea; they do not amount to significantly more than the judicial exception and thus are not an inventive concept. The “retrieving…”, “storing…”, “providing…”, “extracting…”, “transmitting…”, and “integrating, into dynamic data in the dynamic data store” limitations are insignificant extra-solution activities, such as retrieving, storing, and transmitting/providing data, which are recognized as well-understood, routine, and conventional functions; see MPEP 2106.05(d)(II): Versata Dev. Group, Inc. v. SAP Am., Inc. (retrieving and storing data); Intellectual Ventures I LLC v. Symantec Corp. (receiving and transmitting data). Accordingly, even when viewed as a whole, the claim does not appear to be patent eligible under 35 U.S.C. 101.
With respect to claim 20 (Currently Amended): this claim falls within at least one of the four statutory categories of patent-eligible subject matter, as it is directed to a process, under Step 1.
Under Step 2A, Prong One:
However, the limitations of claim 20,
“normalizing, the subset of the OSS metadata, into normalized OSS metadata by a ML normalization machine module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format;
performing, on the normalized OSS metadata, ML data typification processing to create static data snapshots by an ML data typification module, wherein the data typification processing assigns data types to metadata fields to generate an n-dimensional space vector map representing multi-dimensional metadata characteristics;”
“performing, on the static data snapshots, surface analysis by the ML surface analytics module to generate a rolling time-series n-dimensional space vector map representing temporal trends in the OSS metadata;
performing, on the static data snapshots, cluster analysis by the ML cluster analytics module to generate time-based cluster analysis data that includes clusters of interior, on-surface, and exterior data points corresponding to metadata stability, and generates a metric for cluster quality for self-reinforcement against at least one baseline, said metric generated using a Calinski-Harabasz/Variance Ratio Criterion;”
“configuring the distributed computing network to process the dynamic data across multiple nodes to enable scalability for handling metadata from a plurality of open-source repositories;
applying, by the ML normalization module, a filter graph to the subset of the OSS metadata to extract features from the heterogeneous metadata formats, wherein the filter graph comprises cascading filters executed across the multiple nodes to transform the heterogeneous metadata formats into the standardized dataset format;
performing, using the filter graph, a lifecycle ownership cost analysis to calculate cost versus reliability versus time for minimization analysis;
generating, by the ML surface analytics module, the rolling time-series n-dimensional space vector map by computing numerical vectors for each OSS metadata point using a coordinate system defined for the surface, wherein the numerical vectors are analyzed using statistical methods to identify temporal trends;
computing, by the EOL analytics module, a rate of change of metadata variables in the dynamic data to determine a trend direction of the vectors in the multi-dimensional tensor, wherein the rate of change is derived from historical deprecation patterns;
generating, by the EOL analytics module based on the dynamic data, an EOL deprecation prediction for the OSS based on an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique to identify vectors trending toward the interior of a multi-dimensional tensor, wherein the EOL deprecation prediction is visually presented as one or more multi-dimensional tensors indicating a percentage likelihood of deprecation within a specified time period based on historical deprecation patterns; and
generating, based on the EOL deprecation prediction, a predictive analysis of where computer-managed software maintenance is indicated to extend a pre-EOL timeframe.”
as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. E.g., for the “normalizing…” limitation, other than reciting “a ML normalization machine module,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “a ML normalization machine module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format” language, “normalizing” in the context of this claim encompasses a user manually normalizing the subset of the OSS metadata into normalized OSS metadata as defined in the claim element. For the four “performing…” limitations, other than reciting “by an ML data typification module,” “by the ML surface analytics module,” or “by the ML cluster analytics module,” nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for that language, “performing” in the context of this claim encompasses a user manually performing the ML data typification, the surface analysis, the cluster analysis, or the lifecycle ownership cost analysis, each as defined in the claim elements.
Similarly, a user can manually configure the distributed computing network as defined in the claim, manually apply a filter graph to the subset of the OSS metadata to extract features from the heterogeneous metadata formats as defined in the claim, manually generate the rolling time-series n-dimensional space vector map as defined in the claim, manually compute a rate of change of metadata variables as defined in the claim, manually generate an EOL deprecation prediction for the OSS as defined in the claim element, and manually generate a predictive analysis as defined in the claim. Thus, these limitations recite, and fall within, the “Mental Processes” grouping of abstract ideas under Step 2A, Prong One.
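For context on the Calinski-Harabasz/Variance Ratio Criterion recited in claim 20, the metric is the ratio of between-cluster dispersion to within-cluster dispersion, each normalized by its degrees of freedom; higher values indicate better-separated, more compact clusters. A minimal sketch follows (illustrative only; not the applicant's implementation; the sample data and labels are hypothetical).

```python
def calinski_harabasz(points, labels):
    """Variance Ratio Criterion:
    (between-cluster dispersion / (k - 1)) / (within-cluster dispersion / (n - k)),
    for n points partitioned into k clusters by `labels`."""
    n, dim = len(points), len(points[0])
    clusters = {}
    for p, lab in zip(points, labels):
        clusters.setdefault(lab, []).append(p)
    k = len(clusters)
    # Grand centroid of all points.
    grand = [sum(p[d] for p in points) / n for d in range(dim)]
    between = within = 0.0
    for members in clusters.values():
        c = [sum(p[d] for p in members) / len(members) for d in range(dim)]
        # Dispersion of this cluster's centroid about the grand centroid.
        between += len(members) * sum((c[d] - grand[d]) ** 2 for d in range(dim))
        # Dispersion of members about their own centroid.
        within += sum(sum((p[d] - c[d]) ** 2 for d in range(dim)) for p in members)
    return (between / (k - 1)) / (within / (n - k))
```

Comparing the score against a baseline clustering is one conventional way to realize the "self-reinforcement against at least one baseline" recited in the claim.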
Under Step 2A, Prong Two:
The judicial exception is not integrated into a practical application. The claim recites the following additional elements:
“retrieving, from all publicly available open-source repositories in cloud-service providers, OSS indicia and OSS metadata by a machine learning (ML) retrieval module executing code instructions on a distributed computing network, wherein the OSS indicia includes software identifiers and the OSS metadata comprises release notes, source code commits, and defect tickets;
storing, in a master OSS datastore, the OSS indicia and corresponding OSS metadata;
extracting, from the master OSS datastore based on selected criteria, a subset of the OSS metadata by the ML retrieval module;”
“storing, in a static datastore, the static data snapshots, and providing the static data snapshots to: an ML surface analytics module, an ML cluster analytics module, and a dynamic data store;”
“integrating, into dynamic data in the dynamic data store, the rolling time-series n-dimensional space vector map and the time-based cluster analysis data to update the dynamic data store with temporal metadata trends for real-time analysis;
providing, by the ML surface analytics module to the ML cluster analytics module, the rolling time-series n-dimensional space vector map;
providing, by the cluster analytics module to an end-of-life (EOL) analytics module, the time-based cluster analysis data;” and
“a distributed computing network”, “a machine learning ML retrieval module engine”, “a ML normalization machine module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format”, “an ML data typification module”, “an ML surface analytics module”, “an ML cluster analytics module”, “the end-of-life (EOL) analytics module”, “an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique”.
Wherein the “retrieving…”, “storing…”, “providing…”, “extracting…”, and “integrating, into dynamic data in the dynamic data store” limitations are insignificant extra-solution activities, such as retrieving, storing, and transmitting/providing information. The additional elements “a distributed computing network”, “a machine learning ML retrieval module engine”, “a ML normalization machine module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format”, “an ML data typification module”, “an ML surface analytics module”, “an ML cluster analytics module”, “the end-of-life (EOL) analytics module”, and “an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique” are recited as generic computer/program components, or merely as tools to implement the identified abstract idea, and therefore do not integrate the judicial exception into a practical application. Refer to MPEP 2106.05(f).
Under Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “a distributed computing network”, “a machine learning ML retrieval module engine”, “a ML normalization machine module configured to execute data cleansing and standardization operations to transform heterogeneous metadata formats from multiple repositories into a standardized dataset format”, “an ML data typification module”, “an ML surface analytics module”, “an ML cluster analytics module”, “the end-of-life (EOL) analytics module”, and “an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique” reflect the mere use of a generic computer to implement the abstract idea; they do not amount to significantly more than the judicial exception and thus are not an inventive concept. The “retrieving…”, “storing…”, “providing…”, “extracting…”, and “integrating, into dynamic data in the dynamic data store” limitations are insignificant extra-solution activities, such as retrieving, storing, and transmitting/providing information, which are recognized as well-understood, routine, and conventional functions; see MPEP 2106.05(d)(II): Versata Dev. Group, Inc. v. SAP Am., Inc. (retrieving and storing data); Intellectual Ventures I LLC v. Symantec Corp. (receiving and transmitting data). Accordingly, even when viewed as a whole, the claim does not appear to be patent eligible under 35 U.S.C. 101.
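For context on claim 20's recited computing of a rate of change of metadata variables to determine a trend direction, one conventional way to carry this out is a least-squares slope over an evenly spaced time series: the sign of the slope gives the trend direction. This sketch is illustrative only and is not the applicant's disclosed method.

```python
def trend_slope(series):
    """Least-squares slope of evenly spaced observations (x = 0, 1, 2, ...).
    A positive result indicates an increasing trend, negative a decreasing one."""
    n = len(series)
    mean_x = (n - 1) / 2              # mean of 0..n-1
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den
```

Applied per metadata variable, such a slope is the kind of per-dimension rate of change whose sign could indicate vectors drifting toward or away from a region of interest.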
With respect to claim 2 (Original), “wherein the OSS indicia identifies the source code and the OSS metadata includes: release notes, enhancement tickets, and defect tickets.” This limitation further defines the OSS indicia and recites the same abstract idea as claim 1, e.g., the user can mentally identify the source code and OSS metadata as defined in the claim.
With respect to claim 3 (Previously Presented), “wherein the extracting from the master OSS datastore includes asynchronous data collection of code commits, release note analysis, and ticket analysis from open source repositories and the process further comprises the step of storing the code commits, the release note analysis, and the ticket analysis in the master OSS datastore for use by the ML normalization module.” This limitation further defines the extracting limitation. The asynchronous collection of various data merely indicates a field of use or technological environment in which to apply a judicial exception; it does not amount to significantly more than the judicial exception itself and cannot integrate a judicial exception into a practical application. See MPEP § 2106.05(h). The “storing …” step is insignificant extra-solution activity, such as storing data, which is recognized as a well-understood, routine, and conventional function; see MPEP 2106.05(d), citing Versata Dev. Group, Inc. v. SAP Am., Inc. for retrieving and storing data.
With respect to claim 4 (Original), “wherein the ML data typification normalizes the code commits, the release note analysis, and the tickets onto an N-space vector map.” This limitation further defines the typification process and recites the same mental process as claim 1.
With respect to claim 5 (Original), “wherein the ML surface analytics module creates a rolling time series n-space vector surface from the static data snap shots.” As drafted, other than reciting “the ML surface analytics module,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “the ML surface analytics module” language, “creates” in the context of this claim encompasses the user manually creating a rolling time-series n-space vector surface from the static data snapshots. “The ML surface analytics module” is mere use of a generic computer to implement the abstract idea, does not amount to significantly more than the judicial exception, and thus is not an inventive concept. Further, “creates a rolling time series n-space vector surface from the static data snap shots” is a mathematical concept.
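For illustration only, the kind of rolling time-series n-space vector surface addressed above can be sketched numerically. The snapshot values, feature meanings, and window length below are hypothetical assumptions for exposition; they are not taken from the claims or the specification:

```python
import numpy as np

# Hypothetical data: each row is one static snapshot of n metadata features
# (e.g., commit rate, open defects, release cadence); values are invented.
snapshots = np.array([
    [10.0,  3.0, 1.0],
    [ 9.0,  4.0, 1.0],
    [ 7.0,  6.0, 0.5],
    [ 4.0,  9.0, 0.2],
    [ 2.0, 12.0, 0.1],
])

w = 3  # rolling window length (assumption)

# Stack overlapping windows: shape (T - w + 1, n_features, w). Each slice is
# one "patch" of the rolling time-series surface over the n-space vectors.
surface = np.lib.stride_tricks.sliding_window_view(snapshots, w, axis=0)

# Per-window rate of change of each feature (first difference, averaged),
# i.e., the trend direction of each vector component over the window.
trend = np.diff(surface, axis=-1).mean(axis=-1)
```

Each window slice is one patch of the surface, and the first-difference trend indicates whether the vectors are moving (e.g., contracting toward an interior region) over time.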
With respect to claim 6 (Original), “wherein the ML cluster analytics module creates clusters of interior datapoints, on-surface data points, and exterior datapoints using the dynamic data.” As drafted, other than reciting “the ML cluster analytics module,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “the ML cluster analytics module” language, “creates” in the context of this claim encompasses the user manually creating the data as defined in the claim. “The ML cluster analytics module” is mere use of a generic computer to implement the abstract idea, does not amount to significantly more than the judicial exception, and thus is not an inventive concept.
With respect to claims 7 (Previously Presented) and 16 (Currently Amended), “wherein the clusters are created with a density-based spatial clustering of applications with noise (DBSCAN) ML technique.” As drafted, other than reciting “ML technique,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “ML technique” language, “created” in the context of this claim encompasses the user manually creating the data as defined in the claim. “ML technique” is mere use of a generic computer to implement the abstract idea, does not amount to significantly more than the judicial exception, and thus is not an inventive concept.
With respect to claims 8 (Previously Presented) and 17 (Currently Amended), “further comprising the steps of: generating a metric for a quality of the clusters for self-reinforcement against a baseline; using the metric to adjust parameters of the DBSCAN machine learning technique to improve clustering accuracy; and storing the adjusted parameters in the dynamic data store to optimize subsequent cluster analysis iterations.” As drafted, generating a metric and using the metric to adjust parameters are mental processes, because the user can manually perform these operations. Storing data is insignificant extra-solution activity, which is recognized as a well-understood, routine, and conventional function; see MPEP 2106.05(d), citing Versata Dev. Group, Inc. v. SAP Am., Inc. for retrieving and storing data. “The DBSCAN machine learning technique” is mere use of generic computer software to implement the abstract idea, does not amount to significantly more than the judicial exception, and thus is not an inventive concept.
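For context only, the self-reinforcement loop addressed above — scoring cluster quality and adjusting a DBSCAN parameter accordingly — can be sketched as follows. This is a minimal, hypothetical illustration using invented data and a simplified DBSCAN; it is not the applicant's implementation, and the grid search over `eps` stands in for whatever adjustment policy a real system might use:

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Simplified DBSCAN: assign cluster ids to points; -1 marks noise."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue  # skip visited points and non-core points
        visited[i] = True
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:  # expand the cluster through density-reachable points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                    queue.extend(neighbors[j])
        cluster += 1
    return labels

def calinski_harabasz(X, labels):
    """Variance Ratio Criterion: between- vs. within-cluster dispersion."""
    mask = labels >= 0          # ignore noise points
    X, labels = X[mask], labels[mask]
    ids = np.unique(labels)
    k, n = len(ids), len(X)
    if k < 2 or n <= k:
        return 0.0
    grand = X.mean(axis=0)
    bgss = sum(np.sum(labels == c) * np.sum((X[labels == c].mean(axis=0) - grand) ** 2)
               for c in ids)
    wgss = sum(np.sum((X[labels == c] - X[labels == c].mean(axis=0)) ** 2)
               for c in ids)
    return float("inf") if wgss == 0 else (bgss / (k - 1)) / (wgss / (n - k))

def tune_eps(X, candidates, min_pts=3):
    """Pick the eps with the best cluster-quality score -- the 'adjusted
    parameter' that would be stored for subsequent iterations."""
    best_eps, best_score = None, -1.0
    for eps in candidates:
        score = calinski_harabasz(X, dbscan(X, eps, min_pts))
        if score > best_score:
            best_eps, best_score = eps, score
    return best_eps
```

On two well-separated point groups, the loop rejects an `eps` that is too small (everything becomes noise) or too large (one merged cluster) and keeps the value yielding the highest variance ratio.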
With respect to claims 9 (Original) and 18 (Original), “wherein the metric is generated using a Calinski-Harabasz/Variance Ratio Criterion.” This limitation further limits the metric-generation process and recites the same mental process as claim 8. Further, the Calinski-Harabasz/Variance Ratio Criterion is a mathematical concept.
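For reference, the Calinski-Harabasz (Variance Ratio) criterion is the standard statistic, for $n$ points partitioned into $k$ clusters $C_q$ with centroids $c_q$, grand centroid $c$, and $n_q = |C_q|$:

```latex
CH = \frac{\operatorname{tr}(B_k)/(k-1)}{\operatorname{tr}(W_k)/(n-k)},
\qquad
W_k = \sum_{q=1}^{k} \sum_{x \in C_q} (x - c_q)(x - c_q)^{\mathsf{T}},
\qquad
B_k = \sum_{q=1}^{k} n_q \,(c_q - c)(c_q - c)^{\mathsf{T}}
```

A larger ratio indicates tighter, better-separated clusters, which is why it can serve as the quality metric in a parameter-adjustment loop.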
With respect to claim 10 (Original), “wherein the static data and the dynamic data are analyzed by an Ordering Points to Identify a Clustering Structure (OPTICS) ML technique to identify vectors trending toward the interior thereby suggesting an EOL candidate.” As drafted, other than reciting “ML technique,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “ML technique” language, “analyzed” in the context of this claim encompasses the user manually analyzing the data to identify trending vectors as defined in the claim. “ML technique” is mere use of generic computer software to implement the abstract idea, does not amount to significantly more than the judicial exception, and thus is not an inventive concept.
With respect to claim 11 (Previously Presented), “wherein the EOL deprecation prediction for the OSS is a percentage of likelihood of deprecation within a time period based on prior deprecations that occurred within a prior time interval.” This limitation further limits the deprecation prediction and recites the same mental process as claim 1.
With respect to claim 12 (Original), “wherein the OSS indicia and OSS metadata is retrieved from all of said open-source repositories that are publicly accessible.” This limitation further limits the OSS indicia and OSS metadata retrieval process and recites the same insignificant extra-solution function as claim 1.
With respect to claim 14 (Original), “wherein the EOL deprecation prediction is based on an Ordering Points To Identify Clustering Structure (OPTICS) machine learning technique for vectors trending toward the interior.” As drafted, other than reciting “machine learning technique,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “machine learning technique” language, “the EOL deprecation prediction” in the context of this claim encompasses the user manually performing the EOL deprecation prediction as defined in the claim. “Machine learning technique” is mere use of a generic computer to implement the abstract idea, does not amount to significantly more than the judicial exception, and thus is not an inventive concept.
With respect to claim 15 (Previously Presented), “wherein the ML cluster analytics module creates clusters of interior, on-surface, and exterior data points as part of the time-based cluster analysis data.” As drafted, other than reciting “the ML cluster analytics module,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “the ML cluster analytics module” language, “creates” in the context of this claim encompasses the user manually creating the data as defined in the claim. “The ML cluster analytics module” is mere use of a generic computer to implement the abstract idea, does not amount to significantly more than the judicial exception, and thus is not an inventive concept.
With respect to claim 19 (Original), “wherein the EOL deprecation prediction is visually presented.” This limitation recites a mental process, e.g., the user can manually and visually present the EOL deprecation prediction.
Response to Arguments
Applicant's arguments filed 12/30/2025 have been fully considered but they are not persuasive.
At page 12, second-to-last paragraph, of the Remarks, Applicant argued that “The claims do not recite a judicial exception, as they are not directed to a mental process. Even if they did recite a judicial exception, the claims integrate any such exception into a practical application by providing a specific technological improvement in the field of open-source software (OSS) deprecation prediction. Finally, the claims provide significantly more than any alleged abstract idea through inventive, non-conventional elements.”
Examiner respectfully disagrees because, as set forth in the office action, the claims recite mental processes, such as normalizing OSS metadata and performing surface analysis and cluster analysis. The claims also recite additional elements, but those elements are either insignificant extra-solution activities, which are recognized as well-understood, routine, and conventional functions, or mere use of a generic computer to implement the abstract idea. The additional elements do not amount to significantly more than the judicial exception and thus are not an inventive concept.
At page 12, last paragraph, through page 13, first paragraph, of the Remarks, Applicant argued that “However, the claims involve complex machine learning (ML) processes and distributed computing operations that are infeasible for human execution, even with aids like pen and paper. For example, the claims recite retrieving vast amounts of heterogeneous OSS metadata from multiple public repositories via a distributed computing network, as described in the specification in paragraphs [0027]-[0030] and [0057]. …. The filter graph uses cascading filters for feature extraction, as in paragraphs [0045]-[0049], a non-routine operation. These elements preclude mental performance.”
Examiner respectfully disagrees. Retrieving data, which may be in various formats, from repositories is insignificant extra-solution activity, recognized as a well-understood, routine, and conventional function. Normalizing data, data typification, surface analysis, cluster analysis, self-reinforcement via the Calinski-Harabasz metric, and EOL prediction are mental processes, because a human can perform them; further, these processes involve mathematical concepts. Visually presenting predictions is akin to transmitting data, which is insignificant extra-solution activity. Using a computer to process the enormous, heterogeneous datasets from thousands of repositories makes the mental process faster and more efficient, but the gained efficiency is merely a benefit of using a computer; the computer technology itself is not improved. The scale, complexity, and real-time aspects may make the advantages of using a computer, such as speed and efficiency, obvious, but they do not render these processes impossible to perform mentally. The Calinski-Harabasz metric, whether generic or not, is a mathematical concept. The filter graph's use of cascading filters for feature extraction is a mental process, because a human can manually perform it. These elements do not preclude mental performance.
At page 13, second paragraph, through page 14, first paragraph, of the Remarks, Applicant argued that “Even if the claims recite an abstract idea, which they do not, they integrate it into a practical application by improving OSS deprecation prediction technology. The specification identifies the problem of unpredictable OSS end-of-life (EOL), causing crisis-driven development and technical risks, …. The claims solve this by a distributed ML system that retrieves, normalizes, typifies, analyzes, and predicts deprecation from heterogeneous metadata, enabling proactive transitions, as outlined in paragraphs [0006]-[0010] and [0070]. Key limitations demonstrate this integration. Applying a filter graph with cascading filters across nodes transforms heterogeneous metadata into standardized formats, …, optimizes clustering, a feedback loop improving accuracy over time.”
Examiner respectfully disagrees because, as set forth in the office action and as explained above, the processes of retrieving, normalizing, typifying, analyzing, and predicting deprecation from heterogeneous metadata are mental processes; they may enable proactive transitions, but they do not improve technology. Data format transformation and data analysis are mental processes. The use of a computer makes these processes faster and more efficient, but the efficiency is a benefit of using a computer; the technology is not affected. Generating rolling time-series n-dimensional vector surfaces by computing numerical vectors and statistical analysis, and creating a unique structure representing temporal trends, are mental processes; they may enhance prediction probability, but the enhancement is the result of the mental processes, and the technology is not improved. Configuring the network for scalable processing across nodes and handling vast metadata volumes are mental processes in which the network is merely used as a tool to implement the abstract idea; these processes may improve efficiency and scalability, but they do not improve technology. Computing rates of change for metadata variables to determine vector trends, and deriving predictive insights from dynamic data, are mental processes; they may enable precise EOL forecasting, but they do not improve technology. Adjusting DBSCAN parameters using the Calinski-Harabasz metric is a mental process, and storing the parameters for subsequent iterations is insignificant extra-solution activity; these steps may optimize clustering, and the feedback loop may improve accuracy over time, but they do not improve technology.
At page 14, second paragraph, of the Remarks, Applicant argued, with respect to the newly added limitations, that these limitations “integrates cost-reliability-time evaluation into the process, providing actionable insights for OSS management. …. This addresses the technical problem of OSS unpredictability.”
Examiner respectfully disagrees because, as set forth in the office action, the newly added limitations, “performing … a lifecycle ownership cost analysis …” and “generating … a predicative analysis …”, are mental processes. Like the other mental processes of the claims, they do not improve technology.
At page 14, last paragraph, through page 15, first paragraph, of the Remarks, Applicant argued that “The claims provide significantly more through a non-conventional arrangement. Using cascading filters across nodes for feature extraction, …. Viewed as a whole, the claims are eligible.”
Examiner respectfully disagrees because, as explained above (see, e.g., paragraphs 29 and 31), the processes Applicant argues about are either mental processes or insignificant extra-solution activities. These elements do not improve technology and do not constitute an inventive concept. Even viewed as a whole, the claims do not appear to be patent eligible under 35 U.S.C. 101.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For example, Weissman et al. (US 8510729 B2) teaches a system, method, and computer program product for versioning and deprecation of components of an application.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Zengpu Wei, whose telephone number is 571-270-1302. The examiner can normally be reached Monday to Friday from 8:00 AM to 5:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached on 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZENGPU WEI/Examiner, Art Unit 2197
/BRADLEY A TEETS/Supervisory Patent Examiner, Art Unit 2197