DETAILED ACTION
This action is in response to the application filed 05/11/2022. Claims 1-20 are pending and have been examined.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/27/2026 has been entered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6 and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 6 and 16 recite “the second subset is determined based on the threshold column”. It is unclear whether “the threshold column” refers to the first threshold column or the second threshold column of the parent claims, rendering the scope of the claims indefinite. For purposes of examination, “the threshold column” is interpreted as referring to either the first or the second threshold column of the parent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sasagawa (METHOD FOR GENERATING INFERENCE MODEL AND INFERENCE MODEL, filed 9/20/2021, US 2022/0036160 A1) in view of Tang et al. (MACHINE LEARNING USING QUERY ENGINES, filed 11/6/2020, US 2022/0147516 A1), hereafter referred to as Tang, and further in view of Chapman-McQuiston et al. (COMPUTER SYSTEM TO IDENTIFY ANOMALIES BASED ON COMPUTER GENERATED RESULTS, published 5/24/2018, US 2018/0144815 A1), hereafter referred to as Chapman-McQuiston.
Regarding claim 1, Sasagawa discloses [a] system for optimized multi-stage processing, the system comprising:
an application database storing inference data from a machine learning model, the inference data having fields and records in tabular form: “Glue layer GL connects predetermined intermediate layer mL1 included in first partial inference model M1p and predetermined intermediate layer mL2 included in second partial inference model M2p. For example, glue layer GL is a convolution layer that converts output data (inference data) that is output from first partial inference model M1p (machine learning model) into input data that is input to second partial inference model M2p or is a fully connected layer.” (Sasagawa, [0047])
a computer operatively coupled to the application database, the computer comprising a memory and a processor: “Furthermore, the present disclosure may be a computer system that includes a microprocessor and memory, the memory has stored therein the above computer program, and the microprocessor may operate in accordance with the computer program.” (Sasagawa, [0131])
the computer comprising a memory and a processor configured to:
retrieve the inference data from the application database: “Glue layer GL connects predetermined intermediate layer mL1 included in first partial inference model M1p and predetermined intermediate layer mL2 included in second partial inference model M2p. For example, glue layer GL is a convolution layer that converts output data (inference data) that is output from first partial inference model M1p into input data that is input to second partial inference model M2p or is a fully connected layer.” (Sasagawa, [0047]). The inference data must be retrieved in some fashion to be input to the second model.
process, in real time, the inference data to identify records that meet a predetermined threshold, to reduce computing burden for downstream processing by a downstream application server:
(Sasagawa, Figure 4)
“Glue layer GL connects predetermined intermediate layer mL1 included in first partial inference model M1p and predetermined intermediate layer mL2 included in second partial inference model M2p. For example, glue layer GL is a convolution layer that converts (process[es]) output data (inference data) that is output from first partial inference model M1p into input data that is input to second partial inference model M2p (downstream application) or is a fully connected layer.” (Sasagawa, [0047])
generate, in real time, filtered inference data, the filtered inference data having fields and records in tabular form, the filtered inference data further having a threshold column, wherein for each record, an indication of whether the respective record meets the predetermined threshold is stored in the threshold column; store the filtered inference data in the application database: “More specifically, glue layer GL has a function of mapping intermediate representation A1 of first inference model M1 (inference data) into intermediate representation B2 (filtered inference data) of second inference model M2” (Sasagawa, [0048])
Sasagawa relates to downstream processing of machine learning inference data and is analogous to the claimed invention.
While Sasagawa fails to disclose the further limitations of the claim, Tang teaches a system, comprising:
an application database storing inference data from a machine learning model, the inference data having fields and records in tabular form:
“the query engine can perform inference on a trained machine learning model using a user-defined function that one or more users designed to execute inference; that is, the query engine can process model inputs to generate model outputs (inference data) characterizing predictions about the model inputs according to the user-defined function” (Tang, [0009])
“the system can store the model outputs generated by the machine learning model in the database system” (Tang, [0072])
“This specification relates to databases. Typically, a database is either a relational database or a non-relational database. A relational database represents and stores data in tables (tabular form) that have defined relationships, and is often queried using a query language, e.g., Structured Query Language (SQL)” (Tang, [0001])
a computer operatively coupled to the application database, the computer comprising a memory and a processor: “The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware (memory), a protocol stack, a database management system, an operating system, or a combination of one or more of them.” (Tang, [0078])
the computer comprising a memory and a processor configured to: retrieve the inference data from the application database: “After completing the inference user-defined function, the query engine 200 can provide the generated model outputs 234 (inference data) to the user device 210.” (Tang, [0047])
Tang relates to processing, management, storage, and retrieval of machine learning inference outputs and is analogous to the claimed invention. Sasagawa teaches a method of passing the outputs of one machine learning model to the inputs of another. The claimed invention improves upon this method by storing and retrieving inference outputs in a database. Tang teaches a method of storing and retrieving inference outputs in a database. It would have been obvious to one of ordinary skill in the art to store the inference data of Sasagawa’s first model in a tabular database. This would achieve the predictable result of storing the data in a structured container with standardized methods of storage and retrieval, with Sasagawa’s inference data and Tang’s database each performing the same function in combination as they do separately. (MPEP 2143 I. (A) Combining prior art elements according to known methods to yield predictable results).
While Tang fails to disclose the further limitations of the claim, Chapman-McQuiston discloses instructions to:
process, in real time, the inference data to identify records that meet a predetermined threshold:
“This allows for large amounts of data being received and/or collected in a variety of environments to be processed and distributed in real time.” (Chapman-McQuiston, [0135])
“the modeling system 1310 may include a transformation component 1324 to perform one or more transformations (process[ing]) on a data set, such that one or more models may be generated that may be used to determine predicted timeframes for events. These transformations may include grouping the data of a data set into one or more percentile groups (predetermined threshold[s]), e.g. flag the top twenty-fifth percentile (25%) of costs of a patient's medical claims associated with a particular service or diagnosis, flag the top seventy-fifth percentile (75%) of costs of a patient's medical claims associated with a particular service or diagnosis, etc.” (Chapman-McQuiston, [0181]).
…to reduce computing burden for downstream processing by a downstream application server
“The transformed patient data may further (downstream) be sampled and used to generate models including the ensemble model. The transformations and sampling reduces computational resource usage, such as processing cycles and memory usage, when generating the models while increasing the precision of the modeling.” (Chapman-McQuiston, [0158])
“Other examples of the present disclosure may include any number and combination of machine-learning models having any number and combination of characteristics. The machine-learning model(s) can be trained in a supervised, semi-supervised, or unsupervised manner, or any combination of these. The machine-learning model(s) can be implemented using a single computing device (downstream application server) or multiple computing devices (downstream application servers), such as the communications grid computing system 400 discussed above.” (Chapman-McQuiston, [0155])
generate, in real time, filtered inference data, the filtered inference data having fields and records in tabular form, the filtered inference data further having a threshold column, wherein for each record, an indication of whether the respective record meets the predetermined threshold is stored in the threshold column:
“At block 1502, the logic flow 1500 includes obtaining a data set from storage, such as data set 1352 (inference data) from storage system 1350” (Chapman-McQuiston, [0182])
“In embodiments, the logic flow 1500 includes determining one or more subsets of the data set (filtered data) based on one or more criteria at block 1504” (Chapman-McQuiston, [0183])
“The subsets (filtered data) may be identified in the data set using flags or other indicators. For example, the data set may be stored in a database and a column (threshold column) of the database may indicate whether an entry is a member of the first subset or not” (Chapman-McQuiston, [0184]). A database structured with columns constitutes a set of records in tabular form.
store the filtered inference data in the application database:
“The transformed data set and subsets (filtered data) may be stored in storage 1350 (application database) as data set(s) 1352 by the transformation component 1324” (Chapman-McQuiston, [0190]).
“the systems 1330, 1340, and 1350 (application database) may include any number of storage devices to store information and data, such as data 1332, results 1342, and one or more data sets 1352. The information and data can be stored in any type of data structure, such as databases, lists, arrays, trees, hashes, files, and so forth” (Chapman-McQuiston, [0167])
in response to a request from the downstream application server, provide a subset of the filtered inference data, wherein the subset is determined based on the threshold column:
“The transformed data set and subsets (filtered data) may be stored in storage 1350 as data set(s) 1352 by the transformation component 1324 and may be used as training set(s) to generate the one or more models, for example. In some embodiments, the data set 1352 including the subsets, which may be identified by flags, may be sampled to generate the training sets” (Chapman-McQuiston, [0190])
“With reference, to FIG. 13B, the modeling component 1326 (downstream application) may utilize the training set(s) to generate one or more models that may be used to generate predictions for the target variable based on the events” (Chapman-McQuiston, [0191])
Chapman-McQuiston relates to tabulated data filtration in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the existing combination to filter the inference data based on threshold columns, as disclosed by Chapman-McQuiston. Filtering data by percentiles can remove unwanted accidental, incorrect, erroneous, or spurious information for training the downstream model. See Chapman-McQuiston, [0183] and [0192].
Regarding claim 2, the rejection of claim 1 in view of Sasagawa, Tang, and Chapman-McQuiston is incorporated. Chapman-McQuiston further discloses …
the downstream application server operatively coupled to the application database: “The modeling system 1310 (downstream application server) is coupled with one or more data system(s) 1330, results system 1340, and the one or more other storage system(s) 1350 (application database) via one or more interconnects 1301. In some instances, the modeling system 1310 may receive and/or retrieve data from one or more of the data system(s) 1330” (Chapman-McQuiston, [0175]).
… the downstream application server comprising a downstream application server memory and a downstream application server processor: “The memory 1316 stores instructions and data for system 1305, which may be processed by processing circuitry 1318” (Chapman-McQuiston, [0170]); “The processing circuitry 1316 may be connected to and communicate with the other elements of the system 1305 including the modeling system 1310 (downstream application server), the storage 1314, the memory 1316, and the one or more interfaces 1320” (Chapman-McQuiston, [0171]).
… configured to retrieve the subset of the filtered inference data from the application database: “In some embodiments, the data set and the one or more subsets (subset of the filtered data) may be utilized to generate one or more models after one or more transformations are performed, as discussed above with respect to FIG. 15. The transformed data set and subsets may be stored in storage 1350 (application database) as data set(s) 1352 by the transformation component 1324 and may be used as training set(s) to generate the one or more models” (Chapman-McQuiston, [0190]).
… and process the subset of the filtered inference data to generate application data: “With reference, to FIG. 13B, the modeling component 1326 may utilize the training set(s) (subset of the filtered data) to generate one or more models that may be used to generate predictions (application data) for the target variable based on the events” (Chapman-McQuiston, [0191]).
Chapman-McQuiston relates to tabulated data filtration in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Sasagawa and Tang to train a model using the filtered data subset, as disclosed by Chapman-McQuiston. Filtering data by percentiles can remove unwanted accidental, incorrect, erroneous, or spurious information for training the downstream model. See Chapman-McQuiston, [0183] and [0192].
Regarding claim 3, the rejection of claim 2 in view of Sasagawa, Tang, and Chapman-McQuiston is incorporated. Chapman-McQuiston further teaches a system, wherein the processing the subset of the filtered inference data comprises providing the subset of the filtered inference data as input to an additional machine learning model, wherein an output of the additional machine learning model is the application data:
“With reference, to FIG. 13B, the modeling component 1326 may utilize the training set(s) (subset of the filtered data) to generate one or more models (additional machine learning model[s]) that may be used to generate predictions (application data) for the target variable based on the events” (Chapman-McQuiston, [0191]).
Chapman-McQuiston relates to tabulated data filtration in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Sasagawa and Tang to train a model using the filtered data subset, as disclosed by Chapman-McQuiston. Filtering data by percentiles can remove unwanted accidental, incorrect, erroneous, or spurious information for training the downstream model. See Chapman-McQuiston, [0183] and [0192].
Regarding claim 4, the rejection of claim 2 in view of Sasagawa, Tang, and Chapman-McQuiston is incorporated. Chapman-McQuiston teaches a system, further comprising a user device operatively coupled to the downstream application server, wherein the downstream application server processor is further configured to generate a notification based on the application data, and transmit the notification to the user device: “the modeling system 1310 may include a data component 1322, a transformation component 1324, a modeling component 1326, and a results component 1328 to process data, generate models and an ensemble model, generate predictions for a target variable, and generate results 1342 (notification[s]) based on the predictions (application data) … The modeling system 1310 can present the results 1342 (notification) and data identified as outside of the predictions to a user in a presentation on a display device 1362” (Chapman-McQuiston, [0176])
Chapman-McQuiston relates to tabulated data filtration in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the existing combination to send a notification related to application data to a user device, as disclosed by Chapman-McQuiston. This type of structured output can be tailored to a type of analysis that a user wishes to perform on the data, including determining predicted timeframes for events, detecting events with timeframes outside of the predictions, and identifying anomalous data. See Chapman-McQuiston, [0051], [0176], and [0201].
Regarding claim 5, the rejection of claim 2 in view of Sasagawa, Tang, and Chapman-McQuiston is incorporated. Chapman-McQuiston teaches a system, wherein the processor is further configured to: process the inference data to identify records that meet a second predetermined threshold, wherein the filtered inference data has a second threshold column, wherein for each record, an indication of whether the respective record meets the second predetermined threshold is stored in the second threshold column:
“The subsets (filtered data) may be identified in the data set using flags or other indicators … In one specific example, a first column may indicate whether entries of the data set are members of the top 25 percentile and a second column (second threshold column) may indicate whether entries of the data set are members of the top 75 percentile (second predetermined threshold)” (Chapman-McQuiston, [0184]).
Chapman-McQuiston relates to tabulated data filtration in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Sasagawa and Tang to train a model using the filtered data subset, as disclosed by Chapman-McQuiston. Filtering data by percentiles can remove unwanted accidental, incorrect, erroneous, or spurious information for training the downstream model. See Chapman-McQuiston, [0183] and [0192].
Regarding claim 6, the rejection of claim 5 in view of Sasagawa, Tang, and Chapman-McQuiston is incorporated. Chapman-McQuiston discloses a system, further comprising …
… a second computer operatively coupled to the application database: “In some embodiments, the computing system environment (second computer) may include a system 1305 having a number of components and is coupled with other systems, including … one or more other storage system(s) 1350 (application database)” (Chapman-McQuiston);
… the second computer comprising a second memory and a second processor: “The memory of system 1305 can be implemented using any machine-readable or computer-readable media capable of storing data” (Chapman-McQuiston, [0170]); “In embodiments, the system 1305 may include processing circuitry … The processing circuitry may be connected to and communicate with the other elements of the system 1305” (Chapman-McQuiston, [0171]).
… configured to: retrieve a second subset of the filtered inference data from the application database: “In some embodiments, the system and techniques include generating a first model based on the first subset of the patient data, the first model for use to determine expected length of stay ranges for each of one or more DRGs, and generating a second model based on the second subset of the patient data, the second model for use to determine the expected length of stay ranges for each of one or more DRGs” (Chapman-McQuiston, [0162]).
… and process the second subset of the filtered inference data to generate second application data: “In some embodiments, the system and techniques include generating a first model based on the first subset of the patient data, the first model for use to determine expected length of stay ranges for each of one or more DRGs, and generating a second model based on the second subset of the patient data, the second model for use to determine the expected length of stay ranges for each of one or more DRGs (second application data)” (Chapman-McQuiston, [0162]).
wherein the second subset is determined based on the threshold column: “In one example, … a second model may be generating utilizing a sampling of the subset of data including the top 75 percentile grouping of amounts paid” (Chapman-McQuiston, [0192]).
Chapman-McQuiston relates to tabulated data filtration in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Sasagawa and Tang to train a model using the filtered data subset, as disclosed by Chapman-McQuiston. Filtering data by percentiles can remove unwanted accidental, incorrect, erroneous, or spurious information for training the downstream model. See Chapman-McQuiston, [0183] and [0192].
Regarding claim 7, the rejection of claim 1 in view of Sasagawa, Tang, and Chapman-McQuiston is incorporated. Tang discloses a system, further comprising
a preprocessor configured to preprocess input data in tabular form to generate preprocessed data: “if the query language of the query engine 200 is SQL, then the training user-defined function can include a SQL statement that obtains the training examples 242 (input data) from the database system 240 and pre-processes the training examples 242” (Tang, [0034])
a machine learning processor configured to execute the machine learning model on the preprocessed data to generate inference data and store the inference data in the application database, prior to the processor retrieving the inference data from the application database:
“the inference user-defined function can include instructions to pre-process the model inputs 245 to put the model inputs 245 into a form that can be received by the machine learning model.” (Tang, [0045])
“the system can store the model outputs (inference data) generated by the machine learning model in the database system” (Tang, [0072])
Tang relates to processing, management, storage, and retrieval of machine learning inference outputs and is analogous to the claimed invention. The existing combination teaches a system for filtering inference data to improve downstream application server performance. The claimed invention improves upon this method by preprocessing data used to generate inference data. Tang teaches a method of preprocessing data used to generate inference data, applicable to the existing combination. A person of ordinary skill in the art would have recognized that preprocessing machine learning input data would lead to the predictable result of normalizing the data and ensuring its compatibility with the model, and would improve the known device by removing erroneous and/or incompatible data, thus ensuring more accurate inference data outputs (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
Regarding claim 8, the rejection of claim 1 in view of Sasagawa, Tang, and Chapman-McQuiston is incorporated. Chapman-McQuiston teaches a system, further comprising a source database and a publishing server operatively coupled to the source database, the publishing server comprising a publishing server memory and a publishing server processor configured to export the input data from the source database to the application database:
“In embodiments, the modeling system 1310 (publishing server) including the data component 1322 may collect and/or receive data (input data) from various sources (source database[s]), group the data into a data set (combined input data), and make the data set available for other components of the modeling system 1310 to use in generating predictions for the target variable, e.g. predicted timeframes for events” (Chapman-McQuiston, [0177])
“the data component 1322 may store the combined data (combined input data) as data set 1352 in storage system 1350 (application database). The data set 1352 may then be retrieved from the storage system 1350 (application database) and used by other components of the modeling system 1310” (Chapman-McQuiston, [0179]).
“The memory 1316 stores instructions and data for system 1305, which may be processed by processing circuitry 1318” (Chapman-McQuiston, [0170]); “The processing circuitry 1316 may be connected to and communicate with the other elements of the system 1305 including the modeling system 1310 (publishing server), the storage 1314, the memory 1316, and the one or more interfaces 1320” (Chapman-McQuiston, [0171]).
Chapman-McQuiston relates to tabulated data in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the existing combination to export input data from a source database to the application database, as disclosed by Chapman-McQuiston. This allows the unified database used by the machine learning models to be derived from multiple sources, and allows for the filtration of undesirable data (data missing fields, low quality, etc.). This method also allows for the application database to be updated based on the availability of source databases. See Chapman-McQuiston, [0177], [0179], [0180].
Regarding claim 9, the rejection of claim 1 in view of Sasagawa, Tang, and Chapman-McQuiston is incorporated. Chapman-McQuiston further teaches a system, wherein the predetermined threshold is determined based on a percentile placement of each of the records in the inference data: “These transformations may include grouping the data of a data set into one or more percentile groups (predetermined threshold[s]), e.g. flag the top twenty-fifth percentile (25%) (percentile placement) of costs of a patient's medical claims associated with a particular service or diagnosis, flag the top seventy-fifth percentile (75%) of costs of a patient's medical claims associated with a particular service or diagnosis, etc.” (Chapman-McQuiston, [0181]); “The subsets may be identified in the data set using flags or other indicators. For example, the data set may be stored in a database and a column of the database may indicate whether an entry is a member of the first subset or not” (Chapman-McQuiston, [0184]). Each entry of a percentile column denotes whether the corresponding record is within a particular percentile placement.
Chapman-McQuiston relates to tabulated data filtration in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Sasagawa and Tang to train a model using the filtered data subset, as disclosed by Chapman-McQuiston. Filtering data by percentiles can remove unwanted accidental, incorrect, erroneous, or spurious information for training the downstream model. See Chapman-McQuiston, [0183] and [0192].
Regarding claim 10, the rejection of claim 9 in view of Sasagawa, Tang, and Chapman-McQuiston is incorporated. Chapman-McQuiston further teaches a system, wherein at least one of the fields of the tabular data comprises numerical data, and wherein the percentile placement is based on the numerical data: “These transformations may include grouping the data of a data set into one or more percentile groups, e.g. flag the top twenty-fifth percentile (25%) (percentile placement) of costs of a patient's medical claims (numerical data) associated with a particular service or diagnosis, flag the top seventy-fifth percentile (75%) of costs of a patient's medical claims associated with a particular service or diagnosis, etc.” (Chapman-McQuiston, [0181]).
Chapman-McQuiston relates to tabulated data filtration in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Sasagawa and Tang to train a model using the filtered data subset, as disclosed by Chapman-McQuiston. Filtering data by percentiles can remove unwanted accidental, incorrect, erroneous, or spurious information for training the downstream model. See Chapman-McQuiston, [0183] and [0192].
Regarding claim 11, Sasagawa discloses [a] method of optimized multi-stage processing, the method comprising:
receiving, using a first processor, inference data from a machine learning model, the inference data having fields and records in tabular form:
“Glue layer GL connects predetermined intermediate layer mL1 included in first partial inference model M1p and predetermined intermediate layer mL2 included in second partial inference model M2p. For example, glue layer GL is a convolution layer that converts output data (inference data) that is output from first partial inference model M1p (machine learning model) into input data that is input to second partial inference model M2p or is a fully connected layer.” (Sasagawa, [0047])
The inference data output from the first partial inference model must be retrieved in some fashion to be input to the second model (Sasagawa, [0047], quoted above).
“Furthermore, the present disclosure may be a computer system that includes a microprocessor and memory, the memory has stored therein the above computer program, and the microprocessor may operate in accordance with the computer program.” (Sasagawa, [0131])
processing, in real time, using the first processor, the inference data to identify records that meet a predetermined threshold, to reduce computing burden for downstream processing by a downstream application server:
(Sasagawa, Figure 4)
“Glue layer GL connects predetermined intermediate layer mL1 included in first partial inference model M1p and predetermined intermediate layer mL2 included in second partial inference model M2p. For example, glue layer GL is a convolution layer that converts (process[es]) output data (inference data) that is output from first partial inference model M1p into input data that is input to second partial inference model M2p (downstream application) or is a fully connected layer.” (Sasagawa, [0047])
generating, in real time, using the first processor, filtered inference data, the filtered inference data having fields and records in tabular form, the filtered inference data further having a threshold column, wherein for each record, an indication of whether the respective record meets the predetermined threshold is stored in the threshold column; the first processor storing the filtered inference data in the application database: “More specifically, glue layer GL has a function of mapping intermediate representation A1 of first inference model M1 (inference data) into intermediate representation B2 (filtered inference data) of second inference model M2” (Sasagawa, [0048])
Sasagawa relates to downstream processing of machine learning inference data and is analogous to the claimed invention.
While Sasagawa fails to disclose the further limitations of the claim, Tang teaches a system, comprising:
receiving, using a first processor, inference data from a machine learning model, the inference data having fields and records in tabular form:
“the query engine can perform inference on a trained machine learning model using a user-defined function that one or more users designed to execute inference; that is, the query engine can process model inputs to generate model outputs (inference data) characterizing predictions about the model inputs according to the user-defined function” (Tang, [0009])
“the system can store the model outputs generated by the machine learning model in the database system” (Tang, [0072])
“This specification relates to databases. Typically, a database is either a relational database or a non-relational database. A relational database represents and stores data in tables (tabular form) that have defined relationships, and is often queried using a query language, e.g., Structured Query Language (SQL)” (Tang, [0001])
“After completing the inference user-defined function, the query engine 200 can provide the generated model outputs 234 (inference data) to the user device 210.” (Tang, [0047])
“The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.” (Tang, [0078])
Tang relates to processing, management, storage, and retrieval of machine learning inference outputs and is analogous to the claimed invention. Sasagawa teaches a method of passing the outputs of one machine learning model to the inputs of another. The claimed invention improves upon this method by storing and retrieving inference outputs in a database. Tang teaches a method of storing and retrieving inference outputs in a database. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to store the inference data of Sasagawa’s first model in a tabular database. This would achieve the predictable result of storing the data in a structured container with standardized methods of storage and retrieval, with Sasagawa’s data and Tang’s database each performing the same function in combination as they did separately. (MPEP 2143 I. (A): Combining prior art elements according to known methods to yield predictable results.)
While Tang fails to disclose the further limitations of the claim, Chapman-McQuiston discloses instructions to:
processing, in real time, using the first processor, the inference data to identify records that meet a predetermined threshold:
“This allows for large amounts of data being received and/or collected in a variety of environments to be processed and distributed in real time.” (Chapman-McQuiston, [0135])
“the modeling system 1310 may include a transformation component 1324 to perform one or more transformations (process[ing]) on a data set, such that one or more models may be generated that may be used to determine predicted timeframes for events. These transformations may include grouping the data of a data set into one or more percentile groups (predetermined threshold[s]), e.g. flag the top twenty-fifth percentile (25%) of costs of a patient's medical claims associated with a particular service or diagnosis, flag the top seventy-fifth percentile (75%) of costs of a patient's medical claims associated with a particular service or diagnosis, etc.” (Chapman-McQuiston, [0181]).
“In embodiments, the system 1305 may include processing circuitry 1318 which may include one or more of any type of computational element, such as but not limited to, a microprocessor, a processor, central processing unit, digital signal processing unit, dual-core processor, mobile device processor, desktop processor, single core processor, a system-on-chip (SoC) device, complex instruction set computing (CISC) microprocessor, a reduced instruction set (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or any other type of processing circuitry, processor or processing circuit on a single chip or integrated circuit. The processing circuitry 1316 may be connected to and communicate with the other elements of the system 1305 including the modeling system 1310, the storage 1314, the memory 1316, and the one or more interfaces 1320.” (Chapman-McQuiston, [0171])
…to reduce computing burden for downstream processing by a downstream application server
“The transformed patient data may further (downstream) be sampled and used to generate models including the ensemble model. The transformations and sampling reduces computational resource usage, such as processing cycles and memory usage, when generating the models while increasing the precision of the modeling.” (Chapman-McQuiston, [0158])
“Other examples of the present disclosure may include any number and combination of machine-learning models having any number and combination of characteristics. The machine-learning model(s) can be trained in a supervised, semi-supervised, or unsupervised manner, or any combination of these. The machine-learning model(s) can be implemented using a single computing device (downstream application server) or multiple computing devices (downstream application servers), such as the communications grid computing system 400 discussed above.” (Chapman-McQuiston, [0155])
generating, in real time, using the first processor, filtered inference data, the filtered inference data having fields and records in tabular form, the filtered inference data further having a threshold column, wherein for each record, an indication of whether the respective record meets the predetermined threshold is stored in the threshold column:
“At block 1502, the logic flow 1500 includes obtaining a data set from storage, such as data set 1352 (inference data) from storage system 1350” (Chapman-McQuiston, [0182])
“In embodiments, the logic flow 1500 includes determining one or more subsets of the data set (filtered data) based on one or more criteria at block 1504” (Chapman-McQuiston, [0183])
“The subsets (filtered data) may be identified in the data set using flags or other indicators. For example, the data set may be stored in a database and a column (threshold column) of the database may indicate whether an entry is a member of the first subset or not” (Chapman-McQuiston, [0184]). A database structured with columns constitutes a set of records in tabular form.
the first processor storing the filtered inference data in the application database:
“The transformed data set and subsets (filtered data) may be stored in storage 1350 (application database) as data set(s) 1352 by the transformation component 1324” (Chapman-McQuiston, [0190]).
“the systems 1330, 1340, and 1350 (application database) may include any number of storage devices to store information and data, such as data 1332, results 1342, and one or more data sets 1352. The information and data can be stored in any type of data structure, such as databases, lists, arrays, trees, hashes, files, and so forth” (Chapman-McQuiston, [0167])
in response to a request from the downstream application server, providing a subset of the filtered inference data, wherein the subset is determined based on the threshold column:
“The transformed data set and subsets (filtered data) may be stored in storage 1350 as data set(s) 1352 by the transformation component 1324 and may be used as training set(s) to generate the one or more models, for example. In some embodiments, the data set 1352 including the subsets, which may be identified by flags, may be sampled to generate the training sets” (Chapman-McQuiston, [0190])
“With reference, to FIG. 13B, the modeling component 1326 (downstream application) may utilize the training set(s) to generate one or more models that may be used to generate predictions for the target variable based on the events” (Chapman-McQuiston, [0191])
Chapman-McQuiston relates to tabulated data filtration in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the existing combination to filter the inference data based on threshold columns, as disclosed by Chapman-McQuiston. Filtering data by percentiles can remove unwanted accidental, incorrect, erroneous, or spurious information for training the downstream model. See Chapman-McQuiston, [0183] and [0192].
The analysis of claims 12-19 mirrors that of claims 2-10, with the exception that claims 2-10 are directed to generic computer hardware which executes the methods of claims 12-19. The application of generic computer hardware to the methods of claims 12-19 is taught by Sasagawa, Tang, and Chapman-McQuiston, as discussed regarding claims 2-10. Thus, claims 12-19 are rejected under the same rationale used for claims 2-10.
Regarding claim 20, Sasagawa discloses [a] non-transitory computer readable medium storing computer executable instructions which, when executed by a computer processor, cause the computer processor to carry out a method of processing machine learning model predictions: “One or more of the elements included in the inference model generating apparatus may be included in a computer system that includes a microprocessor, a ROM (non-transitory computer readable medium), a RAM, a hard disk unit, a display unit, a keyboard, a mouse, and the like. A computer program is stored in the RAM or the hard disk unit. The microprocessor achieves its function by operating in accordance with the computer program. Here, the computer program includes a combination of instruction codes indicating instructions to a computer in order to achieve predetermined functions” (Sasagawa, [0125])
Sasagawa’s instructions comprising:
receiving, using a first processor, inference data from a machine learning model, the inference data having fields and records in tabular form:
“Glue layer GL connects predetermined intermediate layer mL1 included in first partial inference model M1p and predetermined intermediate layer mL2 included in second partial inference model M2p. For example, glue layer GL is a convolution layer that converts output data (inference data) that is output from first partial inference model M1p (machine learning model) into input data that is input to second partial inference model M2p or is a fully connected layer.” (Sasagawa, [0047])
The inference data output from the first partial inference model must be retrieved in some fashion to be input to the second model (Sasagawa, [0047], quoted above).
“Furthermore, the present disclosure may be a computer system that includes a microprocessor and memory, the memory has stored therein the above computer program, and the microprocessor may operate in accordance with the computer program.” (Sasagawa, [0131])
processing, in real time, using the first processor, the inference data to identify records that meet a predetermined threshold, to reduce computing burden for downstream processing by a downstream application server:
(Sasagawa, Figure 4)
“Glue layer GL connects predetermined intermediate layer mL1 included in first partial inference model M1p and predetermined intermediate layer mL2 included in second partial inference model M2p. For example, glue layer GL is a convolution layer that converts (process[es]) output data (inference data) that is output from first partial inference model M1p into input data that is input to second partial inference model M2p (downstream application) or is a fully connected layer.” (Sasagawa, [0047])
generating, in real time, using the first processor, filtered inference data, the filtered inference data having fields and records in tabular form, the filtered inference data further having a threshold column, wherein for each record, an indication of whether the respective record meets the predetermined threshold is stored in the threshold column; the first processor storing the filtered inference data in the application database: “More specifically, glue layer GL has a function of mapping intermediate representation A1 of first inference model M1 (inference data) into intermediate representation B2 (filtered inference data) of second inference model M2” (Sasagawa, [0048])
Sasagawa relates to downstream processing of machine learning inference data and is analogous to the claimed invention.
While Sasagawa fails to disclose the further limitations of the claim, Tang teaches a system, comprising:
receiving, using a first processor, inference data from a machine learning model, the inference data having fields and records in tabular form:
“the query engine can perform inference on a trained machine learning model using a user-defined function that one or more users designed to execute inference; that is, the query engine can process model inputs to generate model outputs (inference data) characterizing predictions about the model inputs according to the user-defined function” (Tang, [0009])
“the system can store the model outputs generated by the machine learning model in the database system” (Tang, [0072])
“This specification relates to databases. Typically, a database is either a relational database or a non-relational database. A relational database represents and stores data in tables (tabular form) that have defined relationships, and is often queried using a query language, e.g., Structured Query Language (SQL)” (Tang, [0001])
“After completing the inference user-defined function, the query engine 200 can provide the generated model outputs 234 (inference data) to the user device 210.” (Tang, [0047])
“The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.” (Tang, [0078])
Tang relates to processing, management, storage, and retrieval of machine learning inference outputs and is analogous to the claimed invention. Sasagawa teaches a method of passing the outputs of one machine learning model to the inputs of another. The claimed invention improves upon this method by storing and retrieving inference outputs in a database. Tang teaches a method of storing and retrieving inference outputs in a database. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to store the inference data of Sasagawa’s first model in a tabular database. This would achieve the predictable result of storing the data in a structured container with standardized methods of storage and retrieval, with Sasagawa’s data and Tang’s database each performing the same function in combination as they did separately. (MPEP 2143 I. (A): Combining prior art elements according to known methods to yield predictable results.)
While Tang fails to disclose the further limitations of the claim, Chapman-McQuiston discloses instructions comprising:
processing, in real time, using the first processor, the inference data to identify records that meet a predetermined threshold:
“This allows for large amounts of data being received and/or collected in a variety of environments to be processed and distributed in real time.” (Chapman-McQuiston, [0135])
“the modeling system 1310 may include a transformation component 1324 to perform one or more transformations (process[ing]) on a data set, such that one or more models may be generated that may be used to determine predicted timeframes for events. These transformations may include grouping the data of a data set into one or more percentile groups (predetermined threshold[s]), e.g. flag the top twenty-fifth percentile (25%) of costs of a patient's medical claims associated with a particular service or diagnosis, flag the top seventy-fifth percentile (75%) of costs of a patient's medical claims associated with a particular service or diagnosis, etc.” (Chapman-McQuiston, [0181]).
“In embodiments, the system 1305 may include processing circuitry 1318 which may include one or more of any type of computational element, such as but not limited to, a microprocessor, a processor, central processing unit, digital signal processing unit, dual-core processor, mobile device processor, desktop processor, single core processor, a system-on-chip (SoC) device, complex instruction set computing (CISC) microprocessor, a reduced instruction set (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or any other type of processing circuitry, processor or processing circuit on a single chip or integrated circuit. The processing circuitry 1316 may be connected to and communicate with the other elements of the system 1305 including the modeling system 1310, the storage 1314, the memory 1316, and the one or more interfaces 1320.” (Chapman-McQuiston, [0171])
…to reduce computing burden for downstream processing by a downstream application server
“The transformed patient data may further (downstream) be sampled and used to generate models including the ensemble model. The transformations and sampling reduces computational resource usage, such as processing cycles and memory usage, when generating the models while increasing the precision of the modeling.” (Chapman-McQuiston, [0158])
“Other examples of the present disclosure may include any number and combination of machine-learning models having any number and combination of characteristics. The machine-learning model(s) can be trained in a supervised, semi-supervised, or unsupervised manner, or any combination of these. The machine-learning model(s) can be implemented using a single computing device (downstream application server) or multiple computing devices (downstream application servers), such as the communications grid computing system 400 discussed above.” (Chapman-McQuiston, [0155])
generating, in real time, using the first processor, filtered inference data, the filtered inference data having fields and records in tabular form, the filtered inference data further having a threshold column, wherein for each record, an indication of whether the respective record meets the predetermined threshold is stored in the threshold column:
“At block 1502, the logic flow 1500 includes obtaining a data set from storage, such as data set 1352 (inference data) from storage system 1350” (Chapman-McQuiston, [0182])
“In embodiments, the logic flow 1500 includes determining one or more subsets of the data set (filtered data) based on one or more criteria at block 1504” (Chapman-McQuiston, [0183])
“The subsets (filtered data) may be identified in the data set using flags or other indicators. For example, the data set may be stored in a database and a column (threshold column) of the database may indicate whether an entry is a member of the first subset or not” (Chapman-McQuiston, [0184]). A database structured with columns constitutes a set of records in tabular form.
the first processor storing the filtered inference data in the application database:
“The transformed data set and subsets (filtered data) may be stored in storage 1350 (application database) as data set(s) 1352 by the transformation component 1324” (Chapman-McQuiston, [0190]).
“the systems 1330, 1340, and 1350 (application database) may include any number of storage devices to store information and data, such as data 1332, results 1342, and one or more data sets 1352. The information and data can be stored in any type of data structure, such as databases, lists, arrays, trees, hashes, files, and so forth” (Chapman-McQuiston, [0167])
in response to a request from the downstream application server, providing a subset of the filtered inference data, wherein the subset is determined based on the threshold column:
“The transformed data set and subsets (filtered data) may be stored in storage 1350 as data set(s) 1352 by the transformation component 1324 and may be used as training set(s) to generate the one or more models, for example. In some embodiments, the data set 1352 including the subsets, which may be identified by flags, may be sampled to generate the training sets” (Chapman-McQuiston, [0190])
“With reference, to FIG. 13B, the modeling component 1326 (downstream application) may utilize the training set(s) to generate one or more models that may be used to generate predictions for the target variable based on the events” (Chapman-McQuiston, [0191])
Chapman-McQuiston relates to tabulated data filtration in machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the existing combination to filter the inference data based on threshold columns, as disclosed by Chapman-McQuiston. Filtering data by percentiles can remove unwanted accidental, incorrect, erroneous, or spurious information for training the downstream model. See Chapman-McQuiston, [0183] and [0192].
Response to Arguments
The following responses address the arguments made in the Applicant’s remarks dated 02/03/2026.
112 Rejections
After further consideration of the claims, rejections under 35 U.S.C. 112(b) have been made for claims 6 and 16.
101 Rejections
On page 9 of the instant remarks, the Applicant argues that the claims are practically integrated through improvement to technology:
“While the Examiner asserts that the claims recite a judicial exception under Step 2A, Prong One (which Applicant does not concede), the claims when viewed as a whole integrate any such exception into a practical application that provides a tangible improvement to computer functionality. Therefore, under the analysis set forth in MPEP § 2106.04(d) and reinforced by Desjardins, the claims are not "directed to" a judicial exception and are patent-eligible under Step 2A, Prong Two.
…
The specification explains that conventional multi-stage processing can lead to compounded computational effort, and that the present claims provide "an intermediate processing step to identify and/or filter records ... thereby simplifying and speeding downstream processing by concentrating only on those records that meet the predetermined threshold" (Specification, [0027]). This directly results in conserved computational resources and reduced computing burden (Specification, [0027]-[0028]), which are precisely the types of benefits that Desjardins recognizes as technological improvements.”
The Applicant’s arguments above have been fully considered in light of MPEP guidance issued since the previous rejection and in light of the instant amendments to the claims, and are persuasive. Thus, the independent claims are found to integrate the recited judicial exceptions into a practical application through an improvement to technology, and all 101 rejections are withdrawn on this basis.
103 Rejections
On pages 12-13 of the instant remarks, the Applicant argues that Chapman-McQuiston’s technique is not commensurate in scope with the claim language:
“The Examiner's Rationale is Based on a Mischaracterization of the Prior Art's Teaching

A proper obviousness rejection requires articulating a reason why a person of ordinary skill in the art would have been motivated to combine the prior art teachings as claimed. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). The Examiner's asserted motivation appears to rest on the premise that Chapman-McQuiston teaches a general technique for reducing computational burden. Applicant respectfully submits that this premise is incorrect.

Chapman-McQuiston teaches a specific technique for a specific purpose: using data transformations to "reduce[] computational resource usage ... when generating the models" (Chapman-McQuiston, [0158], emphasis added). The technique is explicitly for making the process of model training more efficient. It is not taught as a general-purpose technique for improving the operational efficiency of any downstream process. The Examiner has improperly broadened this specific teaching to fit the limitations of the claim.”
Regarding the Applicant’s arguments above, the Examiner respectfully disagrees. A limitation reciting a generalized process includes more specific versions of that process within the scope of the claim. While “to reduce computing burden for downstream processing by a downstream application server”, as recited in amended claim 1, is a general method of improving downstream processing, it encompasses specific techniques of improving downstream processing, such as improving downstream model training, as recited by Chapman-McQuiston.
Thus, no rejections are withdrawn on these grounds.
On pages 13-14 of the instant remarks, the Applicant argues that one of ordinary skill in the art would not have been motivated to combine Cheng with Chapman-McQuiston:
“The Prior Art Lacks a Teaching, Suggestion, or Motivation to Combine as Claimed

Because Chapman-McQuiston's teaching is directed to improving model training, the skilled person would only have been motivated to apply that technique to the training phase of Cheng's system. There is no teaching, suggestion, or motivation in the references that would have prompted the skilled person to instead re-architect Cheng's system to create a new, intermediate processing stage that intercepts the inference output of one model to improve the operational execution of a separate downstream process.

The Examiner's proposed modification is not a simple application of a known technique to a known system. It requires a fundamental change to the system's architecture and a repurposing of the prior art technique to solve a different problem - improving downstream execution speed - that Chapman-McQuiston does not address. The Examiner has not provided any reasoning, supported by the evidence of record, as to why a skilled person would have been motivated to make these specific, unclaimed modifications. To arrive at the claimed invention, one must use the Applicant's own disclosure as a roadmap, which is impermissible hindsight.

Accordingly, it is respectfully submitted that the Examiner has failed to establish a prima facie case of obviousness. The asserted motivation to combine is not supported by the references, as it relies on mischaracterizing a technique for improving model training as a generic technique for improving downstream operational execution. A skilled person would not have been motivated to combine the teachings of Cheng and Chapman-McQuiston to arrive at the specific multi-stage architecture recited in the claims.”
The Applicant’s arguments above have been fully considered and are persuasive. Therefore, the previous rejections of claims 1-20 under 35 U.S.C. 103 have been withdrawn. However, upon further search and consideration, new grounds of rejection for claims 1-20 have been made in view of Sasagawa, Tang, and Chapman-McQuiston.
Sasagawa discloses a system in which inference outputs of one machine learning model are passed to a downstream model with intermediate processing for training the second model (Sasagawa, [0047]). It would have been obvious to store data in Sasagawa’s system in a tabular format, as disclosed by Tang ([0001], [0009]), as database storage is a known technique that would behave predictably when used for the inference data of Sasagawa (MPEP 2143 I. (A): Combining prior art elements according to known methods to yield predictable results). It would have been obvious to one of ordinary skill to apply available optimizations to the intermediate processing of inference data in Sasagawa’s system, such as the percentile-based data optimization disclosed by Chapman-McQuiston (Chapman-McQuiston, [0183], [0192]).
See the rejections under 35 U.S.C. 103 above for more detail.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Lee et al. (ELECTRONIC DEVICE AND METHOD FOR CONTROLLING ELECTRONIC DEVICE, filed 8/12/2020, US 20220215276 A1) discloses a method of separating a downstream application server from an upstream server via two different server devices, wherein inference model results are transmitted from the output of the first model to the input of the second.
Barber et al. (Unsupervised Learning And Prediction Of Lines Of Therapy From High-Dimensional Longitudinal Medications Data, filed 8/24/2020, US 20210057071 A1) teaches a method of generating tabular data with a machine learning model, and combining it with EHR records.
Banis (Multi-client service system platform, filed 12/17/2019, US 20200210867 A1) teaches a method of selecting a subset of data based on threshold values using a machine learning model.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron P Gormley whose telephone number is (571)272-1372. The examiner can normally be reached Monday - Friday 12:00 PM - 8:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AG/Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148