DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
The following is an Office Action on the merits in response to the communication received on 11/25/25.
Claim status:
Amended claims: 1, 4, 8, 11, 15, and 18
Canceled claims: none
New claims: none
Pending claims: 1-20
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is not directed to statutory subject matter. Specifically, the invention of claims 1-20 is directed to an abstract idea without significantly more.
Independent claims 1, 8 and 15 are directed to a method (claim 1), a non-transitory computer readable medium (claim 8) and a system (claim 15). Therefore, on its face, each of claims 1, 8 and 15 is directed to a statutory category of invention under Step 1 of the 2019 PEG. However, each of claims 1, 8 and 15 is also directed to an abstract idea without significantly more, under Step 2A (Prong One and Prong Two) and Step 2B of the 2019 PEG, which is a judicial exception to 35 U.S.C. 101, as detailed below. Using the language of independent claim 1 to illustrate, the claim recites the limitations of: (i) receiving a request to perform at least one financial simulation of a financial profile pertaining to a consumer, wherein the request includes metadata that is required to perform the at least one financial simulation; (ii) receiving a credit data of the consumer, wherein the credit data includes at least one of a credit score, tradeline, credit inquiry, or a public record of the consumer; (iii) aggregating the credit data of the consumer to determine one or more features required by a Machine Learning (ML) model; (iv) creating a feature array structured for ML input from the aggregated credit data; (v) submitting the feature array to the ML model, the ML model having been trained using credit profiles of a plurality of consumers; and (vi) generating a prediction related to the at least one financial simulation based on an output of the ML model. Under the broadest reasonable interpretation, these limitations cover methods of organizing human activity – fundamental economic principles or practices – mitigating risk, but for the recitation of generic computers and generic computer components. (Independent claims 8 and 15 recite similar limitations and the analysis is the same.)
That is, other than reciting a computer, nothing in the claim precludes the steps of aggregating the credit data of the consumer to determine one or more features required by a Machine Learning (ML) model; creating a feature array structured for ML input from the aggregated credit data; submitting the feature array to the ML model, the ML model having been trained using credit profiles of a plurality of consumers; and generating a prediction related to the at least one financial simulation based on an output of the ML model from being directed to methods of organizing human activity – fundamental economic principles or practices – mitigating risk. If a claim limitation, under its BRI, covers methods of organizing human activity but for the recitation of generic computer components, then the limitation falls within the “methods of organizing human activity” grouping of abstract ideas. Therefore, claim 1 recites an abstract idea under Step 2A Prong One of the Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).
This method of organizing human activity is not integrated into a practical application under Step 2A Prong Two of the 2019 PEG. In particular, claim 1 recites only the following additional elements: a computer; aggregating the credit data of the consumer to determine one or more features required by a Machine Learning (ML) model; creating a feature array structured for ML input from the aggregated credit data; submitting the feature array to the ML model, the ML model having been trained using credit profiles of a plurality of consumers; and generating a prediction related to the at least one financial simulation based on an output of the ML model.
The computer and the steps of aggregating the credit data of the consumer to determine one or more features required by a Machine Learning (ML) model; creating a feature array structured for ML input from the aggregated credit data; submitting the feature array to the ML model, the ML model having been trained using credit profiles of a plurality of consumers; and generating a prediction related to the at least one financial simulation based on an output of the ML model are recited at a high level of generality (i.e., as a generic computer performing generic computer functions) such that they amount to no more than instructions to implement the abstract idea with a computer (see MPEP 2106.05(h)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Under Step 2B of the 2019 PEG, independent claim 1 does not include additional elements that are sufficient to amount to significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a computer to perform (i) receiving a request to perform at least one financial simulation of a financial profile pertaining to a consumer, wherein the request includes metadata that is required to perform the at least one financial simulation; (ii) receiving a credit data of the consumer, wherein the credit data includes at least one of a credit score, tradeline, credit inquiry, or a public record of the consumer; (iii) aggregating the credit data of the consumer to determine one or more features required by a Machine Learning (ML) model; (iv) creating a feature array structured for ML input from the aggregated credit data; (v) submitting the feature array to the ML model, the ML model having been trained using credit profiles of a plurality of consumers; and (vi) generating a prediction related to the at least one financial simulation based on an output of the ML model amount to no more than instructions to implement the abstract idea with a computer. The claims are not patent eligible.
The dependent claims have been given the full two-part analysis, including analyzing the additional limitations individually. The dependent claims, when analyzed individually, are also held to be patent ineligible under 35 U.S.C. 101 for the same reasoning as above, and the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional limitations of the dependent claims, when considered individually, do not amount to significantly more than the abstract idea. Claims 2-7, 9-14 and 16-20 merely further explain the abstract idea.
When viewed individually, the additional limitations do not amount to a claim as a whole that is significantly more than the abstract idea. Accordingly, claims 1-20 are ineligible.
Claim Rejections - 35 USC § 112
The Applicant’s arguments and amendments overcome the 35 U.S.C. 112(b) rejections; the rejections are therefore moot.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Robida (U.S. Pub. No. 2007/0214076) in view of Kornegay (U.S. Patent No. 7,610,229) and Mathews (U.S. Pub. No. 2025/0315448).
With respect to claims 1, 8 and 15:
Robida teaches:
receiving a credit data of the consumer, wherein the credit data includes at least one of a credit score, tradeline, credit inquiry, or a public record of the consumer (“As those of skill in the art will recognize, bankruptcy data may be obtained from various sources, such as public records or financial account information that may be available from one or more data sources” Robida Pgh. [0060]);
aggregating the credit data of the consumer to determine one or more features required by a Machine Learning (ML) model (“Moving to a block 220, one or more models are developed based on a comparison of the received data. In the embodiment of FIG. 2, a model is generated by comparing characteristics of individuals that are classified as fitting either a good or a bad definition. In one embodiment, for example, a bad performance definition is associated with individuals having at least one account that has had a 90+ days past due status within the previous two years, for example, while the good performance definition is associated with individuals that have not had a 90+ days past due status on any accounts within the previous two years. It is recognized that in other scenarios, individuals with at least one account that is 90+ days past due may be classified as a good performance definition. As those of skill in the art will recognize, the specific criteria for being categorized in either the good or bad performance definitions may vary greatly and may consider any available data, such as data indicating previous bankruptcy, demographic data, and default accounts associated with an individual, for example” (Robida Pgh. [0044]) and “As described in further detail below, generation of a model using data related to a certain subpopulation of all individuals received may advantageously be used to predict certain characteristics of even individuals outside the subpopulation used in development of the model. In particular, described below are exemplary systems and methods for generating a model for segmenting individuals based on whether the individual is more likely to default on one or more financial instruments, or whether the individual is more likely to file for bankruptcy. 
Thus, the model is generated by comparing individuals that are associated with default accounts and/or bankruptcy during the outcome period, which are each individuals classified in the bad performance definition. However, although the model is generated using only individuals that fit the bad performance definition, the generated model is used to segment individuals that do not fit the bad performance definition. For example, the model may be applied to individuals that are not associated with default accounts or bankruptcy observed during the outcome period. By applying a model generated from a first subgroup of a population (for example, bad performance definition individuals) to a second subgroup of the population (for example, any individuals, include good and bad performance definition individuals), certain attributes of the first subgroup are usable to predict risk characteristics of the second subgroup that may not be detectable using a traditional model” Robida Pgh. [0051]); and
generating a prediction related to the at least one financial simulation based on an output of the ML model (“Moving to a block 1320, a default/bankruptcy profile model as to whether an individual is more likely to default or go bankrupt is developed. The model developed by the computing system 100 in block 1320 may be applied to individuals in order to predict whether an individual is more likely to file for bankruptcy or to have a default account. In one embodiment, the model may also predict that there is a similar likelihood that the individual either declares bankruptcy or as a default account” Robida Pgh. [0083]).
Robida further teaches a non-transitory computer readable medium comprising instructions, a processing device, a memory device and a processor, coupled to the memory device at paragraphs [0033]-[0034].
Robida does not teach; however Kornegay teaches:
receiving, by a computer, a request to perform at least one financial simulation of a financial profile pertaining to a consumer, wherein the request includes metadata that is required to perform the at least one financial simulation (“In one embodiment, the credit data server 130 holds credit bureau data. In this embodiment, the consumer provides information sufficient to identify himself (hereinafter “consumer identification data”). In one embodiment, the consumer identification data includes the consumer's name, social security number, current address, or any other distinguishing data for ensuring that the correct credit data is associated with the consumer. In a preferred embodiment, the consumer identification data needs to be entered only once and is stored in a profile on the system 101. During subsequent uses, the consumer signs into the system 101 to have the profile provide the consumer identification data. The profile would be provided to only those users who possess the correct security credentials, such as passwords or access cards. In this embodiment, the profile would be stored on a computer-readable medium. The computer-readable medium could be stored in the client terminal 120, in the simulator server 110, or elsewhere in system 101” Kornegay Column 5 Lines 46-63);
It would have been obvious to one of ordinary skill in the art to have modified Robida’s teachings to incorporate Kornegay’s teachings in order “to interactively explore his credit score by submitting hypothetical values based on his actual credit data” (Kornegay Abstract).
Robida does not teach; however Mathews teaches:
creating a feature array structured for ML input from the aggregated credit data; submitting the feature array to the ML model, the ML model having been trained using credit profiles of a plurality of consumers (“Responsive to determining the set of explanatory models 130 to use to generate explanations for the machine learning model 128, the model manager 118 can use the determined set of explanatory models 130 to generate explanations based on subsequent outputs by the machine learning model 128. For example, the model selection server 102 can receive a request for a credit score for an account from the user device 104. Responsive to receiving the request, the model manager 118 can generate a feature vector of transaction data for the account and/or account data of the account and identify the machine learning model 128 configured to generate credit scores. The model manager 118 can execute the machine learning model 128 using the feature vector as input. The machine learning model 128 can output a classification data point (e.g., second classification data point), such as a credit score for the account, based on the execution. The model manager 118 can apply the determined set of explanatory models 130 to the classification data point to generate an explanation for the classification data point. The communicator 114 can generate a record including the classification data point as well as the explanation for the classification data point and transmit the record to the user device 104. The user device 104 can display the classification data point and the explanation on a user interface. The model selection server 102 can similarly select and use sets of explanatory models 130 for any number of machine learning models” (Mathews Pgh. [0048]) and “At operation 208, the data processing system executes a machine learning model. The data processing system can execute the machine learning model using a first set of transaction data. 
The data processing system can execute the machine learning model in response to receiving a request from a client device. For example, the data processing system can receive a request to generate a credit score for an account from a client device. In response to receiving the request, the data processing system can identify the machine learning model from different machine learning models stored in memory based on the machine learning model corresponding to generating credit scores. The data processing system can retrieve transaction data for the account based on an identifier of the account in the request. The data processing system can generate a feature vector from the retrieved transaction and input the feature vector into the machine learning model. The data processing system can execute the machine learning model with the feature vector as input to generate a first classification data point (e.g., a credit score)” Mathews Pgh. [0062]);
It would have been obvious to one of ordinary skill in the art to have modified Robida’s teachings to incorporate Mathews’ teachings in order “to ensure that algorithmic decisions are understandable, accurate, and coherent” (Mathews Pgh. [0001]).
With respect to claims 2, 9 and 16:
Robida does not teach; however Kornegay teaches:
wherein the at least one financial simulation includes determining an impact of at least one action comprising: being denied for a credit product while sustaining a hard credit inquiry, getting a new credit card, getting a new personal loan, making a change in credit card balance or utilization, resolving a negative mark such as a collection, or taking on a new delinquency (“The interactive section 610 a includes several elements relating to selected data elements in the received credit data. As illustrated, these include the total revolving credit card balance owed 630 a, the total revolving credit card limit, the number of inquires from credit applications, and the number of accounts the consumer is currently late in paying. This discussion will focus on the first element 630 a, the total revolving credit card balance owed, when describing the operation of the preferred embodiment of the first pass display 601. It will be apparent to one skilled in the art that the remaining elements in the interactive section 610 a may operate in a similar manner. Likewise, it will be apparent to one skilled in the art that additional or different data elements may be included in the interactive section 610 a” (Kornegay Column 13 Lines 10-23) and “As illustrated, the consumer has adjusted the value of first element 630 c to explore the ramifications of a lower revolving balance owed on his credit cards. To this effect, the graphical slider 640 c has been dragged to the left to select a value of approximately $1,000. Compare this with the graphical slider's original position of approximately $20,000, which is also the position of the initial value indicator 635. As discussed with respect to FIGS. 4 and 5, once the value for the total debt is submitted as modified credit data, the modified score generator 345 recalculates the score and the static data formatter 320 prepares the new position for the second indicator 625 c for display in the user interface 603. 
As discussed above, the recalculation and display of the new score may have occurred in near-real time, reflecting the changes as the consumer dragged the graphical slider, or the recalculation may have required the consumer to affirmatively submit the newly selected modified data. As illustrated, the modified reduction in debt to $1,000 has raised the recalculated score indicated by second indicator 625 c to a higher value than the initial score indicated by indicator 620. Thus by selecting a lower debt value via first element 630 c, the consumer may learn that a score improvement may be had by paying off debt and lowering his actual total revolving credit card balance owed” Kornegay Column 14 Lines 20-42).
It would have been obvious to one of ordinary skill in the art to have modified Robida’s teachings to incorporate Kornegay’s teachings in order “to interactively explore his credit score by submitting hypothetical values based on his actual credit data” (Kornegay Abstract).
With respect to claims 3, 10 and 17:
Robida does not teach; however Kornegay teaches:
wherein the prediction determines both a direction and a magnitude of the user’s credit score change under the at least one action (“As illustrated, the consumer has adjusted the value of first element 630 c to explore the ramifications of a lower revolving balance owed on his credit cards. To this effect, the graphical slider 640 c has been dragged to the left to select a value of approximately $1,000. Compare this with the graphical slider's original position of approximately $20,000, which is also the position of the initial value indicator 635. As discussed with respect to FIGS. 4 and 5, once the value for the total debt is submitted as modified credit data, the modified score generator 345 recalculates the score and the static data formatter 320 prepares the new position for the second indicator 625 c for display in the user interface 603. As discussed above, the recalculation and display of the new score may have occurred in near-real time, reflecting the changes as the consumer dragged the graphical slider, or the recalculation may have required the consumer to affirmatively submit the newly selected modified data. As illustrated, the modified reduction in debt to $1,000 has raised the recalculated score indicated by second indicator 625 c to a higher value than the initial score indicated by indicator 620. Thus by selecting a lower debt value via first element 630 c, the consumer may learn that a score improvement may be had by paying off debt and lowering his actual total revolving credit card balance owed” Kornegay Column 14 Lines 20-42).
It would have been obvious to one of ordinary skill in the art to have modified Robida’s teachings to incorporate Kornegay’s teachings in order “to interactively explore his credit score by submitting hypothetical values based on his actual credit data” (Kornegay Abstract).
With respect to claims 4, 11 and 18:
Mathews teaches:
wherein the feature array comprises a plurality of independent variables used for machine-learning classification or regression (“The model manager 118 can execute the machine learning model 128 again based on the modified input transaction data to generate a revised classification data point. The model manager 118 can determine the prescriptivity metric for the SHAP model based on the difference between the revised classification data point and the initial classification data point” Mathews Pgh. [0039]).
It would have been obvious to one of ordinary skill in the art to have modified Robida’s teachings to incorporate Mathews’ teachings in order “to ensure that algorithmic decisions are understandable, accurate, and coherent” (Mathews Pgh. [0001]).
With respect to claims 5 and 12:
Robida teaches:
wherein the prediction includes a trajectory of a financial condition of the user over a predetermined time period (“Beginning in a block 250, a snapshot of financial and demographic information regarding a plurality of individuals at a particular point in time is received. In the embodiment of FIG. 2A, the observation point is some time previous to the current time and may be expressed generally as T-X, where T is the current time and X is a number of months. In one embodiment, T=the date the profile model is being generated. In this embodiment, if X=25, the observation point is 25 months previous to the date the profile model is being generated. In other embodiments, X may be set to any other time period, such as 6, 12, 18, 36, or 48, for example” Robida Pgh. [0047]).
With respect to claims 6, 13 and 20:
Robida teaches:
wherein the predetermined time period is six months (“Beginning in a block 250, a snapshot of financial and demographic information regarding a plurality of individuals at a particular point in time is received. In the embodiment of FIG. 2A, the observation point is some time previous to the current time and may be expressed generally as T-X, where T is the current time and X is a number of months. In one embodiment, T=the date the profile model is being generated. In this embodiment, if X=25, the observation point is 25 months previous to the date the profile model is being generated. In other embodiments, X may be set to any other time period, such as 6, 12, 18, 36, or 48, for example” Robida Pgh. [0047]).
With respect to claims 7 and 14:
Robida does not teach; however Kornegay teaches:
wherein the predetermined time period is calculated by a difference between a first credit data pull date and a second credit data pull date of the user (“In one embodiment, the credit data server 130 holds credit bureau data. In this embodiment, the consumer provides information sufficient to identify himself (hereinafter “consumer identification data”). In one embodiment, the consumer identification data includes the consumer's name, social security number, current address, or any other distinguishing data for ensuring that the correct credit data is associated with the consumer. In a preferred embodiment, the consumer identification data needs to be entered only once and is stored in a profile on the system 101. During subsequent uses, the consumer signs into the system 101 to have the profile provide the consumer identification data. The profile would be provided to only those users who possess the correct security credentials, such as passwords or access cards. In this embodiment, the profile would be stored on a computer-readable medium. The computer-readable medium could be stored in the client terminal 120, in the simulator server 110, or elsewhere in system 101” (Kornegay Column 5 Lines 46-63) and “The client terminal 120 transmits the consumer identification data to the simulator server 110. The simulator server 110 then transmits the consumer identification data to the credit data server 130. The credit data server 130 then returns credit data corresponding to the particular consumer. While this embodiment enables the consumer to experiment with his own actual credit data, it is not always the preferred embodiment because pulling actual credit data can cost money and adversely affect one's credit rating” (Kornegay Column 5 Line 64 to Column 6 Line 5) and “In yet another embodiment, the credit data server 130 also holds census bureau data, but the consumer provides some information about himself. 
In one embodiment, the information corresponds to the categories of information stored by the census bureau, such as location, age, and profession. The client terminal 120 transmits this information to the simulator server 110. The simulator server 110 then transmits the information to the credit data server 130. The credit data server 130 then returns credit data based on the information. This embodiment allows the credit data returned to be more relevant to the specific consumer while not experiencing the disadvantages of pulling the consumer's actual credit report” Kornegay Column 6 Lines 14-25).
It would have been obvious to one of ordinary skill in the art to have modified Robida’s teachings to incorporate Kornegay’s teachings in order “to interactively explore his credit score by submitting hypothetical values based on his actual credit data” (Kornegay Abstract).
With respect to claim 19:
Robida does not teach; however Kornegay teaches:
wherein the prediction includes a trajectory of a financial condition of the user over a predetermined time period, wherein the predetermined time period is calculated by a difference between a first credit data pull date and a second credit data pull date of the user (“In a preferred embodiment, the simulator server 110 communicates with the client terminal 120 via a web page. This web page is served by the simulator server 110 and typically runs on the client terminal 120. The web page allows for user entry of the consumer identification data, the modified data, and the additional consumer information. The client terminal 120 also receives the simulated score from the simulator server 110 and displays it on the web page. In a preferred embodiment, the entry of the modified data is interactive, allowing the simulated score to be recalculated dynamically by the simulator server 110 responsive to changes in the initial data, without requiring the user to actually “submit” the changes to the simulator server 110” Kornegay Column 6 Lines 14-25).
It would have been obvious to one of ordinary skill in the art to have modified Robida’s teachings to incorporate Kornegay’s teachings in order “to interactively explore his credit score by submitting hypothetical values based on his actual credit data” (Kornegay Abstract).
Response to Arguments
Applicant's arguments filed 11/25/25 have been fully considered but they are not persuasive.
35 USC § 101
The Applicant argues that the claim is not directed to a fundamental economic practice – specifically “mitigating risk” – and therefore does not fall within the abstract idea grouping of certain methods of organizing human activity (page 6). The Examiner disagrees, because the claims amount to an improvement of the abstract idea only. They present a business solution to the business problem of making financial predictions. The Applicant has not shown how the claims improve a computer or other technology, invoke a particular machine, transform matter, or provide more than a general link between the abstraction and the technology. See MPEP 2106.05(a)-(c) and (e). The Examiner also disagrees with the Applicant’s argument regarding whether the additional claim elements are “well-understood, routine, and conventional” (page 9). The claims do not provide an improvement over prior systems; they only add details to the abstract idea and merely apply the abstract idea on a general-purpose computer. The feature array of the amendments merely applies the machine learning to new data and makes the abstract idea more specific. Making financial predictions is not an unconventional activity. This is not an inventive concept and does not amount to significantly more.
35 USC § 112
The Applicant’s arguments and amendments overcome the 35 U.S.C. 112(b) rejections; the rejections are therefore moot.
35 USC § 103
The amended claim language is taught in the references of record as indicated above in the Office action.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARLA HUDSON whose telephone number is (571)272-1063. The examiner can normally be reached M-F 9:30 a.m. - 5:30 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bennett Sigmond can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.H./Examiner, Art Unit 3694
/BENNETT M SIGMOND/Supervisory Patent Examiner, Art Unit 3694