Detailed Action
This office action is in response to applicant’s submission filed on September 25, 2024.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings filed on September 25, 2024 have been accepted.
Specification
The specification filed on September 25, 2024 has been accepted.
Claim Objections
Claims 1 and 6 are objected to because of the following informalities:
Claim 1, line 6, “network.” should read “network;”.
Claim 6, line 9, “network” should read “network;”.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a simulation module”, “a distribution module”, “an input module”, and “a data generation module” in claims 1-5.
A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, limitations: paragraphs [0015] to [0017] and [0032].
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2017/0309193 A1 to Joseph et al. (hereinafter, “Joseph”).
Regarding claim 1, Joseph discloses: A system for conducting enhanced Red Teaming exercises across a network, comprising:
a computer platform configured to provide (“computer-mediated environment” [0002]):
a simulation module that iteratively simulates a specified scenario using predetermined variations in experimental conditions or predefined data (“If a Decision-Maker is required to analyze problems with unknowable outcomes, the Impute component 150 directs a Decision-Maker through an Impute B sub-section, where the Impute component 150 provides a collaborative environment including one or more graphical user interface(s) in which a Decision-Maker works and interacts directly with the Impute Crowd, as opposed to Impute A where Decision-Maker and Impute crowds do not interact directly. The collaborative environment provided by the Impute component 150 allows a Decision-Maker and the Impute Crowd to first gather further evidence (e.g., data/information) as a team in a Base Rate and Image Check, and then to process information-gathering through a taxonomy of Political, Economic, Security, and Socio-Cultural forces. The objective of the Base Rate section is to determine a prior rate of occurrence of the analytical problem being analyzed. Impute B Crowd participants led by Impute Crowd Moderator then generate via the Impute component 150 a curated wiki template in the graphical user interface(s) that guides them through discussion to determine the best response as a group, along with reasoning, possible biases and confidence ratings before recording the final response that concludes Impute B workflow process. If scenarios are required to conclude the workflow process, Impute B participants as an integrated crowd led by Impute Crowd Moderator generate scenarios in the graphical user interface(s) with drivers, matrix and scenario canvas templates, the latter of which is associated with confidence ratings that serve as a final response in the Impute phase if scenarios or similar narrative and graphic responses are required” [0071] [Examiner notes that the system (Impute component 150) executes the scenario workflow, so it is simulating the scenario. The workflow proceeds in steps (templates, discussion, etc.), which is iterative. It uses predefined templates (pre-set variations), and the outputs are sent to participants via the GUI]),
an iterative simulation being sent as a predefined dataset to participants across the network (“After using Base Rate and Image Check, the users in Impute A will receive a prompt via the graphical user interface(s) to extend analysis through Impute B (if applicable), which contains curated news-gathering databases and taxonomies (The Five Ws and The Four Forces), moderated wikis, as well as an option to generate deliverable outputs like scenarios. A key difference between Impute A and Impute B is that when Decision-Makers enter Impute B, they no longer remain in an individual setting: Decision-Makers are first guided to gather information following the rules of The Five Ws (which consists of a database of curated links from which users may gather more granular information) and The Four Forces (which contextualizes information according to a basic taxonomy consisting of Political, Security/Defense, Economic and Socio-Cultural categories), then they join members of an Impute Crowd within the moderated wiki of the graphical user interface(s). Within the wiki, the Decision-Makers and crowd members can follow the wiki template to arrive at a more thorough analysis of the unconstrained problem. The wiki template can allow users to share the additional information they have gathered in the collaborative data space and which can be subsequently committed to a database associated with the problem to being analyzed and made available to the Refute Crowd after the Impute phase is complete. At its conclusion, the wiki can prompt the Impute Crowd and Decision Makers to deliver a hypothesis, along with individual and group estimate and confidence ratings” [0073] [Examiner notes that the participants (Decision-Makers and Crowds) interact with templates, curated databases, and guided tasks to produce hypotheses, confidence scores, and analysis. This workflow simulates the testing of decisions and scenarios, because the participants are effectively running the scenario step by step]);
a distribution module for providing distributed, generated initial conditions, scenario parameters, background scenario information, debiasing tools and data artifacts to one or more participants across a network (“After using Base Rate and Image Check, the users in Impute A will receive a prompt via the graphical user interface(s) to extend analysis through Impute B (if applicable), which contains curated news-gathering databases and taxonomies (The Five Ws and The Four Forces), moderated wikis, as well as an option to generate deliverable outputs like scenarios. A key difference between Impute A and Impute B is that when Decision-Makers enter Impute B, they no longer remain in an individual setting: Decision-Makers are first guided to gather information following the rules of The Five Ws (which consists of a database of curated links from which users may gather more granular information) and The Four Forces (which contextualizes information according to a basic taxonomy consisting of Political, Security/Defense, Economic and Socio-Cultural categories), then they join members of an Impute Crowd within the moderated wiki of the graphical user interface(s). Within the wiki, the Decision-Makers and crowd members can follow the wiki template to arrive at a more thorough analysis of the unconstrained problem. The wiki template can allow users to share the additional information they have gathered in the collaborative data space and which can be subsequently committed to a database associated with the problem to being analyzed and made available to the Refute Crowd after the Impute phase is complete. At its conclusion, the wiki can prompt the Impute Crowd and Decision Makers to deliver a hypothesis, along with individual and group estimate and confidence ratings... Before the Decision-Makers are sent back into Impute phase, the Decision-Makers can be prompted by the system 10 to select and review De-Biasing Training individual modules in the De-Biasing Training component 130 if they wish” [0073-0074]; “It is still further an object of the present invention to provide a computerized process for a bias-sensitive crowd-sourced structured analytical technique that is quicker and more efficient to use asynchronously and over application environments including but not limited to those of smartphones. Embodiments of the computerized process can utilize one or more databases as a basis for forming a collaborative data space in which Decision-Makers, Impute blue-team members, and Refute red-team members can interact and utilize the computerized process. The computerized process can control access to and modification of the collaborative data space, and in turn, the one or more databases to ensure data integrity and consistency across simultaneous users of the computerized process as well as for the asynchronous use of the data space. The data space can provide create, read, update, and delete functions that can be utilized for data integrity and consistency and can use timed parameters to drive the simultaneous and asynchronous use of the collaborative data space for committing data in the collaborative data space to one or more databases” [0038] [Examiner notes that the predefined structured inputs provided to the participants at the start of the Impute phase show the distribution of initial conditions and scenario parameters.
The background scenario information includes the moderated wikis/templates, showing that participants receive contextual information to guide their analysis (shared scenario information). Participants can also access debiasing training modules during iterative cycles, and multiple participants access the same data in a shared, distributed environment (Refute Crowd). Since the users can interact via client-side applications over a communication network, this shows that the system can provide these items to those participants across a network. Examiner also notes that these items are provided because the system makes the data space available to users by granting controlled access. The scenario parameters are seen in the “state”, “data”, and “parameters” that define how the scenario begins and operates. The initial conditions arise from the databases used as the basis of the collaborative space, which are pre-existing starting materials. The base rate (prior probability information that the participants initially need) and the image check both define the starting state for the scenario and are delivered via the GUI to the participants. The work generated in the wiki is stored and passed to other phases (data artifacts)]);
an input module that provides a user interface for structured data gathering from participant input, the input module provided to one or more participants through the distribution module, the input module further selectively receiving participant data from the network (“If a Decision-Maker is required to analyze problems with unknowable outcomes, the Impute component 150 directs a Decision-Maker through an Impute B sub-section, where the Impute component 150 provides a collaborative environment including one or more graphical user interface(s) in which a Decision-Maker works and interacts directly with the Impute Crowd, as opposed to Impute A where Decision-Maker and Impute crowds do not interact directly. The collaborative environment provided by the Impute component 150 allows a Decision-Maker and the Impute Crowd to first gather further evidence (e.g., data/information) as a team in a Base Rate and Image Check, and then to process information-gathering through a taxonomy of Political, Economic, Security, and Socio-Cultural forces. The objective of the Base Rate section is to determine a prior rate of occurrence of the analytical problem being analyzed. Impute B Crowd participants led by Impute Crowd Moderator then generate via the Impute component 150 a curated wiki template in the graphical user interface(s) that guides them through discussion to determine the best response as a group, along with reasoning, possible biases and confidence ratings before recording the final response that concludes Impute B workflow process. If scenarios are required to conclude the workflow process, Impute B participants as an integrated crowd led by Impute Crowd Moderator generate scenarios in the graphical user interface(s) with drivers, matrix and scenario canvas templates, the latter of which is associated with confidence ratings that serve as a final response in the Impute phase if scenarios or similar narrative and graphic responses are required” [0071] [Examiner notes that the input module is the Impute component 150, which provides a user interface in which participants submit structured inputs (hypotheses, confidence ratings, etc.). The text describes Decision-Makers and Impute Crowds interacting via the GUI, which is functionally distributed across networked participants. The wiki collects all participant inputs (individual and group responses) to record the final response, which is later available to the Refute Crowd]); and
a data generation module for generating empirical results data from the participant data gathered from the input module (“The generated results of both Impute A and Impute B are sent onwards to Refute phase, in which a different dedicated crowd acts as a red-team devil's advocate on the results generated from the Impute sections. The Refute team assesses the quality of thinking and analysis done by both Decision-Makers and Crowds in Impute by establishing: 1. Information Gaps & Vulnerable Assumptions; 2. Unobserved Norms And Protocols That Will Affect The Answer; 3. Wishful Thinking In Analysis; and 4. Biases And Poor Metacognitive Practice. The Refute team records its own hypothesis and confidence percentages, both at the individual and group levels, which is sent to Decision-Maker prior to recording of the final response and the conclusion of the workflow process. If the Refute team's hypothesis and confidence ratings are within a threshold of agreement with the Impute analysis, the Decision-Makers are prompted to record a final answer to the problem, with final confidence ratings. If the Refute team's hypothesis and confidence scoring is outside a threshold of agreement with the Impute analysis, the Decision-Makers are prevented from recording a final response and are directed to start the Impute process one more time to review information and thinking and revise their scores if required. Before the Decision Makers are sent back into Impute phase, the Decision-Makers can be prompted to select and review De-Biasing Training individual modules if they wish. Their Metacognition Scores will be updated if they choose to review any modules. Once they pass through De-Biasing Training, the Decision-Makers make a second and final pass through the Impute phase, this time without the Impute Crowd, before recording a final answer. The second pass does not go through Refute a second time” [0020]; “All users can receive scoring feedback comprised of a combination of three measures: accuracy; impact; and rigor (collectively referred to by the acronym AIR)” [0022] [Examiner notes that these texts show participant inputs being processed to calculate AIR scores, which are quantitative outputs based on participant input. The scoring uses the structured input data collected from participants, and the final scores quantify performance for future decision-making and iterative improvement]).
Claim 6 recites substantially the same limitations as claim 1, in the form of a non-transitory computer readable medium comprising computer readable program code for implementing the corresponding system; it is therefore rejected under the same rationale.
Claim 13 recites substantially the same limitations as claim 1, in the form of a computer-implemented method for implementing the corresponding system; it is therefore rejected under the same rationale.
Regarding claims 2, 7, and 14, Joseph discloses: wherein the distribution module is further configured to provide the simulation dataset for Red Team testing to one or more participants asynchronously (“It is still further an object of the present invention to provide a computerized process for a bias-sensitive crowd-sourced structured analytical technique that is quicker and more efficient to use asynchronously and over application environments including but not limited to those of smartphones. Embodiments of the computerized process can utilize one or more databases as a basis for forming a collaborative data space in which Decision-Makers, Impute blue-team members, and Refute red-team members can interact and utilize the computerized process. The computerized process can control access to and modification of the collaborative data space, and in turn, the one or more databases to ensure data integrity and consistency across simultaneous users of the computerized process as well as for the asynchronous use of the data space. The data space can provide create, read, update, and delete functions that can be utilized for data integrity and consistency and can use timed parameters to drive the simultaneous and asynchronous use of the collaborative data space for committing data in the collaborative data space to one or more databases” [0038] [Examiner notes that the data space is the simulation dataset provided to participants; Red Team participants use this dataset asynchronously to perform their analysis/testing]).
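For illustration only, the following is a minimal Python sketch of the kind of timed, asynchronous collaborative data space with create, read, update, and delete functions described in [0038]; the class name, the commit_window_s parameter, and the in-memory storage are hypothetical conveniences, not structures disclosed by Joseph:

    import time

    class CollaborativeDataSpace:
        """Hypothetical in-memory data space offering create, read,
        update, and delete functions, with a timed commit window standing
        in for the 'timed parameters' that Joseph [0038] describes as
        driving simultaneous and asynchronous use of the space."""

        def __init__(self, commit_window_s: float):
            self._records: dict[str, str] = {}
            self._opened_at = time.monotonic()
            self._commit_window_s = commit_window_s

        def _window_open(self) -> bool:
            # Writes are accepted only while the timed window is open.
            return time.monotonic() - self._opened_at < self._commit_window_s

        def create(self, key: str, value: str) -> bool:
            if not self._window_open() or key in self._records:
                return False
            self._records[key] = value
            return True

        def read(self, key: str) -> str | None:
            # Reads remain available for asynchronous review.
            return self._records.get(key)

        def update(self, key: str, value: str) -> bool:
            if not self._window_open() or key not in self._records:
                return False
            self._records[key] = value
            return True

        def delete(self, key: str) -> bool:
            return self._window_open() and self._records.pop(key, None) is not None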
Regarding claims 3, 8, and 15, Joseph discloses: wherein the input module further provides at least one standardized input in the user interface (“If a Decision-Maker is required to analyze problems with unknowable outcomes, the Impute component 150 directs a Decision-Maker through an Impute B sub-section, where the Impute component 150 provides a collaborative environment including one or more graphical user interface(s) in which a Decision-Maker works and interacts directly with the Impute Crowd, as opposed to Impute A where Decision-Maker and Impute crowds do not interact directly. The collaborative environment provided by the Impute component 150 allows a Decision-Maker and the Impute Crowd to first gather further evidence (e.g., data/information) as a team in a Base Rate and Image Check, and then to process information-gathering through a taxonomy of Political, Economic, Security, and Socio-Cultural forces. The objective of the Base Rate section is to determine a prior rate of occurrence of the analytical problem being analyzed. Impute B Crowd participants led by Impute Crowd Moderator then generate via the Impute component 150 a curated wiki template in the graphical user interface(s) that guides them through discussion to determine the best response as a group, along with reasoning, possible biases and confidence ratings before recording the final response that concludes Impute B workflow process. If scenarios are required to conclude the workflow process, Impute B participants as an integrated crowd led by Impute Crowd Moderator generate scenarios in the graphical user interface(s) with drivers, matrix and scenario canvas templates, the latter of which is associated with confidence ratings that serve as a final response in the Impute phase if scenarios or similar narrative and graphic responses are required” [0071] [Examiner notes that the wiki template collects structured participant inputs, such as group responses, reasoning, and confidence ratings. The curated wiki template provides a pre-defined structure with standardized fields into which participants enter their data, ensuring that all participants submit data in the same format. Participants interact with these standardized input fields via a GUI]).
Regarding claims 4, 9, and 16, Joseph discloses: wherein the input module further selectively requests further input from the one or more participants (“The Refute Crowd records its own hypothesis and confidence ratings in the collaborative data space via the graphical user interface(s) in the Refute component 160, both at the individual and team level, and the Refute component 160 sends the hypothesis and confidence ratings to the Decision-Maker prior to recording the Decision-Maker's final response in the collaborative data space and concluding the workflow process. If the Refute Crowd's hypothesis and confidence ratings are within a threshold of agreement with the analysis and results from the Impute phase, the Refute component 160 can prompt the Decision-Maker via the graphical user interface(s) to record a final answer to the analytical problem, with final confidence rating in the collaborative data space. If the Refute Crowd's hypothesis and confidence rating is outside a threshold of agreement with the Impute analysis, the Refute component 160 prevents the Decision-Maker from recording a final response in the collaborative data space and directs Decision-Makers back to the Impute component 160 to start the Impute process one more time to review information and thinking and revise their scores” [0074] [Examiner notes that participants enter their initial inputs, and if the Refute Crowd disagrees beyond a threshold, the Decision-Makers are prompted to revise and/or provide additional input. The system does not ask everyone for more input automatically; it selectively requests it based on disagreement or analysis gaps]).
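A minimal sketch of the threshold-of-agreement gate described in [0074], assuming confidence ratings are expressed as probabilities in [0, 1] and that agreement is measured as an absolute difference (Joseph does not specify the metric; the function name and the 0.2 default are hypothetical):

    def may_record_final_response(impute_confidence: float,
                                  refute_confidence: float,
                                  threshold: float = 0.2) -> bool:
        """True when the Refute Crowd's confidence is within the threshold
        of agreement with the Impute analysis, so the Decision-Maker may
        record a final response; False directs the Decision-Maker back to
        the Impute phase for further input."""
        return abs(impute_confidence - refute_confidence) <= threshold

    # 0.75 vs. 0.60 differ by 0.15, inside a 0.2 threshold: record final answer.
    assert may_record_final_response(0.75, 0.60)
    # 0.75 vs. 0.40 differ by 0.35, outside the threshold: request further input.
    assert not may_record_final_response(0.75, 0.40)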
Regarding claims 5, 12, and 18, Joseph discloses: wherein the simulation module further selectively alters a number of the one or more participants for the simulation dataset for Red Team testing (“FIG. 18 is a flowchart illustrating an example process 1800 for controlling access provisions in a multi-user, collaborative computer environment. In operation 1802, a collaborative data space that is simultaneously accessible by users over a communication network is established by one or more servers. In operation 1804, at least a subset of the users is dynamically clustered into groups. In operation 1806, access provisions to content in the collaborative data space are controlled in a first access phase to grant a first group of the users access to the content and to deny a second group of the users access to the content. In operation 1808, input in the collaborative data space is received from the first group of the users to collaboratively modify the content of the data space in the first access phase. In operation 1810, requests to access the content of the collaborative data space from the second group of the users is denied in the first access phase. In operation 1812, the access provisions to the content in the collaborative data space is dynamically modified in a second access phase to deny the first group of the users access to the content and to grant a second group of the users access to the content in response to a configurable amount of time that has elapsed since the first group of users was provided access to the content of the collaborative data space, or in response to an action of at least one of the users in the first group and after collaborative modification to the content of the collaborative data space is committed to a database” [0143] [Examiner notes that dynamic grouping and phased access control selectively change which users can interact with the simulation dataset]).
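For illustration, a minimal sketch of the phased access provisions of process 1800 (operations 1806-1812), under the assumption that access alternates between exactly two groups after a configurable elapsed time; all identifiers are hypothetical:

    import time

    class PhasedAccessController:
        """Grants content access to one group per phase and swaps the
        granted group once a configurable amount of time has elapsed,
        loosely following operations 1806-1812 of FIG. 18."""

        def __init__(self, first_group: set, second_group: set,
                     phase_seconds: float):
            self._groups = (first_group, second_group)
            self._phase_seconds = phase_seconds
            self._started = time.monotonic()

        def may_access(self, user: str) -> bool:
            # Phase 0 grants the first group; after phase_seconds elapse,
            # phase 1 grants the second group and denies the first.
            elapsed = time.monotonic() - self._started
            phase = 0 if elapsed < self._phase_seconds else 1
            return user in self._groups[phase]

    # In the first phase only the first group may access the content.
    ctrl = PhasedAccessController({"alice"}, {"bob"}, phase_seconds=60.0)
    assert ctrl.may_access("alice") and not ctrl.may_access("bob")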
Regarding claims 10 and 17, Joseph discloses: compare one or more external datasets with the gathered participant data (“All users can receive scoring feedback comprised of a combination of three measures: accuracy; impact; and rigor (collectively referred to by the acronym AIR). First, accuracy measures (AIR 1) are based on the frequency of reporting correct answers and the confidence ratings reported for correct versus incorrect answers. Accuracy measures are based on the Brier score (Brier, G. W. (1950). Verification of forecasts expressed in terms of probability. Monthly weather review, 78(1), 1-3.), a proper scoring rule, adapted to accommodate problems without pre-specified answer options (i.e. open-ended problems). Second, Impute and Refute teams receive feedback on the impact of their analyses upon the accuracy of the Decision-Makers (AIR 2). Because the Decision-Makers are asked to provide hypotheses and confidence ratings independently at first, then update them after having reviewed input from Impute and Refute teams, the exemplary embodiments can assess incremental accuracy improvements before versus after reviewing the work of each team. This allows the system to measure impact, which is based on the change in Decision-Maker accuracy. Third, the Decision-Makers can assess the rigor of the hypotheses and rationales produced by independent (pre-deliberation) users from Impute and Refute teams (one rating per individual), as well as the team-based hypotheses and rationales (one rating per team) (AIR 3). The Decision-Maker rigor ratings are expressed on a 5-point scale, based on the following criteria: 1. Prior Experience/Expertise, 2. Insight, 3. Independence, 4. Cogency of Reasoning, and 5. Persuasiveness. The responder can choose to respond to the five items separately, provide a single holistic evaluation, or both. All scores are made available after participants have completed their work and the analytical problems are closed. The first and second measures (AIR 1 and AIR 2) are available for problems with knowable correct answers, after the answers become known. Only the third measure (rigor) is used for open-ended problems with unknowable answers. For such problems, a user can be asked to define up to four answer options. The system administrator validates the answer options and can request edits from participants” [0022] [Examiner notes that the correct answers act as external reference data; participant input is compared against them, producing quantitative results. This is being interpreted as comparing one or more external datasets with the gathered participant data]).
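The accuracy measure (AIR 1) is based on the Brier score of the cited Brier (1950) reference. For binary problems with forecast probabilities f_i and realized outcomes o_i in {0, 1}, the score is BS = (1/N) * sum((f_i - o_i)^2). A minimal sketch follows; the adaptation for open-ended problems described in [0022] is not reproduced:

    def brier_score(forecasts: list, outcomes: list) -> float:
        """Mean squared difference between forecast probabilities and
        realized binary outcomes; 0.0 is perfect, lower is better."""
        if len(forecasts) != len(outcomes):
            raise ValueError("forecasts and outcomes must align")
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Confident correct forecast (0.9 -> outcome 1) and hedged incorrect
    # forecast (0.6 -> outcome 0): BS = (0.01 + 0.36) / 2 = 0.185.
    assert abs(brier_score([0.9, 0.6], [1, 0]) - 0.185) < 1e-9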
Claim 11 recites substantially the same limitations as claims 9 and 10, in the form of a computer readable medium; it is therefore rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
Segal (US 2018/0219903 A1) teaches methods and systems for carrying out campaigns of penetration testing for discovering and reporting security vulnerabilities of a networked system, the networked system comprising a plurality of network nodes interconnected by one or more networks.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARON MATTHEWOS WORKU whose telephone number is (703)756-1761. The examiner can normally be reached Monday - Friday, 9:30 am - 6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Linglan Edwards can be reached on 571-270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARON MATTHEWOS WORKU/Examiner, Art Unit 2408
/LINGLAN EDWARDS/Supervisory Patent Examiner, Art Unit 2408