DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/2025 has been entered.
Claims 1-20 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments - 35 USC § 101
Applicant's arguments filed 12/22/2025 have been fully considered but they are not persuasive.
Applicant argues generally on pages 11-16 that a human mind cannot perform the step of presenting design elements as a recommendation to a designer, and that the claim amendments include a practical application by improving the functioning of a user experience design system.
First, it is unclear why a human could not present design elements as a recommendation to a designer using pen and paper. Presenting such a recommendation is clearly achievable as a mental process performed with the aid of pen and paper.
Second, the amended limitation recites “adjusting the one or more next design elements selected by the first designer to resolve user interface issues for the design build determined with the learning model based on a version of the design build for a second designer”. As there were no user interface “issues” recited previously it is unclear what it would mean for “issues” to be “resolved”. The language is so broad as to have no patentable weight or meaning. In addition, any resolution of any so-called “issues” is merely the intended use of the adjusted design elements. The limitation merely recites an adjustment of design elements which can clearly be performed mentally with the use of pen and paper with no recited practical application.
The rejection has been updated to reflect the amended claim language.
Response to Arguments - 35 USC § 103
Applicant’s arguments with respect to amendments have been fully considered and are not persuasive.
The amended claim language requires rejections under 35 USC 112(b). Any application of prior art is the Examiner's best interpretation of the claimed subject matter. See below.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 8 and 14 are amended to recite “determining, by the one or more processors, a selection decision made by the first designer of the one or more next elements from the recommendation as a design build” and “adjusting the one or more next design elements selected by the first designer to resolve user interface issues for the design build determined with the learning model based on a version of the design build for a second designer”. These amended limitations taken as a whole are unclear. It is unclear how “a selection decision” is a “design build”. No learning model is recited for the selection decision. The phrase “the design build determined with the learning model” lacks antecedent basis because previously only the selection decision was the design build. Additionally, only the prediction of elements was performed by a learning model, not the design build.
The second limitation is a confusing run-on phrase, and it is unclear what “based on” modifies. Is the “adjusting” based on the version for the second designer? Are the user interface issues based on the version for the second designer? Is the design build based on the version for the second designer? Any application of prior art is the Examiner's best interpretation of the claim language.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. To determine if a claim is directed to patent ineligible subject matter, the Court has guided the Office to apply the Alice/Mayo test, which requires:
1. Determining if the claim falls within a statutory category;
2A. Determining if the claim is directed to a patent ineligible judicial exception consisting of a law of nature, a natural phenomenon, or an abstract idea; and
2B. If the claim is directed to a judicial exception, determining if the claim recites limitations or elements that amount to significantly more than the judicial exception. (See MPEP 2106).
Step 1: With respect to claims 1-20, applying step 1, the preambles of independent claims 1, 8 and 14 recite a method, a computer program product, and a computer system. As such, these claims fall within the statutory categories of process, article of manufacture, and machine.
Step 2A, prong one: In order to apply step 2A, a recitation of claim 1 is copied below. The limitations of the claim that describe an abstract idea are bolded.
A method for facilitating generation of a user interface design, the method comprising:
identifying, by one or more processors, one or more elements added to a screen design by a first designer (mental process – observation, evaluation, judgement, opinion);
predicting, by the one or more processors, one or more next elements of the screen design, based on the identified one or more elements added to the screen design (mental process – observation, evaluation, judgement, opinion) and a learning model trained by machine learning techniques using a convolutional neural network;
presenting, by the one or more processors, the one or more next design elements that are visual elements selectable on a design artboard for a design file as a recommendation to the first designer (mental process – observation, evaluation, judgement, opinion);
determining, by the one or more processors, a selection decision made by the first designer of the one or more next elements from the recommendation as a design build (mental process – observation, evaluation, judgement, opinion);
adjusting the one or more next design elements selected by the first designer to resolve user interface issues for the design build determined with the learning model based on a version of the design build for a second designer (mental process with pen/paper – observation, evaluation, judgement, opinion); and
updating, by the one or more processors, the learning model, based on selection decisions made by the first designer (mental process – observation, evaluation, judgement, opinion).
The limitations as analyzed include concepts directed to the "mental process" groupings of abstract ideas performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III). The claim involves identifying, predicting, presenting, determining, adjusting and updating. The steps are broadly claimed and simple enough that they could be performed mentally, or with pen and paper by drawing and updating the design elements. Thus, the limitations noted above fall into the "mental process" groupings of abstract ideas.
Step 2A, prong two: Under step 2A prong two, this judicial exception is not integrated into a practical application because the additional claim limitations outside the abstract idea only present generic computing components. In particular, the claim recites the additional limitations: “identifying, by one or more processors” (generic computing components merely carrying out the abstract idea - see MPEP § 2106.05(f) and (b)), “predicting, by the one or more processors” (generic computing components merely carrying out the abstract idea - see MPEP § 2106.05(f) and (b)), “predicting…based on… a learning model trained by machine learning techniques using a convolutional neural network” (generic computing components merely carrying out the abstract idea - see MPEP § 2106.05(f) and (b)), “presenting, by the one or more processors” (generic computing components merely carrying out the abstract idea - see MPEP § 2106.05(f) and (b)), “determining, by the one or more processors” (generic computing components merely carrying out the abstract idea - see MPEP § 2106.05(f) and (b)), “updating, by the one or more processors” (generic computing components merely carrying out the abstract idea - see MPEP § 2106.05(f) and (b)).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B: Moving on to step 2B of the analysis, the Examiner must consider whether each claim limitation individually or as an ordered combination amounts to significantly more than the abstract idea. This analysis includes determining whether an inventive concept is furnished by an element or a combination of elements that is beyond the judicial exception. For limitations that were categorized as "apply it" or as generally linking the use of the abstract idea to a particular technological environment or field of use, the analysis is the same. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional limitations are considered directed towards a field of use and generic computer components carrying out the abstract idea. See MPEP 2106.04(d) referencing MPEP 2106.05(h).
For the foregoing reasons, claim 1 is directed to an abstract idea without significantly more, and is rejected as not patent eligible under 35 U.S.C. 101. Independent claims 8 and 14 are directed to substantially the same subject matter as independent claim 1 and are rejected under similar rationale and further failure to add significantly more. The same conclusion is reached for the dependent claims.
Claims 2, 9 and 15 are further directed to the "Mathematical Concepts" grouping of abstract ideas (including mathematical relationships, mathematical formulas or equations, mathematical calculations) (see MPEP § 2106.04(a)(2), subsection I). The claims recite wherein the machine learning model is trained. This judicial exception is not integrated into a practical application because the additional claim limitations outside the abstract idea only present generic computing components. In particular, the claims recite the additional limitations: “The computer program product”, “program instructions”, and “The computer system”. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional limitations are considered directed towards generic computer components carrying out the abstract idea.
Claims 3, 10 and 16 are further directed to the "mental process" groupings of abstract ideas performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III). The claims involve monitoring, determining and presenting. The steps are broadly claimed and simple enough that they could be performed mentally, or with pen and paper by drawing the design. Thus, the limitations noted above fall into the "mental process" groupings of abstract ideas. This judicial exception is not integrated into a practical application because the additional claim limitations outside the abstract idea only present generic computing components. In particular, the claims recite the additional limitations: “the one or more processors”, “The computer program product”, “program instructions”, and “The computer system”. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional limitations are considered directed towards generic computer components carrying out the abstract idea.
Claims 4, 11, and 17 are further directed to the "mental process" groupings of abstract ideas performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III). The claims involve determining and adjusting. The steps are broadly claimed and simple enough that they could be performed mentally, or with pen and paper by drawing the design. Thus, the limitations noted above fall into the "mental process" groupings of abstract ideas. This judicial exception is not integrated into a practical application because the additional claim limitations outside the abstract idea only present generic computing components or insignificant extra-solution activity. In particular, the claims recite the additional limitations: “receiving” and “distributing” (insignificant extra-solution activity - mere data gathering/output, MPEP 2106.05(g)) and “the one or more processors”, “The computer program product”, “program instructions”, and “The computer system”. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional limitations are considered directed towards generic computer components carrying out the abstract idea and insignificant extra-solution activity.
Claims 5, 12 and 18 are further directed to the "mental process" groupings of abstract ideas performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III). The claims involve a predicted design element that is determined. The steps are broadly claimed and simple enough that they could be performed mentally, or with pen and paper by drawing the design. Thus, the limitations noted above fall into the "mental process" groupings of abstract ideas. This judicial exception is not integrated into a practical application because the additional claim limitations outside the abstract idea only present generic computing components. In particular, the claims recite the additional limitations: “The computer program product”, “program instructions”, and “The computer system”. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional limitations are considered directed towards generic computer components carrying out the abstract idea.
Claims 6, 13 and 19 are further directed to the "mental process" groupings of abstract ideas performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III). The claims involve determining a context and a pattern. The steps are broadly claimed and simple enough that they could be performed mentally, or with pen and paper by drawing the design. Thus, the limitations noted above fall into the "mental process" groupings of abstract ideas. This judicial exception is not integrated into a practical application because the additional claim limitations outside the abstract idea only present generic computing components. In particular, the claims recite the additional limitations: “The computer program product”, “program instructions”, and “The computer system”. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional limitations are considered directed towards generic computer components carrying out the abstract idea.
Claims 7 and 20 are further directed to the "mental process" groupings of abstract ideas performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III). The claims involve identifying and presenting. The steps are broadly claimed and simple enough that they could be performed mentally, or with pen and paper by drawing the design. Thus, the limitations noted above fall into the "mental process" groupings of abstract ideas. This judicial exception is not integrated into a practical application because the additional claim limitations outside the abstract idea only present generic computing components. In particular, the claims recite the additional limitations: “one or more processors”, “The computer program product”, “program instructions”, and “The computer system”. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional limitations are considered directed towards generic computer components carrying out the abstract idea.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5-8, 12-14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0394026 A1 (“Huang”) in view of “Visual Recognition Of Graphical User Interface Components Using Deep Learning Technique” (“Rahmadi”).
Regarding claims 1, 8 and 14, Huang teaches:
A method for facilitating generation of a user interface design (Huang: Abstract), the method comprising:
identifying, by one or more processors, one or more elements added to a screen design by a first designer (Huang: para [0003], “A UX design is received and objects of an input user interface (UI) screen of the UX design are identified. All objects that are identified to be part of a background of the input UI screen are removed to create a filtered input UI screen. ”);
predicting, by the one or more processors, one or more next elements of the screen design, based on the identified one or more elements added to the screen design (Huang: para [0072] “At block 816, the UX design evaluation engine 103 uses the sequence of the input clusters as an input to a deep learning model to predict a target UI cluster. For example, a sub sequence (e.g., first m-1 cluster nodes) of the cluster nodes) are used as inputs to a sequential deep learning model to predict the target cluster node”) and a learning model trained by machine learning techniques (Huang: para [0048], “The path can then be used in the context of a training of a sequential model 340. For example, long short-term memory (LSTM) can be used, which uses an artificial recurrent neural network (RNN) architecture for deep learning. This trained model 340 can then be used to evaluate new UX designs for their effectiveness”; para [0049], “Upon creating a trained model 340, it can be used in different levels of evaluating new UX designs for their effectiveness”; para [0078], “There may be a deep learning module 948 (sometimes referred to herein as a prediction module) operative to, during a preliminary phase (sometimes referred to as a training phase), learn from historical data of successful UX designs and user interaction logs to build a model that can be used to evaluate new UX designs”; para [0060], “A learning model 640 is used to predict the next node 652, which is then compared to the target node 650 of the subject UX design product. In one embodiment, a many to one sequential deep learning model is used to provide the predicted node 652, based on the path of the cluster of nodes 630(1) to 630(N)”);
presenting, by the one or more processors, the one or more next design elements that are visual elements selectable on a design artboard for a design file as a recommendation to the first designer (Huang: para [0054], “a designer designing two pages (A, B). By click of a button (i.e., Action Ai), the page can transition to page B from A (i.e., A.fwdarw.B), where A is the start screen, and B is the target screen. The system discussed herein can evaluate if the transition is “good” (e.g., user friendly)”; para [0056], “Next, the predict node is compared to the target node 550 to see if they are substantially similar. If they are substantially similar, it is indicative that the transition logic design is “good” and thus widely adopted in existing UX designs”; Figs. 4, 5; para [0058], “assigning each node (i.e., UI screen) to a corresponding cluster 630(1) to 630(1). For example, the first cluster node 630(1) may be a login screen, the second cluster node 630(2) can be a search screen, the third cluster node 630(3) can relate to a screen where a user can add products into a shopping bin, and the last cluster node 630(N) may be providing payment information”; para [0040], “UI screen segmentation can include detecting boundaries of action objects (e.g., UI items that can trigger a UI screen transition, such as a search, login, clickable button, hyperlink image, etc.). Accordingly, UI screens 240 are processed in that they are filtered from having any objects identified as a background. Further, the actionable objects are semantically labeled in block 240. For example, semantic labeling can be considered as a text label to describe the function of actionable objects in the screen (e.g., the semantic label for a login button can simply be a short text “login.” A screen may include multiple actionable objects (e.g., sign in, signup, join as a guest, etc.)”);
determining, by the one or more processors, a selection decision made by the first designer of the one or more next elements from the recommendation as a design build (Huang: para [0056], “Next, the predict node is compared to the target node 550 to see if they are substantially similar. If they are substantially similar, it is indicative that the transition logic design is “good” and thus widely adopted in existing UX designs”; Figs. 4, 5); and
updating, by the one or more processors, the learning model, based on selection decisions made by the first designer (Huang: para [0078], “There may be a deep learning module 948 (sometimes referred to herein as a prediction module) operative to, during a preliminary phase (sometimes referred to as a training phase), learn from historical data of successful UX designs and user interaction logs to build a model that can be used to evaluate new UX designs, as discussed herein. There may be an object segmentation module 950 operative to detecting boundaries of actionable objects that include the UI items that can trigger a UI screen transition. In one embodiment the object segmentation module 950, additionally or alternatively, is configured to remove objects from a screen that are identified to be part of a background”; para [0008], “the deep learning model is created by the computing device during a preliminary phase, where a weighted flow graph of clustering nodes is created based on historical data of successful UX designs. Historical user interaction logs between users and UX designs are received. The weighted flow graph and the user interaction logs are combined to create paths. These paths are used to train the deep learning model”).
Huang does not teach but Rahmadi does teach:
machine learning techniques using a convolutional neural network (Rahmadi: page 6; Figures 4, 6, 8);
adjusting the one or more next design elements selected by the first designer to resolve user interface issues for the design build determined with the learning model based on a version of the design build for a second designer (Rahmadi: Figure 1, “Simplified stages of GUI development, fidelity levels, and its deliverable”; page 2, “Iteratively, steps 2 until 4 from the figure above repeated until the GUI is considered satisfactory, which means satisfy, or at the minimum it should be acceptable by the users or business requirements, and does not have serious problem whether in functionality or usability”; page 3, “collect dataset of software GUI and classified them into positive (good) or negative (bad) categories based on its page layout, comfort, and brightness”).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Huang (directed to user interface design) and Rahmadi (directed to a CNN for GUI design) and arrived at using a CNN for interface design. One of ordinary skill in the art would have been motivated to make such a combination to evaluate an interface “for its function and usability to discover design problem and to get feedback from users” (Rahmadi: Abstract).
Regarding claims 5, 12 and 18, Huang and Rahmadi teach:
The method of claim 1, wherein a predicted next design element includes a template that is determined by recognition of a combination of the design elements added by the first designer to the screen design (Huang: para [0054], “the Action Ai may be used to help predict the next node given the start node. Consider for example a designer designing two pages (A, B). By click of a button (i.e., Action Ai), the page can transition to page B from A (i.e., A.fwdarw.B), where A is the start screen, and B is the target screen. The system discussed herein can evaluate if the transition is “good” (e.g., user friendly) or “bad” (e.g., beyond users' expectation.) To that end, in one embodiment, the system firstly predicts clustering nodes for A and B, respectively, (e.g., based on block 414 of FIG. 4”).
Regarding claims 6, 13 and 19, Huang and Rahmadi teach:
The method of claim 1, wherein a context and a pattern of the screen design is determined, based on setup information of a design project and addition of design elements added to the screen design (Huang: para [0054], “the Action Ai may be used to help predict the next node given the start node. Consider for example a designer designing two pages (A, B). By click of a button (i.e., Action Ai), the page can transition to page B from A (i.e., A.fwdarw.B), where A is the start screen, and B is the target screen. The system discussed herein can evaluate if the transition is “good” (e.g., user friendly) or “bad” (e.g., beyond users' expectation.) To that end, in one embodiment, the system firstly predicts clustering nodes for A and B, respectively, (e.g., based on block 414 of FIG. 4”).
Regarding claims 7 and 20, Huang and Rahmadi teach:
The method of claim 1, further comprising:
identifying, by the one or more processors, accessibility issues associated with the screen design; and presenting, by the one or more processors, an alert and recommendations addressing the identified accessibility issues associated with the screen design, to the first designer (Huang: para [0051], “For example, the higher the confidence score, the more likely the actionable object belongs to that cluster. If the confidence score is below a predetermined threshold, then the actionable item is deemed unrecognizable. For example, an actionable item being unrecognizable may indicate that the UI design may not be recognizable by users. In this regard, the UX design evaluation engine can send a notification (e.g., warning) to the relevant UX design developer”; para [0066], “At block 718, the UX design evaluation engine 103 compares the predicted target UI cluster to the filtered target UI cluster. Upon determining that the filtered target UI cluster is similar to the predicted target UI cluster (i.e., “YES” at decision block 718), the process continues with block 720 where the UX design is classified as successful. However, upon determining that the filtered target UI cluster is not similar to the target UI screen (i.e., “NO” at decision block 718), the process continues with block 722, where the UX design is classified as ineffective. In one embodiment, the similarity between the filtered input UI screen and the target UI screen is based on a confidence score being at or above a predetermined threshold. Upon determining that the UX design is ineffective, a notification may be sent to the relevant UX design developer (e.g., 101(1)) to indicate that the UX design should be improved. In one embodiment, the UX design that is deemed to be ineffective is blocked from an audience (e.g., a group of users is prevented from being exposed to the UX design), thereby preventing ineffective UX design from being distributed”).
Allowable Subject Matter
Claims 2-4, 9-11 and 15-17 contain allowable subject matter.
The independent claims will be in condition for allowance when the allowable subject matter of the dependent claims is incorporated into the independent claims and the 35 USC 101 and 112 rejections are overcome.
Huang and Rahmadi are directed towards using a CNN for interface design.
However, these references and the remaining prior art of record, alone or in combination, fail to disclose or suggest:
(claims 2, 9, 15)
“wherein the machine learning model is trained by data from historic projects, data associated with a project type, data associated with industry type, data associated with accessibility guidance and standards, and data associated with design best-practices”,
(claims 3, 10 and 16)
“monitoring, by the one or more processors, design builds of a plurality of designers working on a design project;
determining, by the one or more processors, inconsistencies of design between designs by the plurality of designers, wherein the inconsistencies are determined based on the machine learning model trained by data associated with industry best practices, historic designs, accessibility guidance and standards, and learning from selections made during a current design;
determining, by the one or more processors, recommendations to resolve the inconsistencies, to the plurality of designers with the designs having the inconsistencies”.
in combination with the remaining elements and features of the claimed invention. It is for these reasons that the applicant’s invention defines over the prior art of record.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NITHYA J. MOLL whose telephone number is (571)270-1003. The examiner can normally be reached Monday-Friday 10am-6pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rehana Perveen can be reached at 571-272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NITHYA J. MOLL/Primary Examiner, Art Unit 2189