Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This non-final rejection is in response to the claims filed on: 01/30/2026.
The following rejections are withdrawn in view of applicant's amendments, which necessitated new grounds of rejection:
Claim(s) 1-4, 6-11, 14-16, and 22 rejected under 35 U.S.C. 103 as being unpatentable over Vasavan et al (US Application: US 2022/0350733, published: Nov. 3, 2022, filed: Apr. 30, 2021) in view of Zenine et al (US Application: US 2022/0029904, published: Jan. 27, 2022, filed: Jul. 24, 2020).
Claim(s) 5, 12, and 13 rejected under 35 U.S.C. 103 as being unpatentable over Vasavan et al (US Application: US 2022/0350733, published: Nov. 3, 2022, filed: Apr. 30, 2021) in view of Zenine et al (US Application: US 2022/0029904, published: Jan. 27, 2022, filed: Jul. 24, 2020) in view of Polisetty et al (US Application: US 20180143891, published: May 24, 2018, filed: Aug. 31, 2017).
Claim(s) 21 rejected under 35 U.S.C. 103 as being unpatentable over Vasavan et al (US Application: US 2022/0350733, published: Nov. 3, 2022, filed: Apr. 30, 2021) in view of Zenine et al (US Application: US 2022/0029904, published: Jan. 27, 2022, filed: Jul. 24, 2020) in view of Liu et al (US Application: US 2021/0117479, published: Apr. 22, 2021, filed: Jan. 14, 2020).
Claim(s) 23 and 24 rejected under 35 U.S.C. 103 as being unpatentable over Vasavan et al (US Application: US 2022/0350733, published: Nov. 3, 2022, filed: Apr. 30, 2021) in view of Zenine et al (US Application: US 2022/0029904, published: Jan. 27, 2022, filed: Jul. 24, 2020) in view of Sarikaya et al (US Application: US 20180061401, published: Mar. 1, 2018, filed: Aug. 31, 2016).
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/30/2026 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9, 14, 16, and 21-29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 Analysis:
Step 1:
Claim 1 falls within a statutory category.
Step 2A, Prong One:
With regards to claim 1, the claim recites the following limitations, of which the bolded limitations recite a judicial exception for a mental process or mathematical concept(s): "A system for testing a product, said system comprising: a memory; and a processor in communication with said memory, said processor being configured to perform operations, said operations comprising: training a cognitive engine to recognize elements used in said product; assigning said cognitive engine a goal requiring use of said product; recognizing, with said cognitive engine, [[an ]]at least one actionable element of said product; selecting, via a semantic similarity ranking, an actionable element from said at least one actionable element most similar to said goal; attempting, with said cognitive engine, actionable operations on said actionable element in accordance with an operation order; tracking, for each attempted operation, an associated predefined operation cost data affiliated with said actionable operations; and generating, based on a summation of said associated predefined operation cost for said each attempted operation said data, a usability score for said product."
More specifically, with regards to '… for testing a product …', '… to recognize elements used in said product …', '… assigning … a goal requiring use of said product …', 'recognizing … at least one actionable element of said product …', 'selecting, via a semantic similarity ranking, an actionable element …', 'attempting, … actionable operations on said actionable element …', and 'tracking, for each attempted operation, an associated predefined operation cost data …', these limitations recite a mental process. For example, a person can mentally evaluate elements used in a product, make a judgment to assign a goal, evaluate and recognize at least one actionable element, make a judgment to select/identify an actionable element based on evaluation of a semantic similarity ranking, and make observations to track operations (see MPEP § 2106.04(a)(2), subsection III).
With regards to 'generating, based on a summation of said associated predefined operation cost for said each attempted operation said data, a usability score for said product', this limitation recites a mathematical concept. For example, a mathematical calculation can yield a usability score based on said data being the parameters subject to the calculation (see MPEP § 2106.04(a)(2), subsection I).
Step 2A, Prong Two:
The claim recites the following additional elements:
- "A system … said system comprising: a memory; and a processor in communication with said memory, said processor being configured to perform operations, said operations", … 'training a cognitive engine …', '… said cognitive engine …'. These additional element(s) is/are considered merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The courts have identified this type of limitation to be insufficient to integrate a judicial exception into a practical application. Furthermore, the/said cognitive engine is also considered generally linking the use of a judicial exception to a particular technological environment or field of use ('engine' and/or machine-learning/training/modeling environment).
Step 2B:
As discussed in step 2A, prong two, there are additional elements of:
- "A system, said system comprising: a memory; and a processor in communication with said memory, said processor being configured to perform operations, said operations", … 'training a cognitive engine', '… said cognitive engine …'. These additional element(s) is/are considered merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Furthermore, the/said cognitive engine was also considered generally linking the use of a judicial exception to a particular technological environment or field of use ('engine' and/or machine-learning/training/modeling environment). The courts have found this type of limitation to be insufficient to be 'significantly more' when recited in a claim with a judicial exception (see also Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984).
Even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, and therefore do not provide an inventive concept.
Claims 2-7:
These claims (2-7) recite further operations directed to mental process(es) and/or mathematical concept(s) and do not recite additional elements that would integrate their corresponding judicial exception(s) into a practical application, nor do they contain additional elements that would amount to significantly more than their corresponding recited exception(s).
Claim 8:
With regards to claim 8, it is rejected under similar rationale as claim 1 (since it is broader than claim 1).
Claims 9 and 14:
These claims (9 and 14) recite further operations directed to mental process(es) and/or mathematical concept(s) and do not recite additional elements that would integrate their corresponding judicial exception(s) into a practical application, nor do they contain additional elements that would amount to significantly more than their corresponding recited exception(s).
Claim 16:
With regards to claim 16, it is rejected under similar rationale as claim 1.
Claims 21-29:
These claims (21-29) recite further operations directed to mental process(es) and/or mathematical concept(s) and do not recite additional elements that would integrate their corresponding judicial exception(s) into a practical application, nor do they contain additional elements that would amount to significantly more than their corresponding recited exception(s). For example, they recite actions performable as mental steps ('determining …', 'checking …', 'engaging …', 'attempting …', 'selecting …'), additional elements considered as 'applying it' with generic computer component(s) and/or generally linking to a technology area ('an application', 'said cognitive engine'), and insignificant extra-solution activity for data collection/data-gathering (generating a test case, 'outputting').
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4, 6-9, 14, 16, 22 and 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sears ("Layout Appropriateness: A Metric for Evaluating User Interface Widget Layout", published: July 1993, publisher: IEEE, pages: 707-719) in view of Ang (US Application: US 20190018675, published: Jan. 17, 2019, filed: Jul. 13, 2018).
With regards to claim 1. Sears teaches a system for testing a product, said system comprising:… configured to perform operations, said operations comprising:
… cognitive engine to recognize elements used in said product (page 709, widgets are recognized);
assigning said cognitive engine a goal requiring use of said product (page 709, a task description (a goal) is assigned);
recognizing, with said cognitive engine, at least one actionable element of said product (page 709, widget(s) are recognized as actionable with respect to actions that are part of the task description);
attempting, with said cognitive engine, actionable operations on said actionable element in accordance with an operation order (page 709: an ordered set of actions/operations are defined as a ‘sequence of actions’ to complete the goal/task);
tracking, for each attempted operation, an associated predefined operation cost (page 711, an operation cost is evaluated for a plurality of attempted layout operations upon actionable widget elements); and
generating, based on a summation of said associated predefined operation cost for said each attempted operation, a usability score for said product (Fig 14, Fig. 15: operation cost is accumulated over each operation/action and represented in the end by ‘LA’ value as the usability score).
However, Sears does not expressly teach "… a memory; and a processor in communication with said memory, said processor being configured to …", "training a cognitive engine to recognize elements", or "… selecting, via a semantic similarity ranking, an actionable element from said at least one actionable element most similar to said goal".
Yet Ang teaches "… a memory; and a processor in communication with said memory, said processor being configured to …" (Fig 13: a memory and processor is implemented), "training a cognitive engine to recognize elements", and "… selecting, via a semantic similarity ranking, an actionable element from said at least one actionable element most similar to said goal" (Abstract, paragraph 0012 and paragraph 0063: a meaning of a user-selected goal/task is derived to rank/score and isolate particular control(s) that would help complete the user's goal/task, using probability to achieve the goal; training is performed to recognize these elements/controls, which are interpreted as the claimed actionable elements. As also explained in paragraphs 0013 and 0043, operations associated with one or more of the particular control(s) (that were semantically associated with the user's selected goal/task) generate an instance of a system that performs the automated steps corresponding to the user's selected goal/task, and this instance of the system can be updated automatically if the system interface changes (each iteration/update is interpreted as a 'test' version of the system that is implemented until a change comes along where the version needs to be updated)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Sears' ability to score an interface having particular actionable elements that are used (as a sequence of action(s)) to complete a provided goal/task-description, such that the goal/task-description is further modified by allowing a user to provide the goal/task-description in a manner that can be analyzed to identify and correlate a sequence of the one or more actionable elements that semantically correspond to completing the goal/task-description, as taught by Ang. The combination would have implemented a computerized, artificially intelligent, self-learning software operating system that can automatically learn to operate other software (Ang, paragraph 0004).
With regards to claim 2. (Original) The system of claim 1, the combination of Sears and Ang teaches said operations further comprising: achieving said goal via said operations in an operation sequence (as similarly explained in the rejection of claim 1, Sears' ability to score an interface having particular actionable elements that are used (as a sequence of action(s)) to complete a provided goal/task-description is further modified by Ang's teaching of allowing a user to provide the goal/task-description in a manner that can be analyzed to identify and correlate a sequence of the one or more actionable elements that semantically correspond to completing the goal/task-description), and is rejected under similar rationale.
With regards to claim 3. (Original) The system of claim 2, Sears teaches said operations further comprising: setting a cost score for each of said operations in said operation sequence (page 712: cost score for transition operations amongst widget operating sequence includes a ‘cost of transition’ factor).
With regards to claim 4. (Original) The system of claim 3, Sears teaches said operations further comprising: summing said cost score for each said operations in said operation sequence (page 713: all tasks are accounted for to determine an average of operation sequence(s) as explained in “…analyzing the average cost of all tasks indicates that this is not the LA- optimal organization.”).
With regards to claim 6. (Original) The system of claim 1, Sears and Ang teach said operations further comprising: generating a dynamic test case by said cognitive engine (as similarly explained in the rejection of claim 1, Ang teaches in paragraph 0043 that operations are associated with one or more of the particular control(s) (that were semantically associated with the user's selected goal/task) and different instances of the system are generated based on changes. More specifically, the system has a current 'version' that corresponds to achieving the user's selected goal/task, and this 'version' of the system can be updated automatically if the system interface changes. Each iteration/version is interpreted as the claimed 'dynamic test' version of the system that is implemented until a next change comes along where the version needs to be dynamically updated), and is rejected under similar rationale.
With regards to claim 7. (Original) The system of claim 6, Sears and Ang teach said operations further comprising: using a semantic similarity algorithm to generate said dynamic test case, as similarly explained in the rejection of claim 6, and is rejected under similar rationale.
With regards to claim 8. (Currently Amended) Sears and Ang teach a method for testing a product, said method comprising: training a cognitive engine to recognize elements used in said product; assigning said cognitive engine a goal requiring use of said product; recognizing, with said cognitive engine, at least one actionable element of said product; selecting, via a semantic similarity ranking, an actionable element from said at least one actionable element most similar to said goal; attempting, with said cognitive engine, operations on said actionable element in accordance with an operation order; tracking, for each attempted operation, an associated predefined operation cost; and generating, based on a summation of said associated predefined operation cost for said each attempted operation said data, a usability score for said product, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
With regards to claim 9. (Original) The method of claim 8, the combination of Sears and Ang teaches further comprising: achieving said goal via said operations in an operation sequence, as similarly explained in the rejection of claim 2 above, and is rejected under similar rationale.
With regards to claim 14. (Original) The method of claim 8, the combination of Sears and Ang teaches further comprising: generating a dynamic test case by said cognitive engine, as similarly explained in the rejection of claim 6, and is rejected under similar rationale.
With regards to claim 16. (Currently Amended) the combination of Sears and Ang teaches a computer program product for testing a product, said computer program product comprising a computer readable storage medium having program instructions embodied therewith, said program instructions executable by a processor to cause said processor to perform a function, said function comprising: training a cognitive engine to recognize elements used in said product; assigning said cognitive engine a goal requiring use of said product; recognizing, with said cognitive engine, at least one actionable element of said product; selecting, via a semantic similarity ranking, an actionable element from said at least one actionable element most similar to said goal; attempting, with said cognitive engine, operations on said actionable element in accordance with an operation order; tracking, for each attempted operation, an associated predefined operation cost; and generating, based on a summation of said associated predefined operation cost for said each attempted operation said data, a usability score for said product, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
With regards to claim 21. (Previously Presented) The system of claim 1, the combination of Sears and Ang teaches said operations further comprising: determining semantic similarity between said goal and said actionable operations, wherein said attempting said actionable operations is based on said semantic similarity (as similarly explained in the rejection of claim 1, paragraph 0013 of Ang was shown to teach that the goal identified from user input instructions is compared against actionable control-operations such that an attempt to identify and execute the control that is most probable (interpreted as 'similar' to the meaning/intent of the user input) is implemented), and is rejected under similar rationale.
With regards to claim 22. (Previously Presented) The system of claim 1, the combination of Sears and Ang teaches wherein said cognitive engine is trained to engage with an application with said actionable element in a way that replicates how a natural human person would engage with said application (as similarly explained in the rejection of claim 1, and in at least the Abstract and paragraphs 0013 and 0014, Ang was shown/explained to teach that the computer program (interpreted as the engine) is trained to engage with the application by identifying particular actionable elements within a GUI of the application to engage with to execute operations to satisfy a task/goal), and is rejected under similar rationale.
With regards to claim 25. (New) The system of claim 1, the combination of Sears and Ang teaches said operations further comprising: generating a test case automatically with said cognitive engine (as similarly explained in the rejection of claim 1, paragraphs 0013 and 0043 in Ang were explained to teach that operations associated with one or more of the particular control(s) (that were semantically associated with the user's selected goal/task) generate an instance of a system that performs the automated steps corresponding to the user's selected goal/task. This instance of the system can be updated automatically if the system interface changes (each iteration/update is interpreted as a 'test' version of the system that is implemented until a change comes along where the version needs to be updated)), and is rejected under similar rationale.
Claim(s) 5, 23, 24, 28, and 29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sears ("Layout Appropriateness: A Metric for Evaluating User Interface Widget Layout", published: July 1993, publisher: IEEE, pages: 707-719) in view of Ang (US Application: US 20190018675, published: Jan. 17, 2019, filed: Jul. 13, 2018) in view of Makuch et al (US Patent: 8924942, issued: Dec. 30, 2014, filed: Feb. 1, 2012).
With regards to claim 5. (Original) The system of claim 1, the combination of Sears and Ang teaches said operations further comprising: … said usability score, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However the combination of Sears and Ang does not teach determining at least one suggestion to improve said usability score.
Yet Makuch et al teaches determining at least one suggestion to improve said usability score (Fig. 3, column 2, lines 7-20, column 9, lines 42-55 and column 10, lines 5-15: based upon metric score(s), an improvement can be suggested/presented if metric score(s) have a discrepancy from goal criteria (being interpreted as not meeting goal). One such recommendation could be to recognize that an additional actionable element such as including a buy button that is used to perform an operation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Sears and Ang's ability to determine a usability metric/score, such that an improvement can be suggested based on the metric/score, as taught by Makuch et al. The combination would have helped a developer use information to modify and improve an application (Makuch et al, column 1, lines 58-60).
With regards to claim 23. (Previously Presented) The system of claim 1, the combination of Sears, Ang and Makuch teaches said operations further comprising: checking, with said cognitive engine, criteria to determine whether said goal was achieved (as similarly explained in the rejection of claim 5, Fig. 3, column 2, lines 7-20, column 9, lines 42-55 and column 10, lines 5-15 of Makuch et al were explained to teach that, based upon metric score(s), an improvement can be suggested/presented if metric score(s) have a discrepancy from goal criteria (being interpreted as not meeting the goal). One such recommendation could be to recognize an additional actionable element, such as including a buy button that is used to perform an operation), and is rejected under similar rationale.
With regards to claim 24. (Previously Presented) The system of claim 23, the combination of Sears, Ang and Makuch et al teaches said operations further comprising: in response to checking said criteria, determining, with said cognitive engine, that said goal was not achieved;
engaging, with said cognitive engine, actionable element recognition; and attempting an additional actionable operation (as similarly explained in the rejection of claim 23, Fig. 3, column 2, lines 7-20, column 9, lines 42-55 and column 10, lines 5-15 of Makuch et al were explained to teach that, based upon metric score(s), an improvement can be suggested/presented if metric score(s) have a discrepancy from goal criteria (being interpreted as not meeting the goal). One such recommendation could be to recognize an additional actionable element, such as including a buy button that is used to perform an operation), and is rejected under similar rationale.
With regards to claim 28. (New) The system of claim 1, the combination of Sears, Ang and Makuch et al teaches wherein generating said usability score further comprises identifying a suggested modification to reduce total operation cost for achieving said goal (as similarly explained in the rejection of claim 23, Fig. 3, column 2, lines 7-20, column 9, lines 42-55 and column 10, lines 5-15 of Makuch et al were explained to teach that, based upon metric score(s), an improvement can be suggested/presented if metric score(s) have a discrepancy from goal criteria (being interpreted as not meeting the goal). One such recommendation (to make an improvement of the score) could be to recognize an additional actionable element, such as including a buy button that is used to perform an operation), and is rejected under similar rationale.
With regards to claim 29. (New) The system of claim 1, the combination of Sears, Ang and Makuch et al teaches said operations further comprising: said cognitive engine looping back to a testing phase to improve said usability score, wherein said testing phase includes said recognizing said at least one actionable element and attempting actionable operations on said actionable element (as similarly explained in the rejection of claim 23, Fig. 3, column 2, lines 7-20, column 9, lines 42-55 and column 10, lines 5-15 of Makuch et al were explained to teach that, based upon metric score(s), an improvement can be suggested/presented if metric score(s) have a discrepancy from goal criteria (being interpreted as not meeting the goal). One such recommendation (to make an improvement of the score, and thus considered a 'retest' step) could be to recognize an additional actionable element, such as including a buy button that is used to perform an operation. The examiner notes that should the user incorporate the suggestion of the actionable element, the metric score(s) can reflect this change), and is rejected under similar rationale.
Claim(s) 27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sears ("Layout Appropriateness: A Metric for Evaluating User Interface Widget Layout", published: July 1993, publisher: IEEE, pages: 707-719) in view of Ang (US Application: US 20190018675, published: Jan. 17, 2019, filed: Jul. 13, 2018) in view of Frebourg et al (US Application: US 2016/0373803, published: Dec. 22, 2016, filed: Jun. 16, 2015).
With regards to claim 27. (New) The system of claim 1, the combination of Sears and Ang teaches said operations, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However the combination does not teach further comprising: outputting a representation of a visit track, wherein said visit track includes a sequence of attempted operations.
Yet Frebourg et al teaches outputting a representation of a visit track, wherein said visit track includes a sequence of attempted operations (Abstract, Fig. 3, Fig. 4, paragraphs 0025 and 0032: attempted navigation operations are visualized with paths/tracks).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Sears and Ang's ability to implement a sequence of operations in an interface, such that the sequence of interface operations could have been visualized to show a path/track of the operations, as taught by Frebourg et al. The combination would have allowed Sears and Ang to have improved quality of user experience by addressing and improving usability, intuitiveness, control, and functionality (Frebourg et al, paragraph 0002).
Response to Arguments
Applicant's arguments filed 1/30/2026 have been fully considered but they are not persuasive.
With regards to the 35 USC 101 rejections, the applicant remarked upon the newly amended subject matter being sufficient to address the rejections. The examiner notes that although the amendments do help clarify aspects of the invention, they are not deemed sufficient to overcome the outstanding 35 USC 101 rejections. The 35 USC 101 rejections are updated above to reflect the newly amended claim language.
More specifically, the applicant argues the amendments to the independent claims reflect technological advancements discussed in the specification via 'assigning said cognitive engine a goal …', 'attempting, with said cognitive engine, actionable operations …', 'tracking, for each attempted operation, an associated predefined operation cost', and 'generating a usability score for said product'. However, these amendments, as newly addressed in the updated 35 USC 101 rejection above, correspond to an abstract idea for being mental steps/processes. Although the 'cognitive engine' is applied/mentioned, there is no detail in the claim concerning how this cognitive engine is implemented; rather, it is recited at a high level of generality and is interpreted as applying an abstract idea on a computer and generally linking the judicial exception to a particular technological environment or field of use. Thus, the amendments are not sufficient to overcome the 35 USC 101 rejections.
The applicant argues the remaining and other newly added claims are patentable over the prior art. However, this argument is not persuasive since the pending claims remain rejected in view of the prior art, as shown above (with the exception of claim 26). It is noted that claim 26 is currently rejected under 35 USC 101 as explained in the rejection above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILSON W TSUI whose telephone number is (571)272-7596. The examiner can normally be reached Monday - Friday 9 am -6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILSON W TSUI/Primary Examiner, Art Unit 2172