DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to correspondence received April 2, 2025.
Claims 1-20 are pending and have been examined.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite:
Claim 1:
processing consumer reviews, access the consumer reviews associated with a service provider; select at least one of the consumer reviews which is relevant to at least one predetermined category; obtain an annotation associated with the selected consumer review with a third party; generate a tag content associated with the selected consumer review by summarizing the selected consumer review; and generate a tag associated with the selected consumer review based on the tag content and the annotation associated with the selected consumer review, classify the selected consumer review based on at least one property of the selected consumer review, and distribute a task for the annotation associated with the selected consumer review with the third party based on the classification of the selected consumer review.
Claim 11:
A method for processing consumer reviews, the method comprising: accessing the consumer reviews associated with a service provider; selecting at least one of the consumer reviews which is relevant to at least one predetermined category; classifying the selected consumer review based on at least one property of the selected consumer review; distributing a task for an annotation associated with the selected consumer review with a third party based on the classification of the selected consumer review; obtaining the annotation associated with the selected consumer review with the third party; generating a tag content associated with the selected consumer review by summarising the selected consumer review; and generating a tag associated with the selected consumer review based on the tag content and the annotation associated with the selected consumer review.
Claims 1 and 11 recite an abstract idea that is a commercial interaction, which is a certain method of organizing human activity. The steps recite a way for consumers to rate and review a business, after which the review is further annotated and tagged, in other words, has more information added to it. The process of generating and further detailing reviews constitutes marketing or sales activities or behaviors because it is used to promote a business, and marketing or sales activities or behaviors are among the commercial interactions described in MPEP 2106.04(a). Moreover, the consumer reviews and the specific limitations above organize specific human activity, which is managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). See id. For these reasons, claims 1 and 11 recite a certain method of organizing human activity.
This judicial exception is not integrated into a practical application. The claims recite generic computing components that, alone and in combination, amount to performing the steps on a computer or other machinery in its ordinary capacity. This includes using a server; see TLI Communications; MPEP 2106.05(f)(2). The additional elements are:
Claim 1:
A server for
the server comprising: a memory for storing instructions; and a processor for executing the stored instructions and configured to/further configured to:
obtain from a computing device associated; distribute to the computing device associated
Claim 11:
to a computing device associated/from the computing device associated
In combination, these amount to no more than a computer receiving/displaying (distributing) information and exchanging information with a server, in order to perform the steps of the abstract idea above. This amounts to performing a commonplace business practice on a computer, which is an "apply it" limitation and not a practical application. See a commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); see MPEP 2106.05(f)(2). Therefore, the additional elements, alone and in combination, do not integrate the abstract idea into a practical application.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the reasoning above carries over here: for the same reason that the combination of elements is not a practical application, it is not significantly more than the abstract idea.
Per the dependent claims:
The dependent claims, 2-10 and 12-20, further describe the abstract idea of claims 1 and 11 and therefore do not overcome the rejection under 35 U.S.C. 101. Claims 6 and 16 recite using a natural language processing model, but this is an "apply it" limitation that is not a practical application or significantly more, either alone or in combination.
Therefore, claims 1-20 are rejected under 35 USC 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 8-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Parveen et al., US PGPUB 20170068999 A1 ("Parveen") in view of Deluca et al., US PGPUB 20190180353 A1 ("Deluca").
Per claims 1 and 11, which are similar in scope, Parveen teaches A server for processing consumer reviews, the server comprising: a memory for storing instructions in par 27: “In one embodiment, a system can comprise: one or more processing modules; and one or more non-transitory storage modules storing computing instructions configured to run on the one or more processing modules and perform the acts of:”
Parveen then teaches and a processor for executing the stored instructions and configured to: access the consumer reviews associated with a service provider in par 29: “and one or more non-transitory storage modules storing computing instructions configured to run on the one or more processing modules and perform the acts of analyzing one or more reviews in a set of reviews to determine a set of features”
See also par 040 for service providers: “The Internet gives users the ability to research any product or service that the user wishes to purchase to a much greater extent than possible before the Internet. For example, users can go to an eCommerce website, search for a product or service, and analyze the opinions of many different people regarding the product or service. This ability can be particularly useful for users who are comparing two competing products or services.”
Parveen then teaches select at least one of the consumer reviews which is relevant to at least one predetermined category in par 53: “Thereafter, for every feature in the feature set, the user-generated content (“UGC”) is analyzed to find mentions of each feature (block 304). Thereafter, each of the mentions can be analyzed to determine polarity of each mention (block 306).” The user-generated content teaches the consumer reviews. The features are the categories.
See also par 56: “For a feature f.sub.i in the feature set, the number of occurrences in review R.sub.j is n.sub.ij.”
See also par 060. See also par 050 (initial feature set), which, as it determines initial features, teaches predetermined categories (drawn from the specification of the product). See also par 052.
Parveen then teaches obtain an annotation associated with the selected consumer review from a computing device associated with a third party in par 95: “For example, another dimension is added to reviews. Annotating the reviews with relevant features of the product makes the reviews more readable for users. Thus, information is more accessible to users as the user can read only the reviews that discuss the features of interest to him/her.”
See also par 99: “As disclosed above, there can be methods and systems to calculate a feature score for every feature of a product. The feature score can be used to annotate reviews for which the feature score is not equal to zero. If a feature has either a positive feature score or a negative feature score, the implication is that the review mentions that particular feature. It is also possible for a review to have a neutral feature score even if the feature is mentioned in the review.” Feature score is the annotation obtained.
See also par 103: “With reference to FIG. 9, a flowchart illustrating a method 900 for annotating reviews is presented. The review annotation can be performed on a computer system such as computer system 100 (FIG. 1). Method 900 is merely exemplary and is not limited to the embodiments presented herein. Method 900 can be employed in many different embodiments or examples not specifically depicted or described herein. In some embodiments, the procedures, the processes and/or the activities of method 900 can be performed in the order presented. In other embodiments, the procedures, the processes, and/or the activities of method 900 can be performed in any other suitable order. In still other embodiments, one or more of the procedures, the processes, and/or the activities of method 900 can be combined or skipped.” Computer system 100 teaches computing device associated with a third party.
Then, Parveen teaches generate a tag content associated with the selected consumer review by summarizing the selected consumer review in par 114: “Column 1020 contains a list of “tags” that represent features that received either positive or negative feature scores in each review. In some embodiments, the tags can be color-coded. In such a manner, one color can represent positive feature scores and another color can represent negative feature scores. Instead of color coding, some embodiments can use other methods of indicating the difference between positive, negative, and neutral feature scores, such as shading, hatch marks, underlining, typeface, font colors, and the like.” For generate, see par 0107: “And the features with a feature score less than a given threshold score (e.g., zero), can get marked as “Bad.” Then the reviews can be displayed to the user with a tag indicating the feature and whether the feature score was “Good” or “Bad” (block 910). Displaying a review to the User can involve transmitting data to a user's computer that cause the user's computer to display reviews.”
Then Parveen teaches and generate a tag associated with the selected consumer review based on the tag content and the annotation associated with the selected consumer review in par 116: “In some embodiments, a user is able to click on a tag to view only those reviews that contain features scores for that feature. Moving ahead to FIG. 11, a screen shot 1100 is presented. FIG. 11 is merely exemplary and embodiments of the screen representation are not limited to the embodiments presented herein. The screen representation can be employed in many different embodiments or examples not specifically depicted or described herein.”
See also par 118: “Column 1110 displays the title of the review. In some embodiments, only the tag that is the subject of the search is displayed to the user in column 1120. In some embodiments, all of the pertinent tags can be displayed to the user, to allow the user to view other feature scores. In some embodiments, instead of a tag being displayed in column 1120, because all of the reviews are relevant to a selected feature, an indication of whether the review is positive or negative can be contained in column 1120.”
Then Parveen teaches wherein the processor is further configured to classify the selected consumer review based on at least one property of the selected consumer review in par 120: “Pie chart 1150 can contain various segments, such as positive segment 1160, neutral segment 1162, and negative segment 1164. Each of these segments is configured to illustrate the percentage of reviews that contain a certain feature score. In FIG. 11, the vast majority of reviews that contain a feature score for “body” are positive, so positive segment 1160 is much larger than the other two segments. In some embodiments, there can be color-coding of the graphical indicia. For example, positive segment 1160 can be a first color, neutral segment 1162 can be a second color, and negative segment 1164 can be a third color. The color-coding can extend to the tags in column 1120. For example, a review that contained a positive feature score for “body” can be the same first color as positive segment 1160. In a similar manner, a review that contained a neutral feature score for “body” can be the same second color as neutral segment 1162 and a review that contained a negative feature score for “body” can be the same third color as negative segment 1164. While a pie chart is illustrated in FIG. 11, other embodiments can use other types of graphical indicia, such as bar graphs, line graphs, histograms, and the like.” Under a broadest reasonable interpretation in light of the specification, the pie chart or other graph taught here teaches classifying the selected consumer review based on at least one property of the selected consumer review, because it takes the review and gives it the classification of overall good or bad.
Parveen does not teach and distribute a task for the annotation associated with the selected consumer review to the computing device associated with the third party based on the classification of the selected consumer review.
Deluca teaches normalizing product reviews and the presentation of product information viewed within a vendor's virtual storefront. See abstract.
Deluca teaches and distribute a task for the annotation associated with the selected consumer review to the computing device associated with the third party based on the classification of the selected consumer review in par 56: “In some instances however, a user who is seeking to normalize product reviews to a target user may manually input characteristics and information into the data collection module 111 about the target user of interest. For example, a user seeking to normalize product reviews to one of their siblings may input information about that sibling including the sibling's age, sizing information, location or environment within which their sibling resides, activity level, past purchases or trends the sibling follows. Moreover, in some embodiments, the data collection module 111 may also automatically or manually be directed to search one or more network data sources 135 for information about a target user.” The system taught by Deluca has the user annotate which teaches distribute a task.
Then see par 69: “In some embodiments of the algorithm 500, the normalized product reviews 423 may be annotated by the annotation module 115 in step 522. One or more of the normalized product reviews 423 may include one or more annotations 425 highlighting or depicting relevant portions of the product reviews most relevant to the target user. The annotation module 115 and/or prioritization module 117 may also generate a summary 421 for each product viewed by a user experiencing the vendor's virtual storefront, wherein the summary 421 may provide reasons, rationales and keywords or phrases associated with the target user. Each of the reasons, rationales and keywords may be identified and described to the user in a manner that assists the user with understanding the reasoning for one or more normalized product reviews 423 to be prioritized over some of the other product reviews which may have been rated as a lower priority or disregarded as being irrelevant to the target user for one reason or another.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the annotation and tagging of reviews taught by Parveen with the distributing-a-task teaching of Deluca, because Deluca teaches in par 018 that the annotations can be designed for the benefit of the target user, based on, as further taught in par 017, the user’s interests, purchasing history, etc. Because this would make reviews more relevant (the teaching above is for the annotations that would be performed in this way), one would be motivated to combine Deluca with Parveen so that a potential user who is reading the review content gets more relevant information. For these reasons, one would be motivated to modify Parveen with Deluca.
Per claims 2 and 12, which are similar in scope, Parveen and Deluca teach the limitations of claims 1 and 11, above. Parveen does not teach wherein the processor is further configured to update a rule for selecting the at least one of the consumer reviews which is relevant to the at least one predetermined category, based on the annotation obtained from the computing device associated with the third party.
Deluca teaches wherein the processor is further configured to update a rule for selecting the at least one of the consumer reviews which is relevant to the at least one predetermined category, based on the annotation obtained from the computing device associated with the third party in par 70: “In step 523 of the algorithm 500, the vendor management system 103 may display to the display device 110 of the client device 131 the product data 401 selected by the user, the normalized reviews 423 prioritized based on the target user's profile 121, preferences, habits and characteristics as well as any annotations 425 and/or summaries 421 describing the reasons and rationale for normalizing the product reviews 425 in the order presented. The algorithm may subsequently proceed to step 525 to determine if the user has completed the navigation of the vendor′ virtual storefront on behalf of the target user and/or has the user completed a transaction for purchasing one or more of the products or services viewed in step 523. If the navigation of the vendor's virtual storefront using the target user's profile 121 has not concluded and/or a final transaction (such as a purchase) has not been completed, the algorithm may proceed to step 519, wherein the user may continue to navigate the product and service offerings of the vendor's virtual storefront.”
See also par 65 “In step 505 however, the user may transmit a request to the profile module 107 instructing the profile module 107 to load a customized virtual storefront normalized to a target user's profile 121, a set of manually inputted target user characteristics or by specifying one or more third party profiles (such as social media) maintained by a network data source 135.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the annotation and tagging of reviews taught by Parveen with the distributing-a-task teaching of Deluca, because Deluca teaches in par 018 that the annotations can be designed for the benefit of the target user, based on, as further taught in par 017, the user’s interests, purchasing history, etc. Because this would make reviews more relevant (the teaching above is for the annotations that would be performed in this way), one would be motivated to combine Deluca with Parveen so that a potential user who is reading the review content gets more relevant information. For these reasons, one would be motivated to modify Parveen with Deluca.
Per claims 3 and 13, which are similar in scope, Parveen and Deluca teach the limitations of claims 1 and 11, above. Parveen further teaches wherein the processor is configured to generate the tag content associated with the selected consumer review based on at least one constraint stored in a tag configuration cache in par 0144: “The tag cloud can be made clickable in one of a variety of ways known in the art. For example, an image map can be created based on the random arrangement of features and font sizes. Thereafter, the image map can be laid over the tag cloud in such a manner that clicking on an area of the tag cloud results in clicking an area of the image map corresponding to the clicked area of the tag cloud.” This teaches the tag configuration cache because the tag cloud is a tag configuration and it is stored in the image map. The constraint is taught in par 0134, where the sizes of the tags are related to their scores.
Per claims 4 and 14, which are similar in scope, Parveen and Deluca teach the limitations of claims 3 and 13, above. Parveen further teaches wherein the processor is further configured to check if the tag content satisfies with the at least one constraint stored in the tag configuration cache, and generate the tag associated with the selected consumer review if the tag content satisfies with the at least one constraint in par 0135: “In some embodiments, the size of the font can instead be used to illustrate the number of reviews with that feature and the color of the font can be used to illustrate the feature score. For example, there can be features with similar font sizes (for example, lens and focus in FIG. 14). In such a case, the color of the words can be used to distinguish which feature has a higher feature score. A legend can be provided to aid a viewer in determining that, for example, the more green a font color is, the more higher the feature score, while the more red a font color is, the lower the feature score is.” If the features have certain numbers of reviews, then the color of the font can be used, which teaches that if the tag content satisfies the at least one constraint, the tag associated with the selected consumer review is generated.
Per claims 5 and 15, which are similar in scope, Parveen and Deluca teach the limitations of claims 3 and 13, above. Parveen further teaches wherein the processor is configured to generate the tag associated with the selected consumer review further based on search keywords input by a plurality of consumers in par 082: “For example, a user might enter an eCommerce site and type in a search term for a camera. The result could include the layout of FIG. 7 for multiple cameras at once. In such a manner, a user can easily compare different features among different cameras. For example, in FIG. 7, flash has a relatively low score of 37. So some users might view that negatively and decide to purchase a different camera with a higher score for flash. But a different set of users might not be interested in flash performance and might be intrigued by the relatively high body score of 86.” The users entering search terms teach the search keywords input by a plurality of consumers.
Per claims 8 and 18, which are similar in scope, Parveen and Deluca teach the limitations of claims 1 and 11, above. Parveen further teaches wherein the processor is further configured to display the tag associated with the selected consumer review along with information about the service provider included in a list of service providers in pars 062-063: “With reference to FIG. 4, a table 400 is presented that illustrates an exemplary case with multiple reviews and features. Each of columns 410, 420, and 430 represent different reviews. Each of rows 415, 425, 435, and 445 represents different features of the product being reviewed. The intersection of each row and column represents each feature in each review. At the intersection of each row and column is one or more polarities s.sub.i. There can be multiple polarities representing each mention of a particular feature in a particular review. For example, in the review presented above, there can be three different polarities, one for each sentence that mentions a lens.
The table presented in FIG. 4 is merely exemplary. In actual use, there will likely be many more than three reviews and more than four features. In addition, a similar table can exist for each product in a particular database. Some databases contain thousands or even millions of products (for example, the databases of large eCommerce providers).”
Per claims 9 and 19, which are similar in scope, Parveen and Deluca teach the limitations of claims 1 and 11, above. Parveen further teaches wherein the processor is further configured to monitor a user's behaviour for tags displayed on a computing device associated with the user and/or at least one consumer review previously made by the user, and determine which tag of a plurality of tags is to be displayed on the computing device associated with the user based on the monitored information in pars 062-063: “With reference to FIG. 4, a table 400 is presented that illustrates an exemplary case with multiple reviews and features. Each of columns 410, 420, and 430 represent different reviews. Each of rows 415, 425, 435, and 445 represents different features of the product being reviewed. The intersection of each row and column represents each feature in each review. At the intersection of each row and column is one or more polarities s.sub.i. There can be multiple polarities representing each mention of a particular feature in a particular review. For example, in the review presented above, there can be three different polarities, one for each sentence that mentions a lens.
The table presented in FIG. 4 is merely exemplary. In actual use, there will likely be many more than three reviews and more than four features. In addition, a similar table can exist for each product in a particular database. Some databases contain thousands or even millions of products (for example, the databases of large eCommerce providers).”
Per claims 10 and 20, Parveen and Deluca teach the limitations of claims 9 and 19, above. Parveen further teaches wherein the processor is further configured to determine a weight for each of the plurality of tags, based on the monitored information in par 0133: “In this example, tag cloud 1450 includes each of the K features of an exemplary camera that have a feature score over threshold T. As is typical in a tag cloud, tag cloud 1450 displays each of the features with a different font size to differentiate between the features. In this case, the features of tag cloud 1450 are distinguished by the feature score of each feature—the higher the feature score, the larger the font size of the feature.”
Claims 6-7 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Parveen et al., US PGPUB 20170068999 A1 ("Parveen") in view of Deluca et al., US PGPUB 20190180353 A1 ("Deluca"), further in view of Chatterjee et al., US PGPUB 20170011029 A1 (“Chatterjee”).
Per claims 6 and 16, which are similar in scope, Parveen and Deluca teach the limitations of claims 1 and 11, above. Parveen further teaches wherein the processor is further configured to determine that the selected consumer review is relevant to two or more categories in par 53: “Thereafter, for every feature in the feature set, the user-generated content (“UGC”) is analyzed to find mentions of each feature (block 304). Thereafter, each of the mentions can be analyzed to determine polarity of each mention (block 306).” The user-generated content teaches the consumer reviews. The features are the categories.
See also par 56: “For a feature f.sub.i in the feature set, the number of occurrences in review R.sub.j is n.sub.ij.”
See also par 060. See also par 050 (initial feature set), which, as it determines initial features, teaches predetermined categories (drawn from the specification of the product). See also par 052.
Parveen does not teach and extract two or more phrases each relevant to the two or more categories from the selected consumer review using a natural language processing model.
Chatterjee teaches tagging and scoring techniques for textual passages, especially those in social media posts. See abstract.
Chatterjee teaches and extract two or more phrases each relevant to the two or more categories from the selected consumer review using a natural language processing model in pars 0312-0332, which teach various NLP models, and in pars 0333-0334, which teach verbs and nouns that are parsed from the reviews and constitute the phrases parsed in par 0335.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the review annotation and tagging teaching of Parveen with the natural-language-processing phrase extraction teaching of Chatterjee, because Chatterjee teaches in par 09 that: “The combination of machine learning systems with data from human pooled language extraction techniques enables the present system to achieve exceptionally high accuracy of human sentiment measurement and textual categorization of raw text, blog posts, and social media streams. This information can then be aggregated to provide brand and product strength analysis.” Because Parveen is, in a sense, determining sentiment, one would be motivated to combine Chatterjee with Parveen to obtain the exceptionally high accuracy in determining sentiment that Chatterjee teaches. This would improve Parveen’s feature determination, and therefore one would be motivated to combine the references.
Per claims 7 and 17, which are similar in scope, Parveen, Deluca, and Chatterjee teach the limitations of claims 6 and 16, above. Parveen does not teach wherein the processor is further configured to generate two or more tag contents each associated with the two or more phrases.
Chatterjee teaches wherein the processor is further configured to generate two or more tag contents each associated with the two or more phrases in par 0336: “Adaptive rule learning approach will utilize a set of tags that are associated with each sentence and review. These tags are brand, category, polarity, sentiment bearing phrase, category keyword, and vertical. We need to design an efficient and effective User Interface to collect this data quickly and accurately. We will need Multiple Redundant Scoring for brand, category, and polarity. Phrases might vary as well.” See par 0335 for two phrases. See also pars 0337-0352, phrases taught.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the review annotation and tagging teaching of Parveen with the natural-language-processing phrase extraction teaching of Chatterjee, because Chatterjee teaches in par 09 that: “The combination of machine learning systems with data from human pooled language extraction techniques enables the present system to achieve exceptionally high accuracy of human sentiment measurement and textual categorization of raw text, blog posts, and social media streams. This information can then be aggregated to provide brand and product strength analysis.” Because Parveen is, in a sense, determining sentiment, one would be motivated to combine Chatterjee with Parveen to obtain the exceptionally high accuracy in determining sentiment that Chatterjee teaches. This would improve Parveen’s feature determination, and therefore one would be motivated to combine the references.
Therefore, claims 1-20 are rejected under 35 USC 103.
Prior Art Considered Relevant
The following is considered relevant to Applicant’s disclosure but is not relied upon in the above rejection:
Anolytics, The Most Commonly Used Text Annotations in Natural Language Processing, [online], available at: < https://www.anolytics.ai/blog/the-most-commonly-used-text-annotations-in-natural-language-processing/ > published on June 15, 2022.
Anolytics teaches sentiment annotation on page 3.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD W. CRANDALL whose telephone number is (313)446-6562. The examiner can normally be reached M - F, 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe can be reached at (571) 270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICHARD W. CRANDALL/ Primary Examiner, Art Unit 3619