DETAILED ACTION
This Office Action is in response to the remarks entered on 10/03/2025. Claims 2, 9, 16, and 21 are amended. Claims 2-21 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 2-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,403,557 in view of Deurloo (US PG Pub. 2013/0275527). Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application claims are a broader version of the claims of the Patent: both are directed to the use of a model to identify topics in a corpus of documents, computing how often each word contributes to a topic in order to generate a map that displays the contributions between the topics and the words in a hierarchical manner. Furthermore, the claims as amended recite the limitations “comparing the first map to a stored second map to identify a newly trending topic; and launching an early warning message system to issue a warning message indicating the identified newly trending topic”, which are not taught by the Patent. However, prior art Deurloo teaches these limitations, as can be seen, for example, at paragraphs [0048]-[0051] (see the rejection below for a more detailed explanation). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claims of the Patent with the above teachings of Deurloo in order to provide an early warning using a visual advantage for seeing trending topics.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 2-21 stand rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Step 1 analysis:
In the instant case, the claims are directed to a method, non-transitory computer-readable medium, and a system. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Step 2A analysis:
Because the claims are determined to be within one of the four categories (Step 1), it must be determined whether the claims are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea). In this case, the claims fall within the judicial exception of an abstract idea, specifically the abstract idea of Mental Processes: “concepts performed in the human mind (including an observation, evaluation, judgment, opinion)”.
Step 2A: Prong 1 analysis:
The claim(s) recite(s):
Claim 2:
“determining a corpus of documents and a model to apply the corpus” - this limitation recites determining a model to apply to a certain corpus of documents, which are observation, evaluation and judgment steps, being mental processes;
“iteratively applying the corpus to the model to identify topics in the corpus of documents, each topic associated with one or more words” - this limitation recites identifying topics in the corpus of documents, which are observation, evaluation and judgment steps, being mental processes. The use of the model is discussed next at Step 2A: Prong 2;
“determining, for each topic, contribution values for the one or more words to the topic, wherein each contribution value corresponds to one of the one or more words, and each contribution value indicates a frequency one of the one or more words contributes to the topic” - this limitation recites determining contribution values indicating a frequency with which each word contributes to a topic, which are observation, evaluation and judgment steps, being mental processes;
“generating a first map comprising the one or more words, the contribution values, and the topics, wherein each word of the one or more words is mapped to a particular topic and has a corresponding contribution value” - this limitation recites mapping words with topics according to their contribution values, which are observation, evaluation and judgment steps, being mental processes; and
“comparing the first map to a stored second map to identify a newly trending topic” - this limitation recites a comparison made to identify a new trending topic, which are observation of the maps, evaluation and judgment steps, being mental processes.
Step 2A: Prong 2 analysis:
This judicial exception is not integrated into a practical application because the claims recite only the following additional elements:
using a model – using a model to identify topics (which is the recited abstract idea at Prong 1 above) in the corpus of documents is recited at a high level of generality, therefore, it amounts to mere instructions to apply a judicial exception on a computer (see MPEP 2106.05(f));
“storing, in storage, the first map” – a step of storing data of the result of the abstract idea does not add a meaningful limitation to the process of mapping words with the topics according to their contribution values; therefore, this is considered an insignificant extra-solution activity per MPEP 2106.05(g); and
“launching an early warning message system to issue a warning message indicating the identified newly trending topic” - a final step of launching or displaying a message of the result of the trending topic is one of the examples that the courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception (see MPEP 2106.05(h), (vi) displaying the results of collection and analysis of data).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Step 2B analysis:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements explained above amount to mere instructions to apply an exception, insignificant extra-solution activities, and merely indicating a field of use or technological environment in which to apply a judicial exception.
Moreover, re-evaluation of any additional elements, or combination of elements, that were considered to be insignificant extra-solution activity is needed to determine if they are considered well-understood, routine and conventional limitations:
“storing, in storage, the first map” – This limitation is directed to storing or retrieving information in memory, which has been recognized by the courts (as per Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015)) as well-understood, routine, conventional activity (See MPEP 2106.05(d)(iv)).
The claims are not patent eligible.
Independent claims 9 and 16 are analogous claims, therefore the same rejection and rationale applies to them.
In addition, Claim 9 recites the additional elements analyzed under Step 2A: prong 2 and Step 2B:
Claim 9: “the computer-readable storage medium including instructions that when executed by a processor, cause the processor to” - this medium and processor are recited at a high level of generality, and it is important to note that a general purpose computer that applies a judicial exception, such as an abstract idea, by use of conventional computer functions does not qualify as a particular machine (see MPEP 2106.05(b)).
In addition, Claim 16 recites the additional elements analyzed under Step 2A: prong 2 and Step 2B:
Claim 16: “storage; processing circuitry configured to execute instructions, that when executed, cause the processing circuitry to”- this storage and processing circuitry are recited at a high level of generality, and it is important to note that a general purpose computer that applies a judicial exception, such as an abstract idea, by use of conventional computer functions does not qualify as a particular machine (see MPEP 2106.05(b)).
Dependent claims 3-8, 10-15, and 17-21, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The claims recite further embellishments of the judicial exception.
Claim 3: this claim recites further embellishment about the mapping of the words and topics according to their contribution in a table format; therefore, it is able to be performed mentally with pen and paper, and it qualifies as a mental process.
Claim 4: this claim recites further embellishment about a second mapping of the words and topics according to their contribution; therefore, it is able to be performed mentally with pen and paper, and it qualifies as a mental process. Further, it recites a final step of storing data of the result of the abstract idea, considered an insignificant extra-solution activity per MPEP 2106.05(g) under Prong 2; and well-understood, routine, conventional activity under Step 2B.
Claim 5: this claim recites summing the contribution value of each word for each topic, therefore, it is able to be performed mentally with a pen and paper, and therefore, it qualifies as a mental process.
Claim 6: this claim recites further embellishment about the second mapping of the words and topics according to their contribution in a table format, therefore, it is able to be performed mentally with a pen and paper, and therefore, it qualifies as a mental process.
Claim 7: this claim recites selecting a model from storage, which amounts to storing or retrieving information in memory, which has been recognized by the courts (as per Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015)) as well-understood, routine, conventional activity (see MPEP 2106.05(d)(iv)) under Step 2B.
Claim 8: this claim recites the training of the models being different, but this is recited at a high level of generality, which amounts to mere instructions to apply the judicial exception on a computer (see MPEP 2106.05(f)) under Step 2A, Prong 2.
Claim 9 is analogous to Claim 2, as explained above in the rejection of claim 2.
Claim 10 is analogous to Claim 3, therefore, same rationale and rejection applies.
Claim 11 is analogous to Claim 4, therefore, same rationale and rejection applies.
Claim 12 is analogous to Claim 5, therefore, same rationale and rejection applies.
Claim 13 is analogous to Claim 6, therefore, same rationale and rejection applies.
Claim 14 is analogous to Claim 7, therefore, same rationale and rejection applies.
Claim 15 is analogous to Claim 8, therefore, same rationale and rejection applies.
Claim 16 is analogous to Claim 2, as explained above in the rejection of claim 2.
Claim 17 is analogous to Claim 3, therefore, same rationale and rejection applies.
Claim 18 is analogous to Claim 4, therefore, same rationale and rejection applies.
Claim 19 is analogous to Claim 5, therefore, same rationale and rejection applies.
Claim 20 is analogous to Claim 6, therefore, same rationale and rejection applies.
Claim 21: this claim merely describes that the hierarchically organized components are words, and this does not amount to more than generally linking the use of a judicial exception to a particular technological environment or field of use, in this case, to the technology of a topic management platform.
Viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2-6, 9-13, 16-21 are rejected under 35 U.S.C. 103 as being unpatentable over Ziv et al (US 9,015,194- hereinafter Ziv) in view of Tapia et al (US 10,063,406- hereinafter Tapia), in view of Spencer (US Patent 5,915,249- hereinafter Spencer), and further in view of Deurloo (US PG Pub. 2013/0275527- hereinafter Deurloo).
Referring to Claim 2, Ziv teaches a method, comprising:
determining a corpus of documents and a model to apply the corpus (see Ziv at Fig. 1 and Column 4: 9-19: “a user, such as an analyst, defines the applicable collection of data items and an initial set of categories. (In some cases, the user may choose to begin with a single category spanning the entire collection). The user specifies a categorization task, such as a request to expand the definition of a certain category or to sub-divide a certain category into sub-categories. The data analysis system applies a data mining process to carry out the requested categorization task and returns suggested results”. Therefore, the return of suggested results requested by the user using an interface for collection of data items corresponds to the claimed corpus, and the data mining process corresponds to the model applied to the corpus to categorize it);
iteratively applying the corpus to the model to identify topics in the corpus of documents, each topic associated with one or more words (see Ziv at Column 4: 4-8: “a data analysis system processes a large collection of recorded customer calls. The system carries out an interactive and iterative process, which produces a hierarchical structure for categorizing the calls in the collection based on their root causes”. Furthermore, at Column 5: 28-62: “[s]ystem 20 classifies at least some of the recorded calls in database 28 into a hierarchical structure of categories and sub-categories, in order to provide information as to the reasons behind the calls”. Therefore, the hierarchical structure of categories and subcategories correspond to the claimed identification of topics);
determining, for each topic, contribution values for the one or more words to the topic, wherein each contribution value corresponds to one of the one or more words (see Ziv at Column 4: 4-8: “a data analysis system processes a large collection of recorded customer calls. The system carries out an interactive and iterative process, which produces a hierarchical structure for categorizing the calls in the collection based on their root causes”. Furthermore, at Column 5: 28-62: “[s]ystem 20 classifies at least some of the recorded calls in database 28 into a hierarchical structure of categories and sub-categories, in order to provide information as to the reasons behind the calls”. Therefore, the hierarchical structure of categories and subcategories based on root causes corresponds to the claimed contribution determination), and each contribution value indicates a frequency one of the one or more words contributes to the topic (see Ziv at Col. 7: lines 22-26: “[f]or example, a condition may state that a data item is categorized as a complaint if and only if one or more of the terms "complain," and "unhappy" appears in the data item. Other conditions may consider the occurrence frequency of a certain term in the data item”. Therefore, by considering the occurrence frequency, this corresponds to the claimed frequency with which a word contributes to the topic);
generating a first map comprising the one or more words, the contribution values, and the topics, wherein each word of the one or more words is mapped to a particular topic and has a corresponding contribution value (see Ziv at Figs. 2 and 3: “FIGS. 2 and 3 are diagrams that schematically illustrate hierarchical category structures, in accordance with embodiments of the present invention”. Therefore, these diagrams are interpreted as two maps, as they have the words that contribute to the topic discussed); and
storing, in storage, the first map (see Ziv at Col. 7: lines 44-47: “Some categorization conditions use metadata related to the data item, either instead of or in addition to considering the content of the data item. The metadata may be stored together with the data item or provided separately”. Therefore, the categorization metadata includes the diagram which is interpreted as the map).
Even though Ziv implicitly teaches the use of a model (Ziv teaches a data mining process for classification), Tapia explicitly teaches it, as can be seen at the Abstract and Column 12: 19-28: “analytic applications 104 may generate troubleshooting solutions using one or more of the trained machine learning models 226(1)-226(N). The troubleshooting solutions may be generated based on the performance data from one or more of the data sources 110-118 provided by the data management platform 102. For example, the trained machine learning models may be used to automatically analyze CDRs to identify root causes of quality of service issues with the wireless carrier network”. Therefore, since Tapia applies a trained machine learning model to analyze data, this corresponds to the claimed machine learning model used to process a corpus.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Ziv with the above teachings of Tapia by processing a corpus using program code to identify a topic within the corpus, the topic comprised of a plurality of hierarchically organized components, as taught by Ziv, and using a machine learning model, as taught by Tapia. The modification would have been obvious because one of ordinary skill in the art would be motivated to analyze data and patterns using a machine learning model for identifying a root cause for an issue that is the subject of a subscriber trouble ticket (as suggested by Tapia at Col. 12: 28-34).
Even though Ziv implicitly teaches determining contribution values for the one or more words to the topic, Spencer explicitly teaches it, as can be seen at the Abstract: “a plurality of documents ordered by a contribution that the term makes to the document score of the document. The contribution is a scalar measure of the influence of the term in the computed document score. The contribution reflects both the within document frequency and the between document frequency of the term”, and at Col. 11: lines 21-27: “Alternatively, all of the contribution values for a term may be determined first, ranked, and then the n documents with the highest contribution values selected. The preprocess method then creates the lookup table 214 for the term by traversing the blocks of the term row in inverted index 200 and storing the (document, pointer) pair 213 information for the appropriate blocks”. Therefore, this determination of the contribution value of each term in a document is interpreted as the claimed contribution value of each word.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv and Tapia with the above teachings of Spencer by processing a corpus using machine learning model program code to identify a topic within the corpus, as taught by Ziv and Tapia, and determining a contribution value of each word in the corpus, as taught by Spencer. The modification would have been obvious because one of ordinary skill in the art would be motivated to determine a scalar value of the influence of a term in a corpus of documents (as suggested by Spencer at the Abstract).
However, the combination of Ziv, Tapia and Spencer fails to teach:
comparing the first map to a stored second map to identify a newly trending topic; and
launching an early warning message system to issue a warning message indicating the identified newly trending topic.
Deurloo teaches, in an analogous system,
comparing the first map to a stored second map to identify a newly trending topic (see Deurloo at [0048]: “Clustering can also be based on tweet list comparison. As a first clustering algorithm the system may adopt a very simple but efficient clustering algorithm. For each topic a, at time t, the system keeps a list la of the last 100 tweets counting back from time t. The system's similarity metric for topic a and topic b is defined as the number of times that both terms a and b appear in the lists l.sub.a and l.sub.b. If the similarity metric is above the threshold of 0.15, then the two topics are clustered”. Therefore, the comparison of a new tweet against a list of previous tweets (interpreted as a stored second map) is analogous to the claim language. Furthermore, see [0050]: “FIG. 6B shows the squarified treemap for the clustered topics. In some other embodiments, treemap can have shapes other than squares. The clusters in the FIG. 6B shows that `Azcapotzalco` is related to the earthquake in Mexico” and see [0051]: “Capture of the trending data by the system could, e.g., be based on the size and rate of growth of a cluster of words/topics. The dynamic squarified treemap can then serve as an early warning system in which topics that can become a trend can be visualized”. Therefore, the comparison of new tweet with a list of previous tweets for developing a treemap to recognize and alert of trends by providing a visualization is further interpreted as the identification of a newly trending topic); and
launching an early warning message system to issue a warning message indicating the identified newly trending topic (see Deurloo at [0050]: “The treemap can also be made dynamic by updating it in an animated manner periodically. This has the visual advantage of seeing the rectangles grow/shrink and change its color”. Further at [0051]: “Capture of the trending data by the system could, e.g., be based on the size and rate of growth of a cluster of words/topics. The dynamic squarified treemap can then serve as an early warning system in which topics that can become a trend can be visualized”. Therefore, the comparison of tweet with a list of previous tweets for developing a treemap to recognize and alert of trends by providing a visualization is further interpreted as the early warning message).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv, Tapia and Spencer with the above teachings of Deurloo by processing a corpus using machine learning model program code to identify a topic within the corpus and determine a contribution value of each word in the corpus, as taught by Ziv, Tapia and Spencer, and identifying trending topics based on a comparison of the current identification of topics in the corpus against a stored list of topic identifications, as taught by Deurloo. The modification would have been obvious because one of ordinary skill in the art would be motivated to provide an early warning using a visual advantage for seeing trending topics (as suggested by Deurloo at [0050]-[0051]).
Referring to Claim 3, the combination of Ziv, Tapia, Spencer and Deurloo teaches the method of claim 2, comprising displaying the first map in a table format illustrating the one or more words and their contribution values to each topic (see Spencer at Col. 11: lines 21-27: “Alternatively, all of the contribution values for a term may be determined first, ranked, and then the n documents with the highest contribution values selected. The preprocess method then creates the lookup table 214 for the term by traversing the blocks of the term row in inverted index 200 and storing the (document, pointer) pair 213 information for the appropriate blocks”. Therefore, this created lookup table is interpreted as the claimed map in table format).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv and Tapia with the above teachings of Spencer by processing a corpus using machine learning model program code to identify a topic within the corpus, as taught by Ziv and Tapia, and determining a contribution value of each word in the corpus, as taught by Spencer. The modification would have been obvious because one of ordinary skill in the art would be motivated to determine a scalar value of the influence of a term in a corpus of documents (as suggested by Spencer at the Abstract).
Referring to Claim 4, the combination of Ziv, Tapia, Spencer and Deurloo teaches the method of claim 2, comprising: generating a second map comprising, for each of the documents of the corpus, a relative distribution of the one or more words per topic per document; and storing, in the storage, the second map (see Ziv at Figs. 2 and 3: “FIGS. 2 and 3 are diagrams that schematically illustrate hierarchical category structures, in accordance with embodiments of the present invention”. Therefore, these diagrams are interpreted as two maps, or a second map. Furthermore, at Col. 7: lines 44-47: “Some categorization conditions use metadata related to the data item, either instead of or in addition to considering the content of the data item. The metadata may be stored together with the data item or provided separately”. Therefore, the categorization metadata includes the diagram which is interpreted as the map being stored).
Referring to Claim 5, the combination of Ziv, Tapia, Spencer and Deurloo teaches the method of claim 4, comprising determining the relative distribution by summing the contribution value of each word of the document for each topic (see Spencer at Col. 11: lines 21-27: “Alternatively, all of the contribution values for a term may be determined first, ranked, and then the n documents with the highest contribution values selected. The preprocess method then creates the lookup table 214 for the term by traversing the blocks of the term row in inverted index 200 and storing the (document, pointer) pair 213 information for the appropriate blocks”. Therefore, since the contribution values are ranked for each term, this is interpreted as summing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv and Tapia with the above teachings of Spencer by processing a corpus using machine learning model program code to identify a topic within the corpus, as taught by Ziv and Tapia, and determining a contribution value of each word in the corpus, as taught by Spencer. The modification would have been obvious because one of ordinary skill in the art would be motivated to determine a scalar value of the influence of a term in a corpus of documents (as suggested by Spencer at the Abstract).
Referring to Claim 6, the combination of Ziv, Tapia, Spencer and Deurloo teaches the method of claim 4, comprising displaying the second map in a table format (see Ziv at Figs. 2 and 3, and see Spencer at Col. 11: lines 21-27: “Alternatively, all of the contribution values for a term may be determined first, ranked, and then the n documents with the highest contribution values selected. The preprocess method then creates the lookup table 214 for the term by traversing the blocks of the term row in inverted index 200 and storing the (document, pointer) pair 213 information for the appropriate blocks”. Therefore, this created lookup table is interpreted as the claimed map in table format).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv and Tapia with the above teachings of Spencer by processing a corpus using machine learning model program code to identify a topic within the corpus, as taught by Ziv and Tapia, and determining a contribution value of each word in the corpus, as taught by Spencer. The modification would have been obvious because one of ordinary skill in the art would be motivated to determine a scalar value of the influence of a term in a corpus of documents (as suggested by Spencer at [Abstract]).
Referring to independent Claim 9, it is rejected on the same basis as independent claim 2 since they are analogous claims.
Referring to dependent Claim 10, it is rejected on the same basis as dependent claim 3 since they are analogous claims.
Referring to dependent Claim 11, it is rejected on the same basis as dependent claim 4 since they are analogous claims.
Referring to dependent Claim 12, it is rejected on the same basis as dependent claim 5 since they are analogous claims.
Referring to dependent Claim 13, it is rejected on the same basis as dependent claim 6 since they are analogous claims.
Referring to Claim 16, Ziv teaches a system comprising:
storage (see Ziv at Col. 5: 54-59: “processor 36 comprises a general-purpose computer, which is programmed in software to carry out the functions described herein. The software may be downloaded to the computer in electronic form, over a network, for example, or it may alternatively be supplied to the computer on tangible media, such as CD-ROM”);
processing circuitry configured to execute instructions, that when executed, cause the processing circuitry to (see Ziv at Col. 5: 54-59: “processor 36 comprises a general-purpose computer, which is programmed in software to carry out the functions described herein. The software may be downloaded to the computer in electronic form, over a network, for example, or it may alternatively be supplied to the computer on tangible media, such as CD-ROM”):
determine a corpus of documents and a model to apply the corpus (see Ziv at Fig. 1 and Column 4: 9-19: “a user, such as an analyst, defines the applicable collection of data items and an initial set of categories. (In some cases, the user may choose to begin with a single category spanning the entire collection). The user specifies a categorization task, such as a request to expand the definition of a certain category or to sub-divide a certain category into sub-categories. The data analysis system applies a data mining process to carry out the requested categorization task and returns suggested results”. Therefore, the return of suggested results requested by the user using an interface for collection of data items correspond to the claimed corpus and the data mining process corresponds to the model to apply to the corpus to categorize it);
iteratively apply the corpus to the model to identify topics in the corpus of documents, each topic comprising a plurality of hierarchically organized components (see Ziv at Column 4: 4-8: “a data analysis system processes a large collection of recorded customer calls. The system carries out an interactive and iterative process, which produces a hierarchical structure for categorizing the calls in the collection based on their root causes”. Furthermore, at Column 5: 28-62: “[s]ystem 20 classifies at least some of the recorded calls in database 28 into a hierarchical structure of categories and sub-categories, in order to provide information as to the reasons behind the calls”. Therefore, the hierarchical structure of categories and subcategories correspond to the claimed identification of topics);
determine, for each topic, contribution values for the plurality of hierarchically organized components to the topic, wherein each contribution value corresponds to one of the hierarchically organized components (see Ziv at Column 4: 4-8: “a data analysis system processes a large collection of recorded customer calls. The system carries out an interactive and iterative process, which produces a hierarchical structure for categorizing the calls in the collection based on their root causes”. Furthermore, at Column 5: 28-62: “[s]ystem 20 classifies at least some of the recorded calls in database 28 into a hierarchical structure of categories and sub-categories, in order to provide information as to the reasons behind the calls”. Therefore, the hierarchical structure of categories and subcategories based on root causes correspond to the claimed contribution determination), and each contribution value indicates a frequency one of the plurality of hierarchically organized components contributes to the topic (see Ziv at Col. 7: lines 22-26: “[f]or example, a condition may state that a data item is categorized as a complaint if and only if one or more of the terms "complain," and "unhappy" appears in the data item. Other conditions may consider the occurrence frequency of a certain term in the data item”. Therefore, by considering the occurrence frequency, this corresponds to the claimed frequency that the component contributes to the topic);
determine a first map comprising the plurality of hierarchically organized components, the contribution values, and the topics, wherein each hierarchically organized component is mapped to a particular topic and has a corresponding contribution value (see Ziv at Figs. 2 and 3: “FIGS. 2 and 3 are diagrams that schematically illustrate hierarchical category structures, in accordance with embodiments of the present invention”. Therefore, these diagrams are interpreted as two maps, as they show the words that contribute to the topics discussed); and
store, in the storage, the first map (see Ziv at Col. 7: lines 44-47: “Some categorization conditions use metadata related to the data item, either instead of or in addition to considering the content of the data item. The metadata may be stored together with the data item or provided separately”. Therefore, the categorization metadata includes the diagram which is interpreted as the map).
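For illustration only, a first map of the kind recited in claim 16 (each component mapped to a particular topic with a corresponding frequency-based contribution value) could be sketched as follows (the function and variable names and the frequency heuristic are hypothetical assumptions for illustration and are not drawn from the cited references):

```python
def build_first_map(topic_word_counts):
    # Hypothetical sketch: map each word to the topic in which it has the
    # highest frequency-based contribution value.
    # topic_word_counts: dict mapping topic -> {word: occurrence count}
    first_map = {}
    for topic, counts in topic_word_counts.items():
        total = sum(counts.values()) or 1
        for word, n in counts.items():
            contribution = n / total  # relative frequency within the topic
            if word not in first_map or contribution > first_map[word][1]:
                first_map[word] = (topic, contribution)
    return first_map
```

The resulting dictionary pairs each word with a (topic, contribution value) tuple, which could then be persisted to storage as the claimed first map.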
Even though Ziv implicitly teaches the use of a model, as Ziv teaches a data mining process for classification, Tapia teaches it explicitly, as can be seen at the Abstract and Column 12: 19-28: “analytic applications 104 may generate troubleshooting solutions using one or more of the trained machine learning models 226(1)-226(N). The troubleshooting solutions may be generated based on the performance data from one or more of the data sources 110-118 provided by the data management platform 102. For example, the trained machine learning models may be used to automatically analyze CDRs to identify root causes of quality of service issues with the wireless carrier network”. Therefore, since Tapia applies a trained machine learning model to analyze data, this corresponds to the claimed machine learning model to process a corpus.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Ziv with the above teachings of Tapia by processing a corpus using program code to identify a topic within the corpus, the topic comprised of a plurality of hierarchically organized components, as taught by Ziv, and using a machine learning model, as taught by Tapia. The modification would have been obvious because one of ordinary skill in the art would be motivated to analyze data and patterns using a machine learning model for identifying a root cause for an issue that is the subject of a subscriber trouble ticket (as suggested by Tapia at Col. 12: 28-34).
Even though Ziv implicitly teaches determining contribution values for the one or more words to the topic, Spencer teaches it explicitly, as can be seen at the Abstract: “a plurality of documents ordered by a contribution that the term makes to the document score of the document. The contribution is a scalar measure of the influence of the term in the computed document score. The contribution reflects both the within document frequency and the between document frequency of the term”, and Col. 11: lines 21-27: “Alternatively, all of the contribution values for a term may be determined first, ranked, and then the n documents with the highest contribution values selected. The preprocess method then creates the lookup table 214 for the term by traversing the blocks of the term row in inverted index 200 and storing the (document, pointer) pair 213 information for the appropriate blocks”. Therefore, this determination of the contribution value of each term in a document is interpreted as the contribution value of each word as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv and Tapia with the above teachings of Spencer by processing a corpus using machine learning model program code to identify a topic within the corpus, as taught by Ziv and Tapia, and determining a contribution value of each word in the corpus, as taught by Spencer. The modification would have been obvious because one of ordinary skill in the art would be motivated to determine a scalar value of the influence of a term in a corpus of documents (as suggested by Spencer at [Abstract]).
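For illustration only, Spencer's Abstract describes the contribution as a scalar reflecting both the within-document frequency and the between-document frequency of a term; a generic tf-idf-style score is one common way to combine those two quantities. This sketch is an assumption for illustration and does not reproduce Spencer's actual scoring formula:

```python
import math

def contribution_value(term, doc, corpus):
    # Hypothetical tf-idf-style sketch: within-document frequency weighted
    # by an inverse between-document frequency.
    # doc: list of words in one document; corpus: list of such documents
    tf = doc.count(term) / max(len(doc), 1)       # within-document frequency
    df = sum(1 for d in corpus if term in d)       # between-document frequency
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf
```

Under this heuristic, a term concentrated in few documents receives a larger contribution than a term spread across the whole corpus.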
However, the combination of Ziv, Tapia and Spencer fails to teach:
compare the first map to a stored second map to identify a newly trending topic; and
launch an early warning message system to issue a warning message indicating the identified newly trending topic.
Deurloo teaches, in an analogous system,
comparing the first map to a stored second map to identify a newly trending topic (see Deurloo at [0048]: “Clustering can also be based on tweet list comparison. As a first clustering algorithm the system may adopt a very simple but efficient clustering algorithm. For each topic a, at time t, the system keeps a list la of the last 100 tweets counting back from time t. The system's similarity metric for topic a and topic b is defined as the number of times that both terms a and b appear in the lists l.sub.a and l.sub.b. If the similarity metric is above the threshold of 0.15, then the two topics are clustered”. Therefore, the comparison of a new tweet against a list of previous tweets (interpreted as a stored second map) is analogous to the claim language. Furthermore, see [0050]: “FIG. 6B shows the squarified treemap for the clustered topics. In some other embodiments, treemap can have shapes other than squares. The clusters in the FIG. 6B shows that `Azcapotzalco` is related to the earthquake in Mexico” and see [0051]: “Capture of the trending data by the system could, e.g., be based on the size and rate of growth of a cluster of words/topics. The dynamic squarified treemap can then serve as an early warning system in which topics that can become a trend can be visualized”. Therefore, the comparison of new tweet with a list of previous tweets for developing a treemap to recognize and alert of trends by providing a visualization is further interpreted as the identification of a newly trending topic); and
launching an early warning message system to issue a warning message indicating the identified newly trending topic (see Deurloo at [0050]: “The treemap can also be made dynamic by updating it in an animated manner periodically. This has the visual advantage of seeing the rectangles grow/shrink and change its color”. Further at [0051]: “Capture of the trending data by the system could, e.g., be based on the size and rate of growth of a cluster of words/topics. The dynamic squarified treemap can then serve as an early warning system in which topics that can become a trend can be visualized”. Therefore, the visualized early warning system is further interpreted as the early warning message).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv, Tapia and Spencer with the above teachings of Deurloo by processing a corpus using machine learning model program code to identify a topic within the corpus and determine a contribution value of each word in the corpus, as taught by Ziv, Tapia and Spencer, and identifying trending topics based on a comparison of the current identification of topics in the corpus against a stored list of topic identifications, as taught by Deurloo. The modification would have been obvious because one of ordinary skill in the art would be motivated to provide an early warning using the visual advantage of seeing trending topics (as suggested by Deurloo at [0050-0051]).
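For illustration only, the Deurloo clustering condition quoted above (two topics cluster when a similarity metric computed over their recent-tweet lists exceeds a 0.15 threshold) could be sketched as follows. The normalization of the co-occurrence count into a fraction is an assumption for illustration; Deurloo's text recites only the count-based metric and the threshold:

```python
def similarity(list_a, list_b, term_a, term_b):
    # Hypothetical sketch of the Deurloo-style metric: fraction of tweets
    # in the two recent-tweet lists that contain both terms. Each tweet is
    # modeled as a set of words.
    tweets = list_a + list_b
    both = sum(1 for tweet in tweets if term_a in tweet and term_b in tweet)
    return both / max(len(tweets), 1)

def should_cluster(list_a, list_b, term_a, term_b, threshold=0.15):
    # Cluster the two topics when the similarity exceeds the threshold
    # (0.15 per Deurloo at [0048]).
    return similarity(list_a, list_b, term_a, term_b) > threshold
```

In Deurloo, the size and growth rate of such clusters then drive the treemap visualization that serves as the early warning system.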
Referring to Claim 17, the combination of Ziv, Tapia, Spencer and Deurloo teaches the system of claim 16, comprising the processing circuitry configured to display the first map in a table format illustrating the plurality of hierarchically organized components and their contribution values to each topic (see Spencer at Col. 11: lines 21-27: “Alternatively, all of the contribution values for a term may be determined first, ranked, and then the n documents with the highest contribution values selected. The preprocess method then creates the lookup table 214 for the term by traversing the blocks of the term row in inverted index 200 and storing the (document, pointer) pair 213 information for the appropriate blocks”. Therefore, this created lookup table is interpreted as the claimed map in table format).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv and Tapia with the above teachings of Spencer by processing a corpus using machine learning model program code to identify a topic within the corpus, as taught by Ziv and Tapia, and determining a contribution value of each word in the corpus, as taught by Spencer. The modification would have been obvious because one of ordinary skill in the art would be motivated to determine a scalar value of the influence of a term in a corpus of documents (as suggested by Spencer at [Abstract]).
Referring to Claim 18, the combination of Ziv, Tapia, Spencer and Deurloo teaches the system of claim 16, comprising the processing circuitry configured to: generate a second map comprising, for each of the documents of the corpus, a relative distribution of the plurality of hierarchically organized components per topic per document; and store, in the storage, the second map (see Ziv at Figs. 2 and 3: “FIGS. 2 and 3 are diagrams that schematically illustrate hierarchical category structures, in accordance with embodiments of the present invention”. Therefore, these diagrams are interpreted as two maps, or a second map. Furthermore, at Col. 7: lines 44-47: “Some categorization conditions use metadata related to the data item, either instead of or in addition to considering the content of the data item. The metadata may be stored together with the data item or provided separately”. Therefore, the categorization metadata includes the diagram which is interpreted as the map being stored).
Referring to Claim 19, the combination of Ziv, Tapia, Spencer and Deurloo teaches the system of claim 18, comprising the processing circuitry configured to determine the relative distribution by summing the contribution value of each hierarchically organized component of the document for each topic (see Spencer at Col. 11: lines 21-27: “Alternatively, all of the contribution values for a term may be determined first, ranked, and then the n documents with the highest contribution values selected. The preprocess method then creates the lookup table 214 for the term by traversing the blocks of the term row in inverted index 200 and storing the (document, pointer) pair 213 information for the appropriate blocks”. Therefore, since the contribution values are ranked for each term, this is interpreted as summing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv and Tapia with the above teachings of Spencer by processing a corpus using machine learning model program code to identify a topic within the corpus, as taught by Ziv and Tapia, and determining a contribution value of each word in the corpus, as taught by Spencer. The modification would have been obvious because one of ordinary skill in the art would be motivated to determine a scalar value of the influence of a term in a corpus of documents (as suggested by Spencer at [Abstract]).
Referring to Claim 20, the combination of Ziv, Tapia, Spencer and Deurloo teaches the system of claim 18, comprising the processing circuitry configured to display the second map in a table format (see Ziv at Figs. 2 and 3, and see Spencer at Col. 11: lines 21-27: “Alternatively, all of the contribution values for a term may be determined first, ranked, and then the n documents with the highest contribution values selected. The preprocess method then creates the lookup table 214 for the term by traversing the blocks of the term row in inverted index 200 and storing the (document, pointer) pair 213 information for the appropriate blocks”. Therefore, this created lookup table is interpreted as the claimed map in table format).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv and Tapia with the above teachings of Spencer by processing a corpus using machine learning model program code to identify a topic within the corpus, as taught by Ziv and Tapia, and determining a contribution value of each word in the corpus, as taught by Spencer. The modification would have been obvious because one of ordinary skill in the art would be motivated to determine a scalar value of the influence of a term in a corpus of documents (as suggested by Spencer at [Abstract]).
Referring to Claim 21, the combination of Ziv, Tapia, Spencer and Deurloo teaches the system of claim 16, wherein the plurality of hierarchically organized components comprises words (see Ziv at Fig. 2 and 3, and Column 7: 11-34: “Processor 36 may use different types of Boolean and "soft" conditions, which address the content of the data items in various ways. Some conditions regard the data item as a "bag of words" (BOW), i.e., as a collection of words without specific structure. Other conditions may consider the grammatical structure of the data item, such as using various Natural Language Processing (NLP) methods known in the art. For example, a categorization condition may be true when one or more words, phrases or expressions (collectively referred to as "terms" or "textual terms"), which are indicative of the corresponding category, appear in the data item. For example, a condition may state that a data item is categorized as a complaint if and only if one or more of the terms "complain," and "unhappy" appears in the data item”).
Claims 7-8, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Ziv et al (US 9,015,194- hereinafter Ziv) in view of Tapia et al (US 10,063,406- hereinafter Tapia), in view of Spencer (US Patent 5,915,249- hereinafter Spencer), in view of Deurloo (US PG Pub. 2013/0275527- hereinafter Deurloo), and further in view of Eder et al (US Pub. No. 2015/0235143- hereinafter Eder).
Referring to Claim 7, the combination of Ziv, Tapia, Spencer and Deurloo teaches the method of claim 2, however, fails to teach wherein the model is selected from a plurality of models stored in the storage, and the plurality of models are trained using a plurality of different corpora.
Eder teaches, in an analogous system, wherein the model is selected from a plurality of models stored in the storage, and the plurality of models are trained using a plurality of different corpora (see Eder at [0284]: “A software block 308 then uses "bootstrapping" where different training data sets are created by re-sampling with replacement from the original training set so data records may occur more than once”, and [0286]- “[a]fter the causal predictive model bots complete their training for each model, the software in block 309 uses a model selection algorithm to identify the model that best fits the data. For the system of the present embodiment, a cross validation algorithm (e.g., the tenfold cross validation algorithm) is used for model selection”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ziv, Tapia, Spencer and Deurloo with the above teachings of Eder by processing a corpus using program code of a plurality of machine learning models to identify a topic within the corpus, as taught by Ziv, Tapia, Spencer and Deurloo, and selecting a machine learning model from the plurality of models, as taught by Eder. The modification would have been obvious because one of ordinary skill in the art would be motivated to use a model selection algorithm to identify the model that best fits the data (as suggested by Eder at [0286]).
Referring to Claim 8, the combination of Ziv, Tapia, Spencer, Deurloo and Eder teaches the method of claim 7, wherein the plurality of different corpora comprise corpora captured from different time periods, different sources, or a combination thereof (see Tapia at Column 8: 54-58; “aggregate data from multiple data sources for a particular time period into an aggregated data file of data sets according to one or more grouping parameters. The grouping parameters may include specific time periods (e.g., hourly, daily, etc.),”. Moreover, Tapia at Column 19: 43-47 “[i]n various embodiments, the network cell condition information for a geolocation may include the amount of network bandwidth available at different times in a daily cycle, the signal spectrums of the network cell that are utilized at different times” and lines 60-67: “For example, data usage by some of the subscribers who engaged in excess data usage negatively affected network bandwidth availability at a network cell because the usage occurred at peak times. However, data usage by other subscribers with excess usage may have occurred at non-peak times and therefore did not negatively affect network bandwidth availability at the network cell”. Therefore, the training data contains data from different times of the day and is aggregated when needed for a particular time period. It would have been obvious to include data from different times of the day in the different corpora for the models trained by Eder, as this will bring more information for the models such as peak and non-peak times relevant data events of issues).
Referring to dependent Claim 14, it is rejected on the same basis as dependent claim 7 since they are analogous claims.
Referring to dependent Claim 15, it is rejected on the same basis as dependent claim 8 since they are analogous claims.
Response to Arguments
Applicant's arguments filed 10/03/2025 have been fully considered.
In reference to Applicant’s arguments:
- Claim objection.
Examiner’s response:
Objection to claim 21 is withdrawn in view of amendment.
In reference to Applicant’s arguments:
- Double patenting rejections.
Examiner’s response:
The nonstatutory double patenting rejections are maintained under the obviousness analysis, as explained above at the beginning of this Office action.
In reference to Applicant’s arguments:
- Claim rejections under 35 USC 101.
Examiner’s response:
Applicant asserts that Claim 2 is patent eligible under 35 USC 101; however, Examiner respectfully disagrees. Applicant asserts that the claim does not recite any concept performed in the human mind such as observations, evaluation, judgment and opinion, and that instead, claim 2 recites a method comprising “determining a corpus of documents and a model to apply the corpus,” “iteratively applying the corpus to the model to identify topics in the corpus of documents, each topic associated with one or more words,” “determining, for each topic, contribution values for the one or more words to the topic, wherein each contribution value corresponds to one of the one or more words, and each contribution value indicates a frequency one of the one or more words contributes to the topic, generating a first map comprising the one or more words, the contribution values, and the topics, wherein each word of the one or more words is mapped to a particular topic and has a corresponding contribution value,” “storing, in storage, the first map, comparing the first map to a stored second map to identify a newly trending topic,” and “launching an early warning message system to issue a warning message indicating the identified newly trending topic.” Applicant further asserts these are specific steps geared toward using a machine learning model to identify collaborative topics and tracking; however, Examiner respectfully disagrees.
Examiner carefully reconsidered the claim limitations in view of Applicant’s remarks, and still concludes that the limitations amount to a judicial exception without significantly more. The use of a machine learning model is still recited at a high level of generality and still is used as mere instructions to apply the abstract idea on a generic computer. The step of determining a corpus of documents and a model is an evaluation step (a mental process); identifying topics in the corpus comprises observation and evaluation steps (mental processes); determining contribution values for the words comprises observation and evaluation steps (mental processes); mapping words to topics based on a contribution value comprises observation and evaluation steps (mental processes); and comparing a map to another map comprises observation and evaluation steps (mental processes). The only additional elements found in the claim are the use of the model, which again is used to implement these mental processes on a generic computer; storing a map in a computer, which is found to be insignificant extra-solution activity per MPEP 2106.05(g) and further well-understood, routine, conventional activity per MPEP 2106.05(d)(iv); and launching a warning message, which is one of the examples that the courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception per MPEP 2106.05(h)(vi). Therefore, these additional elements do not provide significantly more than the judicial exception.
Applicant asserts that paragraph [0003] of the application identifies a technical problem being addressed by the amended claims regarding manual searching generally providing search options that fail to identify emerging topic trends, and that the amended claims resolve this issue or solve this problem by “determining, for each topic, contribution values for the one or more words to the topic, wherein each contribution value corresponds to one of the one or more words, and each contribution value indicates a frequency one of the one or more words contributes to the topic, generating a first map comprising the one or more words, the contribution values, and the topics, wherein each word of the one or more words is mapped to a particular topic and has a corresponding contribution value, comparing the first map to a stored second map to identify a newly trending topic,” and “launching an early warning message system to issue a warning message indicating the identified newly trending topic.” Examiner would like to point to MPEP 2106.05(a), which recites: “It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements”. In view of this statement in the MPEP, Examiner reconsidered amended independent claim 2 and still did not recognize an additional element that reflects such an alleged improvement in identifying emerging topic trends.
The only recognized additional elements are the use of the model, which, as previously stated, is merely used to implement these mental processes on a computer; storing a map in a computer, which is found to be insignificant extra-solution activity per MPEP 2106.05(g) and further well-understood, routine, conventional activity per MPEP 2106.05(d)(iv); and launching a warning message, which is one of the examples that the courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception per MPEP 2106.05(h)(vi). Therefore, these additional elements do not provide significantly more than the judicial exception.
For these reasons above, rejections are still maintained.
In reference to Applicant’s arguments:
- Claim rejections under 35 USC 103.
Examiner’s response:
Applicant submits that claim 2 is not taught, suggested, or otherwise rendered obvious over Ziv in view of Tapia and further in view of Spencer; however, Examiner respectfully disagrees. Applicant asserts that the words described in amended claim 2 are words that are featured in the documents, and then the topics are the broad topics to which those words relate, and further asserts that FIG. 2 and FIG. 3 of Ziv provide a map of certain root cause categories, not the specific words found in the calls. Examiner respectfully disagrees. These figures show a map of category structures as a result of the words being analyzed according to their contribution to the topic. For instance, it can be seen at Column 6: 25-28: “an exemplary tree that categorizes the different complaint-related calls in database 28 in terms of their root causes”, and further at Column 7: 11-34: “Processor 36 may use different types of Boolean and "soft" conditions, which address the content of the data items in various ways. Some conditions regard the data item as a "bag of words" (BOW), i.e., as a collection of words without specific structure. Other conditions may consider the grammatical structure of the data item, such as using various Natural Language Processing (NLP) methods known in the art. For example, a categorization condition may be true when one or more words, phrases or expressions (collectively referred to as "terms" or "textual terms"), which are indicative of the corresponding category, appear in the data item. For example, a condition may state that a data item is categorized as a complaint if and only if one or more of the terms "complain," and "unhappy" appears in the data item. Other conditions may consider the occurrence frequency of a certain term in the data item. Yet another type of condition may be true only when a certain term does not appear in the data item. Some conditions may define relationships between terms in the data item.
For example, a certain condition may be true if the word "credit" appears in the data item, but not as part of the phrase "credit card." Another example is a proximity condition, which is true only when two terms are found in proximity to one another in the data item”. Therefore, based on Ziv’s teachings, it is interpreted that the maps generated are based on these words or terms appearing in the data item to be categorized for the topic. Regarding arguments about the newly added limitations, these arguments have been fully considered but are moot in view of new grounds of rejection.
For the reasons set forth above, the rejections are maintained.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUIS A SITIRICHE whose telephone number is (571)270-1316. The examiner can normally be reached M-F 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LUIS A SITIRICHE/Primary Examiner, Art Unit 2126