Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-11, 13-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Tzur et al. (US Patent 12,002,058 B2) in view of Farman et al. (US Patent 12,579,182 B2).
Regarding Claims 1, 13 and 20, Tzur teaches an apparatus comprising one or more processors and one or more storage devices storing instructions that are operable, when executed by the one or more processors (see Fig.7 (702-1) and Col.9, Line 31-40), to cause the one or more processors to:
extract a first feature set from a plurality of service message data objects associated with an application framework (see Fig.2 (220) and Col.4, Line 2-5, extracting features from customer service tickets);
apply an unsupervised natural language processing model to the first feature set to generate a plurality of topic data objects representative of respective hierarchical topic classifications for the plurality of service message data objects (see Fig.2 (230) and Col.4, Line 11-27, topic clustering using unsupervised machine learning model);
extract a second feature set from the plurality of topic data objects (see Fig.2 (240) and Col.4, Line 28-35, extracting features representing activities and solutions);
and generate data-driven servicing guidebooks containing data objects representing topics, subtopics and related information (see Fig.2 (250), Fig.3 and Col.4, Line 62 – Col.5, Line 4, information and underlying information for causes, actions and resolutions).
Tzur fails to teach applying a large language model to the second feature set to generate a plurality of theme data objects representative of respective hierarchical theme classifications for the plurality of topic data objects, wherein the respective hierarchical theme classifications are representative of a higher-order hierarchical classification as compared to the respective hierarchical topic classifications; and initiating a rendering of a dashboard visualization via an electronic interface based at least in part on the plurality of theme data objects.
Farman, however, teaches applying a large language model to a set to generate structured documents that are arranged by shared topics and concepts in a hierarchical manner (see Fig.2 (222,230,212), Col.6, Line 35-42, Col.7, Line 21-26 and Col.16, Line 44-48); and generating a visual representation of a structured document (see Fig.3 (336) and Col.19, Line 17-24).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to configure Tzur’s apparatus to apply a large language model to the second feature set to generate a plurality of theme data objects representative of respective hierarchical theme classifications for the plurality of topic data objects, wherein the respective hierarchical theme classifications are representative of a higher-order hierarchical classification as compared to the respective hierarchical topic classifications; and to initiate a rendering of a dashboard visualization via an electronic interface based at least in part on the plurality of theme data objects. The motivation would be to generate a summarized structured document for the service tickets that is arranged by shared topics and concepts in a hierarchical manner and to display the structured document via a user interface device.
Regarding Claims 2 and 14, Tzur teaches applying an unsupervised natural language processing model to the first feature set to generate a plurality of topic data objects representative of respective hierarchical topic classifications for the plurality of service message data objects (see Fig.2 (230) and Col.4, Line 11-27, topic clustering using unsupervised machine learning model), but fails to teach wherein the unsupervised natural language processing model is an embedding model.
Farman, however, teaches using a word embedding model to generate summarization embeddings for a set of initial text summarizations (see Fig.1 (123) and Col.3, Line 56 – Col.4, Line 13).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to configure Tzur’s apparatus to apply an embedding model to the first feature set. The motivation would be to generate summarization embeddings for topic clustering or modeling.
Regarding Claims 3 and 15, Tzur teaches applying an unsupervised natural language processing model to the first feature set to generate a plurality of topic data objects representative of respective hierarchical topic classifications for the plurality of service message data objects (see Fig.2 (230) and Col.4, Line 11-27, topic clustering using unsupervised machine learning model), but fails to teach applying a dimensionality reduction technique to the plurality of embeddings to generate a lower dimensional representation of the plurality of embeddings.
Farman, however, teaches an encoder model configured to perform a dimensionality reduction technique, and applying the dimensionality reduction technique to generate a set of vectors in a lower dimensional embedding space (see Fig.2 (230) and Col.14, Line 14-17).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to configure Tzur’s apparatus to apply a dimensionality reduction technique to the plurality of embeddings to generate a lower dimensional representation of the plurality of embeddings. The motivation would be to generate a set of vectors in a lower dimensional embedding space for the topic classification or modeling.
Regarding Claims 4 and 16, Farman further teaches applying a spatial clustering technique to the lower dimensional representation of a plurality of embeddings to generate a plurality of cluster data objects (see Fig.3 (324) and Col.15, Line 25-39).
Regarding Claims 5 and 17, Tzur teaches applying an unsupervised natural language processing model to the first feature set to generate a plurality of topic data objects representative of respective hierarchical topic classifications for the plurality of service message data objects (see Fig.2 (230) and Col.4, Line 11-27, topic clustering using unsupervised machine learning model), but fails to teach applying a class-based weighted term technique to the plurality of cluster data objects to generate a rank score for each keyword of a plurality of keywords, wherein the plurality of keywords is associated with a cluster data object of the plurality of cluster data objects, and wherein the rank score is indicative of a respective importance of a respective keyword.
Farman, however, teaches using a generative language model, including a TextRank extractive summarization model, to assign weight values to portions of text data based on identified keywords and phrases (see Col.10, Line 21-27, Col.10, Line 46-61 and Col.11, Line 4-12).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to configure Tzur’s apparatus to apply a class-based weighted term technique to the plurality of cluster data objects to generate a rank score for each keyword of a plurality of keywords, wherein the plurality of keywords is associated with a cluster data object of the plurality of cluster data objects, and wherein the rank score is indicative of a respective importance of a respective keyword. The motivation would be to use a generative language model to perform topic classification or modeling based on identified keywords.
Regarding Claim 6, Farman further teaches using an autoregressive large language model configured to generate a textual output based on a textual input (see Col.4, Line 7-13, generative pre-trained transformer GPT model).
Regarding Claims 7 and 18, Tzur further teaches wherein respective service message data objects of the plurality of service message data objects comprises at least a description data field associated with a service request by a user identifier (see Fig.3, Col.3, Line 30-41 and Col.4, Line 66 – Col.5, Line 14, customer service ticket information including product information and customer information), and extracting the first feature set from the plurality of service message data objects by extracting the description data field from the respective service message data objects (see Fig.1 (120) and Col.3, Line 30-41).
Regarding Claim 8, Tzur further teaches extracting a second feature set from the plurality of topic data objects (see Fig.2 (240) and Col.4, Line 28-35, extracting features representing activities and solutions), but fails to teach extracting the second feature set from the plurality of topic data objects by extracting the plurality of keywords from the plurality of topic data objects.
Farman, however, teaches determining a shared topic for a set of documents based on the presence of shared keywords (see Col.11, Line 41-47).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to configure Tzur’s apparatus to extract the second feature set from the plurality of topic data objects by extracting the plurality of keywords from the plurality of topic data objects. The motivation would be to generate a summarized structured document for the service tickets based on identified keywords.
Regarding Claim 9, Tzur further teaches generating, based at least in part on one or more service message data objects associated with a topic data object, a theme contextual description that is descriptive of the one or more service message data objects associated with the topic data object (see Fig.3 and Col.4, Line 66 – Col.5, Line 14, data containing information for root causes, independent actions and resolutions categories), and wherein each of the plurality of theme data objects comprises a respective theme contextual description (see Fig.3 and Col.4, Line 66 – Col.5, Line 14, data containing information for underlying data items under root causes, independent actions and resolutions categories).
Regarding Claim 10, Tzur further teaches generating, based at least in part on one or more service message data objects associated with a topic data object, a theme contextual recommendation that is representative of at least one actionable recommendation (see Fig.3 and Col.4, Line 66 – Col.5, Line 14, data containing information for underlying data items under possible independent actions category), and wherein each of the plurality of theme data objects comprises a respective theme contextual recommendation (data containing information for underlying data items under possible independent actions category).
Regarding Claim 11, Farman further teaches displaying data in a predetermined format, based on the plurality of data objects, within a structured document (see Fig.3 (336) and Col.19, Line 17-29).
Claims 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tzur et al. (US Patent 12,002,058 B2) in view of Farman et al. (US Patent 12,579,182 B2), and further in view of Cantu et al. (US Pub. 2025/0371312 A1).
Regarding Claims 12 and 19, Tzur teaches extracting a first feature set from a plurality of service message data objects associated with an application framework (see Fig.2 (220) and Col.4, Line 2-5, extracting features from customer service tickets), and Farman teaches wherein the large language model is a first large language model (see Col.7, Line 21-26), but Tzur and Farman fail to teach generating at least a portion of the first feature set based at least in part on a second large language model.
Cantu, however, teaches using a less complex first large language model to identify topics from messages and using a more complex second large language model for performing complex tasks (see Fig.6 (608,610) and paragraph [0175]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to configure the apparatus of Claim 1 to generate at least a portion of the first feature set based on a second large language model. The motivation would be to use a large language model to perform the topic classification or modeling before generating the theme data objects with a different large language model.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VU B HANG whose telephone number is (571)272-0582.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan, can be reached at (571)272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VU B HANG/Primary Examiner, Art Unit 2654