The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is responsive to the amendment filed on 09/24/2025.
Status of claims:
Claims 2-4 and 6 were canceled.
Claims 1, 5, 7-8, 13-15 and 20 are amended.
Claims 1, 5 and 7-20 are presented for examination.
Remarks
The newly added limitations of claims 1, 15 and 20 have changed the scope of the claims. Applicant's arguments filed on 09/24/2025 with respect to those limitations have been considered in view of the rejections below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 5 and 7-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claims 1, 15 and 20:
Step 1:
The claims are directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.
Step 2A, Prong One:
The claims recite the limitations:
“applying…; applying…; applying…to generate…; sorting…” are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind.
The mere nominal recitation of a processor and a server controller does not take the claim limitations out of the mental processes grouping. If a claim limitation, under its broadest reasonable interpretation, covers mental processes but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, and opinion). Accordingly, the claim recites an abstract idea.
Step 2A, Prong Two:
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements:
“modifying…; modifying…” amount to data-gathering steps, which are considered insignificant extra-solution activity (MPEP 2106.05(g)).
“displaying…” represents extra-solution activity because it is a mere nominal or tangential addition to the claim, i.e., a mere generic transmission and presentation of collected and analyzed data (MPEP 2106.05(g)).
Step 2B:
“modifying…; modifying…” were identified as insignificant extra-solution activity above. When re-evaluated under Step 2B, these elements are well-understood, routine, and conventional, as evidenced by the court cases in MPEP 2106.05(d)(II): "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); … OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);" and thus remain insignificant extra-solution activity that does not provide significantly more.
“displaying…” was identified as insignificant extra-solution activity above. When re-evaluated under Step 2B, this element is well-understood, routine, and conventional, as evidenced by the court cases in MPEP 2106.05(d)(II): "iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334; i. … transmitting data over a network, … Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); … OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)".
The “processor; graphical user interface; data repository; server controller; encoding model; language model; and cluster model” amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields, as demonstrated by relevant court decisions. The following is an example of a court decision demonstrating well-understood, routine, and conventional activity, see e.g., MPEP 2106.05(d)(II) and MPEP 2106.05(f)(2): computer-readable storage media comprising instructions to implement a method, e.g., see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
The claims, as a whole, do not amount to significantly more than the abstract idea itself. This is because the claims do not effect an improvement to the functioning of a computer itself, and the claims do not move beyond generally linking the use of the abstract idea to a particular technological environment.
Claim 5 recites additional limitations, which are at a high level of generality and would function in their ordinary capacity for selecting a vector in the plurality of vector data structures. These additional elements do not integrate the judicial exception into a practical application and do not amount to significantly more.
Claims 7-14 and 16-19 each recite additional limitations, which do not integrate the judicial exception into a practical application and do not amount to significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5 and 7-20, as best understood by the examiner, are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (KR 20230114440 A), hereinafter “Kim”, in view of Gupta et al. (US 20230394040), hereinafter “Gupta”.
As per claim 1, Kim discloses a method of modifying a graphical user interface initially displaying a plurality of datasets in a disorganized list that is disorganized by topic, the method (fig.4, step S410, generate interest data of a new concept representing a semantic relationship between a keyword and a subject) comprising:
- applying a language model to the plurality of datasets to generate a plurality of topics assigned to the plurality of datasets, wherein each of the plurality of topics comprises at least one of a natural language text word and a natural language phrase (fig.11 and step S501, page 7, para [3], given a set of products and assign the main topics for the set of product, wherein (<protein shake barrel, quest protein bar, Nike Crew, Fila Performance Pants, Jeximix Jogger, Adidas T-shirt, dumbbell, vertitest mat, disc 5kg, kettlebell, burn machine> All of them belong to the category 'Homet', and among them, <Nike Crew, Fila Performance Pants, Jexi Mix Jogger, Adidas T-shirt> fall under the category 'Sportswear', and <Dumbbell, Vertitest Mat, Disc 5kg, Kettlebell, Burn Machine> It is assumed that corresponds to the category 'exercise equipment'. When the topics 'Holmt', 'Sportswear', and 'Sportswear' are created through the language model, it is possible to build a relationship between the topics 'Holmt' and 'Sportswear' because search words belonging to 'Sportswear' are included in 'Holmt'. And since the search word belonging to 'sports equipment' is included in 'Holmt', it is possible to build a relationship between the subject 'Holmt' and 'Sports equipment'.));
- applying an encoding model to the plurality of topics to generate a corresponding plurality of vector data structures storing a plurality of embedded topics, wherein each embedded topic of the plurality of embedded topics is associated with one corresponding vector in the plurality of vector data structures (step S430, page 7, para [4], for each distinct topic using Bert model, assume that topic B belongs to topic A (inclusion relationship) because topic A 'diet' is a word that is completely included in topic B 'diet diet'. Also, if topic A contains the words a1, a2, a3 and topic B contains the words a1, a2, a3, b1, b2, then topic A belongs to topic B because topic B completely contains topic A. (inclusion relationship) is assumed. Next, the processor 220 searches for a subject having an equivalent relationship among subjects other than subjects having an inclusive relationship. For example, if topics C1, C2, and C3 are generated for keyword C, it is assumed that topics C1, C2, and C3 are equivalent. The processor 220 may perform graph embedding (eg, knowledge graph embedding, etc.) using the inclusive relationship and the equivalence relationship between subjects. In other words, as a relationship used in graph embedding, an inclusive relationship and an equivalence relationship can be applied. After embedding a graph representing the relationship between topics, it is possible to determine a representative topic based on probability by probabilistically obtaining which topic belongs to each keyword. In addition to probability, a topic of a certain depth may be selected as a representative topic in a hierarchical structure based on relationships between topics); and
- applying a clustering model to the plurality of vector data structures to generate:
- a first cluster comprising a first subset of the vector data structures within a first pre-determined semantic distance (page 7, para [5] and [3], using cluster similarity using hierarchical clustering to perform quantitative and qualitative quality tuning, such as determining how much words overlap between topics or comparing similarities between topics and removes a wide range of subjects based on relationships between subjects, wherein exclude topics in a hierarchical structure based on relationships between topics from a representative topic selection target, in which the number of topics in a lower layer is greater than or equal to a threshold value); and
- a second cluster comprising a second subset of the plurality of vector data structures within a second pre-determined semantic distance, wherein the first subset and the second subset each comprises a reduced number of the plurality of vector data structures (page 7, para [5] and [3], using cluster similarity using hierarchical clustering to perform quantitative and qualitative quality tuning, such as determining how much words overlap between topics or comparing similarities between topics and removes a wide range of subjects based on relationships between subjects, wherein exclude topics in a hierarchical structure based on relationships between topics from a representative topic selection target, in which the number of topics in a lower layer is greater than or equal to a threshold value).
Kim generates new conceptual data representing a semantic relationship between a keyword and a subject using a large-scale language model. However, Kim does not explicitly disclose the limitation “modifying, according to the first cluster and the second cluster, the plurality of datasets to generate an organized data structure by organizing the plurality of datasets into a first group corresponding to the first cluster and a second group corresponding to the second cluster”.
Meanwhile, Gupta discloses modifying, according to the first cluster and the second cluster, the plurality of datasets to generate an organized data structure by organizing the plurality of datasets into a first group corresponding to the first cluster and a second group corresponding to the second cluster (par. [0031], new or deleted character may modify the cluster groups and sub-topics displayed within visual display based on the new partial query string “TOT” there is completely new set of cluster groups and sub-topics between; and par. [0059], a certain number of sub-topics may be presented as selectable options horizontally listed following the cluster label);
- sorting the first group and the second group into an organized list of the plurality of datasets, wherein the first group and the second group are organized by topic according to the first cluster and the second cluster (par. [0020]-[0021], once the clusters are grouped, the sub-topics and cluster groups may be ranked based on popularity scores and/or popularity factors and the ranked cluster groups and sub-topics may be displayed in a display area as selectable icons); and
- modifying the graphical user interface to display the organized list of the plurality of datasets according to the first group and the second group (par. [0031], new or deleted character may modify the cluster groups and sub-topics displayed within visual display based on the new partial query string “TOT” there is completely new set of cluster groups and sub-topics between; and par. [0059], a certain number of sub-topics may be presented as selectable options horizontally listed following the cluster label);
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Kim to incorporate the features disclosed by Gupta in order to minimize the cognitive load required by the user to process the information, thereby reducing the amount of time required to find the set of cluster groups and sub-topics.
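For illustration only, the sequence recited in claim 1 (topic generation, vector embedding, and distance-based clustering into groups) can be sketched as a short program. The deterministic toy embedding and the greedy threshold clustering below are hypothetical stand-ins for the claimed language, encoding, and clustering models; they are not drawn from Kim or Gupta.

```python
import zlib

import numpy as np


def embed(topic, dim=8):
    # Toy stand-in for an encoding model (e.g. BERT): a deterministic
    # pseudo-embedding seeded from a stable hash of the topic string.
    rng = np.random.default_rng(zlib.crc32(topic.encode("utf-8")))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


def cluster(topics, max_dist=0.9):
    # Greedy threshold clustering: each topic joins the first cluster whose
    # seed vector lies within the pre-determined cosine distance; otherwise
    # the topic seeds a new cluster.
    clusters = []  # list of (seed_vector, member_topics)
    for t in topics:
        v = embed(t)
        for seed, members in clusters:
            if 1.0 - float(seed @ v) <= max_dist:
                members.append(t)
                break
        else:
            clusters.append((v, [t]))
    # Sort the groups (here by size), mirroring the claimed "organized list".
    return sorted((m for _, m in clusters), key=len, reverse=True)
```

Every input topic lands in exactly one group, so the output is a partition of the original list, reorganized by cluster.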
As per claim 5, the combination of Kim and Gupta discloses the invention as claimed. In addition, Kim discloses the first cluster comprises ones of the plurality of vector data structures within a first pre-determined semantic distance of a first selected vector in the plurality of vector data structures (fig.7, page 6, para.9, Candidate topics 702 created from the language model 700 generate inconsistent words, such as words with the same meaning or words with different categories, even though the wording is different due to the characteristics (diversity, contingency, etc.) of the language model, wherein the medoid referring to inconsistent dissimilar words), and the second cluster comprises ones of the plurality of vector data structures within a second pre-determined semantic distance of a second selected vector in the plurality of vector data structures (page 7, para [5] and [3], using cluster similarity using hierarchical clustering to perform quantitative and qualitative quality tuning, such as determining how much words overlap between topics or comparing similarities between topics and removes a wide range of subjects based on relationships between subjects, wherein exclude topics in a hierarchical structure based on relationships between topics from a representative topic selection target, in which the number of topics in a lower layer is greater than or equal to a threshold value),
the first selected vector comprises a first medoid of the first cluster (fig.7, page 6, para.9, Candidate topics 702 created from the language model 700 generate inconsistent words, such as words with the same meaning or words with different categories, even though the wording is different due to the characteristics (diversity, contingency, etc.) of the language model, wherein the medoid referring to inconsistent dissimilar words).
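Claim 5's recitation of a cluster "medoid" (the member vector minimizing total distance to the other members, as distinct from a centroid, which need not be a member) can be illustrated with a brief numpy sketch; the sample vectors in the usage note are hypothetical.

```python
import numpy as np


def medoid_index(vectors):
    # The medoid is an actual member of the cluster (unlike a centroid):
    # the vector whose summed Euclidean distance to all other members
    # is smallest.
    X = np.asarray(vectors, dtype=float)
    pairwise = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return int(np.argmin(pairwise.sum(axis=1)))
```

For the hypothetical cluster {(0, 0), (1, 0), (0.4, 0)}, the interior point (0.4, 0) has the smallest total distance to the others and is therefore the medoid.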
As per claim 7, the combination of Kim and Gupta discloses the invention as claimed. In addition, Gupta discloses applying a first label to the first cluster (par. [0032], cluster label), and applying a second label to the second cluster (par. [0023], [0025]-[0028]), wherein modifying comprises: organizing the plurality of datasets, according to the first cluster and the second cluster (par. [0023], [0025]-[0028]), into a subset of the plurality of datasets (par. [0029], [0032], sub-topic “Toto Band” represents the band, the sub-topic “Toto Dog” represents the name of a dog, “Totoaba Fish” is a type of fish and so on), and
displaying, according to the first label and the second label, the subset of the plurality of datasets (par. [0029] and [0032], cluster group 116 contains sub-topics “Top News Local”, “Top News US”, and “Top News World”. In cluster group 116 each of the sub-topics shares a similar prefix with both the cluster label and the other sub-topics, however that may not always be the case. Sub-topics may also be ranked and displayed based on semantics related to the partial query string, the cluster label, and/or both).
As per claim 8, the combination of Kim and Gupta discloses the invention as claimed. In addition, Gupta discloses displaying the subset of the plurality of datasets comprises at least one of highlighting the subset of the plurality of datasets, assigning the first label and the second label to the subset of the plurality of datasets as a group, and assigning the first label and the second label to each subset of the plurality of datasets (par. [0029], when a cluster group, cluster label, and/or sub-topic is selected or hovered over the color of the cluster label or sub-topic may change to indicate that the user is interacting with it).
As per claim 9, the combination of Kim and Gupta discloses the invention as claimed. In addition, Kim discloses the language model comprises a large language model and applying the language model comprises generating a prompt and inputting the prompt and the plurality of datasets to the large language model (fig.4, step S410 and fig.5, step S501, build a relationship between a keyword and a topic by creating a topic suitable for a given keyword through a prompt that is an input sentence of the language model).
As per claim 10, the combination of Kim and Gupta discloses the invention as claimed. In addition, Gupta discloses deduplicating, before applying the encoding model, duplicate topics from the plurality of topics (par. [0031], modify the cluster groups and sub-topics to remove similarity).
As per claim 11, the combination of Kim and Gupta discloses the invention as claimed. In addition, Kim discloses receiving a value designating a number of permitted topics (fig.5, step S501), wherein applying the language model further comprises generating a prompt and inputting the prompt and the plurality of datasets to a large language model (fig.5, step S501), and wherein the prompt further comprises an instruction to limit the number of topics generated to the value such that a total number of the plurality of topics are limited to the value (fig.5, step S401).
As per claim 12, the combination of Kim and Gupta discloses the invention as claimed. In addition, Kim discloses receiving a value designating a number of permitted topics, wherein applying the language model further comprises generating a prompt and inputting the prompt and the plurality of datasets to a large language model (fig.5, step S501), and wherein the prompt further comprises an instruction to limit the number of topics generated to the value such that a total number of the plurality of topics are limited to the value (fig.5, step S501); and
Gupta discloses reducing, by deduplicating, the total number of the plurality of topics to generate the plurality of topics (par. [0031], new or deleted character may modify the cluster groups and sub-topics displayed within visual display based on the new partial query string “TOT” there is completely new set of cluster groups and sub-topics between).
As per claim 13, the combination of Kim and Gupta discloses the invention as claimed. In addition, Kim discloses the first cluster comprises first ones of the plurality of vector data structures within a first pre-determined semantic distance of a first selected vector in the plurality of vector data structures (page 7, para [5] and [3], using cluster similarity using hierarchical clustering to perform quantitative and qualitative quality tuning, such as determining how much words overlap between topics or comparing similarities between topics and removes a wide range of subjects based on relationships between subjects, wherein exclude topics in a hierarchical structure based on relationships between topics from a representative topic selection target, in which the number of topics in a lower layer is greater than or equal to a threshold value), the second cluster comprises second ones of the plurality of vector data structures within the second pre-determined semantic distance of a second selected vector in the plurality of vector data structures (page 7, para [5] and [3], using cluster similarity using hierarchical clustering to perform quantitative and qualitative quality tuning, such as determining how much words overlap between topics or comparing similarities between topics and removes a wide range of subjects based on relationships between subjects, wherein exclude topics in a hierarchical structure based on relationships between topics from a representative topic selection target, in which the number of topics in a lower layer is greater than or equal to a threshold value), and wherein the method further comprises: receiving a request to broaden a topic in the plurality of topics (fig.6, page 6, para [6], generate a topic for the target keyword according to a pattern); and
- increasing, prior to applying the clustering model, the first pre-determined semantic distance and the second pre-determined semantic distance (fig.6, page 6, para [6], using the prompt composed of example data pairs in the form of [keyword+topic] and [target keyword]). Gupta also discloses increasing, prior to applying the clustering model, the first pre-determined semantic distance and the second pre-determined semantic distance (par. [0023], [0025]-[0028]).
As per claim 14, the combination of Kim and Gupta discloses the invention as claimed. In addition, Gupta discloses the plurality of datasets comprise a plurality of electronic messages (par. [0028]-[0032]), each electronic message in the plurality of electronic messages comprises one dataset in the plurality of datasets(par. [0028]-[0032]), the first cluster comprises a first set of the plurality of electronic messages organized by a first subject type (par. [0028]-[0032]), modifying the plurality of datasets comprises re-organizing the plurality of electronic messages according to the subject type (par. [0031], new or deleted character may modify the cluster groups and sub-topics displayed within visual display), and the method further comprises: displaying, labeling, and highlighting the group according to the subject type (par. [0029], when a cluster group, cluster label, and/or sub-topic is selected or hovered over the color of the cluster label or sub-topic may change to indicate that the user is interacting with it), the second cluster comprises a second set of the plurality of electronic messages organized by a second subject type (par. [0028]-[0032] and [0058]).
As per claim 15, it is a system claim which recites limitations similar to those of method claim 1. Therefore, claim 15 is rejected under the same rationale as claim 1 above.
As per claim 16, the combination of Kim and Gupta discloses the invention as claimed. In addition, Kim discloses the language model comprises a large language model (fig.4, step S410, page 5, para [1], creates a topic that can be expressed as a user's interest using the keyword for a keyword used as a search term or tag using a language model, thereby interest data representing a relationship between a keyword and a topic.), and
- the language model is applied to the plurality of topics by generating a prompt and inputting the prompt and the plurality of datasets to the large language model (fig.4, step S410, page 5, para [1], build a relationship between a keyword and a topic by creating a topic suitable for a given keyword through a prompt that is an input sentence of the language model).
As per claim 17, the combination of Kim and Gupta discloses the invention as claimed. In addition, Gupta discloses the encoding model comprises a bidirectional encoder representations from transformers (BERT) machine learning model (par. [0048], semantic similarity using distributed embeddings (BERT)).
As per claim 18, the combination of Kim and Gupta discloses the invention as claimed. In addition, Gupta discloses the clustering model comprises one of a cosine similarity machine learning model for hierarchical clustering and a K-means clustering machine learning model (par. [0049], k-means).
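Claim 18 recites, in the alternative, cosine-similarity hierarchical clustering or K-means clustering. A minimal K-means loop (Lloyd's algorithm) in numpy is sketched below solely to illustrate the conventional technique; it is not an implementation from Gupta, and the sample data in the test are hypothetical.

```python
import numpy as np


def kmeans(X, k, iters=50, seed=0):
    # Lloyd's algorithm: alternate (1) assigning each point to its nearest
    # centroid and (2) recomputing centroids as cluster means, until the
    # centroids stop moving or the iteration budget is exhausted.
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None, :], axis=-1)
        labels = np.argmin(dists, axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

On two well-separated point groups, the loop converges in a few iterations and assigns each group its own label.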
As per claim 19, the combination of Kim and Gupta discloses the invention as claimed. In addition, Gupta discloses a display device in communication with the processor (par. [0097], one processing unit and a system memory), wherein the server controller is further executable by the processor (par. [0097], one processing unit and a system memory) to:
- display a modified plurality of datasets generated when the server controller modifies the plurality of datasets (par. [0031], new or deleted character may modify the cluster groups and sub-topics displayed within visual display based on the new partial query string “TOT” there is completely new set of cluster groups and sub-topics between), and
- display a label applied to each dataset of the modified plurality of datasets (par. [0031], new or deleted character may modify the cluster groups and sub-topics displayed within visual display based on the new partial query string “TOT” there is completely new set of cluster groups and sub-topics between).
As per claim 20, Kim discloses a method of modifying a graphical user interface initially displaying a plurality of datasets in a disorganized list that is disorganized by topic, the method comprising:
- applying a language model to the plurality of datasets to generate a plurality of topics assigned to the plurality of datasets, wherein each of the plurality of topics comprises at least one of a natural language text word and a natural language phrase (fig.11 and step S501, page 7, para [3], given a set of products and assign the main topics for the set of product, wherein (<protein shake barrel, quest protein bar, Nike Crew, Fila Performance Pants, Jeximix Jogger, Adidas T-shirt, dumbbell, vertitest mat, disc 5kg, kettlebell, burn machine> All of them belong to the category 'Homet', and among them, <Nike Crew, Fila Performance Pants, Jexi Mix Jogger, Adidas T-shirt> fall under the category 'Sportswear', and <Dumbbell, Vertitest Mat, Disc 5kg, Kettlebell, Burn Machine> It is assumed that corresponds to the category 'exercise equipment'. When the topics 'Holmt', 'Sportswear', and 'Sportswear' are created through the language model, it is possible to build a relationship between the topics 'Holmt' and 'Sportswear' because search words belonging to 'Sportswear' are included in 'Holmt'. And since the search word belonging to 'sports equipment' is included in 'Holmt', it is possible to build a relationship between the subject 'Holmt' and 'Sports equipment'));
- applying an encoding model to the plurality of topics to generate a corresponding plurality of vector data structures storing a plurality of embedded topics, wherein each embedded topic of the plurality of embedded topics is associated with one corresponding vector in the plurality of vector data structures (step S430, page 7, para [4], for each distinct topic using Bert model, assume that topic B belongs to topic A (inclusion relationship) because topic A 'diet' is a word that is completely included in topic B 'diet diet'. Also, if topic A contains the words a1, a2, a3 and topic B contains the words a1, a2, a3, b1, b2, then topic A belongs to topic B because topic B completely contains topic A. (inclusion relationship) is assumed. Next, the processor 220 searches for a subject having an equivalent relationship among subjects other than subjects having an inclusive relationship. For example, if topics C1, C2, and C3 are generated for keyword C, it is assumed that topics C1, C2, and C3 are equivalent. The processor 220 may perform graph embedding (eg, knowledge graph embedding, etc.) using the inclusive relationship and the equivalence relationship between subjects. In other words, as a relationship used in graph embedding, an inclusive relationship and an equivalence relationship can be applied. After embedding a graph representing the relationship between topics, it is possible to determine a representative topic based on probability by probabilistically obtaining which topic belongs to each keyword. In addition to probability, a topic of a certain depth may be selected as a representative topic in a hierarchical structure based on relationships between topics);
- applying a clustering model to the plurality of vector data structures to generate: a first cluster comprising a first subset of the vector data structures within a first pre-determined semantic distance, and a second cluster comprising a second subset of the plurality of vector data structures within a second pre-determined semantic distance, wherein the first subset and the second subset each comprises a reduced number of the plurality of vector data structures (page 7, para [5] and [3], using cluster similarity using hierarchical clustering to perform quantitative and qualitative quality tuning, such as determining how much words overlap between topics or comparing similarities between topics and removes a wide range of subjects based on relationships between subjects, wherein exclude topics in a hierarchical structure based on relationships between topics from a representative topic selection target, in which the number of topics in a lower layer is greater than or equal to a threshold value).
Gupta discloses modifying, according to the first cluster and the second cluster, the plurality of datasets to generate an organized data structure by organizing the plurality of datasets into a first group corresponding to the first cluster and a second group corresponding to the second cluster (par. [0031]: a new or deleted character may modify the cluster groups and sub-topics displayed within the visual display; based on the new partial query string "TOT," a completely new set of cluster groups and sub-topics is displayed);
- sorting the first group and the second group into an organized list of the plurality of datasets, wherein the first group and the second group are organized by topic according to the first cluster and the second cluster (par. [0020]-[0021]: once the clusters are grouped, the sub-topics and cluster groups may be ranked based on popularity scores and/or popularity factors, and the ranked cluster groups and sub-topics may be displayed in a display area as selectable icons; and par. [0031]: a new or deleted character may modify the cluster groups and sub-topics displayed within the visual display);
- labeling the first group according to a first name associated with a first medoid of the first subset (par. [0023], [0025]-[0028]);
- labeling the second group according to a second name associated with a second medoid of the second subset (par. [0023], [0025]-[0028]); and
- modifying the graphical user interface to display the organized list of the plurality of datasets according to the first name and the second name (par. [0023], [0025]-[0028]; and par. [0031]: a new or deleted character may modify the cluster groups and sub-topics displayed within the visual display based on the new partial query string "TOT," resulting in a completely new set of cluster groups and sub-topics).
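For clarity of the record, the inclusion and equivalence relationships quoted from Kim above can be sketched as follows. This is an illustrative paraphrase by the undersigned only; the function names, data shapes, and example values are hypothetical and do not appear in the reference.

```python
def inclusion_pairs(topics):
    # topics: dict mapping topic name -> set of its words.
    # (A, B) is an inclusion pair when A's words are a proper subset of
    # B's words, i.e., topic A "belongs to" topic B as described in Kim.
    return [(a, b)
            for a, words_a in topics.items()
            for b, words_b in topics.items()
            if a != b and words_a < words_b]

def equivalence_groups(topics_by_keyword):
    # Topics generated for the same keyword (e.g., C1, C2, C3 for
    # keyword C) are assumed equivalent to one another.
    return [group for group in topics_by_keyword.values() if len(group) > 1]

# Kim's own example: topic A = {a1, a2, a3} is wholly contained in
# topic B = {a1, a2, a3, b1, b2}, so A belongs to B.
pairs = inclusion_pairs({"A": {"a1", "a2", "a3"},
                         "B": {"a1", "a2", "a3", "b1", "b2"}})
groups = equivalence_groups({"C": ["C1", "C2", "C3"], "D": ["D1"]})
```

These two relation types are then the edge types consumed by the graph embedding step described in the citation.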
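Likewise, the claimed steps of clustering the embedded-topic vectors within a pre-determined semantic distance, labeling each group by the name associated with its medoid, and sorting the groups can be sketched as below. This is a minimal illustration under an assumed cosine distance; the names and threshold are hypothetical and are not drawn from either reference.

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity between two vectors of equal length.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def cluster_by_threshold(vectors, threshold):
    # Greedy one-pass clustering: a vector joins the first cluster whose
    # seed lies within the pre-determined semantic distance; otherwise it
    # seeds a new cluster. Each cluster is a list of indices.
    clusters = []
    for i, v in enumerate(vectors):
        for cluster in clusters:
            if cosine_distance(v, vectors[cluster[0]]) <= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def medoid(vectors, cluster):
    # The medoid is the member minimizing total distance to the others.
    return min(cluster, key=lambda i: sum(
        cosine_distance(vectors[i], vectors[j]) for j in cluster))

# Label each group by its medoid's topic name, then sort the groups.
names = ["diet", "dieting", "travel"]
vectors = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
clusters = cluster_by_threshold(vectors, threshold=0.3)
labeled = sorted((names[medoid(vectors, c)], c) for c in clusters)
```

The sorted, labeled groups correspond to the claimed "organized list" whose group names drive the graphical-user-interface display.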
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Kim to modify the cluster groups, as taught by Gupta, in order to minimize the cognitive load required by the user to process the information, thereby reducing the amount of time required to find the desired set of cluster groups and sub-topics.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see PTO-892).
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOAN T NGUYEN whose telephone number is (571) 270-3103. The examiner can normally be reached Monday from 10:00 am - 6:00 pm and Thursday-Friday from 10:00 am - 2:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aleksandr Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-270-4103.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
2/11/2026
/LOAN T NGUYEN/Examiner, Art Unit 2165
/ALEKSANDR KERZHNER/Supervisory Patent Examiner, Art Unit 2165