DETAILED ACTION
This action is responsive to the application filed on April 16, 2025.
The preliminary amendments filed on November 13, 2025 have been acknowledged and considered.
Claims 1-20 have been canceled. Claims 21-40 are new.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
The CROSS-REFERENCE TO RELATED APPLICATIONS section should include the most recent data. For example: "This application is a continuation of U.S. Patent Application No. 18/541,216, filed December 15, 2023, now U.S. Patent No. 12,299,058." Each application listed should be accompanied by its respective patent number.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5-6, and 13-16 of prior U.S. Patent No. 12,299,058. Although the claims at issue are not identical, they are not patentably distinct from each other because they are directed to substantially the same invention.
Instant application | U.S. Patent No. 12,299,058
Claim 21. A system, comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations, the set of operations comprising:
generating a concept embedding based on a concept definition from a user;
evaluating the concept embedding to determine a semantic relationship between the concept embedding and one or more document embeddings corresponding to a set of documents; and
providing, for display to the user, an indication of the determined semantic relationship corresponding to the concept embedding.
Claim 13. A system for assisting users in re-finding documents, the system comprising: one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when executed by at least one processor, cause the at least one processor to:
generate, for each concept clip of a set of concept clips that correspond to a concept searched by a user using a machine learning model, an embedding;
generate a concept embedding based on the embeddings of the set of concept clips;
determine a semantic relationship between the concept embedding and one or more document embeddings that each correspond to a document clip of a set of documents; and
generate a graphical user interface based on the determined semantic relationship depicting the semantic relationship between the concept and one or more document clips of the one or more document embeddings.
Claim 22 | Claim 14
Claim 23 | Claim 15
Claim 24 | Claim 15
Claim 25 | Claim 15
Claim 26 | Claim 16
Claim 27 | Claim 13
Claim 28. A method for document re-finding, the method comprising:
generating a concept embedding based on a concept definition from a user;
evaluating the concept embedding to determine a semantic relationship between the concept embedding and one or more document embeddings corresponding to a set of documents;
generating a graphical representation depicting the concept embedding in relation to the one or more document embeddings according to the determined semantic relationship; and providing, for display to the user, the generated graphical representation.
Claim 1. A method for assisting users in re-finding documents, the method comprising:
generating, for each concept clip of a set of concept clips, an embedding, wherein the set of concept clips corresponds to a concept searched by a user;
generating a concept embedding based on the embeddings of the set of concept clips;
determining a semantic relationship between the concept embedding and one or more document embeddings that each correspond to a document clip of a set of documents; and
based on the determined semantic relationship, providing, for display at a client device, an indication of the semantic relationship between the concept and one or more document clips of the one or more document embeddings.
Claim 29 | Claim 2
Claim 30 | Claim 3
Claim 31 | Claim 3
Claim 32 | Claim 5
Claim 33 | Claim 6
Claim 34. A method for document re-finding, the method comprising:
generating a concept embedding based on a concept definition from a user;
evaluating the concept embedding to determine a semantic relationship between the concept embedding and one or more document embeddings corresponding to a set of documents; and
providing, for display to the user, an indication of the determined semantic relationship corresponding to the concept embedding.
Claim 1. A method for assisting users in re-finding documents, the method comprising:
generating, for each concept clip of a set of concept clips, an embedding, wherein the set of concept clips corresponds to a concept searched by a user;
generating a concept embedding based on the embeddings of the set of concept clips;
determining a semantic relationship between the concept embedding and one or more document embeddings that each correspond to a document clip of a set of documents; and
based on the determined semantic relationship, providing, for display at a client device, an indication of the semantic relationship between the concept and one or more document clips of the one or more document embeddings.
Claim 35 | Claim 2
Claim 36 | Claim 3
Claim 37 | Claim 3
Claim 38 | Claim 3
Claim 39 | Claim 5
Claim 40 | Claim 6
Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5-6, and 13-16 of prior U.S. Patent No. 11,847,178. Although the claims at issue are not identical, they are not patentably distinct from each other because they are directed to substantially the same invention.
Instant application | U.S. Patent No. 11,847,178
Claim 21. A system, comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations, the set of operations comprising:
generating a concept embedding based on a concept definition from a user;
evaluating the concept embedding to determine a semantic relationship between the concept embedding and one or more document embeddings corresponding to a set of documents; and
providing, for display to the user, an indication of the determined semantic relationship corresponding to the concept embedding.
Claim 13. A system for assisting users in re-finding documents, the system comprising: one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when executed by at least one processor, cause the at least one processor to:
generate, using a machine learning model, embeddings for document clips related to respective documents among a plurality of documents; receive a first set of concept clips defining a first concept for searching for content of interest to a user in the plurality of documents; generate, using the machine learning model, embeddings for respective concept clips in the first set of concept clips;
generate a first concept embedding based on a combination of the embeddings generated for the respective concept clips in the first set of concept clips;
determine semantic relationships between the first concept and the document clips based on i) the embeddings generated for the document clips and ii) the concept embedding;
generate a graphical user interface depicting the semantic relationships between the first concept and the document clips; and cause display of the graphical user interface to be rendered at a client device, wherein the graphical user interface is operable to enable re-finding a document, among the plurality of documents, having the content of interest to the user.
Claim 22 | Claim 14
Claim 23 | Claim 15
Claim 24 | Claim 15
Claim 25 | Claim 15
Claim 26 | Claim 16
Claim 27 | Claim 13
Claim 28. A method for document re-finding, the method comprising:
generating a concept embedding based on a concept definition from a user;
evaluating the concept embedding to determine a semantic relationship between the concept embedding and one or more document embeddings corresponding to a set of documents;
generating a graphical representation depicting the concept embedding in relation to the one or more document embeddings according to the determined semantic relationship; and providing, for display to the user, the generated graphical representation.
Claim 1. A method for assisting users in re-finding documents, the method comprising:
generating embeddings for document clips related to respective documents among a plurality of documents; receiving a first set of concept clips defining a first concept for searching for content of interest to a user in the plurality of documents; generating embeddings for respective concept clips in the first set of concept clips;
generating a first concept embedding based on a combination of the embeddings generated for the respective concept clips in the first set of concept clips;
determining semantic relationships between the first concept and the document clips based on i) the embeddings generated for the document clips and ii) the concept embedding;
generating a graphical user interface depicting the semantic relationships between the first concept and the document clips; and causing display of the graphical user interface to be rendered at a client device, wherein the graphical user interface is operable to enable re-finding a document, among the plurality of documents, having the content of interest to the user.
Claim 29 | Claim 2
Claim 30 | Claim 3
Claim 31 | Claim 3
Claim 32 | Claim 5
Claim 33 | Claim 6
Claim 34. A method for document re-finding, the method comprising:
generating a concept embedding based on a concept definition from a user;
evaluating the concept embedding to determine a semantic relationship between the concept embedding and one or more document embeddings corresponding to a set of documents; and
providing, for display to the user, an indication of the determined semantic relationship corresponding to the concept embedding.
Claim 1. A method for assisting users in re-finding documents, the method comprising:
generating embeddings for document clips related to respective documents among a plurality of documents; receiving a first set of concept clips defining a first concept for searching for content of interest to a user in the plurality of documents; generating embeddings for respective concept clips in the first set of concept clips;
generating a first concept embedding based on a combination of the embeddings generated for the respective concept clips in the first set of concept clips;
determining semantic relationships between the first concept and the document clips based on i) the embeddings generated for the document clips and ii) the concept embedding;
generating a graphical user interface depicting the semantic relationships between the first concept and the document clips; and causing display of the graphical user interface to be rendered at a client device, wherein the graphical user interface is operable to enable re-finding a document, among the plurality of documents, having the content of interest to the user.
Claim 35 | Claim 2
Claim 36 | Claim 3
Claim 37 | Claim 3
Claim 38 | Claim 3
Claim 39 | Claim 5
Claim 40 | Claim 6
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21-40 are rejected under 35 U.S.C. 103 as being unpatentable over Mahmoud (US Patent Application Publication No. US 20220156298 A1), in view of Huh (US Patent Application Publication No. US 20190138615 A1).
Regarding claim 21, Mahmoud teaches a system, comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations, the set of operations comprising: generating a concept embedding based on a concept definition from a user; (See Mahmoud [0034-0038, 0107] “The agent-assist system 118 may analyze the text of the communication sessions 108 and determine context of the conversation, such as a semantic or meaning of the conversation… the techniques described herein include identifying portions of the documents, or “subdocuments,” that are more relevant to the queries or context of the conversation between the agents 112 and user 104… The contact-center infrastructure 102 and the agent-assist system 118 may include one or more hardware processors (processors)… At 608, the agent-assist system 118 may identify first input received from the user device 106 where the first input represents a query of the user 104 [Thus, from a user] for the agent 112 to answer. At 610, the agent-assist system 118 may identify, from the subdocuments 302, a first subdocument 302 as including first text that is semantically related to the query. For example, the agent-assist system 118 may generate an embedding [e.g. concept embedding] representing the semantic meaning of the query [e.g. concept definition from a user], and identify subdocuments 302 having embeddings that are similar to (e.g., within a threshold distance in a vector space) the query embedding.”)
evaluating the concept embedding to determine a semantic relationship between the concept embedding and one or more document embeddings corresponding to a set of documents; and (See Mahmoud [0104-0107] “At 602, an agent-assist system 118 may obtain a plurality of documents 206 relating to different topics… At 604, the agent-assist system 118 may identify subdocuments 302 from each of the plurality of documents 206 [e.g. a set of documents]… At 606, the agent-assist system 118 may establish a communication session 108 between a user device 106 and an agent device 114… the agent-assist system 118 may identify first input received from the user device 106 where the first input represents a query of the user 104 for the agent 112 to answer… the agent-assist system 118 may generate an embedding representing the semantic meaning of the query, and identify subdocuments 302 having embeddings that are similar to (e.g., within a threshold distance in a vector space) [Thus, evaluating the concept embedding to determine a semantic relationship] the query embedding. [Thus, evaluating the concept embedding to determine a semantic relationship between the concept embedding and one or more document embeddings corresponding to a set of documents]”)
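The threshold-distance retrieval quoted above from Mahmoud can be illustrated with a minimal sketch. This is illustrative only and not part of the record: the function name, the toy two-dimensional vectors, and the threshold value are hypothetical stand-ins for Mahmoud's high-dimensional embedding space.

```python
import numpy as np

def find_related_subdocuments(query_emb, subdoc_embs, threshold=0.5):
    """Return indices of subdocument embeddings within a threshold
    distance of the query embedding in the shared vector space."""
    dists = np.linalg.norm(subdoc_embs - query_emb, axis=1)
    return [i for i, d in enumerate(dists) if d <= threshold]

# Toy example: three subdocument embeddings, two near the query.
query = np.array([1.0, 0.0])
subdocs = np.array([[0.9, 0.1],    # semantically close
                    [0.0, 1.0],    # unrelated
                    [1.1, -0.1]])  # semantically close
print(find_related_subdocuments(query, subdocs))  # -> [0, 2]
```

Subdocuments whose embeddings fall within the distance threshold are treated as semantically related to the query; the rest are not surfaced.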
providing, for display to the user, an indication of the determined semantic relationship corresponding to the concept embedding. (See Mahmoud [0109] “At 612, the agent-assist system 118 may cause presentation of a visual indicator on the display that indicates the first subdocument 302 as being relevant to the query.” See also Mahmoud [0024-0026] “After presenting the subdocuments, the agent-assist system may collect feedback from the agent and/or user in the conversation to determine a relevancy of the recommended answers or information.”)
However, Huh also discloses providing, for display to the user, an indication of the determined semantic relationship corresponding to the concept embedding in more detail. (See Huh [0005] “In response to a user query, documents and concept markers relevant to the query are determined.”
See also Huh [0044, 0084] “determining the relevance of a concept marker may include calculating a cosine similarity between a vector representation of the concept markers and the words in the query… an average embedding for each concept marker may be generated by averaging the embeddings for each word in the concept marker in the query. Similarly, an average embedding for each word in the query may be generated. [Thus, generating a concept embedding based on a concept definition from a user]… ranking the documents in the search results may include calculating a cosine similarity between a vector representation of query concept markers and concept markers assigned to the document to determine the semantic similarity between the query and the document… The cosine similarity between the average embedding vector of the query and the average embedding vector of the document may then be determined [Thus, determine a semantic relationship between the concept embedding and document embedding corresponding to a set of documents]. Re-ranker module 152 may rank documents based on the respective calculated cosine similarity.” See also Huh [0104], Fig. 4 “the re-ranked search results documents and concept markers [e.g. concept embedding] are provided to the user, via the GUI [Thus, a graphical representation]. For example, as shown in FIG. 4, in response to the selection of concept marker 410, concept marker set 420 is provided and displayed in GUI 300. Additionally, a portion of the re-ranked search result documents 430 [Thus, depicting the concept embedding in relation to the one or more document embeddings according to the determined semantic relationship]. may be displayed in GUI 300.”
[Image: media_image1.png (greyscale), reproducing FIG. 4 of Huh]
Thus, by providing a score and rank in the search result documents 430, Huh is providing, for display to the user, an indication of the determined semantic relationship corresponding to the concept embedding.)
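The averaged-embedding comparison quoted from Huh [0044, 0084] can be sketched as follows. This is an illustrative sketch only; the function names and toy vectors are hypothetical and are not drawn from Huh.

```python
import numpy as np

def average_embedding(word_embs):
    # Collapse per-word embeddings into a single average vector.
    return np.mean(word_embs, axis=0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rerank(query_word_embs, docs_word_embs):
    """Order document indices by cosine similarity between the average
    query embedding and each average document embedding."""
    q = average_embedding(query_word_embs)
    scores = [cosine_similarity(q, average_embedding(d)) for d in docs_word_embs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Toy data: document 0 points the same way as the query, document 1 is orthogonal.
query_words = np.array([[1.0, 0.0], [1.0, 0.2]])
doc_a = np.array([[0.9, 0.1]])
doc_b = np.array([[0.0, 1.0]])
print(rerank(query_words, [doc_a, doc_b]))  # -> [0, 1]
```

The resulting ordering corresponds to the score and rank that Huh displays with the re-ranked search result documents.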
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mahmoud, which identifies portions of the documents, or “subdocuments,” as being relevant to queries, to incorporate the teachings of Huh by providing a score and ranking in the document search results.
One would be motivated to do so to improve relevance and prioritization, allowing users to identify the most relevant results.
Regarding claim 22, Mahmoud in view of Huh [hereinafter Mahmoud-Huh] teaches all limitations and motivations of claim 21, wherein the set of documents comprises one or more of: a webpage; or an electronic file. (See Mahmoud [0044] “As shown in FIG. 2, the agent-assist pipeline 202 may obtain, at “1,” documents 206 from the knowledge-base source(s) 204… The documents may be text documents, FAQ webpages, PDFs, and/or any type of electronic document with information.”)
Regarding claim 23, Mahmoud-Huh teaches all limitations and motivations of claim 21, wherein each document embedding of the one or more document embeddings corresponds to a content subpart of a document in the set of documents. (See Mahmoud [0107] “the agent-assist system 118 may generate an embedding representing the semantic meaning of the query, and identify subdocuments 302 having embeddings [e.g. document embedding] that are similar to (e.g., within a threshold distance in a vector space) the query embedding.” See also Mahmoud [0066] “each document is a collection of subdocuments. A subdocument can be a paragraph in a document [Thus, corresponds to a content subpart of a document in the set of documents], a unit smaller than a paragraph (e.g., a sentence or a collection of contiguous sentences), or a collection of contiguous paragraphs.”)
Regarding claim 24, Mahmoud-Huh teaches all limitations and motivations of claim 21, wherein each document embedding of the one or more document embeddings corresponds to a plurality of content subparts of a document in the set of documents. (See Mahmoud [0107] “the agent-assist system 118 may generate an embedding representing the semantic meaning of the query, and identify subdocuments 302 having embeddings [e.g. document embedding] that are similar to (e.g., within a threshold distance in a vector space) the query embedding.” See also Mahmoud [0066] “each document is a collection of subdocuments. A subdocument can be a paragraph in a document, a unit smaller than a paragraph (e.g., a sentence or a collection of contiguous sentences), or a collection of contiguous paragraphs. [Thus, corresponds to a plurality of content subparts of a document in the set of documents]”)
Regarding claim 25, Mahmoud-Huh teaches all limitations and motivations of claim 24, wherein each content subpart of the content subparts is a paragraph of the document. (See Mahmoud [0107] “the agent-assist system 118 may generate an embedding representing the semantic meaning of the query, and identify subdocuments 302 having embeddings [e.g. document embedding] that are similar to (e.g., within a threshold distance in a vector space) the query embedding.” See also Mahmoud [0066] “each document is a collection of subdocuments. A subdocument can be a paragraph in a document [Thus, each content subpart is a paragraph of the document], a unit smaller than a paragraph (e.g., a sentence or a collection of contiguous sentences), or a collection of contiguous paragraphs.”)
Regarding claim 26, Mahmoud-Huh teaches all limitations and motivations of claim 21, wherein the concept definition includes one or more of: a text paragraph provided by the user. (See Mahmoud [0034-0035] “The agent-assist system 118 may analyze the text of the communication sessions 108 and determine context of the conversation, such as a semantic or meaning of the conversation… the techniques described herein include identifying portions of the documents, or “subdocuments,” that are more relevant to the queries or context of the conversation between the agents 112 and user 104” See also Mahmoud [0070], Fig. 4A “The agent-assist user interface (UI) 402 may present a conversation 120 [Thus, includes a text paragraph provided by the user] between a user 104 and an agent 112 as well as agent-assist recommendations 122 for the agent 112 to use to respond to the user 104. As shown, the conversation 120 includes user input 404 and agent input 406”)
[Image: media_image2.png (greyscale), reproducing FIG. 4A of Mahmoud]
Regarding claim 27, Mahmoud-Huh teaches all limitations and motivations of claim 21, wherein: the one or more document embeddings were generated using a machine learning model based on content of the set of documents; and the concept embedding is generated using the machine learning model. (See Mahmoud [0057] “the retriever component 218 may embed the query using an embedding model [Thus, the concept embedding is generated using the machine learning model] (that preserves the semantic relationships detailed earlier) into a vector q, and searches the vector space index that corresponds to the agent's 112 knowledge base profile. An agent 112 handling a user conversation can be assigned to a single profile at a time. Each subdocument [e.g. content of the set of documents] in the knowledge base profile, with references to the document to which it belongs, is represented as a point in the high-dimensional vector space (e.g., 1024 dimensions). The agent-assist system 118 finds the k nearest points, embedded using the same embedding model [Thus, the one or more document embeddings were generated using a machine learning model based on content of the set of documents], to the input q.”)
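The k-nearest-point lookup Mahmoud describes at [0057] can be sketched as a brute-force search over the vector index. This sketch is illustrative only: toy two-dimensional vectors stand in for the 1024-dimensional space, and the function name is hypothetical.

```python
import numpy as np

def k_nearest_subdocuments(q, index, k):
    """Return indices of the k subdocument points nearest to the
    query embedding q (brute-force search over the vector index)."""
    dists = np.linalg.norm(index - q, axis=1)
    return [int(i) for i in np.argsort(dists)[:k]]

# Toy index of three subdocument points around a query at the origin.
q = np.array([0.0, 0.0])
index = np.array([[1.0, 0.0], [3.0, 0.0], [0.5, 0.0]])
print(k_nearest_subdocuments(q, index, k=2))  # -> [2, 0]
```

In practice a system at this scale would use an approximate nearest-neighbor index rather than a brute-force scan, but the input/output contract is the same.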
Regarding claim 28, claim 28 recites all of the elements of claim 21 in method form. Therefore, the supporting rationale of the rejection of claim 21 applies equally well to claim 28.
Regarding claim 29, claim 29 recites all of the elements of claim 22 in method form. Therefore, the supporting rationale of the rejection of claim 22 applies equally well to claim 29.
Regarding claim 30, claim 30 recites all of the elements of claim 24 in method form. Therefore, the supporting rationale of the rejection of claim 24 applies equally well to claim 30.
Regarding claim 31, claim 31 recites all of the elements of claim 25 in method form. Therefore, the supporting rationale of the rejection of claim 25 applies equally well to claim 31.
Regarding claim 32, claim 32 recites all of the elements of claim 26 in method form. Therefore, the supporting rationale of the rejection of claim 26 applies equally well to claim 32.
Regarding claim 33, claim 33 recites all of the elements of claim 27 in method form. Therefore, the supporting rationale of the rejection of claim 27 applies equally well to claim 33.
Regarding claim 34, claim 34 recites all of the elements of claim 21 in method form. Therefore, the supporting rationale of the rejection of claim 21 applies equally well to claim 34.
Regarding claim 35, claim 35 recites all of the elements of claim 22 in method form. Therefore, the supporting rationale of the rejection of claim 22 applies equally well to claim 35.
Regarding claim 36, claim 36 recites all of the elements of claim 23 in method form. Therefore, the supporting rationale of the rejection of claim 23 applies equally well to claim 36.
Regarding claim 37, claim 37 recites all of the elements of claim 24 in method form. Therefore, the supporting rationale of the rejection of claim 24 applies equally well to claim 37.
Regarding claim 38, claim 38 recites all of the elements of claim 25 in method form. Therefore, the supporting rationale of the rejection of claim 25 applies equally well to claim 38.
Regarding claim 39, claim 39 recites all of the elements of claim 26 in method form. Therefore, the supporting rationale of the rejection of claim 26 applies equally well to claim 39.
Regarding claim 40, claim 40 recites all of the elements of claim 27 in method form. Therefore, the supporting rationale of the rejection of claim 27 applies equally well to claim 40.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OSCAR WEHOVZ whose telephone number is (571)272-3362. The examiner can normally be reached 8:00am - 5:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, APU M MOFIZ, can be reached at (571) 272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OSCAR WEHOVZ/Examiner, Art Unit 2161