Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office Action is in response to the application filed on 12/12/2025.
Claims 1-20 are pending.
Terminal Disclaimer
The Terminal Disclaimer filed on 12/12/2025 has been acknowledged and approved.
Examiner Notes
The Examiner cites particular columns, paragraphs, figures, and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or discussed by the Examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Saini et al. (US PGPUB 2025/0061136, hereinafter Saini), in view of Michael Galli (US PGPUB 2024/0289554, hereinafter Galli).
As per claim 1, Saini discloses:
(Currently Amended) A system, comprising:
one or more processors coupled to non-transitory memory, the one or more processors (Saini, e.g., [0085-0086], “...one or more processors...”) configured to:
maintain a dataset corresponding to a first embeddings space (Saini, e.g., [0028], “...input query to a first lookup vector, and then uses the first lookup vector to identify a first metadata embedding...” and [0079-0080], “... input query into a query embedding...using a first set of training examples...the query embedding being a vector in a first vector space...”);
receive a query to search the dataset, the query corresponding to a second embeddings space (Saini, e.g., [0028], “... second lookup vector 210 to identify a second metadata embedding E.sub.M2 in a second metadata vector space...” and [0079-0080], “...the metadata embedding being associated with a particular instance of metadata and being a vector in a second vector space...”);
generate a transformed set of query embeddings using a transformation data structure and the query, the transformed set of query embeddings corresponding to the first embeddings space, the transformation data structure trained to transform input embeddings of the second embeddings space to the first embeddings space, wherein the first embeddings space is different from the second embeddings space (Saini, e.g., [0025-0029], [0079-0080], “...first metadata model 202 (M1) transforms the input query 104 to a first lookup vector 204, and then uses the first lookup vector 204 to identify a first metadata embedding E.sub.M1 in a first metadata vector space 206. A second metadata model 208 (M2) transforms the input query 104 into a second lookup vector 210, and then uses the second lookup vector 210 to identify a second metadata embedding E.sub.M2 in a second metadata vector space 2...”); and
execute a search operation for the dataset using the transformed set of query embeddings (Saini, e.g., [0025-0028], “...The metadata engines (120, . . . , 122) use respective metadata models to transform queries into lookup vectors in different respective vector spaces, and then use the lookup vectors to identify matching metadata embeddings. The query-augmenting component 110 uses an augmentation model to produce augmented embeddings in yet another vector space...”).
To make the record clearer regarding the language “search operation for the dataset using the transformed set of query embeddings” (although, as stated above, Saini functionally discloses this feature (Saini, e.g., [0025-0028])), the Examiner additionally relies on Galli.
Galli, in an analogous art, discloses a “search operation for the dataset using the transformed set of query embeddings” (Galli, e.g., [0015-0016], “... a plurality of resource vectors related to a list of resources, receiving, at a server, a query from a user, embedding, by the sentence transformer, a query vector based on the query, generating similarity scores between the query and the list of resources, and determining, from the list of resources, one or more probable answers to the query based on the similarity scores...” and further see [0044], [0056], [0066-0067], “...system can vectorize it via embedding to generate vectorized query embeddings...first sentence transformer can generate first query vector, second sentence transformer 206 can generate a second query vector 214, and a third sentence transformer 208 can generate a third query vector...”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Galli and Saini to search for content containing the same or similar keywords as the search query and retrieve relevant answers, thereby improving a search engine's handling of search queries (Galli, e.g., [0002-0004]).
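For illustration only (this sketch is not taken from Saini or Galli; all names, dimensions, and values are hypothetical), the mapping applied to claim 1 — transforming a query embedding from a second embeddings space into the dataset's first embeddings space and then executing a similarity search — can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned transformation data structure: maps the 4-dim
# "second" embeddings space into the 3-dim "first" embeddings space.
W = rng.normal(size=(3, 4))

# Dataset embeddings already encoded in the first embeddings space.
dataset = rng.normal(size=(5, 3))

def transform(query_emb: np.ndarray) -> np.ndarray:
    """Apply the transformation data structure to a query embedding."""
    return W @ query_emb

def search(query_emb: np.ndarray, k: int = 2) -> np.ndarray:
    """Cosine-similarity search over the dataset using the transformed query."""
    q = transform(query_emb)
    sims = dataset @ q / (np.linalg.norm(dataset, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)[:k]

query = rng.normal(size=4)   # query in the second embeddings space
top_k = search(query)        # indices of the best-matching dataset items
```

The sketch is offered only to make the claim mapping concrete; neither reference discloses this particular implementation.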
As per claim 2, the combination of Galli and Saini discloses:
(Original) The system of claim 1, wherein the one or more processors are further configured to:
select the transformation data structure from a plurality of transformation data structure based on the query (Saini, e.g., [0038-0041], “... selected by the specific user who submitted the input query... adding text to the input query 104 that identifies the location of the user, and then transforming the thus-augmented query into a query embedding...” and [0066-0067]).
As per claim 3, the combination of Galli and Saini discloses:
(Original) The system of claim 1, wherein the dataset further comprises a corpus of text data encoded in the first embeddings space (Saini, e.g., [0025-0028], “...input query to a first lookup vector, and then uses the first lookup vector to identify a first metadata embedding...” and [0079-0080], “... input query into a query embedding...using a first set of training examples...the query embedding being a vector in a first vector space...”).
As per claim 4, the combination of Galli and Saini discloses:
(Original) The system of claim 1, wherein the dataset further comprises a corpus of non-text data encoded in the first embeddings space (Saini, e.g., [0025-0028], “...input query to a first lookup vector, and then uses the first lookup vector to identify a first metadata embedding...” and [0079-0080], “... input query into a query embedding...using a first set of training examples...the query embedding being a vector in a first vector space...”) (the Examiner asserts that input video, image, and audio are equivalent to non-text data).
As per claim 5, the combination of Galli and Saini discloses:
(Original) The system of claim 1, wherein the one or more processors are further configured to: generate a set of query embeddings based on the query (Saini, e.g., [0025-0028] (set of queries)).
As per claim 6, the combination of Galli and Saini discloses:
(Original) The system of claim 5, wherein the one or more processors are further configured to: apply the transformation data structure to the set of query embeddings by multiplying a set of weight values stored in the transformation data structure by each embedding in the set of query embeddings (Saini, e.g., [0022-0024], “...executing a task using machine-trained weights that are produced in a training operation... “weight” refers to any type of parameter value that is iteratively produced by the training operation... “embedding” refers to a vector in a vector space..., a query encoder 106 transforms the input query 104 into a query embedding E.sub.Q....” and further see [0059-0062]).
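For illustration only (hypothetical weight values; not drawn from Saini's disclosure), the multiplication recited in claim 6 — applying stored weight values to each embedding in a set of query embeddings — can be sketched as:

```python
import numpy as np

# Hypothetical weight values "stored in the transformation data structure".
weights = np.array([[0.5, 0.0],
                    [0.0, 2.0]])

# A set of query embeddings (one per row), in the source embeddings space.
query_embeddings = np.array([[1.0, 1.0],
                             [2.0, 0.5]])

# Multiply the stored weights by each embedding in the set
# (one matrix product applies the weights to every row at once).
transformed = query_embeddings @ weights.T
# transformed → [[0.5, 2.0], [1.0, 1.0]]
```

Again, this is merely a concrete instance of the recited operation, not the implementation of either reference.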
As per claim 7, the combination of Galli and Saini discloses:
(Original) The system of claim 1, wherein the one or more processors are further configured to: train the transformation data structure using a training dataset different from the query (Saini, e.g., [0024-0028], “...trained using different respective sets of training... embeddings identified by different models, and the manner in which the query-augmenting component 110 combines the different embeddings into an augmented embedding... first metadata model 202 (M1) transforms the input query 104 to a first lookup vector 204, and then uses the first lookup vector 204 to identify a first metadata embedding E.sub.M1 in a first metadata vector space 206. A second metadata model 208 (M2) transforms the input query 104 into a second lookup vector 210, and then uses the second lookup vector 210 to identify a second metadata embedding...”).
As per claim 8, the combination of Galli and Saini discloses:
(Original) The system of claim 7, wherein the training dataset comprises ground truth data mapping the second embeddings space to the first embeddings space (Saini, e.g., [0024-0028], “...different embeddings into an augmented embedding... first metadata model 202 (M1) transforms the input query 104 to a first lookup vector 204, and then uses the first lookup vector 204 to identify a first metadata embedding E.sub.M1 in a first metadata vector space 206. A second metadata model 208 (M2) transforms the input query 104 into a second lookup vector 210, and then uses the second lookup vector 210 to identify a second metadata embedding...”).
As per claim 9, the combination of Galli and Saini discloses:
(Original) The system of claim 1, wherein the one or more processors are further configured to: provide a set of query results responsive to executing the query (Saini, e.g., [0025-0030], “... The metadata-generating system 108 includes one or more metadata engines (120, . . . , 122). The metadata engines (120, . . . , 122) use respective metadata models to transform queries into lookup vectors in different respective vector spaces, and then use the lookup vectors to identify matching metadata embeddings... the quality of search results... produces output results that have some predefined relationship with the output results...”).
Claims 10-18 are essentially the same as claims 1-9 except that they set forth the claimed invention as a method rather than a system, respectively and correspondingly, and are therefore rejected for the same reasons set forth in the rejections of claims 1-9.
Claims 19-20 contain essentially the same subject matter as claims 1-2 and therefore are rejected under the same rationale.
Response to Arguments
The Examiner respectfully reminds applicant of the broadest reasonable interpretation standard (see MPEP 2111): “During examination, the claims must be interpreted as broadly as their terms reasonably allow.” In re American Academy of Science Tech Center, 367 F.3d 1359, 1369, 70 USPQ2d 1827, 1834 (Fed. Cir. 2004) (the USPTO uses a different standard for construing claims than that used by district courts; during examination the USPTO must give claims their broadest reasonable interpretation). In Phillips v. AWH Corp., 415 F.3d 1303, 75 USPQ2d 1321 (Fed. Cir. 2005), the court further elaborated on the “broadest reasonable interpretation” standard and recognized that “[t]he Patent and Trademark Office (‘PTO’) determines the scope of claims in patent applications not solely on the basis of the claim language, but upon giving claims their broadest reasonable construction.” Thus, when interpreting claims, the courts have held that Examiners should (1) interpret claim terms as broadly as their terms reasonably allow and (2) interpret claim phrases as broadly as their construction reasonably allows.
Applicant’s arguments filed 12/12/2025 with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection necessitated by applicant’s amendment to the claims. Applicant’s newly amended features are taught, expressly or implicitly, by the prior art of record (see the new ground(s) of rejection set forth above).
The Examiner respectfully submits that, with respect to the newly amended subject matter, proper paragraphs from the cited references have been applied to reject the claims in response to the amendments; please refer to the corresponding section of this Office action.
Additional Art Considered
The prior art made of record and not relied upon is considered pertinent to the Applicants’ disclosure.
The following patents and papers are cited to further show the state of the art at the time of Applicants’ invention with respect to: maintaining a dataset comprising a first set of embeddings corresponding to a first embeddings space and stored in association with a set of query results for the first set of embeddings; training a transformation data structure using the first set of embeddings and the set of query results; and executing a search operation for the second embeddings space by applying the transformation data structure to a second set of embeddings corresponding to the first embeddings space.
a. Musoles et al. (US Patent 11,790,889, hereinafter Musoles), “Features Engineering With Question Generation,” discloses “obtaining a corpus of natural-language text documents, automatically generating questions about information in corresponding portions of the documents, and associating the questions with the corresponding portions of the documents.”
Musoles also teaches determining a first set of embedding vectors based on the question and determining a second set of embedding vectors based on the query [col. 1-2, lines 60-5].
Musoles further teaches identifying a vertex of an ontology graph based on a first embedding vector by matching the first embedding vector with a set of embedding vectors corresponding to a set of vertices of the ontology graph [col. 5, lines 20-60].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUAN A PHAM whose telephone number is (571)270-3173. The examiner can normally be reached M-F 7:45 AM - 6:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi can be reached on 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TUAN A PHAM/Primary Examiner, Art Unit 2163