Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Arguments and amendments filed 10/22/2025 have been examined.
Claims 1-44 have been cancelled; Claims 45-62 have been added.
Thus, Claims 45-62 are currently pending.
This Office Action is Final.
Response to Arguments
As to arguments concerning 35 USC 103, Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
As to arguments concerning 35 USC 112, the rejection of the cancelled claims is withdrawn. Please note further issues under 35 USC 112 below.
Applicant’s arguments with respect to the rejections under 35 USC 101 have been fully considered and are persuasive. The rejection under 35 USC 101 has been withdrawn.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 45-62 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claims 45, 47, 50, 51, 53, 56, 57, 59, and 62, the phrase "and/or" (in the limitations “media units and/or their identifiers,” for example) renders the claim indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
The dependent claims are also rejected because they inherit this defect and do not correct it.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 45-62 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al., US Pub. No. 2015/0293976, in view of Krishnakumar et al., US Pub. No. 2016/0358024.
As to claim 45 (and substantially similar claims 51 and 57):
Guo discloses
a method of retrieving and ordering one or more media units and/or their identifiers,
(Guo abstract: The search engine operates by ranking a plurality of documents based on a
consideration of the query; see also [0032] In another case, a document may correspond to any record in any type of data structure, or in any unstructured repository of records. For example, a document may correspond to an entry within a table, a node associated with a knowledge graph, and so on.; see also [0033] An entity document may include various entity components
which describe different characteristics of the entity to which it pertains. For example, the entity components may describe the title of the entity, the attribute values associated with the entity)
each of the one or more media units being associated with a respective feature set of one or more attribute values,
(Guo teaches various features/attributes/concept vectors, i.e. “a respective feature set of one or more attribute values”
See [0064] A ranking module 518 can then rank the plurality of candidate documents based on
a plurality of features, including the relevance measures;
see also [0061] each entity document may have different components which describe the entity. For example, an entity that pertains to a particular movie may have attribute values that describe the title of the movie, the genre of the movie;
see also [0047] The second transformation module 204 operates on whatever candidate item that is being compared against the query. For example, the second transformation module 204
may use a second instance of the model 106 to project document information to a document concept vector y_n. The document information describes the text content of a particular
document. The document concept vector y_n, in turn, conveys the meaning of the document in the same semantic space as the context concept vector y_c; see also [0051] The features may include, in part, the relevance measures generated by the comparison module 206. Alternatively, or in addition, the features may include the original concept vectors generated by the transformation modules)
the method comprising, by one or more processors associated with one or more memories:
(Guo Fig. 14 and [0121])
generating, for each of the one or more media units and/or their identifiers, a feature set
comprising one or more attribute values obtained by transforming the one or more media units
using one or more artificial neural networks,
(Guo teaches a transformation module / neural network model to project document information to a document concept vector [0047] The second transformation module 204 operates on whatever candidate item that is being compared against the query. For example, the second transformation module 204 may use a second instance of the model 106 to project document information to a document concept vector y_n. The document information describes the text content of a particular document. The document concept vector y_n, in turn, conveys the meaning of the document in the same semantic space as the context concept vector y_c;
see also [0029] FIG. 1 shows an environment 102 that includes a training system 104 for producing a deep learning model 106. A deep learning model 106 (henceforth, simply "model") refers to any model that expresses the underlying semantic content of an input linguistic item. In one implementation, the model 106 may correspond to a multilayered neural network, also referred to as a deep neural network (DNN).;
See also [0051] The features may include, in part, the relevance measures generated by the comparison module 206. Alternatively, or in addition, the features may include the original concept vectors generated by the transformation modules;)
receiving one or more user inputs following presentation of at least one of the one or more media units;
(Guo teaches training data/ click-through data from a data collection module for a context based search model, i.e. “receiving one or more user inputs following presentation of at least one of the one or more media units”
See [0027-0029] [0027] A. Illustrative Context-Based Search Mechanisms
[0028] A.1. Overview [0029] FIG. 1 shows an environment 102 that includes a training system 104 for producing a deep learning model 106. A deep learning model 106 (henceforth, simply "model") refers to any model that expresses the underlying semantic content of an input linguistic item. In one implementation, the model 106 may correspond to a multilayered neural network, also referred to as a deep neural network (DNN). Subsection A.2 (below) provides further details regarding one implementation of the model 106. The training system 104 produces the model 106 based on training data maintained in a data store 108. (In all cases herein, the term "data store" may correspond to one or more underlying physical storage mechanisms, provided at a single site or distributed over plural sites.) A data collection module 110 provides the training data based on any data collection technique. Subsection A.3 (below) provides further details regarding one implementation of the training system 104.;
See also [0090] However formed, the click-through data encompasses a plurality of instances of training data, each constituting a training example. Each example includes a context
(C) associated with a particular submitted query (Q), a document (D+) that the user selected in response to the query (and its associated context), and at least one document (D-) that the
user did not select in response to the query (and its associated context). In one case, the data collection module 110 can mine this information from archives of a search engine. In that situation, a non-clicked document (D-) of a training instance may correspond to an actual document that was offered to a user in response to a query, but which the user declined to
select.;
see also [0089] The click-through data generally describes: (1) queries submitted by actual users over some span of time; (2) an indication of documents that the users clicked on and the
documents that the users failed to click on after submitting those queries; and (3) information describing the contexts associated with the respective queries)
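For illustration only (a hypothetical sketch, not code from Guo, Krishnakumar, or the claims; all names are illustrative), the click-through training example described in Guo [0090] — a context (C), a query (Q), a selected document (D+), and one or more non-selected documents (D-) — can be represented as:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ClickThroughExample:
    """One training instance mined from search logs, following the
    structure described in Guo [0090] (field names are illustrative)."""
    context: str               # C: context associated with the query
    query: str                 # Q: the submitted query
    clicked_doc: str           # D+: document the user selected
    unclicked_docs: List[str]  # D-: documents offered but not selected

# Hypothetical example instance
example = ClickThroughExample(
    context="user browsing movie reviews",
    query="sci-fi classics",
    clicked_doc="doc_42",
    unclicked_docs=["doc_7", "doc_13"],
)
```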
maintaining and updating, in response to the received one or more user inputs, an intent
model defined within a same attribute space as the feature sets;
(Guo teaches iterative training based on updated click through data and to generate at least one DNN to compare queries to documents, i.e. “maintaining and updating, in response to the received one or more user inputs, an intent model defined within a same attribute space as the feature sets”
See [0111-0112] [0111] The training system 104 can also use the equations described above to generate at least one DNN which can be used to compare the conceptual relatedness of queries to documents. The equations can be modified to perform this training task by replacing each occurrence of C (pertaining to context) with Q (pertaining to a query)… [0112] Finally, the nexus between context and documents may be exhibited in other information, that is, other than
click-through data mined from click logs. In other implementations, the collection module 110 can collect such other information to produce training data, and the training system 104 can operate on that training data, instead of, or in addition to, click-through data.;
see also [0091] The training system 104 operates by using an iterative solving mechanism 1002 to iteratively achieve an objective defined by an objective function 1004, by iteratively changing
the parameter values of the model A. When the iterative processing is finished, the final parameter values constitute the trained model A;
see also [0076] In other cases, the ranking framework 120 can use other techniques to
reduce the dimensionality of the input vectors (besides the above n-gram hashing technique), such as a random projection technique. In another case, the ranking framework 120 can entirely omit the use of DRMs, meaning that it operates on the original uncompressed input vectors.)
recalling one or more candidate lists of media units by fetching one or more media units
by applying one or more geometric operations with respect to the one or more derived attribute values of the one or more media units,
(Guo teaches ranking candidate documents based on distance measures, i.e. “fetching one or more media units by applying one or more geometric operations” [0064] A ranking module 518 can then rank the plurality of candidate documents based on a plurality of features, including the relevance measures, for each document, fed to it by the comparison module 512. The dashed lines leading into the ranking module 518 indicate that the ranking module 518 can, in addition, or alternatively, perform its ranking based on the original concept vectors, e.g., YQ, Yo and Yn;
see also claim 17: The computer readable storage medium of claim 16, wherein said logic configured to rank performs ranking based on a plurality of relevance measures, each relevance measure identifying a distance between the context concept vector and a particular document concept vector.;
see also [0084] each comparison module can compute the semantic relationship (e.g., similarity) between the context C and a document D as a cosine similarity measure)
wherein the one or more geometric operations includes calculating one or more distance measures between the updated intent representation and the feature sets of the one or more media units for inclusion in the one or more candidate lists;
(Guo teaches ranking candidate documents based on a relevance measure identifying a distance between the context concept vector and a particular document concept vector., i.e. “calculating one or more distance measures between the updated intent representation and the feature sets of the one or more media units”
See [0064] A ranking module 518 can then rank the plurality of candidate documents based on
a plurality of features, including the relevance measures, for each document, fed to it by the comparison module 512. The dashed lines leading into the ranking module 518 indicate that
the ranking module 518 can, in addition, or alternatively, perform its ranking based on the original concept vectors, e.g., YQ, Yo and Yn;
see also claim 17: The computer readable storage medium of claim 16, wherein said logic configured to rank performs ranking based on a plurality of relevance measures, each relevance measure identifying a distance between the context concept vector and a particular document concept vector.;
see also [0084] each comparison module can compute the semantic relationship (e.g., similarity) between the context C and a document D as a cosine similarity measure)
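As an illustrative aside (hypothetical code, not from either reference or the claims), a cosine-similarity relevance measure of the kind cited above — comparing a context/intent concept vector against document concept vectors and ranking candidates by the result — can be sketched as:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(intent_vec, doc_vecs):
    """Order candidate documents by descending cosine similarity to the
    intent/context vector (larger similarity = smaller angular distance)."""
    scored = [(doc_id, cosine_similarity(intent_vec, v))
              for doc_id, v in doc_vecs.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Hypothetical concept vectors in a shared semantic space
intent = [0.9, 0.1, 0.0]
docs = {"doc_a": [1.0, 0.0, 0.0], "doc_b": [0.0, 1.0, 0.0]}
ranking = rank_by_similarity(intent, docs)
# doc_a, being closer in angle to the intent vector, ranks first
```

The similarity scores here could serve as one relevance feature among others fed to a ranking module, consistent with the cited passages.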
ranking the candidates by computing a score for each of the candidates, and ordering the
candidates according to their score to form an ordered list;
(Guo [0086] A ranking module 912 may receive the relevance measures produced by the comparison modules (908, ... , 910). The ranking module 916 may then assign a ranking
score to each candidate entity document based on the relevance measures, together with any other features.)
and
transmitting one or more media units and/or their identifiers from the ordered list for
presentation
(Guo [0116] In block 1114, the ranking framework 120 determines a ranking score for the candidate document based at least on the relevance measure. In block 1116, the ranking framework 120 provides a search result based on the ranking score (e.g., after all other candidate documents have been processed in a similar manner to that described above).;
See also Fig. 11 item 1116: “PROVIDE SEARCH RESULTS BASED ON AT LEAST THE RANKING SCORE” and Fig. 1 “USER COMPUTING DEVICE” and “SEARCH RESULTS”).
While Guo discloses using activations in the hidden layer (see [0081, 0103]),
Guo does not explicitly disclose:
the transforming comprising inputting the one or more media units into the one or more neural networks and deriving the one or more attribute values from neuron activations associated with one or more hidden layers of the one or more neural networks;
However, Krishnakumar discloses:
the transforming comprising inputting the one or more media units into the one or more neural networks and deriving the one or more attribute values from neuron activations associated with one or more hidden layers of the one or more neural networks;
(Krishnakumar teaches a deep convolutional neural network (DCNN) that extracts a feature vector, where the neural network uses one or more hidden layers
see also [0011] Yet another aspect of the disclosure relates to a method for providing images similar to a query image from within a set of images. The method comprises receiving a
query image from the user and providing said query image as an input to a deep convolutional neural network (DCNN), wherein the DCNN extracts a feature vector of said query
image. The method further comprises reducing the dimensionality of the extracted feature vector to form a reduced dimensional feature vector and subsequently splitting the
reduced dimensional feature vector into a plurality of query feature segments.;
see also [0027] The nodes 204 (j) are referred to as intermediate/hidden nodes and are
configured to accept input from one or more input nodes and provide output to one or more hidden nodes and/or output nodes 206 (k). The input nodes 202 (i) form an input layer,
the hidden nodes 204 (j) form one or more hidden layers and the output nodes 206 (k) form an output layer. Although only one hidden layer has been shown in FIG. 2, it will be
appreciated that any number of hidden layers may be implemented in the artificial neural network depending upon the complexity of the decision to be made, the dimensionality
of the input data and size of the dataset used for training. In a deep neural network, large number of hidden layers are stacked one above the other, wherein each layer computes a non-linear transformation of the outputs from the previous layer.;)
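For illustration only (a hypothetical sketch, not code from Krishnakumar or the claims; the network shape and weights are invented for the example), deriving attribute values from the neuron activations of a hidden layer of a small feedforward network might look like:

```python
def relu(x):
    """Rectified linear activation applied elementwise."""
    return [max(0.0, v) for v in x]

def dense(inputs, weights, biases):
    """One fully connected layer: out_j = sum_i inputs[i] * weights[i][j] + biases[j]."""
    return [sum(i * w for i, w in zip(inputs, col)) + b
            for col, b in zip(zip(*weights), biases)]

def hidden_features(media_unit_vec, w1, b1):
    """Feature set = activations of the HIDDEN layer, not the output layer."""
    return relu(dense(media_unit_vec, w1, b1))

# Hypothetical 3-input -> 2-hidden-unit network
w1 = [[0.5, -0.2],
      [0.1,  0.4],
      [-0.3, 0.8]]
b1 = [0.0, 0.1]
features = hidden_features([1.0, 2.0, 3.0], w1, b1)
# features is a 2-element attribute-value vector usable for distance comparisons
```

The point of the sketch is that the feature set is read from the hidden-layer activations rather than the network's final output, mirroring the claimed derivation of attribute values from hidden-layer neuron activations.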
It would have been obvious to one having ordinary skill in the art, as of the effective filing date, to apply a deep convolutional neural network (DCNN) that extracts a feature vector using hidden layers, as taught by Krishnakumar, to the system of Guo. Krishnakumar teaches receiving a query image from the user and providing the query image as an input to a DCNN, wherein the DCNN extracts a feature vector of the query image. Doing so would provide image processing systems and methods that facilitate scene recognition while minimizing false positives, enable large-scale recognition and classification of indoor and outdoor scenes, and efficiently provide images similar to a query image through similarity matching that minimizes reconstruction error. (Krishnakumar [0008-0011]).
As to claim 46, Guo as modified discloses the method of claim 45, wherein the score includes weight values associated with the one or more derived attribute values
(Guo [0054] The training process can attach an environment-specific weight to each ranking feature to establish the extent to which that feature influences the overall ranking score for a candidate document under consideration.).
As to claim 47, Guo as modified discloses the method of claim 45, wherein the receiving the one or more user inputs corresponds to identifying one or more selected media units, and/or identifying one or more unselected media units
(Guo [0089] The click-through data generally describes: (1) queries submitted by actual users over some span of time; (2) an indication of documents that the users clicked on and the
documents that the users failed to click on after submitting those queries; and (3) information describing the contexts associated with the respective queries. Here, to repeat, the
term "click" is intended to have broad connotation. It may describe the case in which a user literally clicks on an entry within search results, or some other presentation of options,
using a mouse device. But the term click also encompasses the cases in which a user shows interest in a document in any other manner.;
see also [0090] However formed, the click-through data encompasses a plurality of instances of training data, each constituting a training example. Each example includes a context
(C) associated with a particular submitted query (Q), a document (D+) that the user selected in response to the query (and its associated context), and at least one document (D-) that the
user did not select in response to the query (and its associated context).).
As to claim 48, Guo as modified discloses the method of claim 45, wherein the method further comprises storing updated state information for future use
(Guo teaches collecting/maintaining a click-through data store; see [0088] FIG. 10 shows one implementation of the training system 104 of FIG. 1. In one illustrative and non-limiting case, the training system 104 processes a corpus of clickthrough data (provided in a data store 108), to generate the model 106.; see also [0029] The training system 104 produces the model 106 based on training data maintained in a data store 108. (In all cases herein, the term "data store" may correspond to one or more underlying physical storage mechanisms, provided at a single site or distributed over plural sites.) A data collection module 110 provides the training data based on any data collection technique. Subsection A.3 (below) provides further details regarding one implementation of the training system 104.; see also [0112] Finally, the nexus between context and documents may be exhibited in other information, that is, other than click-through data mined from click logs. In other implementations, the collection module 110 can collect such other information to produce training data, and the training system 104 can operate on that training data, instead of, or in addition to, click-through data.).
As to claim 49, Guo as modified discloses the method of claim 45, wherein the method further comprises accessing stored state information as an input for the maintaining and updating of the intent model
(Guo teaches a training system produces the model based on training data maintained in a data store, i.e. “comprises accessing stored state information as an input for the maintaining and updating of the intent model” see [0088] FIG. 10 shows one implementation of the training system 104 of FIG. 1. In one illustrative and non-limiting case, the training system 104 processes a corpus of clickthrough data (provided in a data store 108), to generate the model 106.; see also [0029] The training system 104 produces the model 106 based on training data maintained in a data store 108. (In all cases herein, the term "data store" may correspond to one or more underlying physical storage mechanisms, provided at a single site or distributed over plural sites.) A data collection module 110 provides the training data based on any data collection technique. Subsection A.3 (below) provides further details regarding one implementation of the training system 104.; see also [0112] Finally, the nexus between context and documents may be exhibited in other information, that is, other than click-through data mined from click logs. In other implementations, the collection module 110 can collect such other information to produce training data, and the training system 104 can operate on that training data, instead of, or in addition to, click-through data.).
As to claim 50, Guo as modified discloses the method of claim 45, wherein transmitting the one or more media units and/or their identifiers from the ordered list for presentation includes transmitting more similar and/or more dissimilar media units from the ordered list based on a similarity measure defined by the recalling and the ranking
(Guo teaches ranking results based on similarity between the context and the candidate document, i.e. “transmitting more similar and/or more dissimilar media units from the ordered list based on a similarity measure” [0116] In block 1112, the ranking framework 120 compares the context concept vector with the document concept vector to produce a relevance measure, reflecting a degree of a defined semantic relationship (e.g., similarity) between the context and the candidate document. In block 1114, the ranking framework 120 determines a ranking score for the candidate document based at least on the relevance measure. In block 1116, the ranking framework 120 provides a search result based on the ranking score (e.g., after all other candidate documents have been processed in a similar manner to that described above).).
Referring to claim 52, this dependent claim recites similar limitations as claim 46;
therefore, the arguments above regarding claim 46 are also applicable to claim 52.
Referring to claim 53, this dependent claim recites similar limitations as claim 47;
therefore, the arguments above regarding claim 47 are also applicable to claim 53.
Referring to claim 54, this dependent claim recites similar limitations as claim 48;
therefore, the arguments above regarding claim 48 are also applicable to claim 54.
Referring to claim 55, this dependent claim recites similar limitations as claim 49;
therefore, the arguments above regarding claim 49 are also applicable to claim 55.
Referring to claim 56, this dependent claim recites similar limitations as claim 50;
therefore, the arguments above regarding claim 50 are also applicable to claim 56.
Referring to claim 58, this dependent claim recites similar limitations as claim 46;
therefore, the arguments above regarding claim 46 are also applicable to claim 58.
Referring to claim 59, this dependent claim recites similar limitations as claim 47;
therefore, the arguments above regarding claim 47 are also applicable to claim 59.
Referring to claim 60, this dependent claim recites similar limitations as claim 48;
therefore, the arguments above regarding claim 48 are also applicable to claim 60.
Referring to claim 61, this dependent claim recites similar limitations as claim 49;
therefore, the arguments above regarding claim 49 are also applicable to claim 61.
Referring to claim 62, this dependent claim recites similar limitations as claim 50;
therefore, the arguments above regarding claim 50 are also applicable to claim 62.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
CONTACT INFORMATION
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVAN S ASPINWALL whose telephone number is (571)270-7723. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neveen Abel-Jalil can be reached at 571-270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Evan Aspinwall/Primary Examiner, Art Unit 2152
1/28/2026