DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to applicant’s amendment filed on 01/09/2026.
Claims 1-20 are pending and examined.
Response to Arguments
Applicant’s arguments filed on 01/09/2026 have been fully considered.
The amended claims do not overcome the previously issued 101 abstract idea rejection; see the updated 101 rejection below.
Regarding the 103 rejection, applicant argued that the cited prior art does not teach the amended claim limitations of “obtaining a second set of training data to train a second machine learning model, wherein the second set of training data comprises the one or more training data items of the first set of training data and one or more additional training data items reflecting one or more second attributes of the respective media content item, wherein the one or more second attributes provide an increased level of granularity with respect to the respective media content item relative to the one or more first attributes”. The examiner respectfully disagrees. The examiner interprets the above limitations as follows: the second set of training data contains training items (first attributes) from the first set of training data and some additional training data items (second attributes); the additional training data items (second attributes) contain more contextual data that further defines a media content item; and the second attributes enhance the first attributes by adding more contextual data to them (i.e., the second attributes provide increased granularity/detail about a media content item compared to the first attributes). However, Sernau (paragraph [0031]; claims 1, 8) states, “in particular embodiments, the content item may have a first set of attributes and a second set of attributes. For example, if the content item is a song, the first set of attributes may include a song title/label and description, and the second set of attributes may include song duration”.
Thus, Sernau discloses a training data set (second set) that contains training items (first attributes) and some additional training data items (second attributes); the additional training data items contain more contextual data that further defines a media content item (the first attributes include song title and description, while the second attributes include song duration; the second attributes thus provide a different level of granularity about a media item compared to the first attributes). Kobayashi further discloses (paragraphs [0054][0058][0091]) generating a first training data set containing first attributes and generating a second training dataset, where the second training dataset includes the first attributes from the first training dataset plus additional attributes (second attributes). Therefore, the examiner believes the combination of Kobayashi and Sernau teaches the amended claim limitations.
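For illustration only (not part of the record, and using hypothetical attribute names), the examiner’s interpretation of the disputed limitation, that the second set comprises the items of the first set plus additional items reflecting finer-grained second attributes, may be sketched as:

```python
# Illustrative sketch of the claimed training-data augmentation; all field
# names (item_id, title, description, duration_sec) are hypothetical.

# First set: training items reflecting first attributes of media content items
# (e.g., song title and description, per Sernau paragraph [0031]).
first_set = [
    {"item_id": 1, "title": "Song A", "description": "upbeat pop track"},
    {"item_id": 2, "title": "Song B", "description": "slow ballad"},
]

# Additional items reflecting second attributes (e.g., song duration), which
# add contextual detail, i.e., an increased level of granularity.
additional_items = [
    {"item_id": 1, "duration_sec": 215},
    {"item_id": 2, "duration_sec": 302},
]

# Second set: the items of the first set plus the additional items.
second_set = first_set + additional_items

# The size of the second set meets or exceeds the size of the first set.
assert len(second_set) >= len(first_set)
```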
The examiner is available for a phone interview with applicant.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Statutory Category: Claim 1 recites a method comprising: obtaining simulation data associated with a simulation test performed with respect to a first set of training data of a first machine learning model, wherein one or more training data items of the first set of training data reflect at least a first attribute of a respective media content item; responsive to determining that the obtained simulation data satisfies one or more criteria, obtaining a second set of training data to train a second machine learning model, wherein the second set of training data comprises the one or more training data items of the first set of training data and one or more additional training data items reflecting one or more second attributes of the respective media content item, wherein the one or more second attributes provide an increased level of granularity with respect to the respective media content item relative to the one or more first attributes, and wherein a size of the second set of training data meets or exceeds a size of the first set of training data; and causing the second machine learning model to be trained using the second set of training data.
Step 2A – Prong 1: Claim 1 recites: responsive to determining that the obtained simulation data satisfies one or more criteria (a mental step of determination). This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers an abstract idea, i.e., performance of the limitation in the mind or manually. That is, nothing in the claim elements precludes the steps from practically being performed mentally or using pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the mental processes grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A, Prong 1.
This judicial exception is not integrated into a practical application. In particular, claim 1 recites additional elements such as “obtaining simulation data associated with a simulation test performed with respect to a first set of training data of a first machine learning model, wherein one or more training data items of the first set of training data reflect at least a first attribute of a respective media content item” and “obtaining a second set of training data to train a second machine learning model, wherein the second set of training data comprises the one or more training data items of the first set of training data and one or more additional training data items reflecting one or more second attributes of the respective media content item, wherein the one or more second attributes provide an increased level of granularity with respect to the respective media content item relative to the one or more first attributes, and wherein a size of the second set of training data meets or exceeds a size of the first set of training data”. The examiner notes that, under the broadest reasonable interpretation, these elements amount to mere data gathering for a mental process, which does not impose any meaningful limits on practicing the mental process (an insignificant additional element and extra-solution activity). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they amount to insignificant extra-solution activity under Step 2B.
In addition, claim 1 recites additional elements such as causing the second machine learning model to be trained using the second set of training data, which is a post-solution activity of using training data on a ML model and a well-understood, routine, conventional (WURC) activity, as evidenced by Kobayashi (claims 1-2, a ML model is trained using another set of training data). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea under Step 2A, Prong 2, and Step 2B.
Dependent claims 2-8 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of dependent claims 2-8 recite more steps of a mental process (identifying, determining) which can be performed mentally or using pen and paper. The dependent claims also recite limitations such as receiving metadata, which amounts to extra-solution activity of receiving certain information. The dependent claims also recite the limitation of running a simulation, which is considered extra-solution activity that is a well-understood, routine, conventional (WURC) activity, as evidenced by Kobayashi (claims 1-2, running a simulation using ML models). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, these claims are not patent eligible.
Independent claim 9 (an apparatus to perform a method similar to claim 1) and dependent claims 10-14 are rejected under a similar rationale as claims 1-8. The additional elements in the claims amount to no more than generic hardware components with instructions to apply the exception, which cannot integrate a judicial exception into a practical application or provide an inventive concept.
Independent claim 15 (a storage medium storing instructions to perform a method similar to claim 1) and dependent claims 16-20 are rejected under a similar rationale as claims 1-8. The additional elements in the claims amount to no more than mere instructions to apply the exception. Mere instructions stored on a computer-readable medium to apply an exception cannot integrate a judicial exception into a practical application or provide an inventive concept.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-7, 9-11, 14-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi et al. (US PGPUB 2017/0061329) hereinafter Kobayashi, in view of Sernau et al. (US PGPUB 2019/0205402) hereinafter Sernau.
Per claim 1, Kobayashi discloses a method comprising: obtaining simulation data associated with a simulation test performed with respect to a first set of training data of a first machine learning model, wherein one or more training data items of the first set of training data reflect at least a first attribute; responsive to determining that the obtained simulation data satisfies one or more criteria, obtaining a second set of training data to train a second machine learning model, wherein one or more training data items of the second set of training data reflect at least the first attribute and one or more second attributes, and wherein a size of the second set of training data meets or exceeds a size of the first set of training data; and causing the second machine learning model to be trained using the second set of training data (claims 1-2, 5; collect results (simulation data) from executing ML algorithms using training data (first set), determine if each of the increase rates of the prediction performances is a value larger than an estimated value by a predetermined amount (criteria), select a ML model to be trained using the other training data (second set), the size of the other training data exceeds the size of the training data (first set); Fig. 1; paragraphs [0054][0058][0091]; the first set of training data (14b) is extracted from data 11a; the second set of training data (14d) is also extracted from data 11a; “in the K sampling operations, the same unit data may be selected”; thus, Kobayashi suggests the second set of training data contains some values (a first attribute) that are the same as the first set of training data; the second set of training data also contains some additional values (a second attribute) not found in the first set of training data).
Kobayashi does not explicitly teach wherein one or more training data items of the first set of training data reflect at least a first attribute of a respective media content item, and wherein the second set of training data comprises the one or more training data items of the first set of training data and one or more additional training data items reflecting one or more second attributes of the respective media content item, wherein the one or more second attributes provide an increased level of granularity with respect to the respective media content item relative to the one or more first attributes. However, Sernau suggests the above (claims 1, 7-10; paragraphs [0018][0019][0031]; providing media content items for training ML models, each media item is associated with attributes, a first ML model is trained with media content items (each with a first known attribute), each media content item may also have a second attribute, therefore, each media content item is associated with a first attribute and a second attribute (second training dataset), which can be used to train ML models; a second training dataset that contains training items (first attributes) and some additional training data items (second attributes), the additional training data items contain more contextual data that further define a media content item (first attributes include song title and description, second attributes include song duration, the second attributes provide a different level of granularity about a media item compared to the first attributes)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kobayashi and Sernau to apply Kobayashi’s method of ML model training to the field of media content ranking, so the system can better recommend media content to users.
Per claim 2, Kobayashi further discloses “running the simulation test on the first machine learning model trained using the first set of training data, wherein the simulation data indicates at least one of an accuracy or a quality of one or more outputs of the first machine learning model” (claims 1-2, 5; collect results (simulation data) from executing a plurality of ML algorithms (which includes a first ML model and an additional ML model) using training data (first set), determine if each of the increase rates of the prediction performances is a value larger than an estimated value by a predetermined amount; i.e. the results indicate a quality of outputs of the additional ML model).
Per claim 3, Kobayashi further discloses “wherein determining that the obtained simulation data satisfies the one or more criteria comprises determining that at least one of the accuracy of the one or more outputs exceeds an accuracy threshold or the quality of the one or more outputs exceeds a quality threshold” (claims 1-2, 5; collect results (simulation data) from executing a plurality of ML algorithms using training data (first set), determine if each of the increase rates of the prediction performances is a value larger than an estimated value by a predetermined amount; i.e. the results indicate a quality of outputs exceeds a quality threshold).
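For illustration only (not part of the record, and using hypothetical names and threshold values), the criteria determination mapped to claims 2-3 above, checking whether the accuracy or quality indicated by the simulation data exceeds a corresponding threshold, may be sketched as:

```python
# Illustrative sketch of the claimed criteria check; the function name,
# dictionary keys, and threshold values are all hypothetical.

def criteria_satisfied(simulation_data, accuracy_threshold=0.8, quality_threshold=0.8):
    """Return True if at least one of the accuracy or the quality of the
    model's outputs exceeds its corresponding threshold."""
    return (simulation_data["accuracy"] > accuracy_threshold
            or simulation_data["quality"] > quality_threshold)

# Example: accuracy exceeds its threshold, so the criteria are satisfied
# and the second (larger) set of training data would be obtained.
simulation_data = {"accuracy": 0.85, "quality": 0.70}
print(criteria_satisfied(simulation_data))  # prints True
```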
Claims 9-11 recite similar limitations as claims 1-3. Therefore, claims 9-11 are rejected under similar rationales as claims 1-3.
Claims 15-17 recite similar limitations as claims 1-3. Therefore, claims 15-17 are rejected under similar rationales as claims 1-3.
Per claim 6, Kobayashi discloses obtaining a first set of training data, but does not explicitly teach “wherein the first attribute comprises at least one of: an attribute associated with a content item accessed by a user of a content sharing platform, an attribute associated with the user of the content sharing platform, contextual information associated with a user device of the user of the content sharing platform, an indication of whether the user consumed the content item, an indication of whether the user interacted with the content item, or an indication of whether the user performed an activity prompted by the content item”. However, Sernau suggests the above (paragraph [0018]; training a ML model with first attributes of content items; “Item Description”).
Per claim 7, Kobayashi discloses obtaining a first set of training data, but does not explicitly teach “wherein one or more additional training data items of the first set of training data pertain to a first set of content items associated with one or more first common topics and one or more additional training data items of the second set of training data pertain to a second set of content items associated with one or more second common topics, and wherein at least one of the one or more first common topics is different from at least one of the one or more second common topics”. However, Sernau suggests the above (claims 1, 7-10; paragraphs [0018][0019]; the first training set contains first attributes of content items, such as Item Description (first common topic); the second training set contains second attributes of content items, such as Item durations (second common topic)).
Claims 14 and 20 recite similar limitations as claim 6. Therefore, claims 14 and 20 are rejected under similar rationales as claim 6.
Claims 4-5, 8, 12-13 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi, in view of Sernau, and further in view of Jannarone et al. (US PGPUB 2002/0188582) hereinafter Jannarone.
Per claim 4, Kobayashi discloses obtaining a first set of training data, but does not explicitly teach “wherein the first set of training data comprises a plurality of training data items each associated with one of a plurality of points in time during a time period, and wherein the one or more criteria comprise a threshold condition based on historical data at a first point in time of the plurality of points in time”. However, Jannarone suggests the above (claim 10; receiving historical data comprising samples of the input and output values for a plurality of time trials (i.e., each input and output are associated with a time point); activating the estimation system to run the historical data on the statistical model to compute the output values; performing a model assessment by comparing the computed output values to the historical samples of output values (threshold condition based on historical data at a first time point); identifying a desired set of the configuration parameters based on the plurality of model assessments). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kobayashi, Sernau and Jannarone to perform model assessment by comparing computed output values to historical output values (threshold conditions) at different time points, in order to identify the desired set of configuration parameters (for optimizing the ML model for best performance).
Per claim 5, Jannarone further suggests “identifying one or more training data items of the plurality of training data items associated with one or more second points in time that precede the first point in time; and determining a size of the identified one or more training data items, wherein the size of the second set of training data corresponds to the determined size of the identified one or more training data items” (claim 10; receiving historical data comprising samples of the input and output values for a plurality of time trials (i.e., each input and output are associated with a time point, certain input and output are identified as associated with a first time point, certain input and output are identified as associated with a second time point preceding the first time point); activating the estimation system to run the historical data on the statistical model to compute the output values; performing a model assessment by comparing the computed output values to the historical samples of output values; (j) repeating steps (g) through (i) for a plurality of candidate model configuration parameters (i.e., the estimation system executes a plurality of times on the historical data to compute the output values, and each time the size of the training data items corresponds to the size of the previous training data items); identifying a desired set of the configuration parameters based on the plurality of model assessments).
Per claim 8, Kobayashi discloses obtaining a first set of training data, but does not explicitly teach “wherein the first set of training data comprises a first subset of training inputs and a first subset of target outputs and the second set of training data comprises a second subset of training inputs and a second subset of target outputs, and wherein a size of the second subset of target outputs corresponds to a size of the first subset of target outputs”. However, Jannarone suggests the above (claim 10; receiving historical data comprising samples of the input and output values for a plurality of time trials; activating the estimation system to run the historical data on the statistical model to compute the output values; performing a model assessment by comparing the computed output values to the historical samples of output values; (j) repeating steps (g) through (i) for a plurality of candidate model configuration parameters (i.e., the estimation system executes a plurality of times on the historical data to compute the output values, and each time the size of the training data items (a size of the second subset of target outputs) corresponds to the size of the previous training data items (a size of the first subset of target outputs)); identifying a desired set of the configuration parameters based on the plurality of model assessments). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kobayashi, Sernau and Jannarone to perform model assessment by comparing computed output values to historical output values at different time points, in order to identify the desired set of configuration parameters (for optimizing the ML model for best performance).
Claims 12-13 recite similar limitations as claims 4-5. Therefore, claims 12-13 are rejected under similar rationales as claims 4-5.
Claims 18-19 recite similar limitations as claims 4-5. Therefore, claims 18-19 are rejected under similar rationales as claims 4-5.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HANG PAN whose telephone number is (571)270-7667. The examiner can normally be reached 9 AM to 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do can be reached at 571-272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HANG PAN/Primary Examiner, Art Unit 2193